WorldWideScience

Sample records for video decoding algorithms

  1. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. Its disadvantage is that updated messages cannot be used until the next iteration, which reduces convergence speed. To address this, the Layered Decoding algorithm (LBP), based on a serial message-passing schedule, has been proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. Both improve the decoding speed of LBP while maintaining good decoding performance.
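    As a rough illustration of the scheduling difference described above, the following sketch (not the authors' implementation; the toy parity-check matrix, LLR values and min-sum approximation are assumptions made here for brevity) runs the same min-sum decoder with either a flooding or a layered/serial schedule. In the layered case each check row immediately updates the posterior LLRs, so later rows in the same iteration already see the new messages.

```python
# Minimal min-sum LDPC decoder contrasting flooding vs. layered (serial) schedules.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix (assumption, for illustration)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def min_sum(llr_in, H, n_iter=10, layered=True):
    m, n = H.shape
    msg = np.zeros((m, n))                 # check-to-variable messages
    post = llr_in.astype(float).copy()     # posterior LLRs
    for _ in range(n_iter):
        for i in range(m):                 # one "layer" = one check row
            cols = np.flatnonzero(H[i])
            v2c = post[cols] - msg[i, cols]            # variable-to-check messages
            sign = np.prod(np.sign(v2c))
            mags = np.abs(v2c)
            for k, j in enumerate(cols):
                new = sign * np.sign(v2c[k]) * np.delete(mags, k).min()
                if layered:                # serial: posterior updated immediately
                    post[j] += new - msg[i, j]
                msg[i, j] = new
        if not layered:                    # flooding: all posteriors updated at once
            post = llr_in + msg.sum(axis=0)
        hard = (post < 0).astype(int)
        if not ((H @ hard) % 2).any():     # all parity checks satisfied -> stop
            break
    return hard

llr = np.array([2.1, -0.5, 1.3, -1.8, 0.4, 0.9])       # channel LLRs (arbitrary example)
print(min_sum(llr, H, layered=True), min_sum(llr, H, layered=False))
```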

  2. Decoding algorithm for vortex communications receiver

    Science.gov (United States)

    Kupferman, Judy; Arnon, Shlomi

    2018-01-01

    Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.

  3. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ^3 n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  4. Decoding Hermitian Codes with Sudan's Algorithm

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

    We present an efficient implementation of Sudan's algorithm for list decoding Hermitian codes beyond half the minimum distance. The main ingredients are an explicit method to calculate so-called increasing zero bases, an efficient interpolation algorithm for finding the Q-polynomial, and a reduct...

  5. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    Science.gov (United States)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access has the capability to remove emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, and additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times compared to the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. The MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) are processed using the developed BsPU with a core clock speed of under 250 MHz in real time.
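    The emulation prevention byte (EPB) handling mentioned above can be illustrated with a short sketch. This follows the standard H.264/AVC Annex B convention (a byte 0x03 following two consecutive 0x00 bytes inside a NAL unit is an escape byte and is discarded); the paper's dedicated BsPU instructions are not modelled here.

```python
# Sketch of H.264/AVC emulation prevention byte removal (simplified; ignores the
# spec's check that the byte after 0x03 is <= 0x03).
def remove_epb(nal: bytes) -> bytes:
    out, zeros = bytearray(), 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0            # drop the emulation prevention byte
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

# 0x00 0x00 0x03 0x01 in the bitstream encodes the RBSP bytes 0x00 0x00 0x01
assert remove_epb(bytes([0x65, 0x00, 0x00, 0x03, 0x01, 0x7F])) == bytes([0x65, 0x00, 0x00, 0x01, 0x7F])
```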

  6. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a technique in video coding to decrease the power consumption at a standard video decoder. Coupled with a small dedicated video internal memory cache on a decoder, the technique can substantially decrease the amount of data traffic to the external memory at the decoder. A decrease in data traffic to the external memory at the decoder will result in multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders for power savings with the best quality-power cost trade off. Our simulation results show that with a relatively small amount of dedicated video internal memory cache, the technique may decrease the traffic between the CPU and external memory by over 50%.

  7. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  8. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  9. Distributed Coding/Decoding Complexity in Video Sensor Networks

    Science.gov (United States)

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  10. A Unified Approach to Restoration, Deinterlacing and Resolution Enhancement in Decoding MPEG-2 Video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Martins, Bo

    2002-01-01

    The quality and spatial resolution of video can be improved by combining multiple pictures to form a single superresolution picture. We address the special problems associated with pictures of variable but somehow parameterized quality such as MPEG-decoded video. Our algorithm provides a unified approach to restoration, chrominance upsampling, deinterlacing, and resolution enhancement. A decoded MPEG-2 sequence for interlaced standard definition television (SDTV) in 4:2:0 is converted to: (1) improved quality interlaced SDTV in 4:2:0; (2) interlaced SDTV in 4:4:4; (3) progressive SDTV in 4:4:4; (4) interlaced high-definition TV (HDTV) in 4:2:0; (5) progressive HDTV in 4:2:0. These conversions also provide features such as freeze frame and zoom. The algorithm is mainly targeted at bit rates of 4-8 Mb/s. The algorithm is based on motion-compensated spatial upsampling from multiple images and decimation...

  11. Gradient Descent Bit Flipping Algorithms for Decoding LDPC Codes

    OpenAIRE

    Wadayama, Tadashi; Nakamura, Keisuke; Yagita, Masayuki; Funahashi, Yuuki; Usami, Shogo; Takumi, Ichi

    2007-01-01

    A novel class of bit-flipping (BF) algorithms for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are called gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. Based on gradient descent formulation, the proposed algorithms are naturally derived from a simple non-linear objective function.
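    The single-bit GDBF rule can be sketched as follows (a minimal illustration under the usual bipolar ±1 formulation; flip thresholds, multi-bit variants and escape strategies discussed in the literature are omitted, and the toy inputs are assumptions).

```python
# Gradient descent bit flipping (single-bit variant), bipolar {+1,-1} alphabet.
import numpy as np

def gdbf(y, H, n_iter=50):
    """y: received soft values; H: 0/1 parity-check matrix."""
    x = np.where(y >= 0, 1, -1)                                      # hard decisions
    for _ in range(n_iter):
        syn = np.array([np.prod(x[np.flatnonzero(r)]) for r in H])  # bipolar syndromes
        if (syn == 1).all():
            break                                                    # valid codeword found
        # inversion (gradient) function: channel term + sum of adjacent check values
        delta = x * y + np.array([syn[np.flatnonzero(H[:, k])].sum()
                                  for k in range(H.shape[1])])
        x[np.argmin(delta)] *= -1                                    # flip the least reliable bit
    return (x < 0).astype(int)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
y = np.array([1.1, 0.7, -0.9, 1.2, -0.6, 0.8])   # noisy observations (example values)
print(gdbf(y, H))
```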

  12. Decoding Algorithms for Random Linear Network Codes

    DEFF Research Database (Denmark)

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    We consider the problem of efficient decoding of a random linear code over a finite field. In particular we are interested in the case where the code is random, relatively sparse, and use the binary finite field as an example. The goal is to decode the data using fewer operations to potentially a...

  13. Interactive video algorithms and technologies

    CERN Document Server

    Hammoud, Riad

    2006-01-01

    This book covers both algorithms and technologies of interactive videos, so that businesses in IT and data managements, scientists and software engineers in video processing and computer vision, coaches and instructors that use video technology in teaching, and finally end-users will greatly benefit from it. This book contains excellent scientific contributions made by a number of pioneering scientists and experts from around the globe. It consists of five parts. The first part introduces the reader to interactive video and video summarization and presents effective methodologies for automatic abstraction of a single video sequence, a set of video sequences, and a combined audio-video sequence. In the second part, a list of advanced algorithms and methodologies for automatic and semi-automatic analysis and editing of audio-video documents are presented. The third part tackles a more challenging level of automatic video re-structuring, filtering of video stream by extracting of highlights, events, and meaningf...

  14. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signal on visual media such as TV screens and LCD displays is affected by two main factors; the display technology and compression standards. Accurate knowledge about the characteristics of display and the video signal can be utilized to develop advanced algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems; first, designing algorithms that improve the visual quality of perceived image and video and reduce power consumption on local LED-LCD backlight. Second, removing the digital video codec artifacts such as blocking and ringing artifacts by post-processing algorithms. A novel algorithm based on image features with optimal balance between visual quality and power consumption was developed. In addition, to remove flickering...

  15. Decoding Reed–Solomon Codes Using Euclid's Algorithm

    Indian Academy of Sciences (India)

    Decoding Reed–Solomon Codes Using Euclid's Algorithm. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 12, Issue 4. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  16. Continuous BP Decoding Algorithm for a Low-Density Parity-Check Coded Hybrid ARQ System

    Science.gov (United States)

    Park, Sangjoon; Choi, Sooyong; Hwang, Seung-Hoon

    A continuous belief propagation (BP) decoding algorithm for a hybrid automatic repeat request (ARQ) system is proposed in this paper. The proposed continuous BP decoding algorithm utilizes the extrinsic information generated in the last iteration of the previous transmission for a continuous progression of the decoding through retransmissions. This allows the continuous BP decoding algorithm to accelerate the decoding convergence for codeword determination, especially when the number of retransmissions is large or a currently combined packet has punctured nodes. Simulation results verify the effectiveness of the proposed continuous BP decoding algorithm.

  17. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on Viterbi and stack algorithms using systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with the composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computations overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift register, and ripple register are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.

  18. Multiple LDPC Decoding using Bitplane Correlation for Transform Domain Wyner-Ziv Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

    Distributed video coding (DVC) is an emerging video coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. This paper considers a Low Density Parity Check (LDPC) based Transform Domain Wyner-Ziv (TDWZ) video codec. To improve the LDPC coding performance in the context of TDWZ, this paper proposes a Wyner-Ziv video codec using bitplane correlation through multiple parallel LDPC decoding. The proposed scheme utilizes inter bitplane correlation to enhance the bitplane decoding performance. Experimental results...

  19. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Video applications on mobile wireless devices are challenging due to the limited capacity of batteries, and the highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design issue for devices integrating complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at a low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  20. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Low-density parity-check (LDPC) codes can be applied in a lot of different scenarios such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable-nodes simultaneously, followed by all the check-nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a low-complexity and simple algorithm. We combine it with BP to obtain the advantages of these two algorithms. The flipping function used in WBF is borrowed to determine the scheduling priority. Simulation results show that it can provide a good tradeoff between FER performance and computation complexity for short-length LDPC codes.
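    For reference, the flipping metric of standard WBF takes the following general textbook form (the paper's exact weighting, and how the metric is reused as a scheduling priority, may differ in detail): here s_m in {0,1} is the syndrome bit of check m, N(m) the variables participating in check m, M(n) the checks involving variable n, and y_j the received soft value.

```latex
\[
  w_m = \min_{j \in N(m)} \lvert y_j \rvert , \qquad
  E_n = \sum_{m \in M(n)} (2 s_m - 1)\, w_m
\]
% the bit n with the largest E_n is the least reliable and is flipped
% (here: given the highest scheduling priority).
```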

  1. Efficient Dual Domain Decoding of Linear Block Codes Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmed Azouaoui

    2012-01-01

    A computationally efficient algorithm for decoding block codes is developed using a genetic algorithm (GA). The proposed algorithm uses the dual code, in contrast to the existing genetic decoders in the literature that use the code itself. Hence, this new approach reduces the complexity of decoding codes of high rates. We simulated our algorithm in various transmission channels. The performance of this algorithm is investigated and compared with competitor decoding algorithms, including those of Maini and Shakeel. The results show that the proposed algorithm gives large gains over the Chase-2 decoding algorithm and reaches the performance of the OSD-3 for some quadratic residue (QR) codes. Further, we define a new crossover operator that exploits the domain-specific information and compare it with uniform and two-point crossover. The complexity of this algorithm is also discussed and compared to other algorithms.

  2. A BitTorrent-Based Dynamic Bandwidth Adaptation Algorithm for Video Streaming

    Science.gov (United States)

    Hsu, Tz-Heng; Liang, You-Sheng; Chiang, Meng-Shu

    In this paper, we propose a BitTorrent-based dynamic bandwidth adaptation algorithm for video streaming. Two mechanisms to improve the original BitTorrent protocol are proposed: (1) the decoding order frame first (DOFF) frame selection algorithm and (2) the rarest I frame first (RIFF) frame selection algorithm. With the proposed algorithms, a peer can periodically check the number of downloaded frames in the buffer and then allocate the available bandwidth adaptively for video streaming. As a result, users can have smooth video playout experience with the proposed algorithms.

  3. An Iterative Decoding Algorithm for Fusion of Multimodal Information

    Science.gov (United States)

    Shivappa, Shankar T.; Rao, Bhaskar D.; Trivedi, Mohan M.

    2007-12-01

    Human activity analysis in an intelligent space is typically based on multimodal informational cues. Use of multiple modalities gives us a lot of advantages. But information fusion from different sources is a problem that has to be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources. We draw inspiration from the theory of turbo codes. We draw an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of individual modalities. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities. This procedure is repeated until a certain convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.

  4. On the Development and Optimization of HEVC Video Decoders Using High-Level Dataflow Modeling

    OpenAIRE

    Jerbi, Khaled; Yviquel, Hervé; Sanchez, Alexandre; Renzi, Daniele; De Saint Jorre, Damien; Alberti, Claudio; Mattavelli, Marco; Raulet, Mickael

    2017-01-01

    With the emergence of the High Efficiency Video Coding (HEVC) standard, a dataflow description of the decoder part was developed as part of the MPEG-B standard. This dataflow description presented modest framerate results, which led us to propose methodologies to improve the performance. In this paper, we introduce architectural improvements by exposing more parallelism using YUV and frame-based parallel decoding. We also present platform optimizations based on the use...

  5. Real-time high definition H.264 video decode using the Xbox 360 GPU

    Science.gov (United States)

    Arevalo Baeza, Juan Carlos; Chen, William; Christoffersen, Eric; Dinu, Daniel; Friemel, Barry

    2007-09-01

    The Xbox 360 is powered by three dual pipeline 3.2 GHz IBM PowerPC processors and a 500 MHz ATI graphics processing unit. The Graphics Processing Unit (GPU) is a special-purpose device, intended to create advanced visual effects and to render realistic scenes for the latest Xbox 360 games. In this paper, we report work on using the GPU as a parallel processing unit to accelerate the decoding of H.264/AVC high-definition (1920x1080) video. We report our experiences in developing a real-time, software-only high-definition video decoder for the Xbox 360.

  6. Biased Random Key Genetic Algorithm with Hybrid Decoding for Multi-objective Optimization

    OpenAIRE

    Tangpattanakul, Panwadee; Jozefowiez, Nicolas; Lopez, Pierre

    2013-01-01

    A biased random key genetic algorithm (BRKGA) is an efficient method for solving combinatorial optimization problems. It can be applied to solve both single-objective and multi-objective optimization problems. The BRKGA operates on a chromosome encoded as a key vector of real values in [0,1]. Generally, the chromosome has to be decoded using a single decoding method in order to obtain a feasible solution. This paper presents a hybrid decoding, which combines th...

  7. Emotional Decoding in Facial Expression, Scripts and Videos: A Comparison between Normal, Autistic and Asperger Children

    Science.gov (United States)

    Balconi, Michela; Amenta, Simona; Ferrari, Chiara

    2012-01-01

    ASD subjects are described as showing particular difficulty in decoding emotional patterns. This paper explored linguistic and conceptual skills in response to emotional stimuli presented as emotional faces, scripts (pictures) and interactive situations (videos). Participants with autism, Asperger syndrome and control participants were shown…

  8. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length.

  9. Reduced Complexity Iterative Decoding of 3D-Product Block Codes Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Abdeslam Ahmadi

    2012-01-01

    Two iterative decoding algorithms for 3D-product block codes (3D-PBC) based on genetic algorithms (GAs) are presented. The first algorithm uses the Chase-Pyndiah SISO, and the second one uses the list-based SISO decoding algorithm (LBDA) based on order- reprocessing. We applied these algorithms over the AWGN channel to symmetric 3D-PBCs constructed from BCH codes. The simulation results show that the first algorithm outperforms the Chase-Pyndiah one and is only 1.38 dB away from the Shannon capacity limit at a BER of 10−5 for BCH(31, 21, 5)³ and 1.4 dB for BCH(16, 11, 4)³. The simulations of the LBDA-based GA on BCH(16, 11, 4)³ show that its performance outperforms the first algorithm and is about 1.33 dB from the Shannon limit. Furthermore, these algorithms can be applied to any arbitrary 3D binary product block codes, without the need for a hard-in hard-out decoder. We also show that the two proposed decoders are less complex than both the Chase-Pyndiah algorithm for codes with large correction capacity and the LBDA for large parameters. These features make the decoders based on genetic algorithms efficient and attractive.

  10. The Guruswami--Sudan Decoding Algorithm for Reed--Solomon Codes

    Science.gov (United States)

    McEliece, R. J.

    2003-01-01

    This article is a tutorial discussion of the Guruswami-Sudan (GS) Reed-Solomon decoding algorithm, including self-contained treatments of the Kotter and Roth-Ruckenstein (RR) improvements. It also contains a number of new results, including a rigorous discussion of the average size of the decoder's list, an improvement in the RR algorithm's stopping rule, a simplified treatment of the combinatorics of weighted monomial orders, and a proof of the monotonicity of the GS decoding radius as a function of the interpolation multiplicity.
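    For context, the asymptotic decoding radius that motivates the GS algorithm is a standard result (quoted here for orientation; the article itself derives the exact radius as a function of the interpolation multiplicity): for an (n, k) Reed-Solomon code of rate R = k/n,

```latex
\[
  t_{\mathrm{GS}} \approx n\bigl(1 - \sqrt{R}\bigr)
  \qquad\text{versus}\qquad
  t_{\mathrm{classical}} = \Bigl\lfloor \tfrac{n-k}{2} \Bigr\rfloor .
\]
```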

  11. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC) codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented.

  12. Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations

    Directory of Open Access Journals (Sweden)

    Nancy X.R. Wang

    2016-04-01

    Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings.

  13. High-threshold decoding algorithms for the gauge color code

    Science.gov (United States)

    Zeng, William; Brown, Benjamin

    Gauge color codes are topological quantum error correcting codes on three dimensional lattices. They have garnered recent interest due to two important properties: (1) they admit a universal transversal gate set, and (2) their structure allows reliable error correction using syndrome data obtained from a measurement circuit of constant depth. Both of these properties make gauge color codes intriguing candidates for low overhead fault-tolerant quantum computation. Recent work by Brown et al. calculated a threshold of 0.31% for a particular gauge color code lattice using a simple clustering decoder and phenomenological noise. We show that we can achieve improved threshold error rates using the efficient Wootton and Loss Markov-chain Monte Carlo (MCMC) decoding. In the case of the surface code, the MCMC decoder produced a threshold close to that code's upper bound. While no upper bound is known for gauge color codes, the thresholds we present here may give a better estimate.

  14. Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection

    Directory of Open Access Journals (Sweden)

    Jiahui Meng

    2018-01-01

    In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This will also help ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bit corresponding to the erroneous code word is flipped multiple times, before this is searched in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.

  15. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.

    Science.gov (United States)

    Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M

    2013-07-01

    Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms.

  16. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, Internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works take video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the erroneous frames are compared with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
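    The PSNR figure used above as an objective quality measure can be computed as in the following sketch (the frames here are synthetic stand-ins, not the paper's test material; SSIM is omitted since it is usually taken from a library implementation).

```python
# Peak Signal-to-Noise Ratio between a reference frame and a reconstructed frame.
import numpy as np

def psnr(ref, rec, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in reference frame
rec = np.clip(ref.astype(int) + np.random.randint(-3, 4, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, rec):.2f} dB")
```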

  17. More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs

    Directory of Open Access Journals (Sweden)

    Brown Daniel G

    2011-05-01

    Background: Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previously widely used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. Results: We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions. Our algorithm identifies these sites up to a certain error tolerance, and is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. Conclusions: By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our decoding approach in HMMs.

  18. Double-layer video transmission over decode-and-forward wireless relay networks using hierarchical modulation.

    Science.gov (United States)

    Nguyen, Tu V; Cosman, Pamela C; Milstein, Laurence B

    2014-04-01

    We consider a wireless relay network with a single source, a single destination, and multiple relays. The relays are half-duplex and use the decode-and-forward protocol. The transmitted source is a layered video bitstream, which can be partitioned into two layers, a base layer (BL) and an enhancement layer (EL), where the BL is more important than the EL in terms of the source distortion. The source broadcasts both layers to the relays and the destination using hierarchical 16-QAM. Each relay detects and transmits successfully decoded layers to the destination using either hierarchical 16-QAM or QPSK. The destination can thus receive multiple signals, each of which can include either only the BL or both the BL and the EL. We derive the optimal linear combining method at the destination, where the uncoded bit error rate is minimized. We also present a suboptimal combining method with a closed-form solution, which performs very close to the optimal. We use the proposed double-layer transmission scheme with our combining methods for transmitting layered video bitstreams. Numerical results show that the double-layer scheme can gain 2-2.5 dB in channel signal-to-noise ratio or 5-7 dB in video peak signal-to-noise ratio, compared with the classical single-layer scheme using conventional modulation.

  19. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by the High Efficiency Video Coding (HEVC) scheme. The assessment is performed without access to the bitstream. The proposed analysis is based on the transform coefficients...

  20. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited bitrate coding. In order to improve video quality, a multi-frame based motion compensated filtering algorithm is reported based on combining multiple pictures to form a single super-resolution picture and decimation...

  1. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  2. DIALOGISM, DECODING AND INTERACTIVE DIGITAL MARKETING CONCERNING THE BODILY PRACTICES IN THE ON-LINE VIDEO OLYMPIKUS.MOV FESTIVAL

    Directory of Open Access Journals (Sweden)

    Marcos Roberto Godoi

    2010-12-01

    The aim was to understand the meanings associated with bodily practices in the videos of the OLYMPIKUS.MOV festival, and also to analyze the reception/interaction of Internet users with these videos. The methodology consisted of describing the videos, selecting comments and further analysis based on Mikhail Bakhtin's notion of dialogic language and Stuart Hall's types of decoding. We concluded that there was a large range of bodily practices and meanings related to them in the videos. The Internet users' reception/interaction included critical dialogue, congratulation, and suggestive questioning of the videos.

  3. Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API

    OpenAIRE

    Hosseini, Hossein; Xiao, Baicen; Clark, Andrew; Poovendran, Radha

    2017-01-01

    Due to the growth of video data on the Internet, automatic video analysis has gained a lot of attention from academia as well as companies such as Facebook, Twitter and Google. In this paper, we examine the robustness of video analysis algorithms in adversarial settings. Specifically, we propose targeted attacks on two fundamental classes of video analysis algorithms, namely video classification and shot detection. We show that an adversary can subtly manipulate a video in such a way that a human...

  4. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    Science.gov (United States)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development.

  5. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs

    DEFF Research Database (Denmark)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-01-01

    MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst case... our pair HMM based cDNA-to-genome aligner. AVAILABILITY: The TWINSCAN/N-SCAN/PAIRAGON open source software package is available from http://genes.cse.wustl.edu.

  6. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  7. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm. It is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, only by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
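    A hard-decision Viterbi decoder of the kind targeted above can be sketched in a few lines (this assumes the textbook rate-1/2, constraint-length-3 code with octal generators (7, 5); the paper's configurable trace-back depth and PE-level optimizations are not modelled).

```python
# Hard-decision Viterbi decoding with Hamming branch metrics for the (7,5) code.
def conv_encode(bits):
    s, out = 0, []
    for u in bits:
        out += [u ^ (s >> 1) ^ (s & 1), u ^ (s & 1)]   # generators 111 and 101 (octal 7, 5)
        s = ((u << 1) | (s >> 1)) & 0b11
    return out

def viterbi_decode(rx):
    INF = float("inf")
    metric = [0, INF, INF, INF]                         # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for t in range(0, len(rx), 2):
        r = rx[t:t + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                exp = [u ^ (s >> 1) ^ (s & 1), u ^ (s & 1)]
                ns = ((u << 1) | (s >> 1)) & 0b11
                m = metric[s] + (exp[0] != r[0]) + (exp[1] != r[1])   # Hamming metric
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]             # best survivor

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = conv_encode(msg)
rx[3] ^= 1                                              # inject a single channel bit error
assert viterbi_decode(rx) == msg                        # the error is corrected
```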

  8. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence followed by compression using EBCOT generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm performs close to the MPEG-4 video coding standard in compression efficiency while providing better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  9. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.

  10. Singer product apertures—A coded aperture system with a fast decoding algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Byard, Kevin, E-mail: kevin.byard@aut.ac.nz [School of Economics, Faculty of Business, Economics and Law, Auckland University of Technology, Auckland 1142 (New Zealand); Shutler, Paul M.E. [National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616 (Singapore)

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.

  11. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    David G. Daut

    2007-03-01

    A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  12. Progressive Image Transmission Based on Joint Source-Channel Decoding Using Adaptive Sum-Product Algorithm

    Directory of Open Access Journals (Sweden)

    Liu Weiliang

    2007-01-01

    A joint source-channel decoding method is designed to accelerate the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec making it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. The positions of bits belonging to error-free coding passes are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition. Results show that the proposed joint decoding methods can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the nonsource controlled decoding method by up to 3 dB in terms of PSNR.

  13. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  14. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating effective policies and applying useful methods for retrieving additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with bad visual quality. In this paper, we discuss the characteristics of surveillance video and describe a manual feature registration – maximum a posteriori – projection onto convex sets approach to develop a super-resolution reconstruction method, which improves the quality of surveillance video. With this method, we can make optimal use of the information contained in the LR video image, while also preserving image edges and controlling the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  15. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    Science.gov (United States)

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

High-order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error-correcting code (ECC) circuit on the basis of differential amplitude and phase shift keying (DAPSK) soft demodulation. Firstly, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm for visual prostheses is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the combination of the Chase algorithm and hard decoding is used to achieve soft decoding. In order to meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability as the product of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. During MATLAB simulation, a model of the biological channel attenuation is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experiments show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. Then we use a bit error rate analyzer to
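To illustrate the symbol-level reliability computation mentioned in this record (an illustrative sketch, not the authors' hardware design), the snippet below forms the reliability of each RS symbol as the product of the absolute bit reliabilities from the soft demodulator and selects the least reliable symbols as candidates for Chase test vectors; the grouping of 8 bits per symbol and the candidate count are assumptions.

```python
import numpy as np

def symbol_reliabilities(bit_llrs, bits_per_symbol=8):
    """Symbol reliability as the product of |bit reliability| values."""
    n_sym = len(bit_llrs) // bits_per_symbol
    grouped = np.abs(bit_llrs[:n_sym * bits_per_symbol]).reshape(n_sym, bits_per_symbol)
    return grouped.prod(axis=1)

def least_reliable_symbols(bit_llrs, count=3, bits_per_symbol=8):
    """Indices of the `count` least reliable symbols, i.e. the positions
    the Chase algorithm would flip when building its test vectors."""
    rel = symbol_reliabilities(bit_llrs, bits_per_symbol)
    return np.argsort(rel)[:count]

bit_llrs = np.random.randn(15 * 8)        # e.g., one 15-symbol RS word, 8 bits/symbol
print(least_reliable_symbols(bit_llrs, count=3))
```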

  16. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

This paper studies an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, with 26 original video sequences as the research background. It first extracts classification features from training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching is practical for detecting secret information embedded in all frames of a carrier video as well as in only some of its frames. In addition, the VSA works frame by frame and is strongly robust against attacks in the corresponding time domain.

  17. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
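The encode-then-decode pipeline described in this record can be sketched as follows (an illustrative outline only, not the authors' code): a time-by-voxel fMRI matrix is factorized with NMF, the time-series weights are used as features, and the condition labels are predicted with a linear classifier. The matrix shapes, component count, classifier and synthetic data are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for one scan: 200 time points x 5000 voxels (non-negative)
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(200, 5000)))
y = rng.integers(0, 3, size=200)        # 0 = video, 1 = audio, 2 = rest (per time point)

# "Encoding": spatial networks H and their time-series weights W = X ~ W @ H
nmf = NMF(n_components=20, init="nndsvda", max_iter=300, random_state=0)
W = nmf.fit_transform(X)                # time x components, used as decoding features

# "Decoding": predict the task condition from the component weights
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, W, y, cv=5).mean())
```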

  18. Use of NTRIP for Optimizing the Decoding Algorithm for Real-Time Data Streams

    Directory of Open Access Journals (Sweden)

    Zhanke He

    2014-10-01

Full Text Available As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as the Continuous Operational Reference System (CORS), Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of the BeiDou Navigation Satellite System and the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm of NTRIP Client data streams and the user authentication strategies of the NTRIP Caster based on NTRIP. The proposed method greatly enhances the handling efficiency and significantly reduces the data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP. Meanwhile, a transcoding method is proposed to facilitate data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers in the BeiDou Navigation Satellite System indigenously developed by China.

  19. Algorithms used for read-out optical system pointing to multiplexed computer generated 1D-Fourier holograms and decoding the encrypted information

    Science.gov (United States)

    Donchenko, Sergey S.; Odinokov, Sergey B.; Betin, Alexandr U.; Hanevich, Pavel; Semishko, Sergey; Zlokazov, Evgenii Y.

    2017-05-01

The holographic disk reading device for recovery of CGFH is described and its principle of operation is shown. Approaches to developing the algorithms used in this device, for pointing (guidance) and for decoding, are analyzed, and results of experimental research are listed.

  20. An efficient video dehazing algorithm based on spectral clustering

    Science.gov (United States)

    Zhao, Fan; Yao, Zao; Song, XiaoFang; Yao, Yi

    2017-07-01

Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects. Further, the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on customized spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes to ensure the same target has the same transmission value. Assuming that dehazed edge images have richer detail than before restoration, an edge cost function is added to the transmission model. The experimental results demonstrate that the proposed method provides higher dehazing quality and lower time complexity than the previous technique.

  1. Video Segmentation Using Fast Marching and Region Growing Algorithms

    Directory of Open Access Journals (Sweden)

    Eftychis Sifakis

    2002-04-01

Full Text Available The algorithm presented in this paper is comprised of three main stages: (1) classification of the image sequence and, in the case of a moving camera, parametric motion estimation, (2) change detection having as reference a fixed frame, an appropriately selected frame or a displaced frame, and (3) object localization using local colour features. The image sequence classification is based on statistical tests on the frame difference. The change detection module uses a two-label fast marching algorithm. Finally, the object localization uses a region growing algorithm based on the colour similarity. Video object segmentation results are shown using the COST 211 data set.
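The object-localization stage (stage 3) can be illustrated with a small colour-similarity region-growing routine such as the one below; this is a generic sketch, not the COST 211 implementation, and the similarity threshold, 4-connectivity and running-mean update are assumptions.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, threshold=20.0):
    """Grow a region from `seed` (row, col) by adding 4-connected pixels
    whose colour is within `threshold` (Euclidean RGB distance) of the
    running region mean. Returns a boolean mask of the grown region."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    mean = image[seed].astype(float)
    count = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.linalg.norm(image[nr, nc] - mean) < threshold:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
                    count += 1
                    mean += (image[nr, nc] - mean) / count   # update running mean
    return mask

frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)   # stand-in frame
print(grow_region(frame, (60, 80)).sum(), "pixels in region")
```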

  2. A baseline algorithm for face detection and tracking in video

    Science.gov (United States)

    Manohar, Vasant; Soundararajan, Padmanabhan; Korzhova, Valentina; Boonstra, Matthew; Goldgof, Dmitry; Kasturi, Rangachar

    2007-10-01

Establishing benchmark datasets, performance metrics and baseline algorithms has considerable research significance in gauging the progress in any application domain. These primarily allow both users and developers to compare the performance of various algorithms on a common platform. In our earlier works, we focused on developing performance metrics and establishing a substantial dataset with ground truth for object detection and tracking tasks (text and face) in two video domains -- broadcast news and meetings. In this paper, we present the results of a face detection and tracking algorithm on broadcast news videos with the objective of establishing a baseline performance for this task-domain pair. The detection algorithm uses a statistical approach that was originally developed by Viola and Jones and later extended by Lienhart. The algorithm uses a Haar-like feature set and a cascade of boosted decision tree classifiers as a statistical model. In this work, we used the Intel Open Source Computer Vision Library (OpenCV) implementation of the Haar face detection algorithm. The optimal values for the tunable parameters of this implementation were found through an experimental design strategy commonly used in statistical analyses of industrial processes. Tracking was accomplished as continuous detection, with the detected objects in two frames mapped using a greedy algorithm based on the distances between the centroids of bounding boxes. Results on the evaluation set containing 50 sequences (~ 2.5 mins.) using the developed performance metrics show good performance of the algorithm, reflecting the state of the art, which makes it an appropriate choice as the baseline algorithm for the problem.
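A minimal sketch of the detect-then-associate scheme described above, using the OpenCV Haar cascade and greedy centroid matching between consecutive frames; the cascade file, the video file name, and the distance gate are assumptions rather than the evaluated configuration.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    """Bounding boxes (x, y, w, h) from the Haar cascade detector."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def greedy_match(prev_boxes, cur_boxes, max_dist=50.0):
    """Greedily associate boxes by centroid distance, i.e. tracking as
    continuous detection between two frames."""
    centers = lambda boxes: [(x + w / 2.0, y + h / 2.0) for x, y, w, h in boxes]
    prev_c, cur_c = centers(prev_boxes), centers(cur_boxes)
    pairs, used = [], set()
    for i, p in enumerate(prev_c):
        dists = [(np.hypot(p[0] - c[0], p[1] - c[1]), j)
                 for j, c in enumerate(cur_c) if j not in used]
        if dists:
            d, j = min(dists)
            if d <= max_dist:
                pairs.append((i, j))
                used.add(j)
    return pairs

cap = cv2.VideoCapture("news_clip.avi")      # hypothetical input file
prev = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    print(greedy_match(prev, boxes))          # (previous index, current index) pairs
    prev = boxes
cap.release()
```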

  3. Motion estimation for video coding efficient algorithms and architectures

    CERN Document Server

    Chakrabarti, Indrajit; Chatterjee, Sumit Kumar

    2015-01-01

The need for video compression in the modern age of visual communication cannot be over-emphasized. This monograph provides useful information to postgraduate students and researchers who wish to work in the domain of VLSI design for video processing applications. In this book, one can find an in-depth discussion of several motion estimation algorithms and their VLSI implementation as conceived and developed by the authors. It records an account of research involving fast three-step search, successive elimination, one-bit transformation and its effective combination with diamond search and dynamic pixel truncation techniques. Two appendices provide a number of instances of proof of concept through Matlab and Verilog program segments. In this respect, the book can be considered the first of its kind. The architectures have been developed with an eye to their applicability in everyday low-power handheld appliances, including video camcorders and smartphones.

  4. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers (the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction), the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  5. Message-Passing Algorithms for Channel Estimation and Decoding Using Approximate Inference

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2012-01-01

    We design iterative receiver schemes for a generic communication system by treating channel estimation and information decoding as an inference problem in graphical models. We introduce a recently proposed inference framework that combines belief propagation (BP) and the mean field (MF) approxima...

  6. Optimization of MPEG decoding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1999-01-01

MPEG-2 video decoding is examined. A unified approach to quality improvement, chrominance upsampling, de-interlacing and superresolution is presented. The information over several frames is combined as part of the processing.

  7. New algorithm for iris recognition based on video sequences

    Science.gov (United States)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

Among existing biometrics, iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilatations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a fast new criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  8. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

Full Text Available The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, called High Efficiency Video Coding (HEVC). In HEVC, the block structure has three units: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For the 2N×2N PU, the proposed method compares rate-distortion (RD) costs and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM) 10.0 reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is observed without significant loss of image quality.
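The early-termination idea can be sketched as follows (pseudocode-style Python, not the HM 10.0 modification): after evaluating the 2N×2N PU at the current depth, splitting is skipped when the RD cost already compares favourably with a threshold derived from neighbouring CUs, and a merge/SKIP check based on the neighbours' modes avoids further evaluation. The callbacks, the CU object and the threshold rule are assumptions.

```python
def decide_cu_depth(cu, depth, max_depth, rd_cost_2Nx2N, neighbour_costs,
                    neighbours_all_skip, alpha=1.1):
    """Recursive CU depth decision with two early exits:
    (1) skip further mode evaluation when all neighbours chose SKIP/merge;
    (2) stop splitting when the 2Nx2N RD cost already beats a threshold
        derived from neighbouring CUs at this depth.
    `rd_cost_2Nx2N(cu, depth)` and `neighbour_costs(cu, depth)` are
    assumed callbacks into the encoder; `cu.split()` yields 4 sub-CUs."""
    cost_here = rd_cost_2Nx2N(cu, depth)

    # Early merge/SKIP detection from the contextual mode information
    if neighbours_all_skip(cu):
        return depth, cost_here

    # Early depth termination against the neighbouring-CU cost context
    ncosts = neighbour_costs(cu, depth)
    if ncosts and cost_here <= alpha * min(ncosts):
        return depth, cost_here

    if depth == max_depth:
        return depth, cost_here

    # Otherwise evaluate the four sub-CUs and keep the cheaper option
    split_cost = sum(decide_cu_depth(sub, depth + 1, max_depth, rd_cost_2Nx2N,
                                     neighbour_costs, neighbours_all_skip, alpha)[1]
                     for sub in cu.split())
    return (depth, cost_here) if cost_here <= split_cost else (depth + 1, split_cost)

# Toy usage with dummy callbacks (illustrative values only)
class DummyCU:
    def __init__(self, size): self.size = size
    def split(self): return [DummyCU(self.size // 2) for _ in range(4)]

print(decide_cu_depth(DummyCU(64), 0, 3,
                      rd_cost_2Nx2N=lambda cu, d: cu.size * 10.0,
                      neighbour_costs=lambda cu, d: [500.0, 620.0],
                      neighbours_all_skip=lambda cu: False))
```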

  9. Error-erasure decoding of product codes.

    Science.gov (United States)

    Wainberg, S.

    1972-01-01

    Two error-erasure decoding algorithms for product codes that correct all the error-erasure patterns guaranteed correctable by the minimum Hamming distance of the product code are given. The first algorithm works when at least one of the component codes is majority-logic decodable. The second algorithm works for any product code. Both algorithms use the decoders of the component codes.

  10. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    National Research Council Canada - National Science Library

    Yan Feng; Shengmei Luo; Yumin Tian; Shuo Deng; Haihong Zheng

    2014-01-01

    .... Then, the algorithms were implemented and tested using different videos with ground truth, such as baseline, dynamic background, camera jitter, and intermittent object motion and shadow scenarios...

  11. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, this algorithm employs motion vectors (MV) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm can avoid a large amount of computation work in the visual perception characteristics analysis processing compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as a part of the video standard codec at medium-to-low bit-rates or combined with other algorithms in fast video coding.
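As a rough illustration of the temporal-saliency step (not the paper's implementation), the sketch below median-filters a motion-vector field to suppress MV noise, removes the dominant translational (global) motion, and thresholds the residual magnitude to mark salient blocks; the filter size and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def temporal_saliency(mv_x, mv_y, mag_thresh=2.0):
    """Mark blocks whose motion differs from the global translational
    motion after simple MV noise filtering.

    mv_x, mv_y : 2-D arrays of per-block motion vector components
    returns    : boolean saliency map, one value per block
    """
    # Fast MV noise filtering: 3x3 median on each component
    fx = median_filter(mv_x.astype(float), size=3)
    fy = median_filter(mv_y.astype(float), size=3)

    # Translational MV check: remove the dominant global motion (median MV)
    rx = fx - np.median(fx)
    ry = fy - np.median(fy)

    return np.hypot(rx, ry) > mag_thresh

mvx = np.random.randint(-4, 5, size=(68, 120))   # e.g., 1080p at 16x16 blocks
mvy = np.random.randint(-4, 5, size=(68, 120))
print(temporal_saliency(mvx, mvy).mean())         # fraction of salient blocks
```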

  12. Adaptive decoding of convolutional codes

    Directory of Open Access Journals (Sweden)

    K. Hueske

    2007-06-01

    Full Text Available Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  13. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    that in the typical case, where the error points are "independent," one can prove that the algorithm always fails, that is gives a wrong or no answer, except for high rates where it does much better than expected. This explains the simulation results presented by O'Sullivan at the 1997 ISIT, We also show...

  14. 3D video coding for embedded devices energy efficient algorithms and architectures

    CERN Document Server

    Zatt, Bruno; Bampi, Sergio; Henkel, Jörg

    2013-01-01

    This book shows readers how to develop energy-efficient algorithms and hardware architectures to enable high-definition 3D video coding on resource-constrained embedded devices.  Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D video-specific coding tools for increasing compression efficiency at the cost of increasing computational complexity and, consequently, the energy consumption.  This book enables readers to reduce the multiview video coding energy consumption through jointly considering the algorithmic and architectural levels.  Coverage includes an introduction to 3D videos and an extensive discussion of the current state-of-the-art of 3D video coding, as well as energy-efficient algorithms for 3D video coding and energy-efficient hardware architecture for 3D video coding.     ·         Discusses challenges related to performance and power in 3D video coding for embedded devices; ·         Describes energy-efficient algorithms for reduci...

  15. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

Full Text Available MPEG video data includes three types of frames: I-frames, P-frames and B-frames. The I-frame records the main information of the video data, while the P-frame and the B-frame are regarded as motion-compensated predictions from the I-frame. This paper presents an approach that analyzes the MPEG video stream in the compressed domain and finds the key frames of the stream by extracting the I-frames. Experiments indicate that this method can be applied automatically to compressed MPEG video and lays a foundation for future video processing.
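A minimal sketch of compressed-domain I-frame detection for an MPEG-1/2 video elementary stream (the input format is an assumption; the paper does not publish code): it scans for picture start codes (00 00 01 00) and reads the 3-bit picture_coding_type field that follows the 10-bit temporal reference, where the value 1 denotes an I-frame.

```python
def iframe_offsets(es_bytes):
    """Byte offsets of picture headers whose picture_coding_type is I (1)
    in an MPEG-1/2 video elementary stream."""
    offsets = []
    i = 0
    while True:
        i = es_bytes.find(b"\x00\x00\x01\x00", i)   # picture start code
        if i < 0 or i + 6 > len(es_bytes):
            break
        # After the 4-byte start code: 10 bits temporal_reference,
        # then 3 bits picture_coding_type (1 = I, 2 = P, 3 = B).
        coding_type = (es_bytes[i + 5] >> 3) & 0x07
        if coding_type == 1:
            offsets.append(i)
        i += 4
    return offsets

# Usage (hypothetical file name):
# with open("clip.m2v", "rb") as f:
#     print(iframe_offsets(f.read()))
```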

  16. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  17. Simulations for Full Unit-memory and Partial Unit-memory Convolutional Codes with Real-time Minimal-byte-error Probability Decoding Algorithm

    Science.gov (United States)

    Vo, Q. D.

    1984-01-01

A program written to simulate Real-Time Minimal-Byte-Error Probability (RTMBEP) decoding of full unit-memory (FUM) convolutional codes on a 3-bit quantized AWGN channel is described. This program was used to compute the symbol-error probability of FUM codes and to determine the signal-to-noise ratio (SNR) required to achieve a bit error rate (BER) of 10^-6 for the corresponding concatenated systems. A (6,6/30) FUM code, 6-bit Reed-Solomon code combination was found to achieve the required BER at an SNR of 1.886 dB. The RTMBEP algorithm was then modified for decoding partial unit-memory (PUM) convolutional codes. A simulation program was also written to simulate the symbol-error probability of these codes.

  18. Improved Power Decoding of One-Point Hermitian Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Bouw, Irene; Rosenkilde, Johan Sebastian Heesemann

    2017-01-01

We propose a new partial decoding algorithm for one-point Hermitian codes that can decode up to the same number of errors as the Guruswami–Sudan decoder. Simulations suggest that it has a similar failure probability as the latter one. The algorithm is based on a recent generalization of the power decoding algorithm for Reed–Solomon codes and does not require an expensive root-finding step. In addition, it promises improvements for decoding interleaved Hermitian codes.

  19. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

Full Text Available The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences, and is invariant to large scale and pose variations. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 second on a 3.2 GHz P4 machine.
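The coarse skin-colour step can be illustrated with a short sketch (generic, not the authors' code): the foreground is converted to YCbCr and thresholded with commonly cited Cb/Cr skin ranges, and the largest skin-coloured blob gives a candidate face box; the exact thresholds are assumptions.

```python
import numpy as np
import cv2

def skin_mask(frame_bgr, cb_range=(77, 127), cr_range=(133, 173)):
    """Coarse skin-colour mask in YCbCr space (thresholds are illustrative)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                      # OpenCV channel order is Y, Cr, Cb
    cb = ycrcb[:, :, 2]
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    return mask.astype(np.uint8) * 255

def coarse_face_box(mask):
    """Bounding box (x, y, w, h) of the largest skin-coloured blob, if any."""
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not len(contours):
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # stand-in frame
print(coarse_face_box(skin_mask(frame)))
```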

  20. A Fast PDE Algorithm Using Adaptive Scan and Search for Video Coding

    Science.gov (United States)

    Kim, Jong-Nam

In this paper, we propose an algorithm that reduces unnecessary computations while keeping the same prediction quality as that of the full search algorithm. In the proposed algorithm, we reduce unnecessary computations efficiently by calculating the initial matching error point from the first 1/N partial errors, which increases the probability of hitting the minimum error point early. Our algorithm decreases the computational amount by about 20% compared with the conventional PDE algorithm, without any degradation of prediction quality. Our algorithm would be useful in real-time video coding applications using the MPEG-2/4 AVC standards.
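The early-termination idea behind PDE can be shown with a small sketch (generic, not the paper's adaptive scan-and-search order): the SAD of each candidate block is accumulated in N partial sums and the candidate is rejected as soon as the partial sum exceeds the best SAD found so far. Block and window sizes are assumptions.

```python
import numpy as np

def partial_sad(cur, ref, best_so_far, n_parts=8):
    """Accumulate the SAD row-group by row-group and stop early once the
    running sum exceeds the best (minimum) SAD found so far."""
    rows = np.array_split(np.arange(cur.shape[0]), n_parts)
    sad = 0
    for r in rows:
        sad += np.abs(cur[r].astype(int) - ref[r].astype(int)).sum()
        if sad >= best_so_far:            # partial distortion elimination
            return None                   # candidate rejected early
    return sad

def full_search_pde(cur_block, ref_frame, cx, cy, search=8):
    """Full-search motion estimation over a +/- `search` window with PDE."""
    b = cur_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and 0 <= x and y + b <= ref_frame.shape[0] and x + b <= ref_frame.shape[1]:
                sad = partial_sad(cur_block, ref_frame[y:y + b, x:x + b], best)
                if sad is not None and sad < best:
                    best, best_mv = sad, (dx, dy)
    return best_mv, best

cur = (np.random.rand(16, 16) * 255).astype(np.uint8)
ref = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(full_search_pde(cur, ref, cx=24, cy=24))
```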

  1. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Science.gov (United States)

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  2. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    Directory of Open Access Journals (Sweden)

    Guangle Yao

    2017-08-01

Full Text Available Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to lots of fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of BS algorithms and the processor/memory requirements were compared. Proper evaluation metrics or criteria were employed to evaluate the capability of each BS algorithm to handle different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references to develop new BS algorithm for remote scene IR video sequence, and some of them are not only limited to remote scene or IR video sequence but also generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR.

  3. On Decoding Interleaved Chinese Remainder Codes

    DEFF Research Database (Denmark)

    Li, Wenhui; Sidorenko, Vladimir; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

We model the decoding of Interleaved Chinese Remainder codes as that of finding a short vector in a Z-lattice. Using the LLL algorithm, we obtain an efficient decoding algorithm, correcting errors beyond the unique decoding bound and having nearly linear complexity. The algorithm can fail with a probability dependent on the number of errors, and we give an upper bound for this. Simulation results indicate that the bound is close to the truth. We apply the proposed decoding algorithm for decoding a single CR code using the idea of “Power” decoding, suggested for Reed-Solomon codes. A combination of these two methods can be used to decode low-rate Interleaved Chinese Remainder codes.

  4. A Simple Decoder for Topological Codes

    Directory of Open Access Journals (Sweden)

    James Wootton

    2015-04-01

    Full Text Available Here we study an efficient algorithm for decoding topological codes. It is a simple form of HDRG decoder, which could be straightforwardly generalized to complex decoding problems. Specific results are obtained for the planar code with both i.i.d. and spatially correlated errors. The method is shown to compare well with existing ones, despite its simplicity.

  5. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

Computer Science and Automation, IISc. Their research addresses various aspects of algebraic and combinatorial coding theory. Low Density Parity Check ... illustrating how the variable Xd is decoded. As mentioned earlier, this algorithm runs iteratively. To start with, in the first iteration, only bits in the first level of the ...

  6. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  7. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Høholdt, Tom

    2004-01-01

Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  8. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

Full Text Available Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  9. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea through investigating the relative performance of different general game-playing algorithms. Seven game-playing algorithms were used to play several hand-designed, mutated and randomly generated VGDL game descriptions. The results discussed appear to support the conjecture that well-designed games have, on average, a higher performance difference between better and worse game-playing algorithms.

  10. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.

  11. A high-efficient significant coefficient scanning algorithm for 3-D embedded wavelet video coding

    Science.gov (United States)

    Song, Haohao; Yu, Songyu; Song, Li; Xiong, Hongkai

    2005-07-01

3-D embedded wavelet video coding (3-D EWVC) algorithms have become a vital scheme for state-of-the-art scalable video coding. A major objective in a progressive transmission scheme is to select the most important information, which yields the largest distortion reduction, to be transmitted first, so traditional 3-D EWVC algorithms scan coefficients in bit-plane order. For significant-bit information within the same bit-plane, however, these algorithms neglect the different effects that coefficients in different subbands have on distortion. In this paper, we analyze these effects and propose a highly efficient significant coefficient scanning algorithm. Experimental results for 3-D SPIHT and 3-D SPECK show that the proposed scanning algorithm improves the compression ability of traditional 3-D EWVC algorithms, giving reconstructed videos higher PSNR and better visual quality at the same bit rate compared to the original significant coefficient scanning algorithms.

  12. Delta modulation. [overshoot suppression algorithm for video data transmission

    Science.gov (United States)

    Schilling, D. L.

    1973-01-01

    The overshoot suppression algorithm has been more extensively studied. Computer generated test-pictures show a radical improvement due to the overshoot suppression algorithm. Considering the delta modulator link as a nonlinear digital filter, a formula that relates the minimum rise time that can be handled for given filter parameters and voltage swings has been developed. The settling time has been calculated for the case of overshoot suppression as well as when no suppression is employed. The results indicate a significant decrease in settling time when overshoot suppression is used. An algorithm for correcting channel errors has been developed. It is shown that pulse stuffing PCM words in the DM bit stream results in a significant reduction in error length.

  13. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    Science.gov (United States)

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has the potential for clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.

  14. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    Science.gov (United States)

    2011-10-01


  15. Adaptive Single-Trial Error/Erasure Decoding of Reed-Solomon Codes

    CERN Document Server

    Senger, Christian; Schober, Steffen; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Algebraic decoding algorithms are commonly applied for the decoding of Reed-Solomon codes. Their main advantages are low computational complexity and predictable decoding capabilities. Many algorithms can be extended for correction of both errors and erasures. This enables the decoder to exploit binary quantized reliability information obtained from the transmission channel: Received symbols with high reliability are forwarded to the decoding algorithm while symbols with low reliability are erased. In this paper we investigate adaptive single-trial error/erasure decoding of Reed-Solomon codes, i.e. we derive an adaptive erasing strategy which minimizes the residual codeword error probability after decoding. Our result is applicable to any error/erasure decoding algorithm as long as its decoding capabilities can be expressed by a decoder capability function. Examples are Bounded Minimum Distance decoding with the Berlekamp-Massey- or the Sugiyama algorithms and the Guruswami-Sudan list decoder.

  16. Unbiased decoding of biologically motivated visual feature descriptors

    Directory of Open Access Journals (Sweden)

    Michael eFelsberg

    2015-08-01

Full Text Available Visual feature descriptors are essential elements in most computer and robot vision systems. They typically lead to an abstraction of the input data, images or video, for further processing such as clustering and machine learning. In clustering applications, the cluster center represents the prototypical descriptor of the cluster, and estimating the corresponding signal value, such as color value or dominating flow orientation, by decoding the prototypical descriptor is essential. Machine learning applications determine the relevance of respective descriptors, and a visualization of the corresponding decoded information is very useful for the analysis of the learning algorithm. Thus decoding of feature descriptors is a relevant problem, frequently addressed in recent work. Also, the human brain represents sensorimotor information at a suitable abstraction level through varying activation of neuron populations. Approximative computational models have been derived that confirm neurophysiological experiments on the representation of visual information by decoding the underlying signals. However, the represented variables have a bias towards the centers or boundaries of the tuning curves. Despite the fact that feature descriptors in computer vision are motivated from neuroscience, the respective decoding methods have been derived largely independently. From first-order principles, we derive unbiased decoding schemes for biologically motivated feature descriptors with a minimum amount of redundancy and suitable invariance properties. These descriptors establish a non-parametric density estimation of the underlying stochastic process with a particular algebraic structure. Based on the resulting algebraic constraints, we show formally how the decoding problem is formulated as an unbiased maximum likelihood estimator, and we derive a recurrent inverse diffusion scheme to infer the dominating mode of the distribution. These methods are evaluated and compared to

  17. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of their tasks. The project objective is to develop the principal elements of the algorithm for recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed, and parameters of motion are to be identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of the camera placement algorithm, designed to identify cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of project design, development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions.

  18. Soft-decision decoding of RS codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2005-01-01

By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of RS codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when two or more sets of received symbols have substantially different reliabilities, or when the number of alternative transmitted symbols is very small.

  19. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared performances of our algorithm and the standard twin-comparison method.
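A compact sketch of the twin-comparison idea (generic, not the soccer-specific variant with dominant-colour ratios): frame-difference values are compared against a high threshold for cuts, and a lower threshold opens a candidate gradual transition whose accumulated difference must reach the high threshold. The threshold values and the example data are assumptions.

```python
def twin_comparison(diffs, t_high=0.4, t_low=0.1):
    """Classify shot boundaries from a sequence of frame-difference values
    (e.g., histogram distances in [0, 1]).

    Returns a list of (frame_index, 'cut') and (start, end, 'gradual')."""
    boundaries = []
    start, acc = None, 0.0
    for i, d in enumerate(diffs):
        if d >= t_high:
            boundaries.append((i, "cut"))
            start, acc = None, 0.0
        elif d >= t_low:
            if start is None:
                start, acc = i, 0.0          # open a candidate gradual transition
            acc += d
            if acc >= t_high:                # accumulated difference confirms it
                boundaries.append((start, i, "gradual"))
                start, acc = None, 0.0
        else:
            start, acc = None, 0.0           # difference fell below the low threshold
    return boundaries

# Example: a cut at frame 3 and a gradual transition detected over frames 7-9
diffs = [0.02, 0.03, 0.01, 0.55, 0.02, 0.03, 0.02, 0.15, 0.18, 0.2, 0.03]
print(twin_comparison(diffs))
```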

  20. The Development of Video Learning to Deliver a Basic Algorithm Learning

    Directory of Open Access Journals (Sweden)

    slamet kurniawan fahrurozi

    2017-12-01

Full Text Available The world of education is currently entering the era of media, where learning activities demand a reduction of lecture methods in favour of the use of many media. The function of instructional media can be emphasized as follows: as a tool to make learning more effective, to accelerate the teaching and learning process, and to improve the quality of teaching and learning. This research aimed to develop a learning video for basic programming material on algorithms that is appropriate as a learning resource in class X SMK. This study also aimed to establish the feasibility of the developed learning video. The research method used was research and development, following the development model of Alessi and Trollip (2001). The development model was divided into three stages, namely Planning, Design, and Development. Data collection techniques used interviews, literature study and instruments. In the next stage, the learning video was validated by material experts and media experts and tried out with 30 learners as users. The results show that a learning video was successfully produced for the basic programming subject, consisting of 8 video scenes. Based on the validation results, the eligibility of the learning video is 90.5% from material experts, 95.9% from media experts, and 84% from users (learners). The testing results show that the developed learning video can be used as a learning resource or instructional medium for basic programming material on algorithms.

  1. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based...

  2. Comprehensive Analysis and Evaluation of Background Subtraction Algorithms for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Yan Feng

    2014-08-01

Full Text Available Background subtraction techniques are the basis for moving target detection and tracking in the domain of video surveillance, while robust and reliable detection and tracking in complex environments remains a challenging subject, so evaluations of various background subtraction algorithms are of great significance. Nine state-of-the-art methods ranging from simple to sophisticated ones are discussed. The algorithms were then implemented and tested using different videos with ground truth, covering baseline, dynamic background, camera jitter, intermittent object motion and shadow scenarios. The best suited background modeling methods for each scenario are given by comprehensive analysis of three parameters: recall, precision and F-Measure, which facilitates more accurate target detection and tracking.

  3. DMPDS: A Fast Motion Estimation Algorithm Targeting High Resolution Videos and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Gustavo Sanchez

    2012-01-01

Full Text Available This paper presents a new fast motion estimation (ME) algorithm targeting high resolution digital videos, along with its efficient hardware architecture design. The new Dynamic Multipoint Diamond Search (DMPDS) algorithm is a fast algorithm that increases ME quality compared with other fast ME algorithms. The DMPDS achieves better digital video quality by reducing the occurrence of falls into local minima, especially in high definition videos. The quality results show that the DMPDS is able to reach an average PSNR gain of 1.85 dB when compared with the well-known Diamond Search (DS) algorithm. When compared to the optimum results generated by the Full Search (FS) algorithm, the DMPDS shows a loss of only 1.03 dB in PSNR. On the other hand, the DMPDS reached a complexity reduction higher than 45 times when compared to FS. The quality gains relative to DS caused an expected increase in DMPDS complexity, which uses 6.4 times more calculations than DS. The DMPDS architecture was designed focusing on high performance and low cost, targeting to process Quad Full High Definition (QFHD) videos in real time (30 frames per second). The architecture was described in VHDL and synthesized to Altera Stratix 4 and Xilinx Virtex 5 FPGAs. The synthesis results show that the architecture is able to achieve processing rates higher than 53 QFHD fps, meeting the real-time requirement. The DMPDS architecture achieved the highest processing rate when compared to related works in the literature. This high processing rate was obtained by designing an architecture with a high operating frequency and a low number of cycles needed to process each block.

  4. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  5. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.

  6. Low-Cost Super-Resolution Algorithms Implementation Over a HW/SW Video Compression Platform

    Directory of Open Access Journals (Sweden)

    Llopis Rafael Peset

    2006-01-01

    Full Text Available Two approaches are presented in this paper to improve the quality of digital images beyond the sensor resolution using super-resolution techniques: iterative super-resolution (ISR) and noniterative super-resolution (NISR) algorithms. The results show important improvements in image quality, assuming that sufficient sample data and a reasonable amount of aliasing are available in the input images. These super-resolution algorithms have been implemented on a codesign video compression platform developed by Philips Research, requiring only minimal changes to the overall hardware architecture. In this way, a novel and feasible low-cost implementation has been obtained by using the resources found in a generic hybrid video encoder. Although a specific video codec platform has been used, the methodology presented in this paper is easily extendable to other video encoder architectures. Finally, a comparison in terms of memory, computational load, and image quality for both algorithms, as well as some general statements about the impact of the sampling process on the quality of the super-resolved (SR) image, are also presented.

  7. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module...... minimisation; by applying the fastest known algorithms for this general problem, we then obtain realisations of each paradigm which are as fast or faster than all previously known methods. An element of this is the “2D key equation”, a heavily generalised form of the classical key equation, and we show how...... to solve such using module minimisation, or using our new Demand–Driven algorithm which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we...

  8. Interior point decoding for linear vector channels

    Energy Technology Data Exchange (ETDEWEB)

    Wadayama, T [Nagoya Institute of Technology, Gokiso, Showa-ku, Nagoya, Aichi, 466-8555 (Japan)], E-mail: wadayama@nitech.ac.jp

    2008-01-15

    In this paper, a novel decoding algorithm for low-density parity-check (LDPC) codes based on convex optimization is presented. The decoding algorithm, called interior point decoding, is designed for linear vector channels. The linear vector channels include many practically important channels such as inter-symbol interference channels and partial response channels. It is shown that the maximum likelihood decoding (MLD) rule for a linear vector channel can be relaxed to a convex optimization problem, which is called a relaxed MLD problem.
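
    The exact convex relaxation used by the interior point decoder is not spelled out in this record. As a rough illustration of how ML decoding of a binary linear code can be relaxed to a linear program and handed to a general-purpose solver, the sketch below builds the standard Feldman-style parity-polytope inequalities for a toy (7,4) Hamming code and solves them with SciPy's linprog (illustrative only; the constraint generation is exponential in the check degree, so it is practical only for low-degree codes, and it is not the formulation of the cited paper).

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def lp_decode(H, llr):
    """Feldman-style LP relaxation of ML decoding.  llr[i] > 0 favours bit 0."""
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        support = np.flatnonzero(H[j])
        for k in range(1, len(support) + 1, 2):        # odd-sized subsets S of the check
            for S in combinations(support, k):
                row = np.zeros(n)
                row[list(support)] = -1.0
                row[list(S)] = 1.0                      # sum_S x - sum_{N\S} x <= |S| - 1
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return (res.x > 0.5).astype(int), res.x             # hard decision + LP solution

# toy example: (7,4) Hamming code, all-zero codeword sent, one unreliable bit
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([1.2, 0.8, -0.3, 1.1, 0.9, 1.0, 0.7])
print(lp_decode(H, llr)[0])                              # expected: all zeros
```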

  9. Integrating Algorithm Visualization Video into a First-Year Algorithm and Data Structure Course

    Science.gov (United States)

    Crescenzi, Pilu; Malizia, Alessio; Verri, M. Cecilia; Diaz, Paloma; Aedo, Ignacio

    2012-01-01

    In this paper we describe the results that we have obtained while integrating algorithm visualization (AV) movies (strongly tied to the other teaching material) within a first-year undergraduate course on algorithms and data structures. Our experimental results seem to support the hypothesis that making these movies available significantly…

  10. FPGA implementation of low complexity LDPC iterative decoder

    Science.gov (United States)

    Verma, Shivani; Sharma, Sanjay

    2016-07-01

    Low-density parity-check (LDPC) codes, proposed by Gallager, emerged as a class of codes which can yield very good performance on the additive white Gaussian noise channel as well as on the binary symmetric channel. LDPC codes have gained considerable importance due to their capacity-achieving property and excellent performance on noisy channels. The belief propagation (BP) algorithm and its approximations, most notably min-sum, are popular iterative decoding algorithms used for LDPC and turbo codes. The trade-off between hardware complexity and decoding throughput is a critical factor in the implementation of a practical decoder. This article presents an introduction to LDPC codes and their various decoding algorithms, followed by the realisation of an LDPC decoder using a simplified message-passing algorithm and a partially parallel decoder architecture. The simplified message-passing algorithm is proposed as a trade-off between low decoding complexity and decoder performance; it greatly reduces the routing and check-node complexity of the decoder. The partially parallel decoder architecture offers high speed and reduced complexity. The improved design of the decoder achieves a maximum symbol throughput of 92.95 Mbps and a maximum of 18 decoding iterations. The article presents the implementation of a 9216-bit, rate-1/2, (3, 6) LDPC decoder on a Xilinx XC3D3400A device from the Spartan-3A DSP family.
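
    The article's simplified message-passing variant and its partially parallel hardware mapping are not reproduced here. As a reference point, the following is a plain software min-sum decoder over a parity-check matrix H with LLR inputs (Python/NumPy, illustrative only; it assumes every check has degree at least two).

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Plain min-sum LDPC decoding.  llr[i] > 0 favours bit 0."""
    m, n = H.shape
    rows = [np.flatnonzero(H[j]) for j in range(m)]      # check -> connected variables
    msg_cv = np.zeros((m, n))                            # check-to-variable messages
    for _ in range(max_iter):
        total = llr + msg_cv.sum(axis=0)                 # posterior LLRs
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):                   # all parity checks satisfied
            return hard
        for j in range(m):
            idx = rows[j]
            v2c = total[idx] - msg_cv[j, idx]            # variable-to-check messages
            signs = np.sign(v2c)
            signs[signs == 0] = 1
            mags = np.abs(v2c)
            prod = np.prod(signs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for t, i in enumerate(idx):                  # extrinsic sign and minimum
                mag = min2 if i == idx[order[0]] else min1
                msg_cv[j, i] = prod * signs[t] * mag
    return (llr + msg_cv.sum(axis=0) < 0).astype(int)
```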

  11. LDPC Decoding on GPU for Mobile Device

    Directory of Open Access Journals (Sweden)

    Yiqin Lu

    2016-01-01

    Full Text Available A flexible software LDPC decoder that exploits data parallelism for simultaneous multi-codeword decoding on a mobile device is proposed in this paper, supported by multithreading on OpenCL-based graphics processing units. By dividing the check matrix into several parts to make full use of both the local memory and the private memory on the GPU, and by properly adjusting the code capacity each time, our implementation on a mobile phone achieves throughputs above 100 Mbps with a decoding delay below 1.6 ms, which makes high-speed communication such as video calling possible. To realise efficient software LDPC decoding on a mobile device, the LDPC decoding feature on the communication baseband chip should be replaced, to save cost and make it easier to upgrade the decoder to be compatible with a variety of channel access schemes.

  12. Generalized Sudan's List Decoding for Order Domain Codes

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh

    2007-01-01

    We generalize Sudan's list decoding algorithm without multiplicity to evaluation codes coming from arbitrary order domains. The number of correctable errors by the proposed method is larger than the original list decoding without multiplicity.

  13. New architecture for MPEG video streaming system with backward playback support.

    Science.gov (United States)

    Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi

    2007-09-01

    MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group-of-pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant buffer in the decoder, which depends on the GOP size, to store the decoded frames, and may not be feasible under severe memory constraints. Another alternative is to decode the GOP up to the current frame to be displayed, and then go back and decode the GOP again up to the next frame to be displayed. This approach does not need the huge buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on network bandwidth and decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate the different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a newly mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the

  14. The effects of camera jitter for background subtraction algorithms on fused infrared-visible video streams

    Science.gov (United States)

    Becker, Stefan; Scherer-Negenborn, Norbert; Thakkar, Pooja; Hübner, Wolfgang; Arens, Michael

    2016-10-01

    This paper is a continuation of the work of Becker et al.1 In their work, they analyzed the robustness of various background subtraction algorithms on fused video streams originating from visible and infrared cameras. In order to cover a broader range of background subtraction applications, we show the effects of fusing infrared-visible video streams from vibrating cameras on a large set of background subtraction algorithms. The effectiveness is quantitatively analyzed on recorded data of a typical outdoor sequence with a fine-grained and accurate annotation of the images. Thereby, we identify approaches which can benefit from fused sensor signals with camera jitter. Finally, conclusions are given on which fusion strategies should be preferred under such conditions.
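
    The paper evaluates a whole family of background-subtraction algorithms on fused streams; the sketch below shows only one straightforward fuse-then-subtract pipeline, averaging registered visible and infrared frames and applying OpenCV's MOG2 subtractor (file names, weights and parameters are placeholders, and no jitter compensation is included).

```python
import cv2

vis = cv2.VideoCapture("visible.avi")      # placeholder file names
ir = cv2.VideoCapture("infrared.avi")
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=False)

while True:
    ok_v, frame_v = vis.read()
    ok_i, frame_i = ir.read()
    if not (ok_v and ok_i):
        break
    gray_v = cv2.cvtColor(frame_v, cv2.COLOR_BGR2GRAY)
    gray_i = cv2.cvtColor(frame_i, cv2.COLOR_BGR2GRAY)
    gray_i = cv2.resize(gray_i, (gray_v.shape[1], gray_v.shape[0]))   # crude registration
    fused = cv2.addWeighted(gray_v, 0.5, gray_i, 0.5, 0)              # pixel-level fusion
    mask = bg.apply(fused)                                            # foreground mask
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                                          # ESC to quit
        break
```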

  15. Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Viéron

    2004-03-01

    Full Text Available This paper introduces source-channel adaptive rate control (SARC), a new congestion control algorithm for layered video transmission in large multicast groups. In order to solve the well-known feedback implosion problem in large multicast groups, we first present a mechanism for filtering RTCP receiver reports sent from receivers to the whole session. The proposed filtering mechanism provides a classification of receivers according to a predefined similarity measure. An end-to-end source and FEC rate control based on this distributed feedback aggregation mechanism, coupled with a layered video coding system, is then described. The number of layers, their rates, and their levels of protection are adapted dynamically to the aggregated feedback. The algorithms have been validated with the NS2 network simulator.

  16. The research of moving objects behavior detection and tracking algorithm in aerial video

    Science.gov (United States)

    Yang, Le-le; Li, Xin; Yang, Xiao-ping; Li, Dong-hui

    2015-12-01

    The article focuses on moving-target detection and tracking algorithms in aerial surveillance video. The study covers moving-target detection, behaviour analysis of moving targets, and automatic target tracking. For moving-target detection, the paper considers the characteristics of background subtraction and the frame-difference method, and uses background reconstruction to locate moving targets accurately. For behaviour analysis of the moving objects, the detection area is shown in a binary image (implemented in MATLAB) and used to decide whether a moving object is intruding and in which direction. For automatic tracking of moving targets, a video tracking algorithm that uses Kalman-filter-based prediction of object centroids is proposed.
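
    The record does not give the tracker's model or noise settings. The sketch below is a generic constant-velocity Kalman filter for predicting object centroids, of the kind commonly used for this purpose (Python/NumPy; the state model and noise covariances are assumptions).

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter for object-centroid prediction (illustrative)."""
    def __init__(self, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, 1, 0],      # state: [x, y, vx, vy]
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)                # process noise
        self.R = r * np.eye(2)                # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                     # predicted centroid

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# usage: feed per-frame centroids from the detector, coast on the prediction when detection fails
trk = CentroidKalman()
for z in [(10, 12), (12, 13), None, (16, 15)]:   # None = missed detection
    pred = trk.predict()
    trk.update(z if z is not None else pred)
```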

  17. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision-based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine-detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing each corrupted pixel based on an estimate of the noise variance present in the filtering window. This leads to reduced blurring and better fine-detail preservation even at high mixed-noise densities. The algorithm performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal-to-Noise Ratio, Mean Square Error and Image Enhancement Factor.

  18. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  19. Sub-quadratic decoding of one-point hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power...... decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities....

  20. Decoders for MST radars

    Science.gov (United States)

    Woodman, R. F.

    1983-01-01

    Decoding techniques and equipment used by MST radars are described and some recommendations for new systems are presented. Decoding can be done either by software in special-purpose (array processors, etc.) or general-purpose computers or in specially designed digital decoders. Both software and hardware decoders are discussed and the special case of decoding for bistatic radars is examined.

  1. Error Protection of Wavelet Scalable Video Streaming Using Wyner-Ziv Technique over a Lossy Network

    Science.gov (United States)

    Liu, Benjian; Xu, Ke

    This paper presents a novel error resilience scheme for wavelet scalable video coding. We use a Wyner-Ziv codec to produce extra bits that protect the important parts of the embedded video stream. These bits, which also serve as a second description of the important parts, are transmitted over an auxiliary channel to the receiver for error resilience. Errors in the embedded video stream can be corrected by the Wyner-Ziv description, which regards the decoded frame as side information. Moreover, the Wyner-Ziv decoder uses a coarse estimate of the corrupted parts, obtained by exploiting frame correlation in the wavelet video decoder, to generate a refined version. Simulation results show that the proposed method achieves much better performance than Forward Error Correction codes. Meanwhile, this error-resilient algorithm achieves 2-3 dB PSNR gains over motion-compensated error concealment.

  2. Decoding neural events from fMRI BOLD signal: A comparison of existing approaches and development of a new algorithm

    Science.gov (United States)

    Bush, Keith; Cisler, Josh

    2013-01-01

    Neuroimaging methodology predominantly relies on the blood oxygenation level dependent (BOLD) signal. While the BOLD signal is a valid measure of neuronal activity, variance in fluctuations of the BOLD signal is not due only to fluctuations in neural activity. Thus, a remaining problem in neuroimaging analyses is developing methods that ensure specific inferences about neural activity that are not confounded by unrelated sources of noise in the BOLD signal. Here, we develop and test a new algorithm for performing semi-blind (i.e., no knowledge of stimulus timings) deconvolution of the BOLD signal that treats the neural event as an observable, but intermediate, probabilistic representation of the system’s state. We test and compare this new algorithm against three other recent deconvolution algorithms under varied levels of autocorrelated and Gaussian noise, hemodynamic response function (HRF) misspecification, and observation sampling rate (i.e., TR). Further, we compare the algorithms’ performance using two models to simulate BOLD data: a convolution of neural events with a known (or misspecified) HRF versus a biophysically accurate balloon model of hemodynamics. We also examine the algorithms’ performance on real task data. The results demonstrated good performance of all algorithms, though the new algorithm generally outperformed the others (3.0% improvement) under simulated resting state experimental conditions exhibiting multiple, realistic confounding factors (as well as 10.3% improvement on a real Stroop task). The simulations also demonstrate that the greatest negative influence on deconvolution accuracy is observation sampling rate. Practical and theoretical implications of these results for improving inferences about neural activity from fMRI BOLD signal are discussed. PMID:23602664
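
    The paper's semi-blind probabilistic deconvolution algorithm is not reproduced in this record. As a simple linear baseline of the kind such algorithms are compared against, the sketch below deconvolves a simulated BOLD series with a canonical double-gamma HRF using Wiener filtering (Python/SciPy; the HRF parameters and SNR constant are assumptions).

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.0, length=32.0):
    """SPM-style double-gamma HRF sampled at the scan TR (approximate parameters)."""
    t = np.arange(0, length, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def wiener_deconvolve(bold, hrf, snr=5.0):
    """Frequency-domain Wiener deconvolution of a BOLD time series (linear baseline only)."""
    n = len(bold)
    Hf = np.fft.rfft(hrf, n)
    Bf = np.fft.rfft(bold - bold.mean(), n)
    Wf = np.conj(Hf) / (np.abs(Hf) ** 2 + 1.0 / snr ** 2)
    return np.fft.irfft(Bf * Wf, n)

# toy example: recover a sparse neural event train from noisy simulated BOLD
tr, n = 1.0, 200
events = np.zeros(n)
events[[20, 60, 61, 140]] = 1.0
hrf = canonical_hrf(tr)
bold = np.convolve(events, hrf)[:n] + 0.02 * np.random.randn(n)
estimate = wiener_deconvolve(bold, hrf)
print(np.argsort(estimate)[-4:])   # indices of the strongest recovered events
```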

  3. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie

    1999-06-01

    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  4. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in order to improve coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Considering the direct proportionality between encoding time and computational complexity, the computational complexity is measured in terms of encoding time. First, complexity is mapped to a target in terms of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, an average gain of 0.63 and 0.17 dB in BDPSNR is observed for 18 sequences when the target complexity is around 40%.

  5. Subjective Video Quality Assessment in H.264/AVC Video Coding Standard

    Directory of Open Access Journals (Sweden)

    Z. Miličević

    2012-11-01

    Full Text Available This paper seeks to provide an approach for subjective video quality assessment in the H.264/AVC standard. For this purpose, a special software program for the subjective assessment of the quality of all the tested video sequences was developed. It was developed in accordance with recommendation ITU-T P.910, since it is suitable for the testing of multimedia applications. The obtained results show that with the proposed selective intra-prediction and optimized inter-prediction algorithm there is only a small difference in picture quality (signal-to-noise ratio) between the decoded original and modified video sequences.

  6. Decoding of Cyclic Codes,

    Science.gov (United States)

    Descriptors: (information theory, decoding), (data transmission systems, decoding), statistical analysis, stochastic processes, coding, white noise, number theory, corrections, binary arithmetic, shift registers, control systems, USSR

  7. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    Science.gov (United States)

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1998-01-01

    The study has been divided into two phases. The purpose of Phase 1 of the study was to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. After selection of which specific...... potentially useful.Algorithms for frame synchronization are described and analyzed. Further, the high level architecture of units that contain frame synchronization and various other functions needed in a complete system is presented. Two such units are described, one for placement before the Viterbi decoder...... towards a realization in an FPGA.Node synchronization performed within a Viterbi decoder is discussed, and the high level architectures of three possible implementations of Viterbi decoders are described: The first implementation uses a number of commercially available decoders while the the two others...

  9. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2014-09-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a definition of delay for IDNC allows a more equitable distribution of the delays between the different receivers and thus a better Quality of Service (QoS). In order to solve this problem, we first derive the expressions for the probability distributions of maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the max decoding delay experienced when applying the policies to minimize the sum decoding delay and our policy to reduce the max decoding delay. Simulation results show that our policy gives a good agreement among all the delay aspects in all situations and outperforms the sum decoding delay policy to effectively minimize the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.

  10. Analysis of Video Signal Transmission Through DWDM Network Based on a Quality Check Algorithm

    Directory of Open Access Journals (Sweden)

    A. Markovic

    2013-04-01

    Full Text Available This paper provides an analysis of multiplexed video signal transmission through a Dense Wavelength Division Multiplexing (DWDM) network based on a quality check algorithm, which determines where the degradation of transmission quality starts. On the basis of this algorithm, simulations of transmission for specific values of the fiber parameters are carried out. The analysis of the results shows how the BER and Q-factor change depending on the length of the fiber, i.e. on the number of amplifiers, and what effect the number of multiplexed channels and the flow rate per channel have on the transmitted signals. The analysis of DWDM systems is performed in the software package OptiSystem 7.0, which is designed for systems with flow rates of 2.5 Gb/s and 10 Gb/s per channel.

  11. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of an implementation of scalable multimedia applications using reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and have varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy encoded parts of the video stream into wavelet transformed frames. These frames are decoded bitlayer per bitlayer. The more bitlayers are decoded the higher the image quality (scalability in image quality). Resolution scalability is obtained as an inherent property of the IDWT. Finally framerate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.

  12. Belief propagation decoding of quantum channels by passing quantum messages

    Science.gov (United States)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.

  13. A General Rate K/N Convolutional Decoder Based on Neural Networks with Stopping Criterion

    Directory of Open Access Journals (Sweden)

    Johnny W. H. Kao

    2009-01-01

    Full Text Available A novel algorithm for decoding a general rate K/N convolutional code based on a recurrent neural network (RNN) is described and analysed. The algorithm is introduced by outlining the mathematical models of the encoder and decoder. A number of strategies for optimising the iterative decoding process are proposed, and a simulator was also designed in order to compare the Bit Error Rate (BER) performance of the RNN decoder with that of the conventional decoder based on the Viterbi Algorithm (VA). The simulation results show that this novel algorithm can achieve the same bit error rate with lower decoding complexity. Most importantly, this algorithm allows parallel signal processing, which increases the decoding speed and accommodates higher data-rate transmission. These characteristics are inherited from the neural network structure of the decoder and the iterative nature of the algorithm, and they outperform the conventional VA.

  14. Iterative Decoding of Concatenated Codes: A Tutorial

    Directory of Open Access Journals (Sweden)

    Phillip A. Regalia

    2005-05-01

    Full Text Available The turbo decoding algorithm of a decade ago constituted a milestone in error-correction coding for digital communications, and has inspired extensions to generalized receiver topologies, including turbo equalization, turbo synchronization, and turbo CDMA, among others. Despite an accrued understanding of iterative decoding over the years, the “turbo principle” remains elusive to master analytically, thereby inciting interest from researchers outside the communications domain. In this spirit, we develop a tutorial presentation of iterative decoding for parallel and serial concatenated codes, in terms hopefully accessible to a broader audience. We motivate iterative decoding as a computationally tractable attempt to approach maximum-likelihood decoding, and characterize fixed points in terms of a “consensus” property between constituent decoders. We review how the decoding algorithm for both parallel and serial concatenated codes coincides with an alternating projection algorithm, which allows one to identify conditions under which the algorithm indeed converges to a maximum-likelihood solution, in terms of particular likelihood functions factoring into the product of their marginals. The presentation emphasizes a common framework applicable to both parallel and serial concatenated codes.

  15. Study of bifurcation behavior of two-dimensional turbo product code decoders

    Energy Technology Data Exchange (ETDEWEB)

    He Yejun [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China); Lau, Francis C.M. [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China)], E-mail: encmlau@polyu.edu.hk; Tse, Chi K. [Department of Electronic and Information Engineering, Hong Kong Polytechnic University, Hunghom, Hong Kong (China)

    2008-04-15

    Turbo codes, low-density parity-check (LDPC) codes and turbo product codes (TPCs) are high performance error-correction codes which employ iterative algorithms for decoding. Under different conditions, the behaviors of the decoders are different. While the nonlinear dynamical behaviors of turbo code decoders and LDPC decoders have been reported in the literature, the dynamical behavior of TPC decoders is relatively unexplored. In this paper, we investigate the behavior of the iterative algorithm of a two-dimensional TPC decoder when the input signal-to-noise ratio (SNR) varies. The quantity to be measured is the mean square value of the posterior probabilities of the information bits. Unlike turbo decoders or LDPC decoders, TPC decoders do not produce a clear 'waterfall region'. This is mainly because the TPC decoding algorithm does not converge to 'indecisive' fixed points even at very low SNR values.

  16. A QoE Aware Fairness Bi-level Resource Allocation Algorithm for Multiple Video Streaming in WLAN

    Directory of Open Access Journals (Sweden)

    Hu Zhou

    2015-11-01

    Full Text Available With the increasing number of smart devices such as mobile phones and tablets, the scenario of multiple video users watching video streams simultaneously in one wireless local area network (WLAN) is becoming more and more common. However, the quality of experience (QoE) and the fairness among multiple users are seriously impacted by the limited bandwidth and shared resources of the WLAN. In this paper, we propose a novel bi-level resource allocation algorithm. To maximize the total throughput of the network, the WLAN is first tuned to its optimal operating point. Then the wireless resource is carefully allocated at the first level, i.e., between the AP and uplink background traffic users, and at the second level, i.e., among downlink video users. The simulation results show that the proposed algorithm can guarantee the QoE and the fairness for all the video users, with little impact on the average throughput of the background traffic users.

  17. Decoding of interleaved Reed-Solomon codes using improved power decoding

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde ne Nielsen, Johan

    2017-01-01

    We propose a new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes that can decode, with high probability, a random error of relative weight 1 − R^(m/(m+1)) at all code rates R, in time polynomial in the code length n. For m > 2, this is an asymptotic improvement over the previous... state-of-the-art for all rates, and the first improvement for R > 1/3 in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius.

  18. Low-Power Bitstream-Residual Decoder for H.264/AVC Baseline Profile Decoding

    Directory of Open Access Journals (Sweden)

    Xu Ke

    2009-01-01

    Full Text Available We present the design and VLSI implementation of a novel low-power bitstream-residual decoder for the H.264/AVC baseline profile. It comprises a syntax parser, a parameter decoder, and an Inverse Quantization Inverse Transform (IQIT) decoder. The syntax parser detects and decodes each incoming codeword in the bitstream under the control of a hierarchical Finite State Machine (FSM); the IQIT decoder performs inverse transform and quantization with pipelining and parallelism. Various power reduction techniques, such as data-driven operation based on statistical results, nonuniform partitioning, precomputation, guarded evaluation, hierarchical FSM decomposition, the TAG method, zero-block skipping, and clock gating, are adopted and integrated throughout the bitstream-residual decoder. With this innovative architecture, the proposed design is able to decode QCIF video sequences at 30 fps at a clock rate as low as 1.5 MHz. A prototype H.264/AVC baseline decoding chip utilizing the proposed decoder was fabricated in UMC 0.18 µm 1P6M CMOS technology. The proposed design was measured under supplies from 1 V to 1.8 V in 0.1 V steps. It dissipates 76 µW at 1 V and 253 µW at 1.8 V.

  19. Multiview video plus depth transmission via virtual-view-assisted complementary down/upsampling.

    Science.gov (United States)

    Jin, Zhi; Tillo, Tammam; Xiao, Jimin; Zhao, Yao

    Multiview video plus depth is a popular 3D video format which can provide viewers with a vivid 3D experience. However, its requirements in terms of computational complexity and transmission bandwidth are greater than those of conventional 2D video. To mitigate these limitations, some works have proposed reducing the amount of transmitted data by adopting different resolutions for different views; the transmitted video is consequently called mixed-resolution video. In order to further reduce the transmitted data while maintaining good quality at the decoder side, in this paper we propose a down/upsampling algorithm for 3D multiview video which systematically takes into account the video encoder and decoder. At the encoder side, the rows of the two adjacent views are downsampled in an interlacing and complementary fashion, whereas, at the decoder side, the discarded pixels are recovered by fusing the virtual view pixels with the directionally interpolated pixels from the complementary downsampled views. Moreover, the patterns of the texture surrounding the discarded pixels are used to aid the data fusion, so as to enhance edge recovery. Meanwhile, with the assistance of virtual views, at the decoder side, the proposed approach can effectively recover the discarded high-frequency details. The experimental results demonstrate the superior performance of the proposed framework.
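
    The virtual-view-assisted fusion at the decoder is beyond a short example, but the complementary interlaced row down-sampling itself is easy to sketch. Below, the left view keeps the even rows and the right view the odd rows, and the discarded rows are recovered by plain vertical interpolation only (Python/NumPy; one reading of the scheme, with the virtual-view fusion and directional interpolation omitted).

```python
import numpy as np

def downsample_pair(left, right):
    """Complementary interlaced row down-sampling: the left view keeps the even
    rows, the right view keeps the odd rows (one reading of this record)."""
    return left[0::2, :].copy(), right[1::2, :].copy()

def upsample_missing_rows(kept, keep_even, full_height):
    """Recover the discarded rows by vertical interpolation only; the paper
    instead fuses virtual-view pixels with directional interpolation."""
    h, w = kept.shape
    out = np.zeros((full_height, w), dtype=np.float64)
    start = 0 if keep_even else 1
    out[start::2, :] = kept
    for r in range(1 - start, full_height, 2):          # the discarded rows
        above = out[r - 1] if r - 1 >= 0 else out[r + 1]
        below = out[r + 1] if r + 1 < full_height else out[r - 1]
        out[r] = 0.5 * (above + below)
    return out
```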

  20. On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde

    2013-01-01

    We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...

  1. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for an UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  2. Bounded-Angle Iterative Decoding of LDPC Codes

    Science.gov (United States)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  3. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.

  4. Optimized static and video EEG rapid serial visual presentation (RSVP) paradigm based on motion surprise computation

    Science.gov (United States)

    Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan

    2017-05-01

    In this paper, we describe an algorithm and system for optimizing search and detection performance for "items of interest" (IOI) in large-sized images and videos that employ the Rapid Serial Visual Presentation (RSVP) based EEG paradigm and surprise algorithms that incorporate motion processing to determine whether static or video RSVP is used. The system works by first computing a motion surprise map on image sub-regions (chips) of incoming sensor video data and then uses those surprise maps to label the chips as either "static" or "moving". This information tells the system whether to use a static or video RSVP presentation and decoding algorithm in order to optimize EEG based detection of IOI in each chip. Using this method, we are able to demonstrate classification of a series of image regions from video with an Az (area under the ROC curve) value of 1, indicating perfect classification, over a range of display frequencies and video speeds.

  5. GOP-based channel rate allocation using genetic algorithm for scalable video streaming over error-prone networks.

    Science.gov (United States)

    Fang, Tao; Chau, Lap-Pui

    2006-06-01

    In this paper, we address the problem of unequal error protection (UEP) for scalable video transmission over a wireless packet-erasure channel. Unequal amounts of protection are allocated to the different frames (I- or P-frame) of a group-of-pictures (GOP), and in each frame, unequal amounts of protection are allocated to the progressive bit-stream of scalable video to provide a graceful degradation of video quality as the packet loss rate varies. We use a genetic algorithm (GA) to quickly obtain the allocation pattern, which is hard to obtain with conventional methods such as hill climbing. Theoretical analysis and experimental results both demonstrate the advantage of the proposed algorithm.
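
    The paper's rate-distortion model and GA details are not reproduced in this record. As an illustration of the kind of search involved, the sketch below evolves an allocation of a fixed FEC budget across the layers of one GOP, scoring candidates by the expected number of consecutively decodable layers under an independent packet-loss model (Python/NumPy; all parameters and the fitness model are assumptions, not the paper's).

```python
import numpy as np
from math import comb

K, LAYERS, BUDGET, P_LOSS = 4, 4, 8, 0.15   # illustrative parameters

def layer_ok_prob(fec, k=K, p=P_LOSS):
    """P(layer recoverable) = P(at least k of k+fec packets arrive)."""
    n = k + fec
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

def fitness(alloc):
    """Expected number of consecutively decodable layers of a progressive stream."""
    expected, prefix = 0.0, 1.0
    for fec in alloc:
        prefix *= layer_ok_prob(int(fec))
        expected += prefix
    return expected

def ga(pop_size=40, gens=60, rng=np.random.default_rng(0)):
    pop = [rng.multinomial(BUDGET, np.ones(LAYERS) / LAYERS) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = rng.integers(1, LAYERS)
            child = np.concatenate([parents[a][:cut], parents[b][cut:]])
            child[rng.integers(LAYERS)] += 1            # mutation: add one FEC packet
            while child.sum() > BUDGET:                 # repair to the total budget
                child[np.argmax(child)] -= 1
            while child.sum() < BUDGET:
                child[rng.integers(LAYERS)] += 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(ga())   # typically more protection on the base layer than on enhancement layers
```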

  6. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

    Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy.

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md Shamin; Wahid, Khan A

    2015-04-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4-7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial.
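
    The dictionary-based colour generation scheme itself is not detailed in this record. A much cruder stand-in, shown below, simply reuses the chroma planes of the most recent colour frame for the following grey-scale frames in YCrCb space (Python/OpenCV; the interval, colour space and frame format are assumptions).

```python
import cv2
import numpy as np

def colourise(frames, colour_interval=10):
    """Every `colour_interval`-th frame is kept in colour; in between, its Cr/Cb
    planes are reused to colourise the luma-only frames (illustrative only)."""
    out, cr, cb = [], None, None
    for k, frame in enumerate(frames):                 # frames: BGR uint8 arrays
        ycc = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
        y = ycc[:, :, 0]
        if k % colour_interval == 0:                   # "colour frame": keep its chroma
            cr, cb = ycc[:, :, 1], ycc[:, :, 2]
            out.append(frame)
        else:                                          # "grey-scale frame": borrow chroma
            rebuilt = cv2.cvtColor(np.dstack([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
            out.append(rebuilt)
    return out
```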

  8. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems

    Science.gov (United States)

    Xu, Huihui; Jiang, Mingyan

    2015-07-01

    Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.

  9. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    Full Text Available We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications, thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with successively three sets of descriptors: a temporal one, and then two spatial ones. The third spatial descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  10. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405

  11. Variable disparity-motion estimation based fast three-view video coding

    Science.gov (United States)

    Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo

    2009-02-01

    In this paper, variable disparity-motion estimation (VDME) based 3-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are applied for effective, fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of the accuracy of disparity estimation and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm achieves PSNRs of 37.66 and 40.55 dB, with processing times of 0.139 and 0.124 sec/frame, respectively.

  12. Using image processing technology combined with decision tree algorithm in laryngeal video stroboscope automatic identification of common vocal fold diseases.

    Science.gov (United States)

    Jeffrey Kuo, Chung-Feng; Wang, Po-Chun; Chu, Yueng-Hsiang; Wang, Hsing-Won; Lai, Chun-Yu

    2013-10-01

    This study used the actual laryngeal video stroboscope videos taken by physicians in clinical practice as the samples for experimental analysis. The samples were dynamic vocal fold videos. Image processing technology was used to automatically capture the image of the largest glottal area from the video to obtain the physiological data of the vocal folds. In this study, an automatic vocal fold disease identification system was designed, which can obtain the physiological parameters for normal vocal folds, vocal paralysis and vocal nodules from image processing according to the pathological features. The decision tree algorithm was used as the classifier of the vocal fold diseases. The identification rate was 92.6%, and the identification rate with an image recognition improvement processing procedure after classification can be improved to 98.7%. Hence, the proposed system has value in clinical practices. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
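
    The physiological parameters and the exact decision-tree classifier are specific to the study; the sketch below only illustrates the final classification stage with scikit-learn on hypothetical per-video features (the feature names, values and labels are invented for illustration).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-video features extracted at the frame of largest glottal area,
# e.g. [glottal_area_ratio, left/right amplitude asymmetry, edge irregularity].
X = np.array([[0.42, 0.05, 0.10],    # normal
              [0.10, 0.60, 0.15],    # paralysis
              [0.35, 0.08, 0.55],    # nodules
              [0.45, 0.04, 0.12],
              [0.12, 0.55, 0.20],
              [0.33, 0.10, 0.50]])
y = np.array([0, 1, 2, 0, 1, 2])      # 0 = normal, 1 = paralysis, 2 = nodules

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.11, 0.58, 0.18]]))   # expect class 1 (paralysis) on this toy input
```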

  13. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs  are nested linear codes and matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance....

  14. Decoding Codes on Graphs-Probabilistic Decoding

    Indian Academy of Sciences (India)

Decoding Codes on Graphs - Probabilistic Decoding. A S Madhu and Aditya Nori. Resonance – Journal of Science Education, General Article, Volume 9, Issue 2, February 2004, pp. 39-49.

  15. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side...... information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm...... consists of a variable block size based Y, U and V component motion estimation and an adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested and compared with the results of an executable DVC codec released by DISCOVER group (DIStributed COding for Video sERvices). RD...

  16. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is pos...

  17. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme decoding with good performance is possible as lo...

  18. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    Science.gov (United States)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. This implementation of the algorithm utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision, including enabling an intelligent robot to "see" for path planning and obstacle avoidance.
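
    A rough sketch of the histogram-reduction idea described above (Python/NumPy): each image column of the disparity map is collapsed into a disparity histogram, and bins with enough support mark an obstacle. The parameter values and the exact mapping are assumptions, not the paper's implementation.

      import numpy as np

      def xh_map(disparity, n_disp=64, threshold=20):
          # Collapse each image column of the disparity map into a histogram of
          # disparities; well-supported (column, disparity) cells mark obstacles.
          h, w = disparity.shape
          grid = np.zeros((n_disp, w), dtype=np.int32)
          for col in range(w):
              d = disparity[:, col]
              d = d[(d > 0) & (d < n_disp)]          # drop invalid disparities
              grid[:, col] = np.bincount(d.astype(np.int64), minlength=n_disp)
          return grid > threshold                    # True where an obstacle is likely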

  19. Video-based eyetracking methods and algorithms in head-mounted displays

    Science.gov (United States)

    Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.

    2006-05-01

    Head pose is utilized to approximate a user’s line-of-sight for real-time image rendering and interaction in most of the 3D visualization applications using head-mounted displays (HMD). The eye often reaches an object of interest before the completion of most head movements. It is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked-HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.

  20. Low Complexity Approach for High Throughput Belief-Propagation based Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    BOT, A.

    2013-11-01

Full Text Available The paper proposes a low-complexity belief propagation (BP) based decoding algorithm for LDPC codes. In spite of the iterative nature of the decoding process, the proposed algorithm provides both reduced complexity and improved BER performance compared with the classic min-sum (MS) algorithm generally used for hardware implementations. Linear approximations of the check-node update function are used in order to reduce the complexity of the BP algorithm. Considering this decoding approach, an FPGA based hardware architecture is proposed for implementing the decoding algorithm, aiming to increase the decoder throughput. FPGA technology was chosen for the LDPC decoder implementation due to its parallel computation and reconfiguration capabilities. The obtained results show improvements regarding decoding throughput and BER performance compared with state-of-the-art approaches.
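
    For context, a sketch of a check-node update in exact sum-product form next to a low-complexity approximation with a linear (offset) correction; the paper uses its own linear fit of the update function, so the constants below are illustrative only.

      import numpy as np

      def check_node_exact(llrs):
          # Exact sum-product check-node update (leave-one-out) for one check node.
          out = np.empty_like(llrs, dtype=np.float64)
          for j in range(len(llrs)):
              others = np.delete(llrs, j)
              t = np.prod(np.tanh(others / 2.0))
              out[j] = 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))
          return out

      def check_node_offset_min_sum(llrs, beta=0.5):
          # Min-sum with a simple linear (offset) correction term.
          out = np.empty_like(llrs, dtype=np.float64)
          for j in range(len(llrs)):
              others = np.delete(llrs, j)
              sign = np.prod(np.sign(others))
              mag = max(np.min(np.abs(others)) - beta, 0.0)
              out[j] = sign * mag
          return out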

  1. Time-Out Lattice Sequential Decoding for the MIMO ARQ Channel

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    The optimal diversity-multiplexing-delay tradeoff for the multi-input multi-output (MIMO) automatic repeat request (ARQ) channel can be achieved using incremental redundancy lattice space-time codes coupled with a list decoder for joint error detection and correction. Such a decoder is based on the minimum mean-square error lattice decoding principle which is implemented using sphere decoding algorithms. However, sphere decoders suffer from high computational complexity for low-to-moderate signal-to-noise ratios, especially for large signal dimensions. In this paper, we would like to construct a more efficient decoder that is capable of achieving the optimal tradeoff with much lower complexity. In particular, we will study the throughput-performance-complexity tradeoffs in sequential decoding algorithms and the effect of preprocessing and termination strategies. We show, analytically and via simulation, that using the lattice sequential decoder that implements a time-out algorithm for joint error detection an...

  2. Neuroprosthetic Decoder Training as Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2016-05-01

Full Text Available Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.

  3. High Quality Real-Time Video with Scanning Electron Microscope Using Total Variation Algorithm on a Graphics Processing Unit

    Science.gov (United States)

    Ouarti, Nizar; Sauvet, Bruno; Régnier, Stéphane

    2012-04-01

The scanning electron microscope (SEM) is usually dedicated to taking pictures of micro-nanoscopic objects. In the present study, we asked whether a SEM can be converted into a real-time video display. To this end, we designed a new methodology. We use the slow mode of the SEM to acquire a high quality reference image that can then be used to estimate the optimal parameters that regularize the signal for a given method. Here, we employ Total Variation, a method which minimizes the noise and regularizes the image. An optimal Lagrangian multiplier can be computed that regularizes the image efficiently. We showed that a limited number of iterations of the Total Variation algorithm can lead to an acceptable quality of regularization. This algorithm is parallel and deployed on a Graphics Processing Unit to obtain real-time, high quality video with a SEM. It opens the possibility of real-time interaction at micro-nanoscales.
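
    A compact sketch of the kind of regularization step described above: a fixed, small number of smoothed Total Variation gradient-descent iterations, with a weight lam playing the role of the Lagrangian multiplier. The values and the plain gradient-descent scheme are illustrative assumptions; the paper's GPU implementation may differ.

      import numpy as np

      def tv_denoise(f, lam=0.1, tau=0.1, n_iters=50, eps=1e-6):
          # Minimise  TV(u) + (lam/2)*||u - f||^2  by smoothed gradient descent.
          u = f.astype(np.float64).copy()
          for _ in range(n_iters):
              gy, gx = np.gradient(u)                       # axis 0 = rows, axis 1 = cols
              norm = np.sqrt(gx**2 + gy**2 + eps)
              div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
              u += tau * (div - lam * (u - f))
          return u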

  4. High Speed Frame Synchronization and Viterbi Decoding

    DEFF Research Database (Denmark)

    Paaske, Erik; Justesen, Jørn; Larsen, Knud J.

    1996-01-01

    The purpose of Phase 1 of the study is to describe the system structure and algorithms in sufficient detail to allow drawing the high level architecture of units containing frame synchronization and Viterbi decoding. The systems we consider are high data rate space communication systems. Also...... components. Node synchronization performed within a Viterbi decoder is discussed, and algorithms for frame synchronization are described and analyzed. We present a list of system configurations that we find potentially useful. Further, the high level architecture of units that contain frame synchronization...

  5. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  6. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Chris Winstead

    2008-04-01

    Full Text Available This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.

  7. EXIT Chart Analysis of Binary Message-Passing Decoders

    DEFF Research Database (Denmark)

    Lechner, Gottfried; Pedersen, Troels; Kramer, Gerhard

    2007-01-01

Binary message-passing decoders for LDPC codes are analyzed using EXIT charts. For the analysis, the variable node decoder performs all computations in the L-value domain. For the special case of a hard decision channel, this leads to the well-known Gallager B algorithm, while the analysis can...

  8. LDPC Codes--Structural Analysis and Decoding Techniques

    Science.gov (United States)

    Zhang, Xiaojie

    2012-01-01

    Low-density parity-check (LDPC) codes have been the focus of much research over the past decade thanks to their near Shannon limit performance and to their efficient message-passing (MP) decoding algorithms. However, the error floor phenomenon observed in MP decoding, which manifests itself as an abrupt change in the slope of the error-rate curve,…

  9. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

The identification of moving objects is a fundamental step in computer vision processing chains. The development of low cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and cannot run in real time on a general-purpose CPU. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that improve on previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
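
    For reference, the software counterpart of the background model discussed above can be exercised directly through OpenCV; a small usage sketch follows (the file name and parameters are placeholders, and MOG2 is OpenCV's current GMM variant, not necessarily the exact formulation the cores implement).

      import cv2

      cap = cv2.VideoCapture("input_1080p.mp4")            # hypothetical input file
      mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                               detectShadows=False)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          fg_mask = mog.apply(frame)                        # per-pixel background/foreground label
      cap.release()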

  10. Distributed source coding of video with non-stationary side-information

    NARCIS (Netherlands)

    Meyer, P.F.A.; Westerlaken, R.P.; Klein Gunnewiek, R.; Lagendijk, R.L.

    2005-01-01

In distributed video coding, the complexity of the video encoder is reduced at the cost of a more complex video decoder. Using the principles of Slepian and Wolf, video compression is then carried out using channel coding principles, under the assumption that the video decoder can temporally predict

  11. Study on the Detection of Moving Target in the Mining Method Based on Hybrid Algorithm for Sports Video Analysis

    Directory of Open Access Journals (Sweden)

    Huang Tian

    2014-10-01

Full Text Available Moving object detection and tracking is an active research direction in computer vision and image processing. Building on an analysis of commonly used moving-target detection and tracking algorithms, this work focuses on tracking non-rigid targets in sports video. In sports video, non-rigid athletes often deform during movement and may become occluded. At the same time, the surge of media data makes fast search and query increasingly difficult. Most users want to quickly extract the content of interest and the implicit knowledge (concepts, rules, models and correlations) from multimedia data, exploit them for fast retrieval and query, and obtain hierarchical decision support for problem solving. Taking the moving objects in sports video as the object of study, this work conducts systematic research at the theoretical level and in the technical framework, mining layer by layer from low-level motion features up to high-level motion semantics. This not only supports users in finding information quickly, but can also provide decision support for solving the user's problem.

  12. Iterative List Decoding of Concatenated Source-Channel Codes

    Science.gov (United States)

    Hedayat, Ahmadreza; Nosratinia, Aria

    2005-12-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLC's is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  13. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLC's is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  14. Completion time reduction in instantly decodable network coding through decoding delay control

    KAUST Repository

    Douik, Ahmed S.

    2014-12-01

For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to completely act against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' individual and overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant for harsh erasure scenarios.

  15. Multi-hypothesis distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Zamarin, Marco; Forchhammer, Søren

    2013-01-01

    Distributed Video Coding (DVC) is a video coding paradigm that exploits the source statistics at the decoder based on the availability of the Side Information (SI). Stereo sequences are constituted by two views to give the user an illusion of depth. In this paper, we present a DVC decoder...

  16. Decoding Reed-Solomon Codes beyond half the minimum distance

    DEFF Research Database (Denmark)

    Høholdt, Tom; Nielsen, Rasmus Refslund

    1999-01-01

We describe an efficient implementation of M. Sudan's algorithm for decoding Reed-Solomon codes beyond half the minimum distance. Furthermore we calculate an upper bound on the probability of getting more than one codeword as output...

  17. Reduced-computation trellis decoder with inherent parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, S.J.

    1986-01-01

A new breadth-first trellis decoding algorithm is introduced for application to sequence estimation in digital data transmission. The high degree of inherent parallelism and simple data flow make a parallel processing implementation attractive. The proposed algorithm is shown to exhibit error-rate vs. average computational complexity behavior superior to that of the well-known Viterbi algorithm. The decoding algorithm uses recursively calculated path metrics common to trellis decoding. The threshold rule used causes the number of survivor paths to adapt to the channel noise actually encountered. The low average decoding load is exploited by assigning computational capability just in excess of the average requirement, and buffering the received signal samples. Bounds on the event-error rate and average number of survivors are produced, and efficient trellis search procedures are devised for their evaluation. Performance is tested for the AWGN channel with new multi-level codes, conventional binary convolutional codes, and simple intersymbol interference responses.

  18. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

Full Text Available Circuits and systems able to process high quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed with commercial FPGA devices as the target and provides speed and logic resource occupation that improve on previously proposed implementations. The circuit, when implemented on Virtex6 or StratixIV, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.

  19. Video time encoding machines.

    Science.gov (United States)

    Lazar, Aurel A; Pnevmatikakis, Eftychios A

    2011-03-01

    We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
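
    A toy sketch of the integrate-and-fire time encoding mentioned above: the (biased) signal is integrated and a spike time is emitted whenever the integral crosses a threshold. The parameter values are illustrative assumptions, and the reset used here is the simplest possible variant.

      import numpy as np

      def integrate_and_fire_encode(signal, dt, bias=1.0, kappa=1.0, delta=0.02):
          # Integrate (signal + bias)/kappa; emit a spike and reset at each threshold crossing.
          spikes, integral, t = [], 0.0, 0.0
          for sample in signal:
              integral += (sample + bias) / kappa * dt
              t += dt
              if integral >= delta:
                  spikes.append(t)
                  integral -= delta
          return np.array(spikes)

      # Example: encode a slow sinusoid sampled at 1 kHz
      t = np.arange(0.0, 1.0, 1e-3)
      spike_times = integrate_and_fire_encode(0.5 * np.sin(2 * np.pi * 3 * t), dt=1e-3)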

  20. Design and Implementation of Dual-Mode Wireless Video Monitoring System

    Directory of Open Access Journals (Sweden)

    BAO Song-Jian

    2014-10-01

Full Text Available Dual-mode wireless video transmission faces two major problems. First, the delay difference between the two links causes asynchronous reception and decoding frame errors; second, the bandwidth mismatch between the two networks creates a scheduling problem. In order to solve these two problems, a TD-SCDMA/CDMA2000 1x dual-mode wireless video transmission design method is proposed. To address the decoding frame errors, the design adds frame identification and packet preprocessing at the sending end and synchronized recombination at the receiving end. To address the scheduling problem, a cooperative wireless-channel and video data transmission scheduling management algorithm is proposed in the design.

  1. JSATS Decoder Software Manual

    Energy Technology Data Exchange (ETDEWEB)

    Flory, Adam E.; Lamarche, Brian L.; Weiland, Mark A.

    2013-05-01

    The Juvenile Salmon Acoustic Telemetry System (JSATS) Decoder is a software application that converts a digitized acoustic signal (a waveform stored in the .bwm file format) into a list of potential JSATS Acoustic MicroTransmitter (AMT) tagcodes along with other data about the signal including time of arrival and signal to noise ratios (SNR). This software is capable of decoding single files, directories, and viewing raw acoustic waveforms. When coupled with the JSATS Detector, the Decoder is capable of decoding in ‘real-time’ and can also provide statistical information about acoustic beacons placed within receive range of hydrophones within a JSATS array. This document details the features and functionality of the software. The document begins with software installation instructions (section 2), followed in order by instructions for decoder setup (section 3), decoding process initiation (section 4), then monitoring of beacons (section 5) using real-time decoding features. The last section in the manual describes the beacon, beacon statistics, and the results file formats. This document does not consider the raw binary waveform file format.

  2. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra high definition videos. One of the important features in HEVC is the adoption of a quad-tree based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTB) with variable-block-sized prediction and coding. To do this, each CTB needs to be recursively partitioned into coding units (CU), prediction units (PU) and transform units (TU) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of the quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the maximum possible depth of the quad-tree partition of the CTB. With the constrained partition depth, the proposed method can save a considerable amount of encoding time. Experimental results with HM10.1 show that the average time saving is about 13.4% with an increase in encoded BD-rate of only 0.02%, a smaller performance degradation than that of other similar methods.
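
    A sketch of the depth-constraint idea described above: the mean Sobel gradient magnitude of a CTB is mapped to a maximum allowed quad-tree depth, so flat blocks are spared deep recursive partitioning. The thresholds are placeholders, not the paper's tuned values.

      import numpy as np
      from scipy import ndimage

      def max_partition_depth(ctb, thresholds=(4.0, 12.0, 30.0)):
          # Edge strength of the CTB via Sobel gradients.
          gx = ndimage.sobel(ctb.astype(np.float64), axis=1)
          gy = ndimage.sobel(ctb.astype(np.float64), axis=0)
          strength = np.mean(np.hypot(gx, gy))
          depth = 0
          for t in thresholds:                 # stronger edges allow deeper partitioning
              if strength > t:
                  depth += 1
          return depth                         # 0 (flat CTB) .. 3 (highly textured CTB)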

  3. High Speed Viterbi Decoder Architecture

    DEFF Research Database (Denmark)

    Paaske, Erik; Andersen, Jakob Dahl

    1998-01-01

    The fastest commercially available Viterbi decoders for the (171,133) standard rate 1/2 code operate with a decoding speed of 40-50 Mbit/s (net data rate). In this paper we present a suitable architecture for decoders operating with decoding speeds of 150-300 Mbit/s....

  4. Hardware acceleration of lucky-region fusion (LRF) algorithm for high-performance real-time video processing

    Science.gov (United States)

    Browning, Tyler; Jackson, Christopher; Cayci, Furkan; Carhart, Gary W.; Liu, J. J.; Kiamilev, Fouad

    2015-06-01

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm extracts sharp regions of an image obtained from a series of short exposure frames from fast, high-resolution image sensors, and fuses the sharp regions into a final, improved image. In our previous research, the LRF algorithm had been implemented on CPU and field programmable gate array (FPGA) platforms. The CPU did not have sufficient processing power to handle real-time processing of video. Last year, we presented a real-time LRF implementation using an FPGA. However, due to the slow register-transfer level (RTL) development and simulation time, it was difficult to adjust and discover optimal LRF settings such as Gaussian kernel radius and synthetic frame buffer size. To overcome this limitation, we implemented the LRF algorithm on an off-the-shelf graphical processing unit (GPU) in order to take advantage of built-in parallelization and significantly faster development time. Our initial results show that the unoptimized GPU implementation has almost comparable turbulence mitigation to the FPGA version. In our presentation, we will explore optimization of the LRF algorithm on the GPU to achieve higher performance results, and adding new performance capabilities such as image stabilization.

5. Improving Video Segmentation by Fusing Depth Cues and the Visual Background Extractor (ViBe) Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoqin Zhou

    2017-05-01

Full Text Available Depth-sensing technology has led to broad applications of inexpensive depth cameras that can capture human motion and scenes in three-dimensional space. Background subtraction algorithms can be improved by fusing color and depth cues, thereby allowing many issues encountered in classical color segmentation to be solved. In this paper, we propose a new fusion method that combines depth and color information for foreground segmentation based on an advanced color-based algorithm. First, a background model and a depth model are developed. Then, based on these models, we propose a new updating strategy that can eliminate ghosting and black shadows almost completely. Extensive experiments have been performed to compare the proposed algorithm with other, conventional RGB-D (Red-Green-Blue and Depth) algorithms. The experimental results suggest that our method extracts foregrounds with higher effectiveness and efficiency.

  6. Unsupervised adaptation of brain machine interface decoders

    Directory of Open Access Journals (Sweden)

    Tayfun eGürel

    2012-11-01

Full Text Available The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is by use of periodical calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI, and (ii) between two calibration phases the BMI performance might not be stable but continuously decrease. A better alternative would be for the BMI decoder to continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a two-dimensional computer screen. For initially unknown and nonstationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also work with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.

  7. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  8. Joint disparity and motion estimation using optical flow for multiview Distributed Video Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Raket, Lars Lau; Brites, Catarina

    2014-01-01

    Distributed Video Coding (DVC) is a video coding paradigm where the source statistics are exploited at the decoder based on the availability of Side Information (SI). In a monoview video codec, the SI is generated by exploiting the temporal redundancy of the video, through motion estimation and c...... flow. The proposed SI generation algorithm allows for RD improvements up to 10% (Bjøntegaard) in bit-rate savings, when compared with block-based SI generation algorithms leveraging temporal and inter-view redundancies....... and compensation techniques. In a multiview scenario, the correlation between views can also be exploited to further enhance the overall Rate-Distortion (RD) performance. Thus, to generate SI in a multiview distributed coding scenario, a joint disparity and motion estimation technique is proposed, based on optical...

  9. On Decoding Grid Cell Population Codes Using Approximate Belief Propagation.

    Science.gov (United States)

    Yoo, Yongseok; Kim, Woori

    2017-03-01

    Neural systems are inherently noisy. One well-studied example of a noise reduction mechanism in the brain is the population code, where representing a variable with multiple neurons allows the encoded variable to be recovered with fewer errors. Studies have assumed ideal observer models for decoding population codes, and the manner in which information in the neural population can be retrieved remains elusive. This letter addresses a mechanism by which realistic neural circuits can recover encoded variables. Specifically, the decoding problem of recovering a spatial location from populations of grid cells is studied using belief propagation. We extend the belief propagation decoding algorithm in two aspects. First, beliefs are approximated rather than being calculated exactly. Second, decoding noises are introduced into the decoding circuits. Numerical simulations demonstrate that beliefs can be effectively approximated by combining polynomial nonlinearities with divisive normalization. This approximate belief propagation algorithm is tolerant to decoding noises. Thus, this letter presents a realistic model for decoding neural population codes and investigates fault-tolerant information retrieval mechanisms in the brain.

  10. A Genetic Algorithm and Fuzzy Logic Approach for Video Shot Boundary Detection.

    Science.gov (United States)

    Thounaojam, Dalton Meitei; Khelchandra, Thongam; Manglem Singh, Kh; Roy, Sudipta

    2016-01-01

This paper proposes a shot boundary detection approach using a Genetic Algorithm and Fuzzy Logic. In this approach, the membership functions of the fuzzy system are calculated using a Genetic Algorithm by taking pre-observed actual values for shot boundaries. The classification of the types of shot transitions is done by the fuzzy system. Experimental results show that the accuracy of the shot boundary detection increases with the increase in iterations or generations of the GA optimization process. The proposed system is compared to the latest techniques and yields better results in terms of the F1-score parameter.

  11. Turbo decoder core design for system development

    Science.gov (United States)

    Chen, Xiaoyi; Yao, Qingdong; Liu, Peng

    2003-04-01

Due to their near Shannon-capacity performance, turbo codes have received a considerable amount of attention since their introduction. They are particularly attractive for cellular communication systems and have been included in the specifications for both the WCDMA (UMTS) and cdma2000 third-generation cellular standards. The log-MAP decoding algorithm and some techniques used to reduce its complexity have been discussed previously. However, when turbo codes are applied to wireless communications, the decoding rate becomes the bottleneck; a software implementation is not realistic at today's DSP processing rates, so a hardware design is required to realize the decoding. The purpose of this paper is to present a full ASIC design approach for turbo decoding. Several techniques are added to the basic log-MAP algorithm to simplify the design and improve performance. With the log-MAP algorithm, the Jacobi logarithm is computed exactly using max*(x, y) = ln(exp(x) + exp(y)) = max(x, y) + fc(|y - x|). The correction function fc(|y - x|) is important because there will be a 0.5 dB SNR loss without it. A linear approximation can be used, and the linear parameters were selected carefully to suit the hardware realization in our design. Quantization is also important in the ASIC design, both to save power and to ensure performance; we adopt a compromise scheme that saves power while maintaining good BER behavior. Many noisy frames can be corrected with a few iterations, while some of the noisier frames need to experience the full number of iterations (NOI). Thus, if the decoder could stop the iteration as soon as the frame becomes correct, the average NOI would be reduced. There are many ways to stop the iteration, such as CRC checks and comparisons; we adopt a stopping criterion that requires significantly less computation and much less storage. For long frames the memory for storing the entire frame of the forward probability α or the backward probability β can be very large. Available products all
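
    The Jacobi logarithm and its linear correction can be written down directly; a small sketch follows, where the slope and offset of the linear fit are illustrative values (the paper selects its own hardware-friendly parameters).

      import numpy as np

      def max_star_exact(x, y):
          # ln(e^x + e^y) = max(x, y) + ln(1 + e^-|x - y|)
          return max(x, y) + np.log1p(np.exp(-abs(x - y)))

      def max_star_linear(x, y, a=-0.25, b=0.6875):
          # Linear approximation of the correction term fc(|y - x|), clipped at zero.
          return max(x, y) + max(a * abs(x - y) + b, 0.0)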

  12. Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes

    Science.gov (United States)

    Dubreu, Christine; Manzanera, Antoine; Bohain, Eric

    2008-04-01

    As target tracking is arousing more and more interest, the necessity to reliably assess tracking algorithms in any conditions is becoming essential. The evaluation of such algorithms requires a database of sequences representative of the whole range of conditions in which the tracking system is likely to operate, together with its associated ground truth. However, building such a database with real sequences, and collecting the associated ground truth appears to be hardly possible and very time-consuming. Therefore, more and more often, synthetic sequences are generated by complex and heavy simulation platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using simple synthetic sequences generated without such complex simulation platforms. These sequences are generated from a finite number of discriminating parameters, and are statistically representative, as regards these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used for low-level tracking algorithms evaluation in any operating conditions. The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for evaluation of tracking systems on complex-textured objects, and to show how the number of parameters can be increased to synthesize more elaborated scenes and deal with more complex dynamics, including occlusions and three-dimensional deformations.

  13. Conceptual design for a universal Reed-Solomon decoder

    Science.gov (United States)

    Miller, R. L.; Deutsch, L. J.

    1981-01-01

    An algorithm which enables one Reed-Solomon decoder to process other Reed-Solomon encoded data from a different code is presented. The sole requirement is that both codes have the same length, the same rate, and the same field of coefficients. It is pointed out that only very simple pre- and post-processing hardware is needed to resolve an encoder/decoder incompatibility and that no encoder modification is needed.

  14. STACK DECODING OF LINEAR BLOCK CODES FOR DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

    Directory of Open Access Journals (Sweden)

    H. Prashantha Kumar

    2012-03-01

Full Text Available The boundaries between block and convolutional codes have become diffuse after recent advances in the understanding of the trellis structure of block codes and the tail-biting structure of some convolutional codes. Therefore, decoding algorithms traditionally proposed for decoding convolutional codes have been applied to decoding certain classes of block codes. This paper presents the decoding of block codes using a tree structure. Many good block codes are presently known. Several of them have been used in applications ranging from deep space communication to error control in storage systems. But the primary difficulty with applying Viterbi or BCJR algorithms to the decoding of block codes is that, even though they are optimum decoding methods, the promised bit error rates are not achieved in practice at data rates close to capacity. This is because the decoding effort is fixed and grows with block length, and thus only short block length codes can be used. Therefore, an important practical question is whether a suboptimal, realizable soft decision decoding method can be found for block codes. A noteworthy result which provides a partial answer to this question is described in the following sections. This result of near optimum decoding will be used as motivation for the investigation of different soft decision decoding methods for linear block codes which can lead to the development of efficient decoding algorithms. The code tree can be treated as an expanded version of the trellis, where every path is totally distinct from every other path. We have derived the tree structure for the (8, 4) and (16, 11) extended Hamming codes and have succeeded in implementing the soft decision stack algorithm to decode them. For the discrete memoryless channel, gains in excess of 1.5 dB at a bit error rate of 10^-5 with respect to conventional hard decision decoding are demonstrated for these codes.

  15. Fine-Grained Rate Shaping for Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Chen Tsuhan

    2004-01-01

Full Text Available Video streaming over wireless networks faces the challenges of a time-varying packet loss rate and fluctuating bandwidth. In this paper, we focus on streaming precoded video that is both source and channel coded. Dynamic rate shaping has been proposed to “shape” the precompressed video to adapt to the fluctuating bandwidth. In our earlier work, rate shaping was extended to shape the channel coded precompressed video, and to take into account the time-varying packet loss rate as well as the fluctuating bandwidth of the wireless networks. However, prior work on rate shaping can only adjust the rate coarsely. In this paper, we propose “fine-grained rate shaping (FGRS)” to allow for bandwidth adaptation over a wide range of bandwidth and packet loss rate in fine granularities. The video is precoded with fine granularity scalability (FGS) followed by channel coding. Utilizing the fine granularity property of FGS and channel coding, FGRS selectively drops part of the precoded video and still yields a decodable bit-stream at the decoder. Moreover, FGRS optimizes video streaming rather than achieving heuristic objectives as conventional methods do. A two-stage rate-distortion (RD) optimization algorithm is proposed for FGRS. Promising results of FGRS are shown.

  16. Evaluation framework for K-best sphere decoders

    KAUST Repository

    Shen, Chungan

    2010-08-01

While Maximum-Likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for Multiple Input Multiple Output (MIMO) systems with a large number of transmit or receive antennas. Tree-searching type decoder structures such as the Sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature with widely varying performance-to-area and power metrics. In this semi-tutorial paper we present a holistic view of different Sphere decoding and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using the framework, we propose and analyze a novel architecture and compare it to other published approaches. Our goal is to explicitly elucidate the overall advantages and disadvantages of each proposed algorithm in one coherent framework. © 2010 World Scientific Publishing Company.
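
    A simplified K-best detector sketch for a real-valued model, to make the breadth-first tree search concrete: after a QR decomposition the tree is expanded layer by layer and only the K partial candidates with the smallest accumulated metric survive. The constellation, K and the toy channel below are assumptions for illustration.

      import numpy as np

      def k_best_detect(H, y, symbols=(-3, -1, 1, 3), K=8):
          n = H.shape[1]
          Q, R = np.linalg.qr(H)
          z = Q.T @ y
          candidates = [(0.0, [])]              # (partial metric, symbols from layer n-1 down)
          for layer in range(n - 1, -1, -1):
              expanded = []
              for metric, partial in candidates:
                  interf = sum(R[layer, j] * partial[n - 1 - j] for j in range(layer + 1, n))
                  for s in symbols:
                      inc = (z[layer] - interf - R[layer, layer] * s) ** 2
                      expanded.append((metric + inc, partial + [s]))
              expanded.sort(key=lambda c: c[0])
              candidates = expanded[:K]         # keep only the K best partial paths
          return np.array(candidates[0][1][::-1])

      # Toy 2x2 noiseless example: the detector should return the transmitted symbols.
      H = np.array([[1.0, 0.3], [0.2, 0.8]])
      print(k_best_detect(H, H @ np.array([3.0, -1.0])))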

  17. A Time-Consistent Video Segmentation Algorithm Designed for Real-Time Implementation

    Directory of Open Access Journals (Sweden)

    M. El Hassani

    2008-01-01

    Temporal consistency of the segmentation is ensured by incorporating motion information through the use of an improved change-detection mask. This mask is designed using both illumination differences between frames and region segmentation of the previous frame. By considering both pixel and region levels, we obtain a particularly efficient algorithm at a low computational cost, allowing its implementation in real-time on the TriMedia processor for CIF image sequences.

  18. Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Weijun Zeng

    2016-01-01

Full Text Available We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis can track the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis in the context of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor; below this threshold the recovery algorithm is successful, and otherwise it will fail. Simulation results are also provided for verifying the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on peeling decoding algorithms, which focuses on the failure probability of the recovery algorithm, our proposed approach gives an accurate evolution of performance with different parameters of the measurement matrices and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.
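
    To make the peeling mechanism concrete, a toy sketch over an erasure channel: any check equation with exactly one erased position determines that position by parity, and the process repeats until nothing more can be peeled. The (7,4) Hamming example is an illustrative assumption, not the paper's setting.

      import numpy as np

      def peel_decode(H, received):
          # received: list of 0/1 values with None marking erased positions.
          x = list(received)
          progress = True
          while progress:
              progress = False
              for row in H:
                  unknown = [i for i in np.flatnonzero(row) if x[i] is None]
                  if len(unknown) == 1:            # exactly one erasure in this check
                      parity = sum(x[i] for i in np.flatnonzero(row) if x[i] is not None) % 2
                      x[unknown[0]] = parity       # restore even parity
                      progress = True
          return x

      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      received = [None, 0, 1, 1, None, 1, 0]       # codeword [1,0,1,1,0,1,0] with two erasures
      print(peel_decode(H, received))              # recovers [1, 0, 1, 1, 0, 1, 0]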

  19. Duality based optical flow algorithms with applications

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau

    We consider the popular TV-L1 optical flow formulation, and the so-called duality based algorithm for minimizing the TV-L1 energy. The original formulation is extended to allow for vector valued images, and minimization results are given. In addition we consider different definitions of total...... the motion estimation becomes more robust to noise and large displacements, and the computational workload is more than halved compared to usual bidirectional methods. Finally we consider two applications of frame interpolation for distributed video coding. The first of these considers the use of depth data...... to improve interpolation, and the second considers using the information from partially decoded video frames to improve interpolation accuracy in high-motion video sequences....

  20. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    OpenAIRE

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin; Khan A. Wahid

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At ...

  1. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    Science.gov (United States)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video

  2. O2-GIDNC: Beyond instantly decodable network coding

    KAUST Repository

    Aboutorab, Neda

    2013-06-01

    In this paper, we are concerned with extending the graph representation of generalized instantly decodable network coding (GIDNC) to a more general opportunistic network coding (ONC) scenario, referred to as order-2 GIDNC (O2-GIDNC). In the O2-GIDNC scheme, receivers can store non-instantly decodable packets (NIDPs) comprising two of their missing packets, and use them in a systematic way for later decodings. Once this graph representation is found, it can be used to extend the GIDNC graph-based analyses to the proposed O2-GIDNC scheme with a limited increase in complexity. In the proposed O2-GIDNC scheme, the information of the stored NIDPs at the receivers and the decoding opportunities they create can be exploited to improve the broadcast completion time and decoding delay compared to traditional GIDNC scheme. The completion time and decoding delay minimizing algorithms that can operate on the new O2-GIDNC graph are further described. The simulation results show that our proposed O2-GIDNC improves the completion time and decoding delay performance of the traditional GIDNC. © 2013 IEEE.

  3. Encoder-decoder optimization for brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Josh Merel

    2015-06-01

Full Text Available Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and the decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme (the "encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  4. Stopping Rules for Turbo Decoders

    Science.gov (United States)

    Matache, A.; Dolinar, S.; Pollara, F.

    2000-04-01

Decoders for turbo codes are iterative in nature, i.e., they have to perform a certain number of iterations before reaching a satisfactory degree of confidence regarding a frame to be decoded. Until now, standard turbo decoders have used a fixed number of iterations. In this article, we propose some simple "stopping rules" that can be used to reduce the average number of iterations. This technique offers a trade-off between speed and performance and can provide a significant increase in the average decoding speed while not sacrificing decoder performance. We tested several types of stopping rules for turbo decoders. One type is based on comparing decoded bits (hard bit decisions) with previously decoded bits; a second type is based on comparing reliabilities (soft bit decisions) with a threshold; and a third type uses a cyclic redundancy check (CRC) code applied to hard decoded bits. We simulated turbo decoder performance using these rules (including several variations of the first two types) and further required that the decoder cease after 20 iterations if the stopping rule is not yet satisfied. Specifically, we analyzed the decoder-error rates and the average number of iterations for each rule. We found that the average number of iterations was roughly between 4 and 7 for a bit signal-to-noise ratio, E_b/N_0, near the "waterfall" threshold, as compared with the 10 fixed iterations used by the current turbo decoder. In addition, the resulting error rates were noticeably lower than those for 10 fixed iterations, and in fact were very nearly equal to the error rates achieved by a decoder using 20 fixed iterations.
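
    A sketch of the first kind of rule described above, stopping when hard decisions stop changing between iterations; the run_iteration callback is a hypothetical stand-in for one turbo iteration returning the current LLRs, not an API from the article.

      import numpy as np

      def decode_with_stopping(run_iteration, max_iters=20):
          # Stop as soon as the hard decisions agree with those of the previous iteration.
          prev_bits = None
          for it in range(1, max_iters + 1):
              llrs = np.asarray(run_iteration())
              bits = (llrs < 0).astype(np.uint8)          # hard bit decisions
              if prev_bits is not None and np.array_equal(bits, prev_bits):
                  return bits, it                          # decisions stable: stop early
              prev_bits = bits
          return prev_bits, max_iters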

  5. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.

  6. On Complexity, Energy- and Implementation-Efficiency of Channel Decoders

    CERN Document Server

    Kienle, Frank; Meyr, Heinrich

    2010-01-01

    Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most of the currently established efficiency metrics are based on counting operations, thus neglecting important issues like data and storage complexity. In this paper we introduce suitable energy and area efficiency metrics which resolve the aforementioned disadvantages. These are decoded information bits per unit of energy and throughput per unit of area. The efficiency metrics are assessed by various implementations of turbo decoders, LDPC decoders and convolutional decoders. New exploration methodologies are presented, which permit an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. These exploration methodologies are based on efficiency trajectories rather than a single snapshot metric as done in state-of-the-art approaches.

  7. Decoding the human genome

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Antonerakis, S E

    2002-01-01

    Decoding the Human genome is a very up-to-date topic, raising several questions besides the purely scientific ones, in view of the two competing teams (public and private), the ethics of using the results, and the fact that the project went apparently faster and easier than expected. The lecture series will address the following chapters: scientific basis and challenges; ethical and social aspects of genomics.

  8. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in reduced battery life. This is a critical problem that degrades the user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and the wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with existing algorithms.

  9. Low-Complexity Hierarchical Mode Decision Algorithms Targeting VLSI Architecture Design for the H.264/AVC Video Encoder

    Directory of Open Access Journals (Sweden)

    Guilherme Corrêa

    2012-01-01

    Full Text Available In H.264/AVC, the encoding process can occur according to one of the 13 intraframe coding modes or according to one of the 8 available interframe block sizes, besides the SKIP mode. In the Joint Model reference software, the choice of the best mode is performed through exhaustive executions of the entire encoding process, which significantly increases the encoder's computational complexity and sometimes even precludes its use in real-time applications. Considering this context, this work proposes a set of heuristic algorithms targeting hardware architectures that lead to earlier selection of one encoding mode. The number of repetitions of the encoding process is reduced by a factor of 47, at the cost of a relatively small loss in compression performance. When compared to other works, the fast hierarchical mode decision results are significantly more satisfactory in terms of computational complexity reduction, quality, and bit rate. The low-complexity mode decision architecture proposed is thus a very good option for real-time coding of high-resolution videos. The solution is especially interesting for embedded and mobile applications with support for multimedia systems, since it yields good compression rates and image quality with a very high reduction in the encoder complexity.
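
    The flavour of such an early-terminating, hierarchical decision can be sketched as follows: candidate modes are grouped into levels that are tested in a fixed order, and lower levels are skipped as soon as the best rate-distortion (RD) cost found so far is good enough. The grouping, the rd_cost function, and the thresholds are illustrative assumptions; the paper's actual heuristics are hardware-oriented and more elaborate.

```python
# Hedged sketch of a hierarchical (early-terminating) mode decision loop for an
# H.264/AVC-style encoder. rd_cost() and the thresholds are hypothetical.

HIERARCHY = [
    ("SKIP",        ["SKIP"]),
    ("INTER_LARGE", ["16x16", "16x8", "8x16"]),
    ("INTER_SMALL", ["8x8", "8x4", "4x8", "4x4"]),
    ("INTRA",       ["I4x4", "I16x16"]),
]

def decide_mode(macroblock, rd_cost, thresholds):
    """Return (mode, cost): stop at the first level whose best cost beats its threshold."""
    best_mode, best_cost = None, float("inf")
    for level, modes in HIERARCHY:
        for mode in modes:
            cost = rd_cost(macroblock, mode)   # one encoding pass for this candidate mode
            if cost < best_cost:
                best_mode, best_cost = mode, cost
        if best_cost < thresholds[level]:      # early termination: skip the lower levels
            break
    return best_mode, best_cost
```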

  10. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  11. Multi-Trial Guruswami–Sudan Decoding for Generalised Reed–Solomon Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Zeh, Alexander

    2013-01-01

    An iterated refinement procedure for the Guruswami–Sudan list decoding algorithm for Generalised Reed–Solomon codes based on Alekhnovich’s module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list...

  12. High resolution decoding of Multi-Anode Microchannel Array detectors

    Science.gov (United States)

    Kasle, David B.; Morgan, Jeffrey S.

    1991-01-01

    The Multi-Anode Microchannel Array (MAMA) is a photon-counting detector which utilizes a photocathode for photon-to-electron conversion, a microchannel plate (MCP) for signal amplification and a proximity-focused anode array for position sensitivity. The detector electronics decode the position of an event through coincidence discrimination. The decoding algorithm which associates a given event with the appropriate pixel is determined by the geometry of the array. A new algorithm, incorporated into a CMOS Application Specific Integrated Circuit (ASIC) decoder, which improves the pixel spatial resolution is described. The new algorithm does not degrade the detector throughput and does not require any modifications to the detector tube. The standard MAMA detector has a pixel size of 25 x 25 microns, but with the new decoder circuit the pixel size is reduced to 12.5 x 12.5 microns. We have built the first set of decode electronics utilizing the new ASIC chips and report here on the first imaging tests of this system.

  13. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer with a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges in video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video at scale with adequate quality. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience. At the client side, much of this high bandwidth and of the computational power used to decode the video is wasted because the user only watches a small portion (i.e., viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure the viewport segment requested matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines the viewport request time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video
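
    The decoupled scheduling decision can be sketched as follows: the base-layer buffer is kept long to protect against stalls, while the viewport buffer is deliberately kept short so that requested high-quality tiles track the latest head orientation. The buffer targets, the segment/tile bookkeeping, and the request interface are assumptions made for illustration, not details taken from the paper.

```python
from collections import deque

# Minimal sketch of a dual-buffer viewport-adaptive scheduler: low-quality
# segments fill a long base-layer buffer, high-quality viewport tiles are
# requested late so they match the most recent head orientation.

class DualBufferScheduler:
    def __init__(self, base_target=10.0, viewport_target=2.0):
        self.base_buffer = deque()        # (segment_index, duration) of low-quality segments
        self.viewport_buffer = deque()    # (segment_index, tile_id, duration)
        self.base_target = base_target            # seconds of base-layer buffering to maintain
        self.viewport_target = viewport_target    # keep the viewport buffer deliberately short

    @staticmethod
    def buffered_seconds(buf):
        return sum(item[-1] for item in buf)

    def next_request(self, next_base_index, next_viewport_index, head_orientation_tile):
        """Decide what to download next, given buffer status and head orientation."""
        if self.buffered_seconds(self.base_buffer) < self.base_target:
            return ("base", next_base_index)                 # fill the low-quality buffer first
        if self.buffered_seconds(self.viewport_buffer) < self.viewport_target:
            # request the high-quality tile matching the latest head orientation
            return ("viewport", next_viewport_index, head_orientation_tile)
        return None                                          # both buffers full: stay idle
```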

  14. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    Science.gov (United States)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

    A new log-likelihood ratio (LLR) message estimation method is proposed for the polarization-division multiplexing eight-quadrature-amplitude-modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and the way to reduce the pre-decoding bit error rate (BER) of the low-density parity-check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme, i.e., the post-decoding BER of the new scheme is reduced to 50% of that of the traditional post-decoding algorithm.

  15. 47 CFR 15.122 - Closed caption decoder requirements for digital television receivers and converter boxes.

    Science.gov (United States)

    2010-10-01

    ..., decoders must support BOTTOM_TO_TOP scrolling. For windows sharing the same horizontal scan lines on the... separated from the underlying video by a sufficient number of background pixels to insure the foreground is... the trailing “white” pixels of the last character on a row do not bleed into the underlying video. (i...

  16. How Major Depressive Disorder affects the ability to decode multimodal dynamic emotional stimuli

    Directory of Open Access Journals (Sweden)

    FILOMENA SCIBELLI

    2016-09-01

    Full Text Available Most studies investigating the processing of emotions in depressed patients reported impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) which do not reflect the way people experience emotions in everyday life. For this reason, this work proposes to investigate the decoding of emotional expressions in patients affected by Recurrent Major Depressive Disorder (RMDDs) using dynamic audio/video stimuli. RMDDs' performance is compared with the performance of patients with Adjustment Disorder with Depressed Mood (ADs) and healthy subjects (HCs). The experiments involve 27 RMDDs (16 with acute depression, RMDD-A, and 11 in a compensation phase, RMDD-C), 16 ADs and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio) and audio/video clips. The results show that AD patients are significantly less accurate than HCs in decoding fear, anger, happiness, surprise and sadness. RMDD-As are significantly less accurate than HCs in decoding happiness, sadness and surprise. Finally, no significant differences were found between HCs and RMDD-Cs. The different communication channels and the types of emotion play a significant role in limiting the decoding accuracy.

  17. Modified Suboptimal Iterative Decoding for Regular Repeat- Accumulate Coded Signals

    Directory of Open Access Journals (Sweden)

    Muhammad Thamer Nesr

    2017-05-01

    Full Text Available In this work, two algorithms are suggested in order to improve the performance of systematic Repeat-Accumulate (RA) decoding. The first one is accomplished by the insertion of pilot symbols into the data stream entering the encoder. The positions where pilots should be inserted are chosen in such a way as to improve the minimum Hamming distance and/or to reduce the error coefficients of the code. The second proposed algorithm includes the utilization of the inserted pilots to estimate scaling (correction) factors. A two-dimensional correction factor was suggested in order to enhance the performance of traditional min-sum decoding of regular repeat-accumulate codes. An adaptive method for obtaining the correction factors can be achieved by calculating the mean square difference between the values of the received pilots and the a-posteriori bit- and check-node data related to them, as produced by the min-sum (MS) decoder.

  18. Partially blind instantly decodable network codes for lossy feedback environment

    KAUST Repository

    Sorour, Sameh

    2014-09-01

    In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.

  19. Indirect decoding edges for one-shot shape acquisition.

    Science.gov (United States)

    Zhang, Xu; Zhu, Limin; Li, Youfu

    2011-04-01

    Determining the correspondence in coded structured light is challenging, but it is very important in one-shot techniques for 3D shape reconstruction. The problem of decoding stripe patterns can be modeled as matching two code sequences. We propose decoding edges indirectly based on the properties of the stripe pattern, which can be represented as edge code, color code, or mixed code sequences. While traditional methods match two edge code sequences, indirect decoding matches two color sequences or mixed code sequences. The advantages of the proposed method, including a higher Hamming distance, enforced local coherence, and more code information, give indirect decoding excellent performance. Previously, the lack of ground truth has prevented direct comparisons of different decoding algorithms. Here we obtain six benchmark datasets using robust space-time analysis. Five decoding methods are quantitatively evaluated using the ground truth. The comparison results show that our method is robust in complex code situations and that it outperforms the state-of-the-art technique in this area. © 2011 Optical Society of America

  20. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.

    2014-05-01

    Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values result in a good performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variations in complexity with SNR have an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with Element-based Lattice Reduction (ELR) and Complex Lenstra-Lenstra-Lovasz (CLLL) reduction. © 2014 IFIP.

  1. Loneliness and Interpersonal Decoding Skills.

    Science.gov (United States)

    Zakahi, Walter R.; Goss, Blaine

    1995-01-01

    Finds that the romantic loneliness dimension of the Differential Loneliness Scale is related to decoding ability, and that there are moderate linear relationships among several of the dimensions of the Differential Loneliness Scale, the self-report of listening ability, and participants' view of their own decoding ability. (SR)

  2. Hardware Implementation of Serially Concatenated PPM Decoder

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael

    2009-01-01

    A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors. To lower the required memory for storing channel likelihood data and the amounts of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in
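
    The log-domain arithmetic mentioned above is commonly built on the "max-star" (Jacobian logarithm) operation, sketched below. This is a generic illustration of how products become sums and sums become max* in such SISO decoders, not the fixed-point implementation of the FPGA design itself.

```python
import math

# Log-domain arithmetic as commonly used in BCJR/SISO decoders: a probability
# product p1*p2 becomes an addition of logs, and a sum p1+p2 becomes the
# "max-star" operation max(a, b) + log(1 + exp(-|a - b|)).

def log_mul(a, b):
    """log(p1 * p2) from a = log(p1), b = log(p2): multiplication -> addition."""
    return a + b

def max_star(a, b):
    """log(exp(a) + exp(b)), computed stably (the Jacobian logarithm)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Example: combining two branch probabilities in the log domain.
p1, p2 = 0.3, 0.2
a, b = math.log(p1), math.log(p2)
assert abs(log_mul(a, b) - math.log(p1 * p2)) < 1e-12
assert abs(max_star(a, b) - math.log(p1 + p2)) < 1e-12
```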

  3. Algorithms

    Indian Academy of Sciences (India)

    ... i is referred to as the loop-index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  4. Memory-efficient decoding of LDPC codes

    Science.gov (United States)

    Kwok-San Lee, Jason; Thorpe, Jeremy; Hawkins, Jon

    2005-01-01

    We present a low-complexity quantization scheme for the implementation of regular (3,6) LDPC codes. The quantization parameters are optimized to maximize the mutual information between the source and the quantized messages. Using this non-uniform quantized belief propagation algorithm, simulations show that an optimized 3-bit quantizer operates with 0.2 dB implementation loss relative to a floating-point decoder, and an optimized 4-bit quantizer with less than 0.1 dB quantization loss.

  5. Turbo codes for multi-hop wireless sensor networks with decode-and-forward mechanism

    National Research Council Canada - National Science Library

    Abughalieh, Nashat; Steenhaut, Kris; Nowé, Ann; Anpalagan, Alagan

    2014-01-01

    .... Error correcting codes (ECCs) can be used to reduce number of retransmissions, but most ECCs have complex decoding algorithms, which leads to high processing energy consumption at the receiving nodes in the WSN...

  6. Electromyography-based seizure detector: Preliminary results comparing a generalized tonic-clonic seizure detection algorithm to video-EEG recordings.

    Science.gov (United States)

    Szabó, Charles Ákos; Morgan, Lola C; Karkar, Kameel M; Leary, Linda D; Lie, Octavian V; Girouard, Michael; Cavazos, José E

    2015-09-01

    Automatic detection of generalized tonic-clonic seizures (GTCS) will facilitate patient monitoring and early intervention to prevent comorbidities, recurrent seizures, or death. Brain Sentinel (San Antonio, Texas, USA) developed a seizure-detection algorithm evaluating surface electromyography (sEMG) signals during GTCS. This study aims to validate the seizure-detection algorithm using inpatient video-electroencephalography (EEG) monitoring. sEMG was recorded unilaterally from the biceps/triceps muscles in 33 patients (17 white/16 male) with a mean age of 40 (range 14-64) years who were admitted for video-EEG monitoring. Maximum voluntary biceps contraction was measured in each patient to set up the baseline physiologic muscle threshold. The raw EMG signal was recorded using conventional amplifiers, sampling at 1,024 Hz and filtered with a 60 Hz noise detection algorithm before it was processed with three band-pass filters at pass frequencies of 3-40, 130-240, and 300-400 Hz. A seizure-detection algorithm utilizing Hotelling's T-squared power analysis of compound muscle action potentials was used to identify GTCS and correlated with video-EEG recordings. In 1,399 h of continuous recording, there were 196 epileptic seizures (21 GTCS, 96 myoclonic, 28 tonic, 12 absence, and 42 focal seizures with or without loss of awareness) and 4 nonepileptic spells. During retrospective, offline evaluation of sEMG from the biceps alone, the algorithm detected 20 GTCS (95%) in 11 patients, averaging within 20 s of electroclinical onset of generalized tonic activity, as identified by video-EEG monitoring. Only one false-positive detection occurred during the postictal period following a GTCS, but false alarms were not triggered by other seizure types or spells. Brain Sentinel's seizure detection algorithm demonstrated excellent sensitivity and specificity for identifying GTCS recorded in an epilepsy monitoring unit. Further studies are needed in larger patient groups, including

  7. A general algorithm for distributing information in a graph

    OpenAIRE

    Aji, Srinivas M.; McEliece, Robert J.

    1997-01-01

    We present a general “message-passing” algorithm for distributing information in a graph. This algorithm may help us to understand the approximate correctness of both the Gallager-Tanner-Wiberg algorithm, and the turbo-decoding algorithm.

  8. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  9. Astrophysics Decoding the cosmos

    CERN Document Server

    Irwin, Judith A

    2007-01-01

    Astrophysics: Decoding the Cosmos is an accessible introduction to the key principles and theories underlying astrophysics. This text takes a close look at the radiation and particles that we receive from astronomical objects, providing a thorough understanding of what this tells us, drawing the information together using examples to illustrate the process of astrophysics. Chapters dedicated to objects showing complex processes are written in an accessible manner and pull relevant background information together to put the subject firmly into context. The intention of the author is that the book will be a 'tool chest' for undergraduate astronomers wanting to know the how of astrophysics. Students will gain a thorough grasp of the key principles, ensuring that this often-difficult subject becomes more accessible.

  10. Divide & Concur and Difference-Map BP Decoders for LDPC Codes

    CERN Document Server

    Yedidia, Jonathan S; Draper, Stark C

    2010-01-01

    The "Divide and Concur'' (DC) algorithm, recently introduced by Gravel and Elser, can be considered a competitor to the belief propagation (BP) algorithm, in that both algorithms can be applied to a wide variety of constraint satisfaction, optimization, and probabilistic inference problems. We show that DC can be interpreted as a message-passing algorithm on a constraint graph, which helps make the comparison with BP more clear. The "difference-map'' dynamics of the DC algorithm enables it to avoid "traps'' which may be related to the "trapping sets'' or "pseudo-codewords'' that plague BP decoders of low-density parity check (LDPC) codes in the error-floor regime. We investigate two decoders for low-density parity-check (LDPC) codes based on these ideas. The first decoder is based directly on DC, while the second decoder borrows the important "difference-map'' concept from the DC algorithm and translates it into a BP-like decoder. We show that this "difference-map belief propagation'' (DMBP) decoder has drama...

  11. Luminance information decoding on the basis of local field potential signals of pigeon optic tectum neurons.

    Science.gov (United States)

    Wang, Songwei; Liu, Lijun; Wang, Zhizhong; Niu, Xiaoke; Hu, Yuxia; Shi, Li

    2017-11-08

    Important aspects of brain information processing can be understood by examining the decoding of visual stimuli from neuronal response signals. In this research, luminance information is decoded from the local field potential signal in the optic tectum region of the pigeon. We designed a luminance visual stimulus model with transient flicker characteristics, recorded multichannel local field potential (LFP) signals using a microelectrode array, extracted LFP Fourier transform energy and phase features, constructed a multivariate linear inverse-filter luminance decoding algorithm, and evaluated decoding performance using a cross-correlation method. We found that LFP phase decoding of luminance information yielded better results than amplitude decoding. With the optimal frequency band, channels, delay time, and other parameters, joint phase and amplitude decoding could reach 0.94±0.02. Comparing neuronal spike decoding with LFP decoding, we found that joint LFP phase and amplitude decoding resulted in luminance closer to that of the actual stimulus and required fewer decoding electrode channels.
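
    A minimal sketch of this kind of decoding pipeline is given below: per-channel Fourier amplitude and phase features are extracted from an LFP window, a multivariate linear (ridge-regularized least-squares) inverse filter maps them to luminance, and decoding quality is scored by correlation. The window length, frequency band, and regularization are assumptions; the paper's exact filter construction and parameter choices may differ.

```python
import numpy as np

# Sketch of a multivariate linear decoder driven by LFP Fourier features.

def fourier_features(lfp, fs, band=(2, 30)):
    """Per-channel FFT amplitude and phase restricted to a frequency band.
    lfp: (channels, samples) array for one analysis window."""
    spec = np.fft.rfft(lfp, axis=-1)
    freqs = np.fft.rfftfreq(lfp.shape[-1], d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    amp, phase = np.abs(spec[:, sel]), np.angle(spec[:, sel])
    return np.concatenate([amp.ravel(), phase.ravel()])

def fit_linear_decoder(features, luminance, ridge=1e-3):
    """Ridge least-squares inverse filter: luminance ~ W @ [features, 1].
    features: (trials, n_features) matrix of stacked feature vectors."""
    X = np.column_stack([features, np.ones(len(features))])
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ luminance)
    return W

def decode(W, features):
    X = np.column_stack([features, np.ones(len(features))])
    return X @ W

def decoding_score(decoded, actual):
    """Normalized cross-correlation (Pearson r) used to evaluate decoding."""
    return np.corrcoef(decoded, actual)[0, 1]
```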

  12. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    at the decoder side offering such benefits for these applications. Although there have been some advanced improvement techniques, improving the DVC coding efficiency is still challenging. The thesis addresses this challenge by proposing several iterative algorithms at different working levels, e.g. bitplane, band, and frame levels. In order to show the information theoretic basis, theoretical foundations of DVC are introduced. The first proposed algorithm applies parallel iterative decoding using multiple LDPC decoders to utilize cross bitplane correlation. To improve Side Information (SI) generation and noise modeling, and also to learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture

  13. Analysis of Minimal LDPC Decoder System on a Chip Implementation

    Directory of Open Access Journals (Sweden)

    T. Palenik

    2015-09-01

    Full Text Available This paper presents a practical method for the potential replacement of several different Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes with one, with the intention of saving as much as possible of the memory required to implement the LDPC encoder and decoder in a memory-constrained System on a Chip (SoC). The presented method requires only a very small modification of the existing encoder and decoder, making it suitable for utilization in a Software Defined Radio (SDR) platform. Besides the analysis of the effects of the necessary variable-node value fixation during the Belief Propagation (BP) decoding algorithm, practical standard-defined code parameters are scrutinized in order to evaluate the feasibility of the proposed LDPC setup simplification. Finally, the error performance of the modified system structure is evaluated and compared with the original system structure by means of simulation.

  14. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance in the same way...... the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation targeting to improve the side information of a single...

  15. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  16. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m x n and n x p, respectively. Then, multiplication of A by B, denoted A x B, is defined by the matrix C of dimension m x p, where ...
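
    A direct rendering of this algorithm (in Python rather than the series' own notation) is shown below; C[i][j] is the sum over k of A[i][k] * B[k][j].

```python
# Triple-loop matrix multiplication over two-dimensional arrays:
# A is m x n, B is n x p, and C = A x B is m x p.

def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Example: a (2 x 3) matrix times a (3 x 2) matrix gives a 2 x 2 result.
print(matmul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]]))
# [[58, 64], [139, 154]]
```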

  17. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. In our decoding method, advancements are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of pairs of orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and the deep neural network technique is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a large set of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce the false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  18. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough in data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block by block, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using a traditional Motion-Compensated Interpolation (MCI) technique based on the reconstructed key frames. An over-complete dictionary is trained on the SI by dictionary learning methods, including ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to a sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), has been compared with other online sparse representation algorithms. The simulation results show its advantages in reduced reconstruction time and robust reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.
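
    The encoder-side split described above can be sketched as follows: each frame of a GOP is divided into blocks, each block is measured with a random projection, and the first (key) frame of the GOP is sampled at a higher rate than the following CS frames. The block size, sampling rates, and the Gaussian measurement matrix are illustrative assumptions; the decoder-side SI generation, dictionary learning, and sparse reconstruction stages are not shown.

```python
import numpy as np

# Sketch of block-based CS measurement of a GOP: key frame at a high sampling
# rate, CS frames at a low rate. Parameters are assumptions for illustration.

def block_measure(frame, block=16, rate=0.2, seed=0):
    """Return per-block measurements y = Phi @ x and the measurement matrix Phi."""
    rng = np.random.default_rng(seed)
    h, w = frame.shape
    n = block * block
    m = max(1, int(rate * n))                       # measurements per block
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
    ys = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            x = frame[r:r + block, c:c + block].reshape(n).astype(float)
            ys.append(phi @ x)
    return ys, phi

def measure_gop(frames, key_rate=0.7, cs_rate=0.2):
    """First frame of the GOP is the key (K) frame; the rest are CS frames."""
    return [block_measure(f, rate=(key_rate if i == 0 else cs_rate))
            for i, f in enumerate(frames)]
```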

  19. On the error probability of general tree and trellis codes with applications to sequential decoding

    Science.gov (United States)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  20. Fast Erasure and Error decoding of Algebraic Geometry Codes up to the Feng-Rao Bound

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Sakata, S.; Leonard, D.

    1996-01-01

    This paper gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  1. FPGA-based Implementation of a Streaming Decoder for Shortened Reed-Solomon Codes

    Directory of Open Access Journals (Sweden)

    S. V. Fedorov

    2016-01-01

    Full Text Available The traditional approaches to decoding shortened block codes are the padding of the received codeword with zeros or the correction of the syndrome polynomial values. Such approaches make stream-oriented decoding of a continuous input data stream impossible. The paper offers a new technique for decoding shortened Reed-Solomon codes. Its hardware implementation allows us to provide stream processing, reduce latency in clock cycles, and decrease the required quantity of hardware resources compared to a decoder implementation that pads the received packet with zeros. A distinctive feature of the proposed technique is that the decoder processes a stream of codewords of different lengths without changing their structure or inserting additional delays, and that it is possible to use the modules of existing Reed-Solomon decoders for the full codeword length. To enable this, a notion of error locator correctors for shortened codes is introduced and a technique for their calculation is offered. Error locator correctors are calculated during the reception of a codeword in parallel to syndrome polynomial calculation. Their values allow us to modify the polynomial values of the locators and errors at the output of the key equation solver. The paper considers a shortened code decoder that is implemented on the full code decoder modules based on the Berlekamp-Massey algorithm, describes the architecture of the additional modules and the required modifications of the algorithm. It shows how to save hardware resources by reusing the multipliers in the Berlekamp-Massey key equation solver to correct the values. The Altera FPGA-based decoder has been tested and compared to the Reed-Solomon II decoder IP from Altera.

  2. Real-time data compression of broadcast video signals

    Science.gov (United States)

    Shalkauser, Mary Jo W. (Inventor); Whyte, Wayne A., Jr. (Inventor); Barnes, Scott P. (Inventor)

    1991-01-01

    A non-adaptive predictor, a nonuniform quantizer, and a multi-level Huffman coder are incorporated into a differential pulse code modulation system for coding and decoding broadcast video signals in real time.
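
    A toy version of such a differential pulse code modulation loop is sketched below: a fixed previous-sample predictor, a small nonuniform quantizer applied to the prediction error, and matching encode/decode recursions. The quantizer levels are invented for illustration, and the Huffman stage that would entropy-code the quantizer indices is omitted.

```python
import numpy as np

# Minimal DPCM sketch: non-adaptive previous-sample predictor plus a
# nonuniform quantizer on the prediction error. Levels are assumptions.

LEVELS = np.array([-40, -16, -6, -1, 1, 6, 16, 40], dtype=float)

def quantize(e):
    """Return the index of the reconstruction level closest to error e."""
    return int(np.argmin(np.abs(LEVELS - e)))

def dpcm_encode(samples):
    pred, indices = 0.0, []
    for s in samples:
        idx = quantize(s - pred)           # quantize the prediction error
        indices.append(idx)
        pred = pred + LEVELS[idx]          # track the decoder's reconstruction
    return indices

def dpcm_decode(indices):
    pred, out = 0.0, []
    for idx in indices:
        pred = pred + LEVELS[idx]
        out.append(pred)
    return out

# Round trip on a short ramp signal; the output approximates the input.
sig = [0, 5, 12, 20, 30, 28, 25]
print(dpcm_decode(dpcm_encode(sig)))
```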

  3. A Study on Decoding Models for the Reconstruction of Hand Trajectories from the Human Magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Hong Gi Yeom

    2014-01-01

    Full Text Available Decoding neural signals into control outputs has been a key to the development of brain-computer interfaces (BCIs). While many studies have identified neural correlates of kinematics or applied advanced machine learning algorithms to improve decoding performance, relatively little attention has been paid to the optimal design of decoding models. For generating continuous movements from neural activity, the design of decoding models should address how to incorporate movement dynamics into models and how to select a model given specific BCI objectives. Considering nonlinear and independent speed characteristics, we propose a hybrid Kalman filter to decode the hand direction and speed independently. We also investigate changes in performance of different decoding models (the linear and Kalman filters) when they predict reaching movements only or predict both reach and rest. Our offline study on human magnetoencephalography (MEG) during point-to-point arm movements shows that the performance of the linear filter or the Kalman filter is affected by including resting states for training and predicting movements. However, the hybrid Kalman filter consistently outperforms the others regardless of movement states. The results demonstrate that better design of decoding models is achieved by incorporating movement dynamics into modeling or selecting a model according to decoding objectives.
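
    For reference, a generic Kalman-filter decoding loop of the kind compared in such studies is sketched below (this is the standard filter, not the paper's hybrid direction/speed variant). The state holds hand kinematics, the observations are neural features, and the matrices A, W, H, Q are assumed to have been fit from training data.

```python
import numpy as np

# Standard Kalman-filter decoding loop for continuous kinematics.
# A, W: state transition and its noise covariance; H, Q: observation model
# and its noise covariance; x0, P0: initial state and covariance.

def kalman_decode(Z, A, W, H, Q, x0, P0):
    """Z: (T, obs_dim) neural features; returns (T, state_dim) decoded kinematics."""
    x, P, out = x0, P0, []
    I = np.eye(len(x0))
    for z in Z:
        # predict with the kinematic state model x_t = A x_{t-1} + noise(W)
        x, P = A @ x, A @ P @ A.T + W
        # update with the observation model z_t = H x_t + noise(Q)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)   # Kalman gain
        x = x + K @ (z - H @ x)
        P = (I - K @ H) @ P
        out.append(x)
    return np.array(out)
```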

  4. Volitional and Real-Time Control Cursor Based on Eye Movement Decoding Using a Linear Decoding Model

    Directory of Open Access Journals (Sweden)

    Jinhua Zhang

    2016-01-01

    Full Text Available The aim of this study is to build a linear decoding model that reveals the relationship between movement information and EOG (electrooculogram) data, in order to control a cursor online and continuously with blinks and eye pursuit movements. First, a blink detection method is proposed to reject voluntary single-blink or double-blink information from the EOG. Then, a linear decoding model of time series is developed to predict the position of gaze, and the model parameters are calibrated by the RLS (Recursive Least Squares) algorithm; in addition, decoding accuracy is assessed through a cross-validation procedure. Additionally, subsection processing, increment control, and online calibration are presented to realize the online control. Finally, the technology is applied to the volitional and online control of a cursor to hit multiple predefined targets. Experimental results show that the blink detection algorithm performs well, with a voluntary blink detection rate over 95%. By combining the merits of blinks and smooth pursuit movements, eye movement information can be decoded with an average Pearson correlation coefficient of up to 0.9592, and all signal-to-noise ratios are greater than 0. The novel system allows people to successfully and economically control a cursor online with a hit rate of 98%.
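
    The RLS calibration step can be sketched as a standard recursive least-squares update of a linear model mapping EOG features to gaze position; the forgetting factor and initialization below are conventional textbook choices, not values from the paper.

```python
import numpy as np

# Standard recursive least squares (RLS) update of a linear decoding model
# y ≈ w·x, of the kind used to calibrate an EOG-to-gaze mapping online.

class RLSDecoder:
    def __init__(self, dim, lam=0.99, delta=100.0):
        self.w = np.zeros(dim)              # decoding weights
        self.P = delta * np.eye(dim)        # inverse correlation matrix estimate
        self.lam = lam                      # forgetting factor

    def update(self, x, y):
        """One RLS step with feature vector x and observed target y (e.g. gaze position)."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        err = y - self.w @ x                # a-priori prediction error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return self.w @ x                   # a-posteriori prediction
```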

  5. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video with blur and noise is enhanced by using an algorithm based on an iterative procedure. In this algorithm, we first estimate the clean data and blur function using a Newton optimization method, and then the estimation procedure is improved using appropriate denoising methods. These noise reduction techniques are based on the local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point, and obtain the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimation using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. Then these slow sections are translated to C++ and linked to MATLAB. In fact, the high load of information in images and the data processed in the "for" loops of the relevant code make MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops. These eight "for" loops utilize 60% of the total execution time of the entire program, and so the runtime should be

  6. Low-Complexity Decoding of Block Turbo-Coded System with Antenna Diversity

    Directory of Open Access Journals (Sweden)

    Chen Yanni

    2003-01-01

    Full Text Available The goal of this paper is to reduce the decoding complexity of a space-time block turbo-coded system with low performance degradation. Two block turbo-coded systems with antenna diversity are considered. These include the simple serial concatenation of an error control code with a space-time block code, and the recently proposed transmit antenna diversity scheme using forward error correction techniques. It is shown that the former performs better than the latter in terms of bit error rate (BER) under the same spectral efficiency (up to 7 dB at the BER of for a quasistatic channel with two transmit and two receive antennas). For the former system, a computationally efficient decoding approach is proposed for the soft decoding of the space-time block code. Compared to its original maximum likelihood decoding algorithm, it can reduce the computation by up to 70% without any performance degradation. Additionally, for the outer block turbo code considered, by reducing the number of test patterns scanned in the Chase algorithm and by the alternative computation of its extrinsic information during iterative decoding, an extra 0.3 dB to 0.4 dB of coding gain is obtained compared with previous approaches, with negligible hardware overhead. The overall decoding complexity is approximately ten times less than that of the near-optimum block turbo decoder, with a coding gain loss of 0.5 dB at the BER of over an AWGN channel.

  7. Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm.

    Science.gov (United States)

    Precioso, Frederic; Barlaud, Michel; Blu, Thierry; Unser, Michael

    2005-07-01

    This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from a large computational cost. A parametric active contour method based on B-spline interpolation has previously been proposed to greatly reduce the computational cost, but this method is sensitive to noise. Here, we choose to relax the rigid interpolation constraint in order to robustify our method in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. We show by experiments on natural sequences that this new flexibility yields segmentation results of higher quality at no additional computational cost. Hence, real-time processing for moving object segmentation is preserved.

  8. LDPC Decoding on GPU for Mobile Device

    National Research Council Canada - National Science Library

    Yiqin Lu; Weiyue Su; Jiancheng Qin

    2016-01-01

    A flexible software LDPC decoder that exploits data parallelism for the simultaneous decoding of multiple codewords on a mobile device is proposed in this paper, supported by multithreading on OpenCL based...

  9. LDPC code decoding adapted to the precoded partial response magnetic recording channels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jun E-mail: leejun28@sait.samsung.co.kr; Kim, Kyuyong; Lee, Jaejin; Yang, Gijoo

    2004-05-01

    We propose a signal processing technique using an LDPC (low-density parity-check) code instead of a PRML (partial response maximum likelihood) system for the longitudinal magnetic recording channel. The scheme uses a precoder that admits level detection at the receiver end and modifies the likelihood function for LDPC code decoding. The scheme can be combined with other decoders in turbo-like systems. The proposed algorithm can contribute to improving the performance of conventional turbo-like systems.

  10. On the reduced-complexity of LDPC decoders for ultra-high-speed optical transmission.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2010-10-25

    We propose two reduced-complexity (RC) LDPC decoders, which can be used in combination with large-girth LDPC codes to enable ultra-high-speed serial optical transmission. We show that the optimally attenuated RC min-sum algorithm performs only 0.46 dB (at a BER of 10^-9) worse than the conventional sum-product algorithm, while having lower storage memory requirements and much lower latency. We further study the use of RC LDPC decoding algorithms in multilevel coded modulation with coherent detection and show that with RC decoding algorithms we can achieve a net coding gain larger than 11 dB at BERs below 10^-9.
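
    The check-node update of an attenuated (normalized) min-sum decoder, the kind of reduced-complexity rule referred to above, can be sketched as follows; the attenuation factor of 0.8 is a typical textbook value, not the optimized value from the paper.

```python
import numpy as np

# Attenuated (normalized) min-sum check-node update: the outgoing magnitude is
# the minimum of the other incoming magnitudes, scaled by alpha < 1 to
# compensate for the min-sum overestimate relative to sum-product.

def check_node_update(msgs_in, alpha=0.8):
    """msgs_in: LLR messages arriving from the neighbouring variable nodes."""
    msgs_in = np.asarray(msgs_in, dtype=float)
    out = np.empty_like(msgs_in)
    for i in range(len(msgs_in)):
        others = np.delete(msgs_in, i)          # exclude the edge we are replying on
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out

print(check_node_update([2.5, -0.7, 1.3, -4.0]))
```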

  11. FPGA Realization of Memory 10 Viterbi Decoder

    DEFF Research Database (Denmark)

    Paaske, Erik; Bach, Thomas Bo; Andersen, Jakob Dahl

    1997-01-01

    sequence mode when feedback from the Reed-Solomon decoder is available. The Viterbi decoder is realized using two Altera FLEX 10K50 FPGA's. The overall operating speed is 30 kbit/s, and since up to three iterations are performed for each frame and only one decoder is used, the operating speed...

  12. List decoding of a class of affine variety codes

    CERN Document Server

    Geil, Olav

    2011-01-01

    Consider a polynomial $F$ in $m$ variables and a finite point ensemble $S = S_1 \times \dots \times S_m$. When given the leading monomial of $F$ with respect to a lexicographic ordering, we derive improved information on the possible number of zeros of $F$ of multiplicity at least $r$ from $S$. We then use this information to design a list decoding algorithm for a large class of affine variety codes.

  13. Design of LOG-MAP / MAX-LOG-MAP Decoder

    Directory of Open Access Journals (Sweden)

    Mihai TIMIS, PhD Candidate, Dipl.Eng.

    2007-01-01

    Full Text Available The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, which is followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that data bit. Upon reception of a corrupted code-bit sequence, the process of decision making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time.

  14. Improved fault tolerance of Turbo decoding based on optimized index assignments

    Science.gov (United States)

    Geldmacher, J.; Götze, J.

    2014-11-01

    This paper investigates the impact of an error-prone buffer memory on a channel decoder as employed in modern digital communication systems. On the one hand, this work is motivated by the fact that energy-efficient decoder implementations may not only be achieved by optimizations at the algorithmic level, but also by chip-level modifications. One such modification is the so-called aggressive voltage scaling of buffer memories, which, while achieving reduced power consumption, also injects errors into the likelihood values used during the decoding process. On the other hand, it has been recognized that the ongoing increase of integration density with smaller structures makes integrated circuits more sensitive to process variations during manufacturing, and to voltage and temperature variations. This may lead to a paradigm shift from 100%-reliable operation to fault-tolerant signal processing. Both reasons motivate a discussion of the required co-design of algorithms and underlying circuits. For an error-prone receive buffer of a Turbo decoder, the influence of quantizer design and index assignment on the error resilience of the decoding algorithm is discussed. It is shown that a suitable design of both enables the compensation of hardware-induced bit errors at rates of up to 1% without increasing the computational complexity of the decoder.

  15. Complexity Analysis of Reed-Solomon Decoding over GF(2^m) without Using Syndromes

    Directory of Open Access Journals (Sweden)

    Chen Ning

    2008-01-01

    Full Text Available There has been renewed interest recently in decoding Reed-Solomon (RS) codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis focuses on RS codes over characteristic-2 fields, for which some multiplicative FFT techniques are not applicable. Due to the moderate block lengths of RS codes in practice, our analysis is complete, without big O notation. In addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. For high-rate RS codes, when compared to syndrome-based decoding algorithms, not only do syndromeless decoding algorithms require more field operations regardless of implementation, but decoder architectures based on their direct implementations also have higher hardware costs and lower throughput. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.

  16. Complexity Analysis of Reed-Solomon Decoding over GF(2^m) without Using Syndromes

    Directory of Open Access Journals (Sweden)

    Zhiyuan Yan

    2008-06-01

    Full Text Available There has been renewed interest recently in decoding Reed-Solomon (RS) codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis focuses on RS codes over characteristic-2 fields, for which some multiplicative FFT techniques are not applicable. Due to the moderate block lengths of RS codes in practice, our analysis is complete, without big O notation. In addition to fast implementation using additive FFT techniques, we also consider direct implementation, which is still relevant for RS codes with moderate lengths. For high-rate RS codes, when compared to syndrome-based decoding algorithms, not only do syndromeless decoding algorithms require more field operations regardless of implementation, but decoder architectures based on their direct implementations also have higher hardware costs and lower throughput. We also derive tighter bounds on the complexities of fast polynomial multiplications based on Cantor's approach and the fast extended Euclidean algorithm.

  17. Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.

    2013-10-01

    In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of the different receivers and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression of the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared to the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves a tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.
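
    A generic greedy maximum-weight-clique heuristic of the kind such IDNC schedulers build on is sketched below; vertex weights and adjacency would encode the expected delay reduction and the instant-decodability compatibility of (receiver, packet) pairs, all of which is abstracted here. This is not the specific variant designed in the paper.

```python
# Greedy maximum-weight-clique heuristic on an abstract conflict-free graph.
# weights: {vertex: weight}; adj: {vertex: set of adjacent vertices}.

def greedy_max_weight_clique(weights, adj):
    """Repeatedly add the heaviest vertex compatible with everything chosen so far."""
    candidates = set(weights)
    clique = []
    while candidates:
        v = max(candidates, key=lambda u: weights[u])   # heaviest remaining vertex
        clique.append(v)
        candidates &= adj[v]                            # keep only vertices adjacent to all chosen
    return clique

# Toy example: vertices a-d, edges (a,b), (a,c), (b,c), (c,d).
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
w = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 4.0}
print(greedy_max_weight_clique(w, adj))   # picks d first, then c -> ['d', 'c']
```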

  18. Adaptive neuron-to-EMG decoder training for FES neuroprostheses

    Science.gov (United States)

    Ethier, Christian; Acuna, Daniel; Solla, Sara A.; Miller, Lee E.

    2016-08-01

    Objective. We have previously demonstrated a brain-machine interface neuroprosthetic system that provided continuous control of functional electrical stimulation (FES) and restoration of grasp in a primate model of spinal cord injury (SCI). Predicting intended EMG directly from cortical recordings provides a flexible high-dimensional control signal for FES. However, no peripheral signal such as force or EMG is available for training EMG decoders in paralyzed individuals. Approach. Here we present a method for training an EMG decoder in the absence of muscle activity recordings; the decoder relies on mapping behaviorally relevant cortical activity to the inferred EMG activity underlying an intended action. Monkeys were trained on a 2D isometric wrist force task to control a computer cursor by applying force in the flexion, extension, ulnar, and radial directions to execute a center-out task. We used a generic muscle force-to-endpoint force model based on muscle pulling directions to relate each target force to an optimal EMG pattern that attained the target force while minimizing overall muscle activity. We trained EMG decoders during the target hold periods using a gradient descent algorithm that compared EMG predictions to the optimal EMG patterns. Main results. We tested this method both offline and online. We quantified both the accuracy of offline force predictions and the ability of a monkey to use these real-time force predictions for closed-loop cursor control. We compared both offline and online results to those obtained with several other direct force decoders, including an optimal decoder computed from concurrently measured neural and force signals. Significance. This novel approach to training an adaptive EMG decoder could make a brain-controlled FES neuroprosthesis an effective tool to restore the hand function of paralyzed individuals. Clinical implementation would make use of individualized EMG-to-force models. Broad generalization could be achieved by
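
    The training idea, reduced to a sketch: fit a linear neural-activity-to-EMG map by gradient descent against externally supplied target ("optimal") EMG patterns. The shapes, learning rate, and synthetic data below are assumptions made only so the example runs.

```python
# Gradient-descent training of a linear neural-activity -> EMG decoder against
# target EMG patterns computed elsewhere (here: synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons, n_muscles = 2000, 50, 4

X = rng.poisson(3.0, size=(n_samples, n_neurons)).astype(float)   # firing rates
W_true = rng.normal(size=(n_neurons, n_muscles)) * 0.1
E_target = np.clip(X @ W_true + 0.05 * rng.normal(size=(n_samples, n_muscles)),
                   0, None)                                        # "optimal" EMG

W = np.zeros((n_neurons, n_muscles))
b = np.zeros(n_muscles)
lr = 1e-4
for epoch in range(200):
    E_pred = X @ W + b                      # predicted EMG envelopes
    err = E_pred - E_target                 # compare to the optimal EMG patterns
    # gradient of the mean squared error w.r.t. W and b
    W -= lr * X.T @ err / n_samples
    b -= lr * err.mean(axis=0)

print("final MSE:", float(((X @ W + b - E_target) ** 2).mean()))
```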

  19. Parity Bit Replenishment for JPEG 2000-Based Video Streaming

    Directory of Open Access Journals (Sweden)

    François-Olivier Devaux

    2009-01-01

    Full Text Available This paper envisions coding with side information to design a highly scalable video codec. To achieve fine-grained scalability in terms of resolution, quality, and spatial access as well as temporal access to individual frames, the JPEG 2000 coding algorithm has been considered as the reference algorithm to encode INTRA information, and coding with side information has been envisioned to refresh the blocks that change between two consecutive images of a video sequence. One advantage of coding with side information compared to conventional closed-loop hybrid video coding schemes lies in the fact that parity bits are designed to correct stochastic errors and not to encode deterministic prediction errors. This enables the codec to support some desynchronization between the encoder and the decoder, which is particularly helpful for adapting pre-encoded content on the fly to fluctuating network resources and/or user preferences in terms of regions of interest. Regarding the coding scheme itself, to preserve both quality scalability and compliance with the JPEG 2000 wavelet representation, particular attention has been devoted to the definition of a practical coding framework able to exploit not only the temporal but also the spatial correlation among wavelet subband coefficients, while computing the parity bits on subsets of wavelet bit-planes. Simulations have shown that, compared to pure INTRA-based conditional replenishment solutions, the addition of the parity bits option decreases the transmission cost in terms of bandwidth, while preserving access flexibility.

  20. Scalable video transmission over Rayleigh fading channels using LDPC codes

    Science.gov (United States)

    Bansal, Manu; Kondi, Lisimachos P.

    2005-03-01

    In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
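
    A sketch of the Lagrangian selection step: for a given multiplier, pick the (source rate, channel code rate) pair minimizing D + lambda * R. The operating points in the table below are fabricated for illustration only.

```python
# Lagrangian rate-distortion selection over a small set of hypothetical
# (source rate, channel code rate, expected distortion) operating points.

points = [
    (200, 1 / 2, 95.0),
    (200, 2 / 3, 120.0),
    (400, 1 / 2, 60.0),
    (400, 2 / 3, 75.0),
    (800, 1 / 2, 42.0),
    (800, 2 / 3, 55.0),
]

def transmitted_rate(src_kbps, code_rate):
    # channel bits actually sent = source bits / code rate
    return src_kbps / code_rate

def best_point(lmbda):
    # minimize the Lagrangian cost D + lambda * R over the candidate points
    return min(points, key=lambda p: p[2] + lmbda * transmitted_rate(p[0], p[1]))

for lmbda in (0.01, 0.05, 0.2):
    src, cr, dist = best_point(lmbda)
    print(f"lambda={lmbda}: source={src} kbps, code rate={cr:.2f}, D={dist}")
```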

  1. Software Optimization of Video Codecs on Pentium Processor with MMX Technology

    Directory of Open Access Journals (Sweden)

    Liu KJ Ray

    2001-01-01

    Full Text Available A key enabling technology for the proliferation of multimedia PCs is the availability of fast video codecs, which are the basic building blocks of many new multimedia applications. Since most industrial video coding standards (e.g., MPEG1, MPEG2, H.261, H.263) only specify the decoder syntax, there is a lot of room for optimization in a practical implementation. When considering a specific hardware platform like the PC, the algorithmic optimization must be considered in tandem with the architecture of the PC. Specifically, an algorithm that is optimal in the sense of the number of operations needed may not be the fastest implementation on the PC. This is because special instructions are available which can perform several operations at once under special circumstances. In this work, we describe a fast implementation of an H.263 video encoder for the Pentium processor with MMX technology. The described codec has been adopted in video mail and video phone software used on the IBM ThinkPad.

  2. Decoding continuous limb movements from high-density epidural electrode arrays using custom spatial filters

    Science.gov (United States)

    Marathe, A. R.; Taylor, D. M.

    2013-06-01

    Objective. Our goal was to identify spatial filtering methods that would improve decoding of continuous arm movements from epidural field potentials as well as demonstrate the use of the epidural signals in a closed-loop brain-machine interface (BMI) system in monkeys. Approach. Eleven spatial filtering options were compared offline using field potentials collected from 64-channel high-density epidural arrays in monkeys. Arrays were placed over arm/hand motor cortex in which intracortical microelectrodes had previously been implanted and removed, leaving focal cortical damage but no lasting motor deficits. Spatial filters tested included: no filtering, common average referencing (CAR), principal component analysis, and eight novel modifications of the common spatial pattern (CSP) algorithm. The spatial filtering method and decoder combination that performed the best offline was then used online, where monkeys controlled cursor velocity using continuous wrist position decoded from epidural field potentials in real time. Main results. Optimized CSP methods improved continuous wrist position decoding accuracy by 69% over CAR and by 80% compared to no filtering. Kalman decoders performed better than linear regression decoders and benefited from including more spatially-filtered signals but not from pre-smoothing the calculated power spectra. Conversely, linear regression decoders required fewer spatially-filtered signals and were improved by pre-smoothing the power values. The ‘position-to-velocity’ transformation used during online control enabled the animals to generate smooth closed-loop movement trajectories using the somewhat limited position information available in the epidural signals. The monkeys’ online performance significantly improved across days of closed-loop training. Significance. Most published BMI studies that use electrocorticographic signals to decode continuous limb movements either use no spatial filtering or CAR. This study suggests a
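
    As a sketch of the CSP idea underlying these filters: the spatial filters are generalized eigenvectors of the two class covariance matrices, so the extreme eigenvectors maximize variance for one class while minimizing it for the other. The synthetic two-class data below is an assumption for illustration.

```python
# Common spatial pattern (CSP) filters via a generalized eigendecomposition.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t = 16, 5000
# Class A: extra variance on channel 0; class B: extra variance on channel 1.
a = rng.normal(size=(n_ch, n_t)); a[0] *= 3.0
b = rng.normal(size=(n_ch, n_t)); b[1] *= 3.0

Ca = np.cov(a)
Cb = np.cov(b)

# Solve Ca w = lambda (Ca + Cb) w; eigenvalues are returned in ascending order.
evals, evecs = eigh(Ca, Ca + Cb)
# Filters at both ends of the spectrum discriminate best between the classes.
csp_filters = np.hstack([evecs[:, :2], evecs[:, -2:]])  # shape (n_ch, 4)

def apply_csp(trial):
    """trial: (n_ch, n_samples) -> log-variance features of the CSP components."""
    comps = csp_filters.T @ trial
    return np.log(comps.var(axis=1))

print(apply_csp(a[:, :500]))
```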

  3. Key-equations for list decoding of Reed-Solomon codes and how to solve them

    DEFF Research Database (Denmark)

    Beelen, Peter; Brander, Kristian

    2010-01-01

    A Reed-Solomon code of length n can be list decoded using the well-known Guruswami-Sudan algorithm. By a result of Alekhnovich (2005) the interpolation part of this algorithm can be done in complexity O(s^4 l^4 n log^2 n log log n), where l denotes the designed list size and s the multiplicity parameter...

  4. Dynamics of intracellular information decoding.

    Science.gov (United States)

    Kobayashi, Tetsuya J; Kamimura, Atsushi

    2011-10-01

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity.

  5. Dynamics of intracellular information decoding

    Science.gov (United States)

    Kobayashi, Tetsuya J.; Kamimura, Atsushi

    2011-10-01

    A variety of cellular functions are robust even to substantial intrinsic and extrinsic noise in intracellular reactions and the environment that could be strong enough to impair or limit them. In particular, of substantial importance is cellular decision-making in which a cell chooses a fate or behavior on the basis of information conveyed in noisy external signals. For robust decoding, the crucial step is filtering out the noise inevitably added during information transmission. As a minimal and optimal implementation of such an information decoding process, the autocatalytic phosphorylation and autocatalytic dephosphorylation (aPadP) cycle was recently proposed. Here, we analyze the dynamical properties of the aPadP cycle in detail. We describe the dynamical roles of the stationary and short-term responses in determining the efficiency of information decoding and clarify the optimality of the threshold value of the stationary response and its information-theoretical meaning. Furthermore, we investigate the robustness of the aPadP cycle against the receptor inactivation time and intrinsic noise. Finally, we discuss the relationship among information decoding with information-dependent actions, bet-hedging and network modularity.

  6. Early Decoding and Encoding Strategies.

    Science.gov (United States)

    Goldwater-Rozensher, Susan; Hebard, Amy J.

    A combination of case study observation and mini-experimentation techniques was used to examine a number of issues of relevance in the study of the acquisition of beginning reading skills. Six children were divided equally among three instructional modes: phonics, whole word, and mixed. They were asked to decode and encode words, and their…

  7. Decoding intention at sensorimotor timescales.

    Directory of Open Access Journals (Sweden)

    Mathew Salvaris

    Full Text Available The ability to decode an individual's intentions in real time has long been a 'holy grail' of research on human volition. For example, a reliable method could be used to improve scientific study of voluntary action by allowing external probe stimuli to be delivered at different moments during the development of intention and action. Several Brain Computer Interface applications have used motor imagery of repetitive actions to achieve this goal. These systems are relatively successful, but only if the intention is sustained over a period of several seconds, much longer than the timescales identified in psychophysiological studies for normal preparation for voluntary action. We have used a combination of sensorimotor rhythms and motor imagery training to decode intentions in a single-trial cued-response paradigm similar to those used in human and non-human primate motor control research. Decoding accuracy of over 0.83 was achieved with twelve participants. With this approach, we could decode intentions to move the left or right hand at sub-second timescales, both for choices instructed by an external stimulus and for free choices generated intentionally by the participant. The implications for volition are considered.

  8. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the inability to exploit video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  9. Decoding repetitive finger movements with brain activity acquired via non-invasive electroencephalography

    Directory of Open Access Journals (Sweden)

    Andrew Young Paek

    2014-03-01

    Full Text Available We investigated how well repetitive finger tapping movements can be decoded from scalp electroencephalography (EEG) signals. A linear decoder with memory was used to infer continuous index finger angular velocities from the low-pass filtered fluctuations of the amplitude of a plurality of EEG signals distributed across the scalp. To evaluate the accuracy of the decoder, the Pearson's correlation coefficient (r) between the observed and predicted trajectories was calculated in a 10-fold cross-validation scheme. We also assessed attempts to decode finger kinematics from EEG data that was cleaned with independent component analysis (ICA), EEG data from peripheral sensors, and EEG data from rest periods. A genetic algorithm was used to select combinations of EEG channels that maximized decoding accuracies. Our results (lower quartile r = 0.20, median r = 0.36, upper quartile r = 0.52) show that delta-band EEG signals contain useful information that can be used to infer finger kinematics. Further, the highest decoding accuracies were characterized by highly correlated delta-band EEG activity mostly localized to the contralateral central areas of the scalp. Spectral analysis in the alpha (8–13 Hz) and beta (20–30 Hz) EEG bands also showed focused bilateral alpha event-related desynchronizations (ERDs) over central scalp areas and contralateral beta event-related synchronizations (ERSs) localized to central scalp areas, respectively. Overall, this study demonstrates the feasibility of decoding finger kinematics from scalp EEG signals.
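
    A sketch of a linear decoder with memory of the kind described: lagged features regressed onto finger velocity and scored with Pearson's r on held-out data. The synthetic signals, lag count, and train/test split are assumptions for illustration.

```python
# Lagged linear regression decoder with Pearson-r evaluation on held-out data.
import numpy as np

rng = np.random.default_rng(2)
n_t, n_feat, n_lags = 3000, 20, 5

features = rng.normal(size=(n_t, n_feat))             # e.g. delta-band amplitudes
true_w = rng.normal(size=n_feat)
velocity = np.convolve(features @ true_w, np.ones(3) / 3, mode="same") \
           + 0.5 * rng.normal(size=n_t)

def lagged(X, lags):
    """Stack X[t], X[t-1], ..., X[t-lags+1] into one feature vector per time t."""
    rows = [X[lags - 1 - k: len(X) - k] for k in range(lags)]
    return np.hstack(rows)

Xl = lagged(features, n_lags)
y = velocity[n_lags - 1:]

split = len(y) * 3 // 4
A = np.hstack([Xl, np.ones((len(Xl), 1))])            # add an intercept column
w, *_ = np.linalg.lstsq(A[:split], y[:split], rcond=None)
pred = A[split:] @ w

r = np.corrcoef(pred, y[split:])[0, 1]
print(f"Pearson r on held-out data: {r:.2f}")
```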

  10. The Ising decoder: reading out the activity of large neural ensembles.

    Science.gov (United States)

    Schaub, Michael T; Schultz, Simon R

    2012-02-01

    The Ising model has recently received much attention for the statistical description of neural spike train data. In this paper, we propose and demonstrate its use for building decoders capable of predicting, on a millisecond timescale, the stimulus represented by a pattern of neural activity. After fitting to a training dataset, the Ising decoder can be applied "online" for instantaneous decoding of test data. While such models can be fit exactly using Boltzmann learning, this approach rapidly becomes computationally intractable as neural ensemble size increases. We show that several approaches, including the Thouless-Anderson-Palmer (TAP) mean field approach from statistical physics, and the recently developed Minimum Probability Flow Learning (MPFL) algorithm, can be used for rapid inference of model parameters in large-scale neural ensembles. Use of the Ising model for decoding, unlike other problems such as functional connectivity estimation, requires estimation of the partition function. As this involves summation over all possible responses, this step can be limiting. Mean field approaches avoid this problem by providing an analytical expression for the partition function. We demonstrate these decoding techniques by applying them to simulated neural ensemble responses from a mouse visual cortex model, finding an improvement in decoder performance for a model with heterogeneous as opposed to homogeneous neural tuning and response properties. Our results demonstrate the practicality of using the Ising model to read out, or decode, spatial patterns of activity comprised of many hundreds of neurons.

  11. Prior Knowledge Improves Decoding of Finger Flexion from Electrocorticographic (ECoG Signals

    Directory of Open Access Journals (Sweden)

    Zuoguan eWang

    2011-11-01

    Full Text Available Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in the absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these studies usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs. However, they typically incorporate only little prior information about the target movement parameter. In this paper, we incorporate prior knowledge using a Bayesian decoding method, and use it to decode finger flexion from ECoG signals. Specifically, we exploit the anatomic constraints and dynamic constraints that govern finger flexion and incorporate these constraints in the construction, structure, and probabilistic functions of the prior model of a switched non-parametric dynamic system (SNDS). Given a measurement model resulting from a traditional linear regression method, we decoded finger flexion using posterior estimation that combined the prior and measurement models. Our results show that the application of the Bayesian decoding model, which incorporates prior knowledge, improves decoding performance compared to the application of a linear regression model, which does not incorporate prior knowledge. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.

  12. Joint Source and Channel Decoding for Variable Length Encoded Turbo Codes

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2007-11-01

    Full Text Available Joint source and channel decoding (JSCD) has been proven to be an effective technique that can improve decoding performance by exploiting residual source redundancy. Most previous publications on this subject focus on a traditional coding scheme in which the source variable-length coding (VLC) is serially concatenated with a channel code. In this paper, a parallel concatenated coding scheme for the VLC combined with a turbo code is presented. By merging a symbol-level VLC trellis with a convolutional trellis, we construct a symbol-level joint trellis with compound states. Also, a solution for the symbol-by-symbol a posteriori probability (APP) decoding algorithm based on this joint trellis is derived, which leads to an iterative JSCD approach similar to the classical turbo decoder. The simulation results show that our joint source-channel en/decoding system achieves some gains, at the cost of increased decoding complexity, when compared to joint iterative decoding based on the bit-level super trellis for the separate coding system.

  13. An Improved Motion JPEG2000 Decoder for Error Concealment of Segmentation Symbol Faults

    Directory of Open Access Journals (Sweden)

    Hashemi MahmoudReza

    2008-01-01

    Full Text Available The motion JPEG2000 (MJP2) video coding standard uses intraframe compression and provides practical features such as fast coding, easy editing, and error robustness. Error robustness and concealment are among the crucial requirements of any decoder in error-prone applications. In this paper, a new error concealment method for the MJP2 decoder is proposed. The proposed method exploits the remaining temporal redundancies within MJP2 coded video sequences for error concealment. The realized MJP2 decoder is able to detect and remove the artifacts produced by highly damaged bit planes that were not detected by the segmentation symbol, one of the error resilience tools in MJP2. Simulation results indicate that the proposed decoder can effectively remove such artifacts in decoded erroneous MJP2 bit streams and improve the PSNR of the corrupted frames by up to 6.8 dB. An average improvement of 0.2 dB has been observed over a relatively large number of test cases and various error rates.

  14. An Improved Motion JPEG2000 Decoder for Error Concealment of Segmentation Symbol Faults

    Directory of Open Access Journals (Sweden)

    Mahmoud Reza Hashemi

    2008-03-01

    Full Text Available The motion JPEG2000 (MJP2) video coding standard uses intraframe compression and provides practical features such as fast coding, easy editing, and error robustness. Error robustness and concealment are among the crucial requirements of any decoder in error-prone applications. In this paper, a new error concealment method for the MJP2 decoder is proposed. The proposed method exploits the remaining temporal redundancies within MJP2 coded video sequences for error concealment. The realized MJP2 decoder is able to detect and remove the artifacts produced by highly damaged bit planes that were not detected by the segmentation symbol, one of the error resilience tools in MJP2. Simulation results indicate that the proposed decoder can effectively remove such artifacts in decoded erroneous MJP2 bit streams and improve the PSNR of the corrupted frames by up to 6.8 dB. An average improvement of 0.2 dB has been observed over a relatively large number of test cases and various error rates.

  15. Decoding bipedal locomotion from the rat sensorimotor cortex

    Science.gov (United States)

    Rigosa, J.; Panarese, A.; Dominici, N.; Friedli, L.; van den Brand, R.; Carpaneto, J.; DiGiovanna, J.; Courtine, G.; Micera, S.

    2015-10-01

    Objective. Decoding forelimb movements from the firing activity of cortical neurons has been interfaced with robotic and prosthetic systems to replace lost upper limb functions in humans. Despite the potential of this approach to improve locomotion and facilitate gait rehabilitation, decoding lower limb movement from the motor cortex has received comparatively little attention. Here, we performed experiments to identify the type and amount of information that can be decoded from neuronal ensemble activity in the hindlimb area of the rat motor cortex during bipedal locomotor tasks. Approach. Rats were trained to stand, step on a treadmill, walk overground and climb staircases in a bipedal posture. To impose this gait, the rats were secured in a robotic interface that provided support against the direction of gravity and in the mediolateral direction, but behaved transparently in the forward direction. After completion of training, rats were chronically implanted with a micro-wire array spanning the left hindlimb motor cortex to record single and multi-unit activity, and bipolar electrodes into 10 muscles of the right hindlimb to monitor electromyographic signals. Whole-body kinematics, muscle activity, and neural signals were simultaneously recorded during execution of the trained tasks over multiple days of testing. Hindlimb kinematics, muscle activity, gait phases, and locomotor tasks were decoded using offline classification algorithms. Main results. We found that the stance and swing phases of gait and the locomotor tasks were detected with accuracies as robust as 90% in all rats. Decoded hindlimb kinematics and muscle activity exhibited a larger variability across rats and tasks. Significance. Our study shows that the rodent motor cortex contains useful information for lower limb neuroprosthetic development. However, brain-machine interfaces estimating gait phases or locomotor behaviors, instead of continuous variables such as limb joint positions or speeds

  16. On Lattice Sequential Decoding for Large MIMO Systems

    KAUST Repository

    Ali, Konpal S.

    2014-04-01

    Due to their ability to provide high data rates, Multiple-Input Multiple-Output (MIMO) wireless communication systems have become increasingly popular. Decoding of these systems with acceptable error performance is computationally very demanding. In the case of large overdetermined MIMO systems, we employ the Sequential Decoder using the Fano Algorithm. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. We attempt to bound the error by bounding the bias, using the minimum distance of a lattice. Also, a particular trend is observed with increasing SNR: a region of low complexity and high error, followed by a region of high complexity and falling error, and finally a region of low complexity and low error. For lower bias values, the stages of the trend are incurred at lower SNR than for higher bias values. This has the important implication that a low enough bias value, at low to moderate SNR, can result in low error and low complexity even for large MIMO systems. Our work is compared against Lattice Reduction (LR) aided Linear Decoders (LDs). Another impressive observation for low bias values that satisfy the error bound is that the Sequential Decoder's error is seen to fall with increasing system size, while it grows for the LR-aided LDs. For the case of large underdetermined MIMO systems, Sequential Decoding with two preprocessing schemes is proposed – 1) Minimum Mean Square Error Generalized Decision Feedback Equalization (MMSE-GDFE) preprocessing, and 2) MMSE-GDFE preprocessing followed by Lattice Reduction and Greedy Ordering. Our work is compared against previous work which employs Sphere Decoding preprocessed using MMSE-GDFE, Lattice Reduction and Greedy Ordering. For the case of large systems, this results in high complexity and difficulty in choosing the sphere radius. Our schemes

  17. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    Convolution encoders and Viterbi decoders are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to provide a detailed analysis, this paper deals with the implementation of a convolution encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
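
    For reference, a toy rate-1/2, constraint-length-3 convolutional encoder and hard-decision Viterbi decoder are sketched below; the paper targets constraint lengths 7 to 9 on an SOPC, so this only illustrates the algorithm, not that design.

```python
# Rate-1/2, K=3 convolutional encoder (generators 7, 5 octal) and a
# hard-decision Viterbi decoder over the resulting 4-state trellis.

G = [0b111, 0b101]   # generator polynomials (7, 5)
K = 3                # constraint length
N_STATES = 1 << (K - 1)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend(parity(reg & g) for g in G)   # one output bit per generator
        state = reg >> 1
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)      # start in state 0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), len(G)):
        chunk = received[i:i + len(G)]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expected = [parity(reg & g) for g in G]
                nxt = reg >> 1
                m = metric[state] + sum(e != r for e, r in zip(expected, chunk))
                if m < new_metric[nxt]:          # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(N_STATES), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tail = [0] * (K - 1)                      # flush the encoder back to state 0
coded = encode(msg + tail)
coded[3] ^= 1                             # flip one channel bit
decoded = viterbi(coded)[:len(msg)]
print(decoded == msg)                     # single error corrected -> True
```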

  18. Tv & video engineer's reference book

    CERN Document Server

    Jackson, K G

    1991-01-01

    TV & Video Engineer's Reference Book presents an extensive examination of the basic television standards and broadcasting spectrum. It discusses the fundamental concepts in analogue and digital circuit theory and addresses engineering mathematics, formulas, and calculations. Some of the topics covered in the book are conductors and insulators; passive components; alternating current circuits; broadcast transmission; radio frequency propagation; electron optics in the cathode ray tube; color encoding and decoding systems; television transmitters; and remote supervision of unatten

  19. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool that was introduced since the establishment of the H.264/AVC video coding standard, for compensating the temporal illumination change in motion estimation and compensation. WP parameters, including a multiplicative weight and an additive offset for each reference frame, are required to be estimated and transmitted to the decoder by slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce the overhead. Therefore, WP parameter prediction is crucial to research works or applications, which are related to WP. Prior art has been suggested to further improve the WP parameter prediction by implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.

  20. Analysis of error floor of LDPC codes under LP decoding over the BSC

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi [UNIV OF AZ; Vasic, Bane [UNIV OF AZ; Stepanov, Mikhail [UNIV OF AZ

    2009-01-01

    We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not a codeword. We propose an efficient algorithm termed the instanton search algorithm (ISA) which, given a random input, generates a set of flips called the BSC-instanton, and prove that: (a) the LP decoder fails for any set of flips with support vector including an instanton; (b) for any input, the algorithm outputs an instanton in a number of steps upper-bounded by twice the number of flips in the input. We obtain the number of unique instantons of different sizes by running the ISA a sufficient number of times. We then use the instanton statistics to predict the performance of LP decoding over the BSC in the error floor region. We also propose an efficient semi-analytical method to predict the performance of LP decoding over a large range of transition probabilities of the BSC.

  1. Toric Codes, Multiplicative Structure and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    Long linear codes constructed from toric varieties over finite fields, their multiplicative structure and decoding. The main theme is the inherent multiplicative structure on toric codes. The multiplicative structure allows for decoding, resembling the decoding of Reed-Solomon codes, and aligns with decoding by error-correcting pairs. We have used the multiplicative structure on toric codes to construct linear secret sharing schemes with strong multiplication via Massey's construction, generalizing the Shamir linear secret sharing schemes constructed from Reed-Solomon codes. We have constructed quantum error-correcting codes from toric surfaces by the Calderbank-Shor-Steane method.

  2. Application of Beyond Bound Decoding for High Speed Optical Communications

    DEFF Research Database (Denmark)

    Li, Bomin; Larsen, Knud J.; Vegas Olmos, Juan José

    2013-01-01

    This paper studies the application of the beyond-bound decoding method for high-speed optical communications. This hard-decision decoding method outperforms the traditional minimum distance decoding method, with a total net coding gain of 10.36 dB.

  3. Dynamic configuration management of a multi-standard and multi-mode reconfigurable multi-ASIP architecture for turbo decoding

    Science.gov (United States)

    Lapotre, Vianney; Gogniat, Guy; Baghdadi, Amer; Diguet, Jean-Philippe

    2017-12-01

    The multiplication of connected devices goes along with a large variety of applications and traffic types with diverse requirements. Accompanying this connectivity evolution, recent years have seen considerable evolution of wireless communication standards in the domains of mobile telephone networks, local/wide wireless area networks, and Digital Video Broadcasting (DVB). In this context, intensive research has been conducted to provide flexible turbo decoders targeting high throughput, multi-mode and multi-standard operation, and power efficiency. However, flexible turbo decoder implementations have not often considered dynamic reconfiguration issues in this context, which requires high-speed configuration switching. Starting from this assessment, this paper proposes the first solution that allows frame-by-frame run-time configuration management of a multi-processor turbo decoder without compromising decoding performance.

  4. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  5. Utilizing Cross-Layer Information to Improve Performance in JPEG2000 Decoding

    Directory of Open Access Journals (Sweden)

    Hannes Persson

    2007-01-01

    Full Text Available We focus on wireless multimedia communication and investigate how cross-layer information can be used to improve performance at the application layer, using JPEG2000 as an example. The cross-layer information is in the form of soft information from the physical layer. The soft information, which is supplied by a soft decision demodulator, yields reliability measures for the received bits and is fed into two soft input iterative JPEG2000 image decoders. When errors are detected with the error detecting mechanisms in JPEG2000, the decoders utilize the soft information to point out likely transmission errors. Hence, the decoders can correct errors and increase the image quality without making time-consuming retransmissions. We believe that the proposed decoding method utilizing soft information is suitable for a general IP-based network and that it keeps the principles of a layered structure of the protocol stack intact. Further, experimental results with images transmitted over a simulated wireless channel show that a simple decoding algorithm that utilizes soft information can give high gains in image quality compared to the standard hard-decision decoding.

  6. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.

  7. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Luca Fanucci

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or “hazards” between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of “idle” cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.
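
    A sketch of the layered (row-by-row) min-sum schedule that these architectures pipeline: posterior LLRs are refreshed after every check, so later layers in the same iteration already see updated values, which is the source of the convergence speed-up mentioned above. The tiny parity-check matrix and channel LLRs below are toy values, not from the paper.

```python
# Row-layered min-sum LDPC decoding on a toy parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def layered_min_sum(llr, H, n_iter=5):
    L = llr.astype(float).copy()              # posterior LLRs
    R = np.zeros_like(H, dtype=float)         # check-to-variable messages
    rows = [np.flatnonzero(h) for h in H]
    for _ in range(n_iter):
        for c, vars_c in enumerate(rows):     # one "layer" per check row
            Q = L[vars_c] - R[c, vars_c]      # variable-to-check messages
            for i, v in enumerate(vars_c):
                others = np.delete(Q, i)
                R[c, v] = np.prod(np.sign(others)) * np.min(np.abs(others))
                L[v] = Q[i] + R[c, v]         # immediate posterior update
        hard = (L < 0).astype(int)
        if not (H @ hard % 2).any():          # all parity checks satisfied
            break
    return hard

llr = np.array([2.5, -0.8, 3.1, 2.2, 1.9, 2.7, 2.4])   # all-zero codeword sent,
print(layered_min_sum(llr, H))                           # one unreliable bit flipped
```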

  8. Techniques and Architectures for Hazard-Free Semi-Parallel Decoding of LDPC Codes

    Directory of Open Access Journals (Sweden)

    Rovini Massimo

    2009-01-01

    Full Text Available The layered decoding algorithm has recently been proposed as an efficient means for the decoding of low-density parity-check (LDPC) codes, thanks to the remarkable improvement (2x) in the convergence speed of the decoding process. However, pipelined semi-parallel decoders suffer from violations or "hazards" between consecutive updates, which not only violate the layered principle but also enforce the loops in the code, thus spoiling the error correction performance. This paper describes three different techniques to properly reschedule the decoding updates, based on the careful insertion of "idle" cycles, to prevent the hazards of the pipeline mechanism. Also, different semi-parallel architectures of a layered LDPC decoder suitable for use with such techniques are analyzed. Then, taking the LDPC codes for the wireless local area network (IEEE 802.11n) as a case study, a detailed analysis of the performance attained with the proposed techniques and architectures is reported, and results of the logic synthesis on a 65 nm low-power CMOS technology are shown.

  9. Satellite demodulator and telecommand decoder

    Science.gov (United States)

    Pereiradelucena, Antonio Macilio

    1986-11-01

    This work presents the design, analysis and test results of the BPSK/NRZ-L demodulator and telecommand decoder of the data-collecting satellite of the Missao Espacial Completa Brasileira (MECB). The constraints imposed by the space application motivate the adoption of nonconventional solutions. Basically, the innovations are the parallel subcarrier and bit synchronization and the replacement of multipliers by exclusive-or gates operating on digital signals. These innovations offer advantages in speed, power consumption and reliability. A partial and original mathematical analysis of the adopted solutions is presented.

  10. A Smart Video Coding Method for Time Lag Reduction in Telesurgery

    National Research Council Canada - National Science Library

    Sun, Mingui; Liu, Qiang; Xu, Jian; Kassam, Amin; Enos, Sharon E; Marchessault, Ronald; Gilbert, Gary; Sclabassi, Robert J

    2006-01-01

    .... These advances have made remotely operable telemedicine possible. However, a key technology that rapidly encodes, transmits, and decodes surgical video with the minimum round-trip delay and the least influence by network jitter...

  11. The Half-Duplex AWGN Single-Relay Channel: Full Decoding or Partial Decoding?

    CERN Document Server

    Ong, Lawrence; Kellett, Christopher M

    2011-01-01

    This paper compares the partial-decode-forward and the complete-decode-forward coding strategies for the half-duplex Gaussian single-relay channel. We analytically show that partial-decode-forward outperforms the more straightforward complete-decode-forward by at most 12.5%, for any channel parameters. Furthermore, in the following asymptotic cases, complete-decode-forward achieves the same rates as partial-decode-forward does: (i) when the relay is close to the source, (ii) when the relay is close to the destination, and (iii) when the SNR is low. In addition, when the SNR increases, the normalized gap between the rates achievable by the two coding strategies diminishes. Consequently, significant performance improvements are not achieved by optimizing the fraction of data the relay should decode and forward.

  12. Outlaw distributions and locally decodable codes

    NARCIS (Netherlands)

    J. Briët (Jop); Zeev Dvir; Sivakanth Gopi

    2016-01-01

    Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In

  13. Oppositional Decoding as an Act of Resistance.

    Science.gov (United States)

    Steiner, Linda

    1988-01-01

    Argues that contributors to the "No Comment" feature of "Ms." magazine are engaging in oppositional decoding and speculates on why this is a satisfying group process. Also notes such decoding presents another challenge to the idea that mass media has the same effect on all audiences. (SD)

  14. Outlaw distributions and locally decodable codes

    NARCIS (Netherlands)

    J. Briët (Jop); Zeev Dvir; Sivakanth Gopi

    2017-01-01

    Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In

  15. Outlaw distributions and locally decodable codes

    NARCIS (Netherlands)

    J. Briët (Jop); Z. Dvir (Zeev); S. Gopi (Sivakanth)

    2017-01-01

    Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In this

  16. Data hiding for error concealment of H.264/AVC video transmission over mobile networks

    Science.gov (United States)

    Piva, Alessandro; Caldelli, Roberto; Filippini, Francesco

    2005-03-01

    In this paper, a new data hiding-based error concealment algorithm is proposed. The method increases video quality in H.264/AVC wireless video transmission and real-time applications, where retransmission is unacceptable. Data hiding is used to carry to the decoder the values of 6 inner pixels of every macroblock (MB), which are used to reconstruct lost MBs in Intra frames through a bilinear interpolation process. The side information concerning a slice is hidden in another slice of the same frame, by properly modifying some quantized AC coefficients of the Integer Transform of the sixteen 4x4 blocks composing the MBs of the host slice. At the decoder, the embedded information can be recovered from the bit-stream and used in the bilinear interpolation to reconstruct the damaged slice. This method, while allowing the system to remain fully compliant with the standard, improves performance with respect to the conventional error concealment methods adopted by H.264/AVC, in terms of visual quality and Y-PSNR. In particular, it is possible to improve the result of the interpolation process adopted by H.264/AVC by reducing the distance between interpolating pixels from 16 to 5.
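
    A sketch of the bilinear spatial concealment that the hidden inner pixels are meant to strengthen: each missing pixel is a distance-weighted average of the known samples in its row and column. Only the plain boundary-based version is shown here; the exact use of the 6 hidden pixels is omitted.

```python
# Bilinear concealment of a lost NxN block from its four known borders.
import numpy as np

def conceal_block(top, bottom, left, right):
    """top/bottom/left/right: 1-D arrays of known pixels bordering a lost block."""
    n = len(top)
    out = np.zeros((n, n))
    for r in range(n):
        for c in range(n):
            # inverse-distance weights to the four aligned boundary samples
            d_t, d_b, d_l, d_r = r + 1, n - r, c + 1, n - c
            w = np.array([1 / d_t, 1 / d_b, 1 / d_l, 1 / d_r])
            vals = np.array([top[c], bottom[c], left[r], right[r]], dtype=float)
            out[r, c] = np.dot(w, vals) / w.sum()
    return out

n = 16
block = conceal_block(np.full(n, 100), np.full(n, 140),
                      np.full(n, 100), np.full(n, 140))
print(block.round(1)[:2])   # smooth gradient from the 100s toward the 140s
```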

  17. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    Directory of Open Access Journals (Sweden)

    Mara Fölster

    2015-10-01

    Full Text Available Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors' and the raters' ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias.

  18. Design Space of Flexible Multigigabit LDPC Decoders

    Directory of Open Access Journals (Sweden)

    Philipp Schläfer

    2012-01-01

    Full Text Available Multigigabit LDPC decoders are demanded by standards like IEEE 802.15.3c and IEEE 802.11ad. To achieve the required high throughput while supporting the needed flexibility, sophisticated architectures are mandatory. This paper comprehensively presents the design space for flexible multigigabit LDPC applications for the first time. The influence of various design parameters on the hardware is investigated in depth. Two new decoder architectures in a 65 nm CMOS technology are presented to further explore the design space. In the past, memory domination was the bottleneck for throughputs of up to 1 Gbit/s. Our systematic investigation of column- versus row-based partially parallel decoders shows that this is no longer a bottleneck for multigigabit architectures. The evolutionary progress in flexible multigigabit LDPC decoder design is highlighted in an extensive comparison of state-of-the-art decoders.

  19. Delay Reduction for Instantly Decodable Network Coding in Persistent Channels With Feedback Imperfections

    KAUST Repository

    Douik, Ahmed S.

    2015-11-05

    This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation, in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for highly time-correlated channels.

  20. Layer-Optimized Streaming of Motion-Compensated Orthogonal Video

    OpenAIRE

    SHEN, Wenjie

    2013-01-01

    This report presents a layer-optimized streaming technique for delivering video content over the Internet using quality-scalable motion-compensated orthogonal video. We use Motion-Compensated Orthogonal Transforms (MCOT) to remove temporal and spatial redundancy. The resulting subbands are quantized and entropy coded by Embedded Block Coding with Optimized Truncation (EBCOT). Therefore, we are able to encode the input video into multiple quality layers with sequential decoding dependency. A ...

  1. Generalized instantly decodable network coding for relay-assisted networks

    KAUST Repository

    Elmahdy, Adel M.

    2013-09-01

    In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of the strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.

  2. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    area savings compared to the REA and the TBA are achieved. Furthermore, the relative area savings increase for larger decoding depths, which might be desirable for punctured codes. Based on the new algorithm a test chip has been designed and fabricated in a 2 micron CMOS process using MOSIS like...

  3. Performance of high resolution decoding with Multi-Anode Microchannel Array detectors

    Science.gov (United States)

    Kasle, David B.; Horch, Elliott P.

    1993-01-01

    The Multi-Anode Microchannel Array (MAMA) is a microchannel plate based photon counting detector with applications in ground-based and space-based astronomy. The detector electronics decode the position of each photon event, and the decoding algorithm that associates a given event with the appropriate pixel is determined by the geometry of the anode array. The standard MAMA detector has a spatial resolution set by the anode array of 25 microns, but the MCP pore resolution exceeds this. The performance of a new algorithm that halves the pixel spacing and improves the pixel spatial resolution is described. The new algorithm does not degrade the pulse-pair resolution of the detector and does not require any modifications to the detector tube. Measurements of the detector's response demonstrate that high resolution decoding yields a 60 percent enhancement in spatial resolution. Measurements of the performance of the high resolution algorithm with a 14 micron MAMA detector are also described. The parameters that control high resolution performance are discussed. Results of the application of high resolution decoding to speckle interferometry are presented.

  4. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application of the DVC principles to camera networks. Thanks to its reversed coding paradigm M-DVC enables the exploitation of inter-camera redundancy without inter-camera communication, because the frames are encoded independently. One of the key elements in DVC is the Side Information (SI) which is an estimation

  5. ARQ with sequential decoding of packetized data - Queueing analysis

    Science.gov (United States)

    Shacham, N.

    1984-10-01

    The operation of a sequential decoder in a packet-switching environment is considered. Packets arrive randomly at the decoder, and a packet is stored in a buffer if the decoder is busy upon its arrival. The decoder devotes no more than a time-out period of predetermined length to the decoding of any single packet. If packet decoding is completed within that period, the packet leaves the system. Otherwise, it is retransmitted and its decoding starts anew. While a packet is retransmitted, the decoder decodes another packet that resides in its buffer. An upper bound on the maximum rate of packets that can be supported by the channel-decoder combination is derived, and the optimum time-out that maximizes that rate is determined. A discrete-time model of the decoder's queue is presented, and the average queue length and throughput are evaluated.
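
    A deliberately simplified sketch of this queueing model: Pareto-distributed (heavy-tailed) decoding times stand in for sequential decoding effort, and attempts exceeding the time-out are aborted and retried. The arrival rate, time-out, and tail exponent are assumptions, and the overlap of retransmissions with decoding of other packets described above is ignored.

```python
# Toy FIFO model of a sequential decoder with a per-packet decoding time-out.
import random

random.seed(0)

def simulate(n_packets=20000, arrival_rate=0.25, timeout=4.0, alpha=1.5):
    t_arrival = 0.0
    server_free = 0.0
    total_delay = 0.0
    wasted = 0.0
    for _ in range(n_packets):
        t_arrival += random.expovariate(arrival_rate)   # Poisson arrivals
        start = max(t_arrival, server_free)
        while True:
            service = random.paretovariate(alpha)        # decoding effort per attempt
            if service <= timeout:
                break
            start += timeout                             # aborted attempt, then retry
            wasted += timeout
        server_free = start + service
        total_delay += server_free - t_arrival
    elapsed = server_free
    return total_delay / n_packets, wasted / elapsed

mean_delay, wasted_frac = simulate()
print(f"mean packet delay: {mean_delay:.2f}, "
      f"fraction of elapsed time in aborted attempts: {wasted_frac:.3f}")
```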

  6. Fast Erasure-and error decoding of algebraic geometry codes up to the Feng-Rao bound

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Sakata, Shojiro

    1998-01-01

    This correspondence gives an errata (that is, erasure-and-error) decoding algorithm for one-point algebraic-geometry codes up to the Feng-Rao designed minimum distance, using Sakata's multidimensional generalization of the Berlekamp-Massey algorithm and the voting procedure of Feng and Rao.

  7. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    MPEG coded video may be processed for quality assessment or postprocessed to reduce coding artifacts or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access...... to the MPEG stream. This may be used in systems and applications where the coded stream is not accessible. Detection of MPEG I-frames and DCT (discrete cosine transform) block size is presented. For the I-frames, the quantization parameters are estimated. Combining these with statistics of the reconstructed...... DCT coefficients, the PSNR is estimated from the decoded video without reference images. Tests on decoded fixed rate MPEG2 sequences demonstrate perfect detection rates and good performance of the PSNR estimation....

  8. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    possible reduction of the digital stream. Among the possible orthogonal transforms, the discrete cosine transform is the most widely used. This paper analyzes the errors of television measuring systems and of data compression protocols. The main characteristics of measuring systems are described and the sources of their errors are identified. The most effective methods of video compression are determined, and the influence of video compression errors on television measuring systems is investigated. The results obtained will increase the accuracy of such measuring systems. In a television image-quality measuring system the distortions comprise both distortions identical to those of analog systems and specific distortions arising from coding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding of the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, color blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on encoding individual image fragments with intra- and inter-prediction. The encoding/decoding process is non-linear in space and in time, because the playback quality of a sequence at the receiver depends on its preceding and succeeding frames; this can lead to an inadequate rendering of the sub-picture and of the corresponding measuring signal.

  9. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...

  10. HEVC performance and complexity for 4K video

    OpenAIRE

    Bross, Benjamin; George, Valeri; Álvarez-Mesa, Mauricio; Mayer, Tobias; Chi, Chi Ching; Brandenburg, Jens; Schierl, Thomas; Marpe, Detlev; Juurlink, Ben

    2013-01-01

    The recently finalized High-Efficiency Video Coding (HEVC) standard was jointly developed by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) to improve the compression performance of current video coding standards by 50%. Especially when it comes to transmit high resolution video like 4K over the internet or in broadcast, the 50% bitrate reduction is essential. This paper shows that real-time decoding of 4K video with a frame-level parallel deco...

  11. Quantum Public-Key Cryptography and Quantum Circuits of Viterbi Decoding

    OpenAIRE

    横尾, 隆文; 清野, 光弘; Yokoo, Takafumi; Seino, Mitsuhiro

    2014-01-01

    Quantum circuits to generate the public key defined by OTU2000 are suggested. Quantum circuits structured as n units for Shor's algorithm are discussed. Public keys calculated by a quantum computer implementing the quantum circuits proposed in this paper are generated in polynomial time by using the discrete logarithm transformation. Quantum circuits programmed with Grover's algorithm, which perform Viterbi decoding on the assumption that communications are realized by OTU...

  12. Multiscale brain-machine interface decoders.

    Science.gov (United States)

    Han-Lin Hsieh; Shanechi, Maryam M

    2016-08-01

    Brain-machine interfaces (BMI) have vastly used a single scale of neural activity, e.g., spikes or electrocorticography (ECoG), as their control signal. New technology allows for simultaneous recording of multiple scales of neural activity, from spikes to local field potentials (LFP) and ECoG. These advances introduce the new challenge of modeling and decoding multiple scales of neural activity jointly. Such multi-scale decoding is challenging for two reasons. First, spikes are discrete-valued and ECoG/LFP are continuous-valued, resulting in fundamental differences in statistical characteristics. Second, the time-scales of these signals are different, with spikes having a millisecond time-scale and ECoG/LFP having much slower time-scales on the order of tens of milliseconds. Here we develop a new multiscale modeling and decoding framework that addresses these challenges. Our multiscale decoder extracts information from ECoG/LFP in addition to spikes, while operating at the fast time-scale of the spikes. The multiscale decoder specializes to a Kalman filter (KF) or to a point process filter (PPF) when no spikes or ECoG/LFP are available, respectively. Using closed-loop BMI simulations, we show that compared to PPF decoding of spikes alone or KF decoding of LFP/ECoG alone, the multiscale decoder significantly improves the accuracy and error performance of BMI control and runs at the fast millisecond time-scale of the spikes. This new multiscale modeling and decoding framework has the potential to improve BMI control using simultaneous multiscale neural activity.

  13. Performance Comparisons of Improved Regular Repeat Accumulate (RA) and Irregular Repeat Accumulate (IRA) Turbo Decoding

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulkadhim Hamad

    2017-08-01

    Full Text Available In this paper, different techniques are used to improve the turbo decoding of regular repeat accumulate (RA) and irregular repeat accumulate (IRA) codes. Adaptive scaling of the a-posteriori information produced by the soft-output Viterbi decoder (SOVA) is proposed. Encoded pilots are another scheme, applied to short-length RA codes. This work also suggests a simple and fast method to generate a random interleaver whose Tanner graph is free of 4-cycles. The progressive edge growth (PEG) algorithm is also studied and simulated to create Tanner graphs with large girth.

  14. On the error probability of general trellis codes with applications to sequential decoding

    Science.gov (United States)

    Johannesson, R.

    1977-01-01

    An upper bound on the average probability of error for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n with distinction between memory length and tail length is given. It is shown that the bound is independent of the length L of the information sequence if the memory length exceeds the tail length by a specified amount that depends on L. Sequential decoding simulations using the stack algorithm were conducted to test the dependence of the undetected error probability on tail length and memory length, and the results corroborated the theory.

  15. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination across video frames containing text, text appearing on complex backgrounds, and differing font sizes. Character recognition of video subtitles is implemented using image processing algorithms such as morphological operations, blob detection and histograms of oriented gradients. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
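
    A rough Python/OpenCV sketch of the stages the abstract names (binarization, morphological operations, blob detection via connected components, and HOG features) is given below. The kernel size, area threshold, descriptor geometry and the omitted classifier are placeholder choices, not the paper's actual parameters.

        import cv2
        import numpy as np

        # HOG descriptor over 32x32 character patches (hypothetical geometry).
        hog = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

        def subtitle_character_features(frame_bgr):
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Morphological closing joins broken character strokes.
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
            closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
            # Blob detection via connected components; each blob is a character candidate.
            n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
            features = []
            for i in range(1, n):              # label 0 is the background
                x, y, w, h, area = stats[i]
                if area < 20:                  # discard tiny noise blobs (placeholder threshold)
                    continue
                patch = cv2.resize(closed[y:y + h, x:x + w], (32, 32))
                features.append(hog.compute(patch).ravel())
            return np.array(features)          # one HOG descriptor per candidate character

    In a complete pipeline the returned descriptors would be passed to a trained classifier (for example an SVM) to label each candidate character.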

  16. Spatiotemporal multiplexing: color decoding from single-array digital images

    Science.gov (United States)

    Martinez-Uriegas, Eugenio; Crane, Hewitt D.; Peters, John D.

    1997-06-01

    This is an update from our ongoing studies on a biologically inspired, digital color image representation presented here before. It has evolved into a technology named ChromaplexTM, with practical applications for both static (still pictures) and dynamic (digital video and motion pictures) images in three areas: (1) Simplification of processing, storage, compression, and transmission of digital color images. (2) Economical full-color upgrading of black and white (gray scale) image capturing systems. (3) Increase up to 4X of spatial resolution in high-quality digital image capturing systems currently designed for triplane color capture (three separate CCDs or three scans). Sample images of these applications are available on a world wide web site. In this paper we present data showing that spatial blur artifacts are worse when produced by conventional techniques of color interpolation than those produced by Chromaplex decoding. We also show that Chromaplex color decoding of CCD outputs, first demonstrated for CCDs with relatively narrow-band RGB filters, is equally applicable to digital imaging systems having CCDs with broad band subtractive color filters (like cyan, yellow, and magenta), but the often necessary color transformation from subtractive color to RGB brings in different tradeoffs.

  17. Improved compression of network coding vectors using erasure decoding and list decoding

    CERN Document Server

    Li, Shizheng

    2010-01-01

    Practical random network coding based schemes for multicast include a header in each packet that records the transformation between the sources and the terminal. The header introduces an overhead that can be significant in certain scenarios. In previous work, parity check matrices of error control codes along with error decoding were used to reduce this overhead. In this work we propose novel packet formats that allow us to use erasure decoding and list decoding. Both schemes have a smaller overhead compared to the error decoding based scheme, when the number of sources combined in a packet is not too small.

  18. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  19. Orientation decoding depends on maps, not columns

    Science.gov (United States)

    Freeman, Jeremy; Brouwer, Gijs Joost; Heeger, David J.; Merriam, Elisha P.

    2011-01-01

    The representation of orientation in primary visual cortex (V1) has been examined at a fine spatial scale corresponding to the columnar architecture. We present functional magnetic resonance imaging (fMRI) measurements providing evidence for a topographic map of orientation preference in human V1 at a much coarser scale, in register with the angular-position component of the retinotopic map of V1. This coarse-scale orientation map provides a parsimonious explanation for why multivariate pattern analysis methods succeed in decoding stimulus orientation from fMRI measurements, challenging the widely-held assumption that decoding results reflect sampling of spatial irregularities in the fine-scale columnar architecture. Decoding stimulus attributes and cognitive states from fMRI measurements has proven useful for a number of applications, but our results demonstrate that the interpretation cannot assume decoding reflects or exploits columnar organization. PMID:21451017

  20. Upper Bounds on the Number of Errors Corrected by the Koetter–Vardy Algorithm

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2007-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of Reed-Solomon codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when ...... two or more sets of received symbols have substantially different reliabilities, or when the number of alternative transmitted symbols is very small....

  1. Coarse electrocorticographic decoding of ipsilateral reach in patients with brain lesions.

    Directory of Open Access Journals (Sweden)

    Guy Hotson

    Full Text Available In patients with unilateral upper limb paralysis from strokes and other brain lesions, strategies for functional recovery may eventually include brain-machine interfaces (BMIs) using control signals from residual sensorimotor systems in the damaged hemisphere. When voluntary movements of the contralateral limb are not possible due to brain pathology, initial training of such a BMI may require use of the unaffected ipsilateral limb. We conducted an offline investigation of the feasibility of decoding ipsilateral upper limb movements from electrocorticographic (ECoG) recordings in three patients with different lesions of sensorimotor systems associated with upper limb control. We found that the first principal component (PC) of unconstrained, naturalistic reaching movements of the upper limb could be decoded from ipsilateral ECoG using a linear model. ECoG signal features yielding the best decoding accuracy were different across subjects. Performance saturated with very few input features. Decoding performances of 0.77, 0.73, and 0.66 (median Pearson's r between the predicted and actual first PC of movement) using nine signal features were achieved in the three subjects. The performance achieved here with small numbers of electrodes and computationally simple decoding algorithms suggests that it may be possible to control a BMI using ECoG recorded from damaged sensorimotor brain systems.
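
    As a rough illustration of the decoding step reported above (a linear model predicting the first principal component of reach kinematics from a handful of ECoG features, scored with Pearson's r), a scikit-learn sketch follows. The ECoG feature extraction, cross-validation and subject-specific choices of the study are not reproduced; the array names are placeholders.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def decode_first_pc(feat_train, kin_train, feat_test, kin_test):
            """feat_*: (samples, n_features) ECoG features; kin_*: (samples, 3) hand position."""
            pca = PCA(n_components=1).fit(kin_train)
            pc1_train = pca.transform(kin_train).ravel()
            pc1_test = pca.transform(kin_test).ravel()
            model = LinearRegression().fit(feat_train, pc1_train)
            pred = model.predict(feat_test)
            r, _ = pearsonr(pred, pc1_test)
            return r   # decoding performance, as in the 0.66-0.77 values reported above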

  2. Optimizations of a Turbo-Like Decoder for Deep-Space Optical Communications

    Science.gov (United States)

    Cheng, M. K.; Nakashima, M. A.; Moision, B. E.; Hamkins, J.

    2007-02-01

    The National Aeronautics and Space Administration has developed a capacity-approaching modulation and coding scheme that comprises a serial concatenation of an inner accumulate pulse-position modulation and an outer convolutional code for deep-space optical communications. Decoding of this serially concatenated pulse-position modulation (SCPPM) code uses a turbo-like algorithm. However, the inner code trellis contains many parallel edges that are not typical in standard turbo codes and, therefore, a straightforward application of classical turbo decoding is very inefficient. Here, we present various optimizations applicable in hardware implementation of the SCPPM decoder. More specifically, we feature a Super Gamma computation to efficiently handle parallel trellis edges, a "maxstar top 2" circuit fit for pipelining, a modified two's complement subtraction circuit with a shorter path delay, and a cyclic redundancy check circuit for window-based turbo decoders. We also present a polynomial interleaver where current interleaver positions can be calculated from previous positions. This recursive interleaver property enables an algorithmic realization in which no memory is needed to store the interleaver mappings. Using the featured optimizations, we implement a 6.72 megabits-per-second (Mbps) SCPPM decoder on a single field-programmable gate array (FPGA). Compared to the current data rate of 256 kbps from Mars, the SCPPM-coded scheme represents a throughput increase of more than twenty-six fold. Extension to a 50-Mbps decoder on a board with multiple FPGAs follows naturally. We show through hardware simulations that the SCPPM-coded system can operate within 1 dB of the Shannon capacity at nominal operating conditions.
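
    The recursive interleaver property mentioned above can be illustrated with a quadratic permutation polynomial pi(i) = (f1*i + f2*i^2) mod N: successive outputs differ by a quantity that itself follows a first-order recurrence, so the mapping never has to be stored. The coefficients below are hypothetical and are not the interleaver polynomial of the SCPPM code.

        def qpp_interleaver(N, f1, f2):
            """Generate pi(i) = (f1*i + f2*i*i) mod N recursively, with no stored mapping."""
            pi = 0                    # pi(0) = 0
            g = (f1 + f2) % N         # first difference g(i) = pi(i+1) - pi(i) (mod N)
            for _ in range(N):
                yield pi
                pi = (pi + g) % N     # next interleaver position from the previous one
                g = (g + 2 * f2) % N  # update the first difference

        # Example with made-up parameters: a permutation of 0..7
        print(list(qpp_interleaver(8, f1=3, f2=4)))   # [0, 7, 6, 5, 4, 3, 2, 1]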

  3. Rate-adaptive compressive video acquisition with sliding-window total-variation-minimization reconstruction

    Science.gov (United States)

    Liu, Ying; Pados, Dimitris A.

    2013-05-01

    We consider a compressive video acquisition system where frame blocks are sensed independently. Varying block sparsity is exploited in the form of individual per-block open-loop sampling rate allocation with minimal system overhead. At the decoder, video frames are reconstructed via sliding-window inter-frame total variation minimization. Experimental results demonstrate that such rate-adaptive compressive video acquisition improves noticeably the rate-distortion performance of the video stream over fixed-rate acquisition approaches.

  4. An HMM posterior decoder for sequence feature prediction that includes homology information

    DEFF Research Database (Denmark)

    Käll, Lukas; Krogh, Anders Stærmose; Sonnhammer, Erik L. L.

    2005-01-01

    Motivation: When predicting sequence features like transmembrane topology, signal peptides, coil-coil structures, protein secondary structure or genes, extra support can be gained from homologs. Results: We present here a general hidden Markov model (HMM) decoding algorithm that combines...... probabilities for sequence features of homologs by considering the average of the posterior label probability of each position in a global sequence alignment. The algorithm is an extension of the previously described ‘optimal accuracy' decoder, allowing homology information to be used. It was benchmarked using...... an HMM for transmembrane topology and signal peptide prediction, Phobius. We found that the performance was substantially increased when incorporating information from homologs. Availability: A prediction server for transmembrane topology and signal peptides that uses the algorithm is available at http...

  5. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  6. Multiresolution coding for video-based service applications

    Science.gov (United States)

    Gharavi, Hami

    1995-12-01

    The video coding and distribution approach presented in this paper has two key characteristics that make it ideal for integration of video communication services over common broadband digital networks. The modular multi-resolution nature of the coding scheme provides the necessary flexibility to accommodate future advances in video technology as well as robust distribution over various network environments. This paper will present an efficient and scalable coding scheme for video communications. The scheme is capable of encoding and decoding video signals in a hierarchical, multilayer fashion to provide video at differing quality grades. Subsequently, the utilization of this approach to enable efficient bandwidth sharing and robust distribution of video signals in multipoint communications is presented. Coding and distribution architectures are discussed which include multi-party communications in a multi-window fashion within ATM environments. Furthermore, under the limited capabilities typical of wideband/broadband access networks, this architecture accommodates important video-based service applications such as Interactive Distance Learning.

  7. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  8. Delay reduction in persistent erasure channels for generalized instantly decodable network coding

    KAUST Repository

    Sorour, Sameh

    2013-06-01

    In this paper, we consider the problem of minimizing the decoding delay of generalized instantly decodable network coding (G-IDNC) in persistent erasure channels (PECs). By persistent erasure channels, we mean erasure channels with memory, which are modeled as a Gilbert-Elliott two-state Markov model with good and bad channel states. In this scenario, the channel erasure dependence, represented by the transition probabilities of this channel model, is an important factor that could be exploited to reduce the decoding delay. We first formulate the G-IDNC minimum decoding delay problem in PECs as a maximum weight clique problem over the G-IDNC graph. Since finding the optimal solution of this formulation is NP-hard, we propose two heuristic algorithms to solve it and compare them using extensive simulations. Simulation results show that each of these heuristics outperforms the other in certain ranges of channel memory levels. They also show that the proposed heuristics significantly outperform both the optimal strict IDNC in the literature and the channel-unaware G-IDNC algorithms. © 2013 IEEE.
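
    The maximum weight clique formulation above can be approximated greedily by repeatedly adding the heaviest vertex that is adjacent to everything already selected. The sketch below is a generic textbook heuristic over an adjacency-set graph, not the channel-aware heuristics proposed in the paper; the toy graph and weights are invented.

        def greedy_max_weight_clique(adj, weight):
            """adj: dict vertex -> set of neighbours; weight: dict vertex -> float."""
            candidates = set(adj)
            clique = []
            while candidates:
                v = max(candidates, key=lambda u: weight[u])  # heaviest compatible vertex
                clique.append(v)
                candidates &= adj[v]                          # keep only vertices adjacent to v
            return clique

        # Toy IDNC-style graph: vertices are (receiver, packet) pairs; an edge means the
        # two packets can be XORed into one instantly decodable transmission.
        adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
        w = {"a": 2.0, "b": 1.5, "c": 1.0, "d": 3.0}
        print(greedy_max_weight_clique(adj, w))   # ['d']: the greedy choice, not the best clique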

  9. A 128-Channel Extreme Learning Machine-Based Neural Decoder for Brain Machine Interfaces.

    Science.gov (United States)

    Chen, Yi; Yao, Enyi; Basu, Arindam

    2016-06-01

    Currently, state-of-the-art motor intention decoding algorithms in brain-machine interfaces are mostly implemented on a PC and consume significant amount of power. A machine learning coprocessor in 0.35- μm CMOS for the motor intention decoding in the brain-machine interfaces is presented in this paper. Using Extreme Learning Machine algorithm and low-power analog processing, it achieves an energy efficiency of 3.45 pJ/MAC at a classification rate of 50 Hz. The learning in second stage and corresponding digitally stored coefficients are used to increase robustness of the core analog processor. The chip is verified with neural data recorded in monkey finger movements experiment, achieving a decoding accuracy of 99.3% for movement type. The same coprocessor is also used to decode time of movement from asynchronous neural spikes. With time-delayed feature dimension enhancement, the classification accuracy can be increased by 5% with limited number of input channels. Further, a sparsity promoting training scheme enables reduction of number of programmable weights by ≈ 2X.
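
    For reference, the Extreme Learning Machine algorithm named above reduces training to a single least-squares solve: the hidden layer is a fixed random projection and only the output weights are fitted. A plain NumPy sketch follows; nothing hardware-specific about the chip (analog processing, fixed-point weights) is modeled.

        import numpy as np

        class ELM:
            def __init__(self, n_inputs, n_hidden, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.standard_normal((n_inputs, n_hidden))  # fixed random projection
                self.b = rng.standard_normal(n_hidden)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)

            def fit(self, X, y, reg=1e-3):
                H = self._hidden(X)
                T = np.eye(int(y.max()) + 1)[y]                     # one-hot class targets
                # Ridge-regularized least squares for the output weights only.
                self.beta = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)
                return self

            def predict(self, X):
                return np.argmax(self._hidden(X) @ self.beta, axis=1)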

  10. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    Science.gov (United States)

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors-Laplacian prior for DCT coefficients, sparsity prior, and graph-signal smoothness prior for image patches-to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with the previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal outperforms the state-of-the-art soft decoding algorithms in both objective and subjective evaluations noticeably.

  11. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    to have the various views available simultaneously. However, in multiview DVC (M-DVC), the decoder can still exploit the redundancy between views, avoiding the need for inter-camera communication. The key element of every DVC decoder is the side information (SI), which can be generated by leveraging intra......-view or inter-view redundancy for multiview video data. In this paper, a novel learning-based fusion technique is proposed, which is able to robustly fuse an inter-view SI and an intra-view (temporal) SI. An inter-view SI generation method capable of identifying occluded areas is proposed and is coupled...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....

  12. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...... noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during...... decoding. A residual refinement step is also introduced to take advantage of correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC and for GOP=2 bit-rate savings up to 35% on WZ frames are achieved compared with DISCOVER....

  13. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    D. Ebenezer

    2008-10-01

    Full Text Available A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.
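
    A simplified software sketch of the two simultaneous operations described above is given below: corrupted pixels are detected (a crude salt-and-pepper rule stands in for strip/drop/blotch detection), each one is replaced by the median of the clean neighbours inside a window grown up to 5x5, and the filter falls back to the window mean when too few clean neighbours remain. The detection rule and thresholds are placeholders, not the paper's.

        import numpy as np

        def decision_based_filter(img):
            img = img.astype(np.float64)
            corrupted = (img == 0) | (img == 255)          # simplified detection rule
            out = img.copy()
            H, W = img.shape
            for y, x in zip(*np.nonzero(corrupted)):
                for r in (1, 2):                           # 3x3 window, then 5x5
                    y0, y1 = max(0, y - r), min(H, y + r + 1)
                    x0, x1 = max(0, x - r), min(W, x + r + 1)
                    win = img[y0:y1, x0:x1]
                    clean = win[~corrupted[y0:y1, x0:x1]]
                    if clean.size >= 3:                    # enough clean neighbours
                        out[y, x] = np.median(clean)
                        break
                else:
                    out[y, x] = win.mean()                 # noise excessive: switch to mean filtering
            return out.astype(np.uint8)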

  14. A Nonlinear Decision-Based Algorithm for Removal of Strip Lines, Drop Lines, Blotches, Band Missing and Impulses in Images and Videos

    Directory of Open Access Journals (Sweden)

    Manikandan S

    2008-01-01

    Full Text Available Abstract A decision-based nonlinear algorithm for removal of strip lines, drop lines, blotches, band missing, and impulses in images is presented. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and estimation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. The algorithm uses an adaptive length window whose maximum size is 5×5 to avoid blurring due to large window sizes. However, the restricted window size renders median operation less effective whenever noise is excessive in which case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of mean square error [MSE], peak-signal-to-noise ratio [PSNR], and image enhancement factor [IEF] and compared with standard algorithms already in use. Improved performance of the proposed algorithm is demonstrated. The advantage of the proposed algorithm is that a single algorithm can replace several independent algorithms required for removal of different artifacts.

  15. Performance breakdown in optimal stimulus decoding.

    Science.gov (United States)

    Lubomir Kostal; Lansky, Petr; Pilarski, Stevan

    2015-06-01

    One of the primary goals of neuroscience is to understand how neurons encode and process information about their environment. The problem is often approached indirectly by examining the degree to which the neuronal response reflects the stimulus feature of interest. In this context, the methods of signal estimation and detection theory provide the theoretical limits on the decoding accuracy with which the stimulus can be identified. The Cramér-Rao lower bound on the decoding precision is widely used, since it can be evaluated easily once the mathematical model of the stimulus-response relationship is determined. However, little is known about the behavior of different decoding schemes with respect to the bound if the neuronal population size is limited. We show that under broad conditions the optimal decoding displays a threshold-like shift in performance in dependence on the population size. The onset of the threshold determines a critical range where a small increment in size, signal-to-noise ratio or observation time yields a dramatic gain in the decoding precision. We demonstrate the existence of such threshold regions in early auditory and olfactory information coding. We discuss the origin of the threshold effect and its impact on the design of effective coding approaches in terms of relevant population size.
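
    For reference, the Cramér-Rao lower bound invoked above takes the following standard form for a population of N neurons with conditionally independent responses (the final simplification to N J_1 additionally assumes identical neurons); the threshold effect described in the abstract is that practical decoders approach this bound only beyond a critical N.

        J_N(\theta) \;=\; \sum_{i=1}^{N} \mathbb{E}\!\left[\left(\frac{\partial \ln p(r_i \mid \theta)}{\partial \theta}\right)^{2}\right] \;=\; N\, J_1(\theta),
        \qquad
        \operatorname{Var}\!\left(\hat{\theta}\right) \;\ge\; \frac{1}{J_N(\theta)} \;=\; \frac{1}{N\, J_1(\theta)}.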

  16. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    Full Text Available The presence of smoke is the first symptom of fire; therefore to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties are employed. For an efficient smoke detection in the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding a complete decoding process required for algorithms that operate in spatial domain. In the proposed algorithm the DCT Inter-transformation technique is used to increase the detection accuracy without inverse DCT operation. In the proposed scheme, firstly the candidate smoke regions are estimated using motion and color smoke properties; next using morphological operations the noise is reduced. Finally the growth properties of the candidate smoke regions are furthermore analyzed through time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates approximately equal to 4% and 2%, respectively, is obtained.

  17. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy...... Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ...... codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding and do so as a scalable-to-lossless coding....

  18. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. This is by far the most informative analog and digital video reference available, includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  19. Characterizing multivariate decoding models based on correlated EEG spectral features.

    Science.gov (United States)

    McFarland, Dennis J

    2013-07-01

    Multivariate decoding methods are popular techniques for analysis of neurophysiological data. The present study explored potential interpretative problems with these techniques when predictors are correlated. Data from sensorimotor rhythm-based cursor control experiments was analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order which produced predictors that varied in their degree of correlation (i.e., multicollinearity). The use of multivariate regression models resulted in much better prediction of target position as compared to univariate regression models. However, with lower order AR features interpretation of the spectral patterns of the weights was difficult. This is likely to be due to the high degree of multicollinearity present with lower order AR features. Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors. Comparison with univariate statistics is advisable. While multivariate decoding algorithms are very useful for prediction their utility for interpretation may be limited when predictors are correlated. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  20. ITERATIVE SOVA DECODING OVER SYMMETRIC ALPHA STABLE CHANNELS

    Directory of Open Access Journals (Sweden)

    CHUAN HSIAN, PU

    2012-06-01

    Full Text Available The Soft-Output Viterbi Algorithm (SOVA) is a recovery method for memoryless Markov chains and is used widely to decode convolutional codes. Fundamentally, conventional SOVA is designed on the basis of maximum a-posteriori probability (MAP) under Additive White Gaussian Noise (AWGN) interference. Therefore, the conventional Gaussian-based SOVA performs inefficiently and generates a high BER (bit error rate) in the presence of symmetric alpha-stable (SαS) noise. The poor performance of the Gaussian-based SOVA can be attributed to the mathematical quadratic cost function of the receiving mechanism. The quadratic cost function at the receiving end is statistically vulnerable and inefficient at guarding the SOVA component decoder against the outliers which are superimposed on the transmitted signal by a hostile SαS channel. The author studies and improves the performance of conventional SOVA with the introduction of a Bayesian Cauchy metric calculation. Substantial performance improvement was observed from Monte Carlo simulation of SOVA running on the platform of parallel turbo codes.
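
    The change described above amounts to replacing the quadratic branch metric implied by the Gaussian assumption with a Cauchy log-likelihood, which saturates on impulsive outliers instead of letting them dominate the path metric. A small numerical illustration follows; the dispersion value gamma is arbitrary, not a fitted channel parameter.

        import numpy as np

        def gaussian_branch_metric(r, s, sigma=1.0):
            return (r - s) ** 2 / (2.0 * sigma ** 2)     # grows without bound on outliers

        def cauchy_branch_metric(r, s, gamma=1.0):
            return np.log(gamma ** 2 + (r - s) ** 2)     # saturating Cauchy negative log-likelihood

        r_outlier = 50.0                                 # impulsive noise sample on a BPSK symbol s = +1
        print(gaussian_branch_metric(r_outlier, 1.0))    # ~1200.5: the outlier dominates the path metric
        print(cauchy_branch_metric(r_outlier, 1.0))      # ~7.8: the outlier is de-emphasized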

  1. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters....

  2. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

    Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate...... the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video...

  3. Multimodal decoding and congruent sensory information enhance reaching performance in subjects with cervical spinal cord injury

    Directory of Open Access Journals (Sweden)

    Elaine Anna Corbett

    2014-05-01

    Full Text Available Cervical spinal cord injury (SCI) paralyzes muscles of the hand and arm, making it difficult to perform activities of daily living. Restoring the ability to reach can dramatically improve quality of life for people with cervical SCI. Any reaching system requires a user interface to decode parameters of an intended reach, such as trajectory and target. A challenge in developing such decoders is that often few physiological signals related to the intended reach remain under voluntary control, especially in patients with high cervical injuries. Furthermore, the decoding problem changes when the user is controlling the motion of their limb, as opposed to an external device. The purpose of this study was to investigate the benefits of combining disparate signal sources to control reach in people with a range of impairments, and to consider the effect of two feedback approaches. Subjects with cervical SCI performed robot-assisted reaching, controlling trajectories with either shoulder electromyograms (EMGs) or EMGs combined with gaze. We then evaluated how reaching performance was influenced by task-related sensory feedback, testing the EMG-only decoder in two conditions. The first involved moving the arm with the robot, providing congruent sensory feedback through their remaining sense of proprioception. In the second, the subjects moved the robot without the arm attached, as in applications that control external devices. We found that the multimodal decoding algorithm worked well for all subjects, enabling them to perform straight, accurate reaches. The inclusion of gaze information, used to estimate target location, was especially important for the most impaired subjects. In the absence of gaze information, congruent sensory feedback improved performance. These results highlight the importance of proprioceptive feedback, and suggest that multi-modal decoders are likely to be most beneficial for highly impaired subjects and in tasks where such

  4. Deep learning with convolutional neural networks for EEG decoding and visualization.

    Science.gov (United States)

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-11-01

    Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  5. Deep learning with convolutional neural networks for EEG decoding and visualization

    Science.gov (United States)

    Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-01-01

    Abstract Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc. PMID:28782865

  6. Decoding the content of delayed intentions.

    Science.gov (United States)

    Gilbert, Sam J

    2011-02-23

    Rostrolateral prefrontal cortex (RLPFC) plays a key role in our ability to postpone the execution of intended behaviors until after another activity has been performed. However, it is poorly understood in computational terms. One crucial question is whether RLPFC represents the content of delayed intentions or plays a nonspecific role. In this human functional magnetic resonance imaging study (n = 32), RLPFC was active while participants stored delayed intentions during a distracting ongoing task. Multivariate analysis showed that the intended cue for future action and the intended behavior could be decoded from distinct posterior brain regions. However, the content of intentions could not be decoded from RLPFC itself. Functional connectivity analysis showed that RLPFC increased its coupling with content-representing regions during intention storage. Furthermore, trials with relatively high RLPFC activity were associated with enhanced decoding. Thus, RLPFC may enable realization of delayed intentions via interactions with posterior brain regions, which represent their content.

  7. Fast bootstrapping and permutation testing for assessing reproducibility and interpretability of multivariate fMRI decoding models.

    Directory of Open Access Journals (Sweden)

    Bryan R Conroy

    Full Text Available Multivariate decoding models are increasingly being applied to functional magnetic resonance imaging (fMRI) data to interpret the distributed neural activity in the human brain. These models are typically formulated to optimize an objective function that maximizes decoding accuracy. For decoding models trained on full-brain data, this can result in multiple models that yield the same classification accuracy, though some may be more reproducible than others--i.e. small changes to the training set may result in very different voxels being selected. This issue of reproducibility can be partially controlled by regularizing the decoding model. Regularization, along with the cross-validation used to estimate decoding accuracy, typically requires retraining many (often on the order of thousands) of related decoding models. In this paper we describe an approach that uses a combination of bootstrapping and permutation testing to construct both a measure of cross-validated prediction accuracy and model reproducibility of the learned brain maps. This requires re-training our classification method on many re-sampled versions of the fMRI data. Given the size of fMRI datasets, this is normally a time-consuming process. Our approach leverages an algorithm called fast simultaneous training of generalized linear models (FaSTGLZ) to create a family of classifiers in the space of accuracy vs. reproducibility. The convex hull of this family of classifiers can be used to identify a subset of Pareto optimal classifiers, with a single-optimal classifier selectable based on the relative cost of accuracy vs. reproducibility. We demonstrate our approach using full-brain analysis of elastic-net classifiers trained to discriminate stimulus type in an auditory and visual oddball event-related fMRI design. Our approach and results argue for a computational approach to fMRI decoding models in which the value of the interpretation of the decoding model ultimately depends upon optimizing a

  8. Multiprocessor architecture for multiple path stack sequential decoders

    Energy Technology Data Exchange (ETDEWEB)

    Belanger, N.; Haccoun, D.; Savaria, Y. [Ecole Polytechnique de Montreal, Montreal (Canada)]

    1994-02-01

    The Zigangirov-Jelinek (stack) algorithm allows decoding convolutional codes with a small computational effort compared to the optimum Viterbi algorithm. However, it suffers from a variability of the computational effort that is highly undesirable. This paper describes an architecture that implements a multiple-path-like stack algorithm for reducing this variability. This architecture is organized as a linear structure comprising special processors for extending tree nodes, called extenders, and priority stacks for storing nodes in sorted metric order. The architecture is shown to have a good potential for reducing the computational variability without adding much overhead to the system. Simulations have shown that this architecture effectively reduces computational variability as the number of processors increases, even for a relatively large number of extenders. Simulations run for up to 16 extenders have also shown that using 4 to 16 extenders is a good choice. The architecture is also shown to reduce computational variability like the multiple path algorithm does, while having a better time performance. 12 refs.

  9. No-reference analysis of decoded MPEG images for PSNR estimation and post-processing

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Li, Huiying; Andersen, Jakob Dahl

    2011-01-01

    We propose no-reference analysis and processing of DCT (Discrete Cosine Transform) coded images based on estimation of selected MPEG parameters from the decoded video. The goal is to assess MPEG video quality and perform post-processing without access to either the original stream or the code...... stream. Solutions are presented for MPEG-2 video. A method to estimate the quantization parameters of DCT coded images and MPEG I-frames at the macro-block level is presented. The results of this analysis are used for deblocking and deringing artifact reduction and no-reference PSNR estimation without...... code stream access. An adaptive deringing method using texture classification is presented. On the test set, the quantization parameters in MPEG-2 I-frames are estimated with an overall accuracy of 99.9% and the PSNR is estimated with an overall average error of 0.3dB. The deringing and deblocking...

  10. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm

    NARCIS (Netherlands)

    Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K.

    2011-01-01

    A high-speed and hardware-only algorithm using a center of mass method has been proposed for single-detector fluorescence lifetime sensing applications. This algorithm is now implemented on a field programmable gate array to provide fast lifetime estimates from a 32 × 32 low dark count 0.13 μm
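
    The center of mass method referred to above estimates the lifetime of a mono-exponential decay as the first moment of the photon arrival-time histogram. A software sketch is given below; the finite-window correction and the fixed-point arithmetic of the actual FPGA implementation are omitted.

        import numpy as np

        def cmm_lifetime(counts, bin_width, t_start=0.0):
            """Center-of-mass lifetime estimate from a photon arrival-time histogram."""
            counts = np.asarray(counts, dtype=np.float64)
            t = t_start + (np.arange(counts.size) + 0.5) * bin_width   # bin centres
            return np.sum(counts * t) / np.sum(counts) - t_start

        # Example: histogram drawn from an exponential decay with tau = 2.0 ns
        rng = np.random.default_rng(1)
        arrivals = rng.exponential(2.0, 100_000)
        hist, edges = np.histogram(arrivals, bins=256, range=(0.0, 50.0))
        print(cmm_lifetime(hist, bin_width=edges[1] - edges[0]))        # close to 2.0 ns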

  11. High resolution decoding techniques and single-chip decoders for multi-anode microchannel arrays

    Science.gov (United States)

    Kasle, David B.

    1989-01-01

    While the pixel size in a standard multianode microchannel array (MAMA) photon-counting detector is determined by the spacing of the anode array, the factor most fundamentally limiting pixel resolution is the spacing of the channels in the microchannel plate. Detector resolution can be improved by means of digitization followed by a centroiding procedure. The decoding hardware and the techniques for enhanced pixel resolution are presently discussed with a view to the factors of speed and complexity. A single-chip CMOS gate-array implementation of the decoder is devised and compared with existing multichip decoders' performance.

  12. A best-first tree-searching approach for ML decoding in MIMO system

    KAUST Repository

    Shen, Chung-An

    2012-07-28

    In MIMO communication systems maximum-likelihood (ML) decoding can be formulated as a tree-searching problem. This paper presents a tree-searching approach that combines the features of classical depth-first and breadth-first approaches to achieve close to ML performance while minimizing the number of visited nodes. A detailed outline of the algorithm is given, including the required storage. The effects of storage size on BER performance and complexity in terms of search space are also studied. Our result demonstrates that with a proper choice of storage size the proposed method visits 40% fewer nodes than a sphere decoding algorithm at signal to noise ratio (SNR) = 20dB and by an order of magnitude at 0 dB SNR.
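
    A generic best-first tree search for MIMO detection can be sketched with a priority queue over the QR-decomposed channel, capping the queue length to mimic the limited storage whose effect the paper studies. The sketch below (real-valued channel, BPSK alphabet) is an illustrative skeleton, not the authors' exact algorithm or storage organization.

        import heapq
        import numpy as np

        def best_first_detect(y, H, alphabet=(-1.0, 1.0), max_queue=64):
            n = H.shape[1]
            Q, R = np.linalg.qr(H)
            z = Q.T @ y
            # A node is (partial_metric, counter, symbols), where symbols holds the
            # values decided so far for layers n-1, n-2, ...; counter breaks metric ties.
            heap = [(0.0, 0, ())]
            counter = 1
            while heap:
                metric, _, syms = heapq.heappop(heap)        # expand the cheapest node first
                depth = len(syms)
                if depth == n:
                    # First completed leaf is the ML solution, provided nothing was
                    # dropped from the capped queue along the way.
                    return np.array(syms[::-1]), metric
                layer = n - 1 - depth                        # next antenna layer to decide
                for a in alphabet:
                    interference = sum(R[layer, j] * syms[n - 1 - j] for j in range(layer + 1, n))
                    residual = z[layer] - R[layer, layer] * a - interference
                    heapq.heappush(heap, (metric + residual ** 2, counter, syms + (a,)))
                    counter += 1
                if len(heap) > max_queue:                    # bounded storage
                    heap = heapq.nsmallest(max_queue, heap)
                    heapq.heapify(heap)
            raise RuntimeError("search exhausted")

        # Example: 2x2 real channel with BPSK symbols
        H = np.array([[1.0, 0.4], [0.3, 1.2]])
        y = H @ np.array([1.0, -1.0]) + 0.05 * np.random.default_rng(0).standard_normal(2)
        print(best_first_detect(y, H))                       # recovers [ 1., -1.]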

  13. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    Science.gov (United States)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's effort, the safety of helicopter operations has not improved at the same rate as the safety of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that the participation rates of the rotorcraft industry in the FDM programs are low due to the high costs of the Flight Data Recorder (FDR), the need of a special readout device to decode the FDR, anxiety of punitive action, etc. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR in the presence of the FDR or possibly replace the role of the FDR in the absence of the FDR. Cockpit video data is composed of image and audio data: image data contains outside views through cockpit windows and activities on the flight instrument panels, whereas audio data contains sounds of the alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm based on data mining and signal processing techniques that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring the useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude such as the bank and pitch angles, identify indicators from a flight instrument panel, and read the gauges and the numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm

  14. 3-D model-based frame interpolation for distributed video coding of static scenes.

    Science.gov (United States)

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.

  15. Accurate distortion estimation and optimal bandwidth allocation for scalable H.264 video transmission over MIMO systems.

    Science.gov (United States)

    Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan

    2009-01-01

    In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides a combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.
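
    The constrained allocation described above can be pictured as a small exhaustive search: every combination of application-layer and physical-layer parameters whose resulting rate fits the bandwidth budget is scored with the estimated end-to-end distortion, and the best one is kept. The rate and distortion models below are hypothetical user-supplied callables, since the paper's own estimators are not reproduced here.

        from itertools import product

        def allocate(qp_set, gop_set, code_rates, constellations,
                     rate_of, distortion_of, bandwidth):
            """Pick (QP, GOP, RCPT rate, constellation) minimizing estimated distortion.
            rate_of(...) and distortion_of(...) are assumed user-supplied models."""
            best, best_d = None, float("inf")
            for qp, gop, cr, M in product(qp_set, gop_set, code_rates, constellations):
                if rate_of(qp, gop, cr, M) > bandwidth:
                    continue                      # violates the bandwidth constraint
                d = distortion_of(qp, gop, cr, M)
                if d < best_d:
                    best, best_d = (qp, gop, cr, M), d
            return best, best_d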

  16. Fast decoding of codes from algebraic plane curves

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Jensen, Helge Elbrønd

    1992-01-01

    An improvement to an earlier decoding algorithm for codes from algebraic geometry is presented. For codes from an arbitrary regular plane curve the authors correct up to d*/2 - m^2/8 + m/4 - 9/8 errors, where d* is the designed distance of the code and m is the degree of the curve. The complexity of finding...... the error locator is O(n^(7/3)), where n is the length of the code. For codes from Hermitian curves the complexity of finding the error values, given the error locator, is O(n^2), and the same complexity can be obtained in the general case if only d*/2 - m^2/2 errors are corrected...

  17. Layered Wyner-Ziv video coding.

    Science.gov (United States)

    Xu, Qian; Xiong, Zixiang

    2006-12-01

    Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.

  18. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...

  19. Coder and decoder of fractal signals of comb-type structure

    Directory of Open Access Journals (Sweden)

    Politanskyi R. L.

    2014-08-01

    Full Text Available The article presents a coder and decoder of fractal signals of comb-type structure (FSCS) based on microcontrollers (MC). The coder and decoder consist of identical control modules, while their managed modules have different schematic constructions. The control module performs the forming or recognition of signals, and also carries out the function of information exchange with a computer. The basic element of the control module is a PIC18F2550 microcontroller from MicroChip. The coder of the system forms fractal signals of a given order according to the information bits coming from the computer. Samples of the calculated values of the amplitudes of the elementary rectangular pulses that constitute the structure of the fractal pulses are stored in the memory of the microcontroller as a table. The minimum bit capacity of the DAC necessary for the generation of fourth-order FSCS is four bits. The operating algorithm programmed into the controller encodes the transmitted information as two-bit symbols. The start of transmission of each byte in the communication channel is recognized by means of a timing signal. In the decoder, the microcontroller receives and decodes the incoming fractal signals, which are then transmitted to the computer. The decoder's microcontroller program determines the order of a fractal pulse from the sum of the amplitudes of the elementary pulses that constitute the fractal signal. The coder and decoder programs are written in C; in the most time-critical sections, which affect execution speed, assembler insertions are used. The coder and decoder blocks were connected by a 10 m coaxial cable with an impedance of 75 Ohm. The signals generated by the developed FSCS coder were studied using a digital oscilloscope. On the basis of the obtained spectra, it is possible

  20. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm, which mainly exploits the source statistics at the decoder based on the availability of decoder side information. One approach to DVC is feedback channel based Transform Domain Wyner–Ziv (TDWZ) video coding. The efficiency of current...... TDWZ video coding trails that of conventional video coding solutions, mainly due to the quality of side information, inaccurate noise modeling and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects...... that the proposed noise model and noise residue refinement scheme can improve the rate-distortion (RD) performance of TDWZ video coding significantly. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  1. Improved digital TV encoding and decoding system

    Science.gov (United States)

    Deutermann, A. R.

    1967-01-01

    Analog-to-digital coder and digital-to-analog decoder system handles wideband TV signals. The system incorporates solid state plug-in modular units and is operated in a VSD /Variable Slope Delta Modulation/ mode or in the conventional one-bit DM /Delta Modulation/ mode.

  2. Older Adults Have Difficulty in Decoding Sarcasm

    Science.gov (United States)

    Phillips, Louise H.; Allen, Roy; Bull, Rebecca; Hering, Alexandra; Kliegel, Matthias; Channon, Shelley

    2015-01-01

    Younger and older adults differ in performance on a range of social-cognitive skills, with older adults having difficulties in decoding nonverbal cues to emotion and intentions. Such skills are likely to be important when deciding whether someone is being sarcastic. In the current study we investigated in a life span sample whether there are…

  3. Chinese Character Decoding: A Semantic Bias?

    Science.gov (United States)

    Williams, Clay; Bever, Thomas

    2010-01-01

    The effects of semantic and phonetic radicals on Chinese character decoding were examined. Our results suggest that semantic and phonetic radicals are each available for access when a corresponding task emphasizes one or the other kind of radical. But in a more neutral lexical recognition task, the semantic radical is more informative. Semantic…

  4. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new...

  5. Euler Elastica regularized Logistic Regression for whole-brain decoding of fMRI data.

    Science.gov (United States)

    Zhang, Chuncheng; Yao, Li; Song, Sutao; Wen, Xiaotong; Zhao, Xiaojie; Long, Zhiying

    2017-09-25

    Multivariate pattern analysis (MVPA) methods have been widely applied to functional magnetic resonance imaging (fMRI) data to decode brain states. Due to the "high features, low samples" nature of fMRI data, machine learning methods are typically regularized to avoid overfitting. Total variation (TV), which uses the gradients of images, and Euler's elastica (EE), which uses both the gradient and the curvature of images, are two popular regularizations with spatial structure. In contrast to TV, EE regularization overcomes the disadvantage of TV, which favors piecewise constant images over piecewise smooth ones. In this study, we introduced EE to fMRI-based decoding for the first time and proposed the EE regularized multinomial logistic regression (EELR) algorithm for multi-class classification. We performed experimental tests on both simulated and real fMRI data to investigate the feasibility and robustness of EELR. The performance of EELR was compared with sparse logistic regression (SLR) and TV regularized LR (TVLR). The results showed that EELR was more robust to noise and achieved significantly higher classification performance than TVLR and SLR. Moreover, the forward models and weight patterns revealed that EELR detected larger brain regions that were discriminative for each task and activated by each task than TVLR. The results suggest that EELR not only performs well in brain decoding but also reveals meaningful discriminative and activation patterns. This study demonstrated that EELR has promising potential in brain decoding and discriminative/activation pattern detection.
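
    As a numerical illustration of the EE regularizer itself, the sketch below evaluates a common discretization of Euler's elastica on a 2-D weight map, the sum over voxels of (a + b * curvature^2) * |gradient|, which reduces to total variation when b = 0; in EELR a term of this kind is added to the multinomial logistic loss. The finite-difference discretization and the parameters a, b are illustrative assumptions.

        import numpy as np

        def euler_elastica_penalty(w, a=1.0, b=1.0, eps=1e-8):
            """Discrete Euler's elastica penalty of a 2-D weight map w."""
            gy, gx = np.gradient(w)                       # image gradient
            mag = np.sqrt(gx ** 2 + gy ** 2) + eps        # |grad w| (eps avoids division by zero)
            nx, ny = gx / mag, gy / mag                   # unit normal field
            curvature = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)   # div(grad w / |grad w|)
            return float(np.sum((a + b * curvature ** 2) * mag))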

  6. Population Decoding of Motor Cortical Activity using a Generalized Linear Model with Hidden States

    Science.gov (United States)

    Lawhern, Vernon; Wu, Wei; Hatsopoulos, Nicholas G.; Paninski, Liam

    2010-01-01

    Generalized linear models (GLMs) have been developed for modeling and decoding population neuronal spiking activity in the motor cortex. These models provide reasonable characterizations between neural activity and motor behavior. However, they lack a description of movement-related terms which are not observed directly in these experiments, such as muscular activation, the subject's level of attention, and other internal or external states. Here we propose to include a multi-dimensional hidden state to address these states in a GLM framework where the spike count at each time is described as a function of the hand state (position, velocity, and acceleration), truncated spike history, and the hidden state. The model can be identified by an Expectation-Maximization algorithm. We tested this new method in two datasets where spikes were simultaneously recorded using a multi-electrode array in the primary motor cortex of two monkeys. It was found that this method significantly improves the model-fitting over the classical GLM, for hidden dimensions varying from 1 to 4. This method also provides more accurate decoding of hand state (lowering the Mean Square Error by up to 29% in some cases), while retaining real-time computational efficiency. These improvements on representation and decoding over the classical GLM model suggest that this new approach could contribute as a useful tool to motor cortical decoding and prosthetic applications. PMID:20359500

  7. Performance characterization and transmission schemes for instantly decodable network coding in wireless broadcast

    Science.gov (United States)

    Yu, Mingchao; Sadeghi, Parastoo; Aboutorab, Neda

    2015-12-01

    We consider broadcasting a block of packets to multiple wireless receivers under random packet erasures using instantly decodable network coding (IDNC). The sender first broadcasts each packet uncoded once, then generates coded packets according to receivers' feedback about their missing packets. We focus on strict IDNC (S-IDNC), where each coded packet includes at most one missing packet of every receiver. But, we will also study its relation with generalized IDNC (G-IDNC), where this condition is relaxed. We characterize two fundamental performance limits of S-IDNC: (1) the number of transmissions to complete the broadcast, which measures throughput and (2) average packet decoding delay, which measures how fast each packet is decoded at each receiver on average. We derive a closed-form expression for the expected minimum number of transmissions in terms of the number of packets and receivers and the erasure probability. We prove that it is NP-hard to minimize the average packet decoding delay of S-IDNC. We also prove that the graph models of S- and G-IDNC share the same chromatic number. Next, we design efficient S-IDNC transmission schemes and coding algorithms with full/intermittent receiver feedback. We present simulation results to corroborate the developed theory and compare our schemes with existing ones.

  8. Circuit for high resolution decoding of multi-anode microchannel array detectors

    Science.gov (United States)

    Kasle, David B. (Inventor)

    1995-01-01

    A circuit for high resolution decoding of multi-anode microchannel array detectors consisting of input registers accepting transient inputs from the anode array; anode encoding logic circuits connected to the input registers; midpoint pipeline registers connected to the anode encoding logic circuits; and pixel decoding logic circuits connected to the midpoint pipeline registers is described. A high resolution algorithm circuit operates in parallel with the pixel decoding logic circuit and computes a high resolution least significant bit to enhance the multianode microchannel array detector's spatial resolution by halving the pixel size and doubling the number of pixels in each axis of the anode array. A multiplexer is connected to the pixel decoding logic circuit and allows a user selectable pixel address output according to the actual multi-anode microchannel array detector anode array size. An output register concatenates the high resolution least significant bit onto the standard ten bit pixel address location to provide an eleven bit pixel address, and also stores the full eleven bit pixel address. A timing and control state machine is connected to the input registers, the anode encoding logic circuits, and the output register for managing the overall operation of the circuit.

  9. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digital capture, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software has become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques of backup and archiving the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  10. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms to analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting number of people in the scene 2) tracking individuals in a crowd and 3) understanding behavior of a single target or multiple targets in the scene.

  11. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  12. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
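
    A small sketch of the pulse-matching idea follows: the Replay activity is treated as a time series, its local maxima are located, and the window around each maximum is correlated with a fixed-width rectangular reference pulse. The smoothing, window size and pulse width are illustrative assumptions rather than the values used in the paper.

        import numpy as np

        def replay_pulse_scores(activity, pulse_width=10):
            """activity: 1-D array of user Replay counts per time bin.
            Returns (peak index, correlation with the reference pulse) pairs."""
            x = np.convolve(np.asarray(activity, float), np.ones(3) / 3, mode="same")  # light smoothing
            win = 3 * pulse_width
            ref = np.zeros(win)
            ref[pulse_width:2 * pulse_width] = 1.0         # the rectangular "pulse of reference"
            peaks = [i for i in range(1, len(x) - 1) if x[i] >= x[i - 1] and x[i] > x[i + 1]]
            scores = []
            for p in peaks:
                lo = p - win // 2
                if lo < 0 or lo + win > len(x):
                    continue
                seg = x[lo:lo + win]
                if seg.std() == 0:
                    continue
                scores.append((p, float(np.corrcoef(seg, ref)[0, 1])))
            return scores                                  # high scores mark candidate segments of interest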

  13. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Science.gov (United States)

    Saponara, Sergio; Denolf, Kristof; Lafruit, Gauthier; Blanch, Carolina; Bormans, Jan

    2004-12-01

    The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  14. Performance and Complexity Co-evaluation of the Advanced Video Coding Standard for Cost-Effective Multimedia Communications

    Directory of Open Access Journals (Sweden)

    Saponara Sergio

    2004-01-01

    Full Text Available The advanced video codec (AVC) standard, recently defined by a joint video team (JVT) of ITU-T and ISO/IEC, is introduced in this paper together with its performance and complexity co-evaluation. While the basic framework is similar to the motion-compensated hybrid scheme of previous video coding standards, additional tools improve the compression efficiency at the expense of an increased implementation cost. As a first step to bridge the gap between the algorithmic design of a complex multimedia system and its cost-effective realization, a high-level co-evaluation approach is proposed and applied to a real-life AVC design. An exhaustive analysis of the codec compression efficiency versus complexity (memory and computational costs) project space is carried out at the early algorithmic design phase. If all new coding features are used, the improved AVC compression efficiency (up to 50% compared to current video coding technology) comes with a complexity increase of a factor 2 for the decoder and larger than one order of magnitude for the encoder. This represents a challenge for resource-constrained multimedia systems such as wireless devices or high-volume consumer electronics. The analysis also highlights important properties of the AVC framework allowing for complexity reduction at the high system level: when combining the new coding features, the implementation complexity accumulates, while the global compression efficiency saturates. Thus, a proper use of the AVC tools maintains the same performance as the most complex configuration while considerably reducing complexity. The reported results provide inputs to assist the profile definition in the standard, highlight the AVC bottlenecks, and select optimal trade-offs between algorithmic performance and complexity.

  15. Visualizing and quantifying movement from pre-recorded videos: The spectral time-lapse (STL algorithm [v1; ref status: indexed, http://f1000r.es/2qo

    Directory of Open Access Journals (Sweden)

    Christopher R Madan

    2014-01-01

    Full Text Available When studying animal behaviour within an open environment, movement-related data are often important for behavioural analyses. Therefore, simple and efficient techniques are needed to present and analyze the data of such movements. However, it is challenging to present both spatial and temporal information of movements within a two-dimensional image representation. To address this challenge, we developed the spectral time-lapse (STL) algorithm that re-codes an animal’s position at every time point with a time-specific color, and overlays it with a reference frame of the video, to produce a summary image. We additionally incorporated automated motion tracking, such that the animal’s position can be extracted and summary statistics such as path length and duration can be calculated, as well as instantaneous velocity and acceleration. Here we describe the STL algorithm and offer a freely available MATLAB toolbox that implements the algorithm and allows for a large degree of end-user control and flexibility.
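
    A compact sketch of the STL idea, using matplotlib instead of the original MATLAB toolbox: positions are drawn over a reference frame with a colour that encodes time, and simple path statistics are computed from the tracked positions. The colormap, marker size and the exact statistics are arbitrary choices.

        import numpy as np
        import matplotlib.pyplot as plt

        def spectral_time_lapse(reference_frame, xs, ys, out_path="stl.png"):
            """reference_frame: image array; xs, ys: animal position (pixels) per time point."""
            t = np.linspace(0.0, 1.0, len(xs))             # normalised time, mapped to colour
            plt.figure(figsize=(6, 4))
            plt.imshow(reference_frame, cmap="gray")
            plt.scatter(xs, ys, c=t, cmap="jet", s=6)      # time-specific colour per position
            plt.colorbar(label="normalised time")
            plt.axis("off")
            plt.savefig(out_path, dpi=150, bbox_inches="tight")
            plt.close()

        def path_statistics(xs, ys, fps):
            """Summary statistics of the kind the toolbox reports."""
            steps = np.hypot(np.diff(xs), np.diff(ys))
            velocity = steps * fps                         # instantaneous velocity (pixels/s)
            acceleration = np.diff(velocity) * fps
            return {"path_length": float(steps.sum()),
                    "duration": len(xs) / float(fps),
                    "mean_velocity": float(velocity.mean()),
                    "mean_abs_acceleration": float(np.abs(acceleration).mean())}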

  16. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  17. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video captured by non-professionals often suffers from unwanted effects such as image distortion and image blurring. Hence, many researchers study such drawbacks in order to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos, producing a stable output video free of the jitter caused by shaking of a handheld camera during recording. First, salient points are identified in each frame of the input video and processed, after which the video is optimized and stabilized; the optimization targets the quality of the video stabilization. This method has shown good results in terms of stabilization and removes distortion from output videos recorded under different circumstances.
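
    A hedged sketch of a feature-point based stabilizer in the spirit described above, using OpenCV: track salient points between frames, estimate the inter-frame motion, smooth the accumulated trajectory with a moving average, and re-warp each frame towards the smoothed path. The feature detector, smoothing radius and output codec are illustrative choices, not the authors' exact pipeline.

        import cv2
        import numpy as np

        def stabilize(in_path, out_path, smooth_radius=15):
            cap = cv2.VideoCapture(in_path)
            ok, prev = cap.read()
            if not ok:
                return
            h, w = prev.shape[:2]
            fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
            prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

            transforms, frames = [], [prev]                # all frames kept in memory for simplicity
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 20)      # salient points
                dx = dy = da = 0.0
                if pts is not None:
                    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
                    good_old = pts[status.flatten() == 1]
                    good_new = nxt[status.flatten() == 1]
                    m, _ = cv2.estimateAffinePartial2D(good_old, good_new)   # inter-frame motion
                    if m is not None:
                        dx, dy, da = m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])
                transforms.append((dx, dy, da))
                frames.append(frame)
                prev_gray = gray

            traj = np.cumsum(transforms, axis=0)                             # camera trajectory
            kernel = np.ones(2 * smooth_radius + 1) / (2 * smooth_radius + 1)
            smooth = np.column_stack([np.convolve(traj[:, i], kernel, mode="same") for i in range(3)])
            corrected = np.array(transforms) + (smooth - traj)               # jitter-free target motion

            for frame, (dx, dy, da) in zip(frames[1:], corrected):
                m = np.array([[np.cos(da), -np.sin(da), dx],
                              [np.sin(da),  np.cos(da), dy]], dtype=np.float32)
                writer.write(cv2.warpAffine(frame, m, (w, h)))
            cap.release()
            writer.release()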

  18. Design of video interface conversion system based on FPGA

    Science.gov (United States)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information exchange control unit between the FPGA and a PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through a Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with a minimal board size.

  19. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at a resolution of 640x480 for different IQ rates and modulations. Distortion often occurs in video transmission, so the received video has poor quality. Key-frame selection algorithms are flexible with respect to changes in the video, but in these methods the temporal information of a video sequence is omitted. To minimize the distortion between the original video and the received video, we add a methodology based on a sequential distortion minimization algorithm. Its aim is to create a new video, corrected sequentially, with better quality and without significant loss of content between the original and the received video. The reliability of the video transmission was observed on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also investigated with SEDIM (the Sequential Distortion Minimization Method) and without SEDIM. The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of the video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and the comparison of the proposed method show good performance. A USRP board was used as the RF front-end at 2.2 GHz.

  20. Low-complexity JPEG-based progressive video codec for wireless video transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Forchhammer, Søren

    2010-01-01

    This paper discusses the question of video codec enhancement for wireless video transmission of high definition video data taking into account constraints on memory and complexity. Starting from parameter adjustment for JPEG2000 compression algorithm used for wireless transmission and achieving...

  1. Search in Real-Time Video Games

    OpenAIRE

    Cowling, Peter I.; Buro, Michael; Bida, Michal; Botea, Adi; Bouzy, Bruno; Butz, Martin V.; Hingston, Philip; Muñoz-Avila, Hector; Nau, Dana; Sipper, Moshe

    2013-01-01

    This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (p...

  2. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, much popular content is being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious, and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
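
    A small sketch of the signature idea follows: for each randomly chosen seed image, keep the feature of the frame that is closest to it, and estimate the similarity of two videos as the fraction of seeds for which the two selected frames are close. The colour-histogram feature, L1 distance and tolerance are illustrative assumptions; the paper does not prescribe these particular choices.

        import numpy as np

        def frame_feature(frame, bins=8):
            """Simple colour-histogram feature of one frame (H x W x 3, uint8)."""
            hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3,
                                     range=((0, 256),) * 3)
            hist = hist.ravel()
            return hist / hist.sum()

        def video_signature(frames, seed_features):
            """For every seed image feature, keep the feature of the closest video frame."""
            feats = [frame_feature(f) for f in frames]
            sig = []
            for s in seed_features:
                d = [np.abs(f - s).sum() for f in feats]   # L1 distance between histograms
                sig.append(feats[int(np.argmin(d))])
            return sig

        def signature_similarity(sig_a, sig_b, tol=0.5):
            """Fraction of seeds whose selected frames are close in both videos."""
            close = [np.abs(a - b).sum() < tol for a, b in zip(sig_a, sig_b)]
            return sum(close) / len(close)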

  3. An Adaptive Fair-Distributed Scheduling Algorithm to Guarantee QoS for Both VBR and CBR Video Traffics on IEEE 802.11e WLANs

    Directory of Open Access Journals (Sweden)

    Reza Berangi

    2008-07-01

    Full Text Available Most of the centralized QoS mechanisms for WLAN MAC layer are only able to guarantee QoS parameters for CBR video traffic effectively. On the other hand, the existing distributed QoS mechanisms are only able to differentiate between various traffic streams without being able to guarantee QoS. This paper addresses these deficiencies by proposing a new distributed QoS scheme that guarantees QoS parameters such as delay and throughput for both CBR and VBR video traffics. The proposed scheme is also fair for all streams and it can adapt to the various conditions of the network. To achieve this, three fields are added to the RTS/CTS frames whose combination with the previously existing duration field of RTS/CTS frames guarantees the periodic fair adaptive access of a station to the channel. The performance of the proposed method has been evaluated with NS-2. The results showed that it outperforms IEEE 802.11e HCCA.

  4. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two...... types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT-coefficients after intra-prediction and deblocking are modeled. To obtain VQA features for H.264/AVC, we...... propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signalto-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...

  5. Video enhancement effectiveness for target detection

    Science.gov (United States)

    Simon, Michael; Fischer, Amber; Petrov, Plamen

    2011-05-01

    Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance and reconnaissance (ISR) and close-air support troops in contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.

  6. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. It first analyzes the current academic discussion on this subject and confronts the different opinions of both supporters and opponents of the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties inherent to video games in order to find the reason why the cultural elite considers video games as i...

  7. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    Science.gov (United States)

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
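
    The sketch below shows MEE training of a single-output linear filter in plain numpy: the weights are updated by ascending the information potential V = (1/N^2) * sum over i,j of G_sigma(e_i - e_j), which is equivalent to minimizing the quadratic error entropy; the multi-output case used in motor decoding applies the same update per output. Kernel width, learning rate and window size are illustrative, and the pairwise kernel evaluations in the inner loop are the kind of independent computations that the paper maps onto parallel FPGA blocks.

        import numpy as np

        def gauss(u, sigma):
            return np.exp(-u ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

        def mee_train(X, d, sigma=1.0, lr=0.1, epochs=50, window=20):
            """X: (N x p) spike-count regressors, d: length-N desired output (one hand coordinate)."""
            n, p = X.shape
            w = np.zeros(p)
            for _ in range(epochs):
                for start in range(0, n - window + 1, window):
                    xb, db = X[start:start + window], d[start:start + window]
                    e = db - xb @ w
                    de = e[:, None] - e[None, :]                 # pairwise error differences
                    dx = xb[:, None, :] - xb[None, :, :]         # pairwise input differences
                    grad = (gauss(de, sigma)[..., None] * de[..., None] * dx).sum(axis=(0, 1))
                    grad /= (window ** 2 * sigma ** 2)
                    w += lr * grad                               # ascend the information potential
            return w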

  8. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2012-10-01

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  9. On Lattice Sequential Decoding for The Unconstrained AWGN Channel

    KAUST Repository

    Abediseid, Walid

    2013-04-04

    In this paper, the performance limits and the computational complexity of the lattice sequential decoder are analyzed for the unconstrained additive white Gaussian noise channel. The performance analysis available in the literature for such a channel has been studied only under the use of the minimum Euclidean distance decoder that is commonly referred to as the lattice decoder. Lattice decoders based on solutions to the NP-hard closest vector problem are very complex to implement, and the search for low complexity receivers for the detection of lattice codes is considered a challenging problem. However, the low computational complexity advantage that sequential decoding promises, makes it an alternative solution to the lattice decoder. In this work, we characterize the performance and complexity tradeoff via the error exponent and the decoding complexity, respectively, of such a decoder as a function of the decoding parameter --- the bias term. For the above channel, we derive the cut-off volume-to-noise ratio that is required to achieve a good error performance with low decoding complexity.

  10. Decoding Local Field Potentials for Neural Interfaces.

    Science.gov (United States)

    Jackson, Andrew; Hall, Thomas M

    2017-10-01

    The stability and frequency content of local field potentials (LFPs) offer key advantages for long-term, low-power neural interfaces. However, interpreting LFPs may require new signal processing techniques which should be informed by a scientific understanding of how these recordings arise from the coordinated activity of underlying neuronal populations. We review current approaches to decoding LFPs for brain-machine interface (BMI) applications, and suggest several directions for future research. To facilitate an improved understanding of the relationship between LFPs and spike activity, we share a dataset of multielectrode recordings from monkey motor cortex, and describe two unsupervised analysis methods we have explored for extracting a low-dimensional feature space that is amenable to biomimetic decoding and biofeedback training.

  11. Algoritmi selektivnog šifrovanja - pregled sa ocenom performansi / Selective encryption algorithms: Overview with performance evaluation

    Directory of Open Access Journals (Sweden)

    Boriša Ž. Jovanović

    2010-10-01

    Full Text Available Digital multimedia content is becoming more prevalent and is increasingly exchanged over computer networks and public channels (satellite communications, wireless networks, the Internet, etc.), which are insecure media for transmitting sensitive information. Mechanisms for the cryptographic protection of image and video content are therefore gaining importance. Traditional cryptographic processing in systems transmitting this kind of information guarantees a high level of security, but has its drawbacks: a high implementation cost and considerable delays in data transmission. These shortcomings are overcome by applying selective encryption algorithms. / Digital multimedia content is becoming widely used and increasingly exchanged over computer networks and public channels (satellite, wireless networks, Internet, etc.), which are unsecured transmission media for exchanging that kind of information. Mechanisms made to encrypt image and video data are becoming more and more significant. Traditional cryptographic techniques can guarantee a high level of security but at the cost of expensive implementation and important transmission delays. These shortcomings can be overcome using selective encryption algorithms. Introduction In traditional image and video content protection schemes, called fully layered, the whole content is first compressed. Then, the compressed bitstream is entirely encrypted using a standard cipher (DES - Data Encryption Standard, IDEA - International Data Encryption Algorithm, AES - Advanced Encryption Standard, etc.). The specific characteristics of this kind of data, a high transmission rate with limited bandwidth, make standard encryption algorithms inadequate. Another limitation of traditional systems consists of altering the whole bitstream syntax, which may disable some codec functionalities at the coder on the delivery side and the decoder on the receiving side. Selective encryption is a new trend in image and video content protection. As its
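
    To make the selective-encryption idea concrete, the sketch below encrypts only the bytes of a (hypothetical) compressed bitstream that the caller marks as sensitive, for example sign bits or header bytes, and leaves everything else untouched, so that most of the stream keeps its original syntax. The keystream is derived with Python's standard library purely for illustration; a real system would use a standard cipher such as AES in counter mode and would select syntax elements of the actual codec rather than raw byte positions.

        import hashlib

        def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
            """Illustrative keystream from SHAKE-256 (a real system would use AES-CTR)."""
            return hashlib.shake_256(key + nonce).digest(length)

        def selective_encrypt(bitstream: bytes, sensitive_indices, key: bytes, nonce: bytes) -> bytes:
            """XOR-encrypt only the bytes at sensitive_indices; all other bytes pass through."""
            ks = keystream(key, nonce, len(sensitive_indices))
            out = bytearray(bitstream)
            for k, i in zip(ks, sorted(sensitive_indices)):
                out[i] ^= k
            return bytes(out)

        # Decryption is the same operation with the same keystream.
        data = b"header|coefficients|signs|payload"
        protected = selective_encrypt(data, range(0, 8), b"secret-key", b"nonce0")
        restored = selective_encrypt(protected, range(0, 8), b"secret-key", b"nonce0")
        assert restored == data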

  12. Improved prediction of bimanual movements by a two-staged (effector-then-trajectory) decoder with epidural ECoG in nonhuman primates

    Science.gov (United States)

    Choi, Hoseok; Lee, Jeyeon; Park, Jinsick; Lee, Seho; Ahn, Kyoung-ha; Kim, In Young; Lee, Kyoung-Min; Jang, Dong Pyo

    2018-02-01

    Objective. In arm movement BCIs (brain–computer interfaces), unimanual movements have been studied much more extensively than their bimanual counterpart. However, it is well known that the bimanual brain state is different from the unimanual one. The conventional methodology used in unimanual studies does not take the brain state into consideration, and therefore appears to be insufficient for decoding bimanual movements. In this paper, we propose the use of a two-staged (effector-then-trajectory) decoder, which combines the classification of movement conditions with a hand-trajectory predicting algorithm for unimanual and bimanual movements, for application in real-world BCIs. Approach. Two micro-electrode patches (32 channels) were inserted over the dura mater of the left and right hemispheres of two rhesus monkeys, covering the motor-related cortex for epidural electrocorticography (ECoG). Six motion sensors (inertial measurement units) were used to record the movement signals. The monkeys performed three types of arm movement tasks: left unimanual, right unimanual, and bimanual. To decode these movements, we used a two-staged decoder, which combines an effector classifier for four states (left unimanual, right unimanual, bimanual movement, and stationary) with a movement predictor based on regression. Main results. Using this approach, we successfully decoded both arm positions with the proposed decoder. The results showed that the decoding performance for bimanual movements was improved compared to the conventional method, which does not consider the effector, and that the decoding performance was significant and stable over a period of four months. In addition, we demonstrated the feasibility of epidural ECoG signals, which provided an adequate level of decoding accuracy. Significance. These results provide evidence that brain signals differ depending on the movement conditions or effectors. Thus, the two-staged method could be useful if BCIs are used to
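
    A minimal sketch of the two-staged idea: a first stage classifies the movement condition (left unimanual, right unimanual, bimanual, stationary) and a second stage applies a condition-specific linear regression from the ECoG feature vector to the hand trajectory. Nearest-centroid classification and ridge-style least squares below are stand-ins for whatever models one would actually train; they are not the decoders used in the paper.

        import numpy as np

        class TwoStageDecoder:
            """Stage 1: classify the effector/condition.  Stage 2: per-condition regression."""

            def fit(self, X, labels, Y):
                """X: (N x p) neural features, labels: (N,) condition ids, Y: (N x k) hand positions."""
                labels = np.asarray(labels)
                self.classes_ = np.unique(labels)
                self.centroids_ = {c: X[labels == c].mean(axis=0) for c in self.classes_}
                self.W_ = {}
                for c in self.classes_:
                    Xc = np.hstack([X[labels == c], np.ones((np.sum(labels == c), 1))])  # bias column
                    Yc = Y[labels == c]
                    reg = 1e-3 * np.eye(Xc.shape[1])
                    self.W_[c] = np.linalg.solve(Xc.T @ Xc + reg, Xc.T @ Yc)             # ridge least squares
                return self

            def predict(self, x):
                """Return (predicted condition, predicted hand state) for one feature vector."""
                c = min(self.classes_, key=lambda cls: np.linalg.norm(x - self.centroids_[cls]))
                return c, np.append(x, 1.0) @ self.W_[c]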

  13. Robust Shot Boundary Detection from Video Using Dynamic Texture

    Directory of Open Access Journals (Sweden)

    Peng Taile

    2014-03-01

    Full Text Available Video shot boundary detection is a fundamental problem in computer vision and is important for video analysis and video understanding. Existing video boundary detection methods tend to be effective only for certain types of video data and have relatively low generalization ability. We present a novel shot boundary detection algorithm based on video dynamic texture. First, two adjacent frames are read from a given video and normalized to the same size. Second, the frames are divided into sub-domains according to the same scheme, and the average gradient direction of each sub-domain is calculated to form the dynamic texture. Finally, the dynamic textures of adjacent frames are compared. We have run experiments on different types of video data. These experimental results show that our method has high generalization ability: for different types of videos, our algorithm achieves higher average precision and average recall relative to several existing algorithms.
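
    A hedged sketch of the dynamic-texture comparison: each grayscale frame is divided into a grid, the average gradient direction of every cell forms the texture descriptor, and a shot boundary is declared when the descriptor of adjacent frames differs by more than a threshold. Grid size, circular averaging and the threshold are illustrative assumptions.

        import numpy as np

        def dynamic_texture(gray, grid=(8, 8)):
            """Average gradient direction in each cell of a grid over a grayscale frame."""
            gy, gx = np.gradient(gray.astype(float))
            ang = np.arctan2(gy, gx)
            h, w = gray.shape
            gh, gw = grid
            tex = np.empty(grid)
            for i in range(gh):
                for j in range(gw):
                    cell = ang[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw]
                    # Circular mean so that angles near +pi and -pi average correctly.
                    tex[i, j] = np.arctan2(np.sin(cell).mean(), np.cos(cell).mean())
            return tex

        def shot_boundaries(gray_frames, threshold=0.8):
            """Frame indices where the dynamic texture changes abruptly."""
            textures = [dynamic_texture(f) for f in gray_frames]
            cuts = []
            for k in range(1, len(textures)):
                diff = np.abs(np.angle(np.exp(1j * (textures[k] - textures[k - 1])))).mean()
                if diff > threshold:
                    cuts.append(k)
            return cuts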

  14. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As quality of life has improved significantly, traditional 2D video technology can no longer satisfy the demand for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video playback platform. The platform consists of a server and clients. The server transmits video in different formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides streaming media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end we use our laboratory's own player. The Android player, which has all the basic functions of an ordinary player and can play normal 2D video, serves as the basic structure for redevelopment, and RTSP is implemented in this structure for communication. To achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part, which is native code called through the JNI interface so that video frames can be extracted more efficiently. The stereoscopic formats we process are left-right, top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. By employing these key technologies, the design was completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.

  15. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.

  16. A Simple Scheme for Belief Propagation Decoding of BCH and RS Codes in Multimedia Transmissions

    Directory of Open Access Journals (Sweden)

    Marco Baldi

    2008-01-01

    Full Text Available Classic linear block codes, like Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes, are widely used in multimedia transmissions, but their soft-decision decoding still represents an open issue. Among the several approaches proposed for this purpose, an important role is played by the iterative belief propagation principle, whose application to low-density parity-check (LDPC) codes makes it possible to approach the channel capacity. In this paper, we elaborate a new technique for decoding classic binary and nonbinary codes through the belief propagation algorithm. We focus on RS codes included in the recent CDMA2000 standard, and compare the proposed technique with the adaptive belief propagation approach, which is able to ensure very good performance but with higher complexity. Moreover, we consider the case of long BCH codes included in the DVB-S2 standard, for which we show that the usage of “pure” LDPC codes would provide better performance.

  17. Diversity-Aware Multi-Video Summarization

    Science.gov (United States)

    Panda, Rameswar; Mithun, Niluthpol Chowdhury; Roy-Chowdhury, Amit K.

    2017-10-01

    Most video summarization approaches have focused on extracting a summary from a single video; we propose an unsupervised framework for summarizing a collection of videos. We observe that each video in the collection may contain some information that other videos do not have, and thus exploring the underlying complementarity could be beneficial in creating a diverse informative summary. We develop a novel diversity-aware sparse optimization method for multi-video summarization by exploring the complementarity within the videos. Our approach extracts a multi-video summary which is both interesting and representative in describing the whole video collection. To efficiently solve our optimization problem, we develop an alternating minimization algorithm that minimizes the overall objective function with respect to one video at a time while fixing the other videos. Moreover, we introduce a new benchmark dataset, Tour20, that contains 140 videos with multiple human created summaries, which were acquired in a controlled experiment. Finally, by extensive experiments on the new Tour20 dataset and several other multi-view datasets, we show that the proposed approach clearly outperforms the state-of-the-art methods on the two problems: topic-oriented video summarization and multi-view video summarization in a camera network.

  18. STASSAGE: A FORTRAN program to decode stratigraphic ages from the international IGBADAT database

    Science.gov (United States)

    Al-Mishwat, Ali T.

    1997-04-01

    STASSAGE is a FORTRAN program designed to decode stratigraphic age data contained within the international igneous rock database IGBADAT, and databases with a similar structure. The algorithm decodes and translates stratigraphic information and retrieves rock samples falling within a desired stratigraphic age or stratigraphic age interval, as specified in a search condition, along with petrographic, geochemical, mineralogic, and other ancillary data pertaining to these specimens. Program core operation involves two basic steps: decoding and translation, followed by search, comparison, and retrieval. The search engine in the program utilizes a system of stratigraphic figure-of-merit (SFOM) numbers and the principle of inclusiveness. The SFOM number system assigns to every recognized stratigraphic age a numerical value that is largest for the oldest ages and smallest for the youngest ones. The principle of inclusiveness states that "All Belongs to One and One Contains All", meaning that all ages in lower-rank stratigraphic age divisions are included in the higher-rank age divisions containing them, and that a higher-rank age division contains all of its lower-rank component ages. In order to select specimens satisfying the given search criteria, the search algorithms invoke the principle of inclusiveness and use extensive alphanumeric string comparisons that apply simple greater-than and less-than Boolean operators to the SFOM numbers to decode abbreviated IGBA age nouns. Samples screened by the search filter are saved in an output file, along with their full specimen descriptions, for further analysis. Optionally, the program provides the facility to recast the retrieved data into blank-delimited spreadsheets compatible with most spreadsheet programs.
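
    No code from STASSAGE is reproduced in the record; the Python sketch below only illustrates the SFOM idea of mapping ages to monotonically ordered numbers and filtering samples with simple greater-than/less-than comparisons. The age names, SFOM values and record layout are invented placeholders, not the IGBADAT encoding.

      # Illustrative SFOM-style filtering (the age table and record format are
      # invented for the example). Larger SFOM = older age, as described above.
      SFOM = {
          "CAMBRIAN": 900,
          "ORDOVICIAN": 850,
          "DEVONIAN": 700,
          "JURASSIC": 400,
          "CRETACEOUS": 300,
          "NEOGENE": 100,
      }

      samples = [
          {"id": "S-001", "age": "JURASSIC"},
          {"id": "S-002", "age": "DEVONIAN"},
          {"id": "S-003", "age": "NEOGENE"},
      ]

      def in_interval(sample, youngest, oldest):
          """Keep a sample whose decoded age lies within [youngest, oldest],
          comparing SFOM numbers with simple greater-than / less-than tests."""
          value = SFOM[sample["age"]]
          return SFOM[youngest] <= value <= SFOM[oldest]

      selected = [s for s in samples if in_interval(s, "CRETACEOUS", "DEVONIAN")]
      print(selected)   # -> S-001 and S-002 for this toy table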

  19. Bidirectional Fano Algorithm for Lattice Coded MIMO Channels

    KAUST Repository

    Al-Quwaiee, Hessa

    2013-05-08

    Recently, lattices - a mathematical representation of infinite discrete points in Euclidean space - have become an effective way to describe and analyze communication systems, especially those that can be modeled as a linear Gaussian vector channel. Channel codes based on lattices are preferred for three reasons: lattice codes have a simple structure, they can achieve the limits of the channel, and they can be decoded efficiently using lattice decoders, which can be considered as a Closest Lattice Point Search (CLPS). Since lattice codes were introduced to the Multiple-Input Multiple-Output (MIMO) channel, the Sphere Decoder (SD) has been an efficient way to implement lattice decoders. The sphere decoder offers optimal performance at the expense of high decoding complexity, especially for low signal-to-noise ratios (SNR) and for high-dimensional systems. On the other hand, linear and non-linear receivers such as Minimum Mean Square Error (MMSE) and MMSE Decision-Feedback Equalization (DFE) provide the lowest decoding complexity but, unfortunately, poor performance. Several studies have been conducted in recent years to address the problem of designing low-complexity decoders for the MIMO channel that can achieve near-optimal performance. It was found that sequential decoders using backward tree search can bridge the gap between SD and MMSE. The sequential decoder provides an interesting performance-complexity trade-off using a bias term. Yet, the sequential decoder still suffers from high complexity for mid-to-high SNR values. In this work, we propose a new algorithm for a Bidirectional Fano sequential Decoder (BFD) in order to reduce the mid-to-high SNR complexity. Our algorithm consists of first constructing a unidirectional sequential decoder based on forward search using the QL decomposition. After that, BFD incorporates two searches, forward and backward, working simultaneously until they merge and find the closest lattice point to the

  20. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    Directory of Open Access Journals (Sweden)

    Kong Jun Jin

    2003-01-01

    Full Text Available We propose an area-efficient high-speed interleaved Viterbi decoder architecture for interleaved convolutional codes, based on the state-parallel architecture with a register-exchange path memory structure. The state-parallel architecture uses as many add-compare-select (ACS) units as the number of trellis states. By replacing each delay (or storage) element in the state metrics memory (or path metrics memory) and the path memory (or survival memory) with I delays, an interleaved Viterbi decoder is obtained, where I is the interleaving degree. The decoding speed of this decoder architecture is as fast as the operating clock speed. The latency of the proposed interleaved Viterbi decoder is the decoding depth (DD) times the interleaving degree I, which increases linearly with the interleaving degree.
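
    The record describes the architecture only in prose; as background, the Python sketch below shows hard-decision Viterbi decoding of the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal, carrying survivor paths per state in a register-exchange-like fashion. The code parameters are illustrative assumptions; the interleaving and the ACS hardware organisation of the paper are not reproduced.

      G = (0b111, 0b101)           # generator polynomials (7, 5) octal
      N_STATES = 4                 # 2**(K-1) states for constraint length K = 3

      def branch_output(state, bit):
          """Encoder output pair when `bit` enters a register holding `state`."""
          reg = (bit << 2) | state                      # bits (b_t, b_t-1, b_t-2)
          return tuple(bin(reg & g).count('1') % 2 for g in G)

      def encode(bits):
          """Reference encoder, used here only to exercise the decoder."""
          state, out = 0, []
          for b in bits:
              out.append(branch_output(state, b))
              state = (state >> 1) | (b << 1)
          return out

      def viterbi_decode(received):
          """Hard-decision Viterbi decoding with per-state survivor strings."""
          INF = float('inf')
          metric = [0] + [INF] * (N_STATES - 1)         # encoder starts in state 0
          paths = [''] * N_STATES                       # register-exchange style
          for sym in received:
              new_metric = [INF] * N_STATES
              new_paths = [''] * N_STATES
              for state in range(N_STATES):
                  if metric[state] == INF:
                      continue
                  for bit in (0, 1):
                      out = branch_output(state, bit)
                      nxt = (state >> 1) | (bit << 1)
                      m = metric[state] + sum(a != b for a, b in zip(out, sym))
                      if m < new_metric[nxt]:           # add-compare-select
                          new_metric[nxt] = m
                          new_paths[nxt] = paths[state] + str(bit)
              metric, paths = new_metric, new_paths
          best = min(range(N_STATES), key=metric.__getitem__)
          return [int(b) for b in paths[best]]

      if __name__ == '__main__':
          msg = [1, 0, 1, 1, 0, 0]
          assert viterbi_decode(encode(msg)) == msg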

  1. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be applied straightforwardly in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  2. Color image and video enhancement

    CERN Document Server

    Lecca, Michela; Smolka, Bogdan

    2015-01-01

    This text covers state-of-the-art color image and video enhancement techniques. The book examines the multivariate nature of color image/video data as it pertains to contrast enhancement, color correction (equalization, harmonization, normalization, balancing, constancy, etc.), noise removal and smoothing. It also discusses color and contrast enhancement in vision sensors and applications of image and video enhancement. The book focuses on the enhancement of color images and video, addresses algorithms for enhancing color images and video, and presents coverage of super-resolution, restoration, inpainting, and colorization.

  3. Decoding Delay Controlled Completion Time Reduction in Instantly Decodable Network Coding

    KAUST Repository

    Douik, Ahmed

    2016-06-27

    For several years, the completion time and the decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best-known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, the paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, the paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. The paper formulates the problem in this environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time than the best-known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the channel erasure probability increases.

  4. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.

  5. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. A constructive design method is provided for the proposed class of codes. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  6. An efficient HW and SW design of H.264 video compression, storage and playback on FPGA devices for handheld thermal imaging systems

    Science.gov (United States)

    Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih

    2017-05-01

    Video recording is an essential property of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storing, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed using FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding are run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a CYCLONE V FPGA, which is Altera's lowest-power FPGA family, and consumes less than 40% of the CYCLONE V 5CEFA7 FPGA resources on average.

  7. Adaptive recursive algorithm for optimal weighted suprathreshold stochastic resonance

    Science.gov (United States)

    Xu, Liyan; Duan, Fabing; Gao, Xiao; Abbott, Derek; McDonnell, Mark D.

    2017-09-01

    Suprathreshold stochastic resonance (SSR) is a distinct form of stochastic resonance, which occurs in multilevel parallel threshold arrays with no requirements on signal strength. In the generic SSR model, an optimal weighted decoding scheme shows its superiority in minimizing the mean square error (MSE). In this study, we extend the proposed optimal weighted decoding scheme to more general input characteristics by combining a Kalman filter and a least mean square (LMS) recursive algorithm, wherein the weighted coefficients can be adaptively adjusted so as to minimize the MSE without complete knowledge of input statistics. We demonstrate that the optimal weighted decoding scheme based on the Kalman-LMS recursive algorithm is able to robustly decode the outputs from the system in which SSR is observed, even for complex situations where the signal and noise vary over time.
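
    The abstract describes the Kalman-LMS adaptation only in prose; the Python sketch below shows just the LMS half on a toy threshold-array model, adapting the decoding weights to reduce the MSE. The array size, noise level and step size are illustrative assumptions, and the Kalman-filter component of the authors' scheme is omitted.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy SSR-like front end: N identical threshold devices with independent noise.
      N, T, step = 15, 20000, 1e-4
      signal = rng.uniform(-1.0, 1.0, size=T)                   # input to be recovered
      noise = rng.normal(0.0, 0.5, size=(T, N))                 # per-device noise
      outputs = (signal[:, None] + noise > 0.0).astype(float)   # 0/1 device outputs

      # LMS adaptation of the decoding weights w (plus a bias term b):
      # y_hat = outputs @ w + b, nudging the weights along the error gradient.
      w = np.zeros(N)
      b = 0.0
      for x_t, s_t in zip(outputs, signal):
          y_hat = x_t @ w + b
          err = s_t - y_hat
          w += step * err * x_t
          b += step * err

      mse = np.mean((outputs @ w + b - signal) ** 2)
      print(f"MSE after LMS adaptation: {mse:.4f}")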

  8. Optimal Routing for Decode-and-Forward based Cooperation in Wireless Networks

    CERN Document Server

    Ong, Lawrence

    2007-01-01

    We investigate cooperative wireless relay networks in which the nodes can help each other in data transmission. We study different coding strategies in the single-source single-destination network with many relay nodes. Given the myriad ways in which nodes can cooperate, there is a natural routing problem, i.e., determining an ordered set of nodes to relay the data from the source to the destination. We find that, for a given route, the decode-and-forward strategy, which is an information-theoretic cooperative coding strategy, achieves rates significantly higher than those achievable by the usual multi-hop coding strategy, which is a point-to-point non-cooperative coding strategy. We construct an algorithm to find an optimal route (in terms of rate maximization) for the decode-and-forward strategy. Since the algorithm runs in factorial time in the worst case, we propose a heuristic algorithm that runs in polynomial time. The heuristic algorithm outputs an optimal route when the nodes transmit independent codew...
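
    As an illustration of the factorial-time exhaustive search mentioned in the abstract, the Python sketch below enumerates ordered relay subsets and keeps the route with the best value under a placeholder rate function; the actual decode-and-forward rate expression and the polynomial-time heuristic of the paper are not reproduced.

      from itertools import permutations

      def route_rate(route, link_gain):
          """Placeholder rate model: the route is only as good as its weakest hop.
          This stands in for the decode-and-forward achievable rate, which in the
          paper depends on the full information-theoretic channel model."""
          hops = zip(route[:-1], route[1:])
          return min(link_gain[a][b] for a, b in hops)

      def best_route(source, destination, relays, link_gain):
          """Exhaustive search over ordered relay subsets (factorial worst case)."""
          best, best_r = float('-inf'), None
          for k in range(len(relays) + 1):
              for subset in permutations(relays, k):
                  route = (source, *subset, destination)
                  r = route_rate(route, link_gain)
                  if r > best:
                      best, best_r = r, route
          return best_r, best

      if __name__ == '__main__':
          # Hypothetical link gains for a source 's', relays 'a', 'b', destination 'd'.
          gain = {
              's': {'a': 2.0, 'b': 0.5, 'd': 0.3},
              'a': {'b': 1.5, 'd': 1.0, 's': 2.0},
              'b': {'a': 1.5, 'd': 2.0, 's': 0.5},
              'd': {},
          }
          print(best_route('s', 'd', ['a', 'b'], gain))   # best route is s-a-b-d here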

  9. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    Science.gov (United States)

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.

  10. Real-Time Neural Signals Decoding onto Off-the-Shelf DSP Processors for Neuroprosthetic Applications.

    Science.gov (United States)

    Pani, Danilo; Barabino, Gianluca; Citi, Luca; Meloni, Paolo; Raspopovic, Stanisa; Micera, Silvestro; Raffo, Luigi

    2016-09-01

    The control of upper limb neuroprostheses through the peripheral nervous system (PNS) can allow motor functions to be restored in amputees. At present, the important aspect of real-time implementation of neural decoding algorithms on embedded systems has often been overlooked, notwithstanding the impact that limited hardware resources have on the efficiency/effectiveness of any given algorithm. The present study addresses the optimization of a template-matching-based algorithm for PNS signal decoding, as a milestone toward its full real-time implementation on a floating-point digital signal processor (DSP). The proposed optimized real-time algorithm achieves up to 96% correct classification on real PNS signals acquired through LIFE electrodes in animals, and can correctly sort spikes of a synthetic cortical dataset with sufficiently uncorrelated spike morphologies (93% average correct classification), comparable to the results obtained with a top spike sorter (94% on average on the same dataset). The power consumption enables more than 24 h of processing at maximum load, and a latency model has been derived to enable a fair performance assessment. The final embodiment demonstrates real-time performance on a low-power off-the-shelf DSP, opening the way to experiments exploiting the efferent signals to control a motor neuroprosthesis.

  11. Single-layer HDR video coding with SDR backward compatibility

    Science.gov (United States)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase in picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion that generates a backward-compatible SDR video with dynamic side metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g., HEVC Main 10 compliant). The decoded SDR video can be rendered directly on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches based respectively on the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETFs).

  12. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    Science.gov (United States)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross-correlation analysis of digitised grey-scale patterns is based on at least two images that are compared with each other. The comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surroundings of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could advance homeland security. Because the cross-correlation algorithm seems, in some respects, to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
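
    The record gives no implementation details; the Python sketch below shows the elementary building block referred to above, locating a template patch in a grey-scale image by maximising the zero-normalised cross-correlation coefficient. It is a brute-force illustration, not the microDAC/nanoDAC software.

      import numpy as np

      def zncc(patch: np.ndarray, template: np.ndarray) -> float:
          """Zero-normalised cross-correlation between two equally sized patches."""
          p = patch - patch.mean()
          t = template - template.mean()
          denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
          return float((p * t).sum() / denom) if denom else 0.0

      def locate(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
          """Return the top-left offset where the template correlates best."""
          th, tw = template.shape
          best, best_pos = -2.0, (0, 0)
          for i in range(image.shape[0] - th + 1):
              for j in range(image.shape[1] - tw + 1):
                  c = zncc(image[i:i + th, j:j + tw], template)
                  if c > best:
                      best, best_pos = c, (i, j)
          return best_pos

      if __name__ == '__main__':
          rng = np.random.default_rng(1)
          img = rng.random((64, 64))
          tpl = img[20:28, 33:41].copy()         # cut a patch out of the image itself
          assert locate(img, tpl) == (20, 33)    # the exact match is recovered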

  13. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  14. Decoding of finger trajectory from ECoG using Deep Learning.

    Science.gov (United States)

    Xie, Ziqian; Schwartz, Odelia; Prasad, Abhishek

    2017-11-28

    The conventional decoding pipeline for brain-machine interfaces (BMIs) consists of chained stages of feature extraction, time-frequency analysis and statistical learning models. Each of these stages uses a different algorithm trained in a sequential manner, which makes it difficult to make the whole system adaptive. The goal was to create an adaptive online system with a single objective function and a single learning algorithm, so that the whole system can be trained in parallel to increase decoding performance. Here, we used deep neural networks consisting of convolutional neural networks (CNN) and a special kind of recurrent neural network (RNN) called long short-term memory (LSTM) to address these needs. Approach: We used electrocorticography (ECoG) data collected by Kubanek et al. The task consisted of individual finger flexions upon a visual cue. Our model combined a hierarchical feature-extractor CNN and an RNN that was able to process sequential data and recognize temporal dynamics in the neural data. The CNN was used as the feature extractor and the LSTM was used as the regression algorithm to capture the temporal dynamics of the signal. Results: We predicted the finger trajectory using ECoG signals and compared results for least angle regression (LARS), CNN-LSTM, Random Forest, an LSTM model (LSTM_HC, using hard-coded features) and a decoding pipeline consisting of band-pass filtering, energy extraction, feature selection and linear regression. The results showed that the deep learning models performed better than the commonly used linear model. The deep learning models not only gave smoother and more realistic trajectories but also learned the transition between movement and rest states. Significance: This study demonstrated a decoding network for BMI that involved a convolutional and recurrent neural network model. It integrated the feature extraction pipeline into the convolution and pooling layers and used an LSTM layer to capture the

  15. Brain-Machine Interface Control Algorithms.

    Science.gov (United States)

    Shanechi, Maryam M

    2016-12-14

    Motor brain-machine interfaces (BMI) allow subjects to control external devices by modulating their neural activity. BMIs record the neural activity, use a mathematical algorithm to estimate the subject's intended movement, actuate an external device, and provide visual feedback of the generated movement to the subject. A critical component of a BMI system is the control algorithm, termed decoder. Significant progress has been made in the design of BMI decoders in recent years resulting in proficient control in non-human primates and humans. In this review article, we discuss the decoding algorithms developed in the BMI field, with particular focus on recent designs that are informed by closed-loop control ideas. A motor BMI can be modeled as a closed-loop control system, where the controller is the brain, the plant is the prosthetic, the feedback is the biofeedback, and the control command is the neural activity. Additionally, compared to other closed-loop systems, BMIs have various unique properties. Neural activity is noisy and stochastic, and often consists of a sequence of spike trains. Neural representations of movement could be non-stationary and change over time, for example as a result of learning. We review recent decoder designs that take these unique properties into account. We also discuss the opportunities that exist at the interface of control theory, statistical inference, and neuroscience to devise a control-theoretic framework for BMI design and help develop the next-generation BMI control algorithms.

  16. Errors and erasures decoding for interference channels

    Science.gov (United States)

    Stuber, G.; Mark, J.; Blake, I.

    This paper examines the gains to be realized by using an errors-and-erasures decoding strategy on a pulse jammed channel, rather than errors-only, for concatenated Reed-Solomon/orthogonal codes. The work is largely computational and serves to quantify the gains achievable in such a situation. Previous studies have shown that the gains of errors-and-erasures over errors-only on the AWGN channel are minimal. The gains reported here for the pulse jammed channel are more substantial, particularly for the lower duty cycles.

  17. Algorithms and Algorithmic Languages.

    Science.gov (United States)

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  18. Predicting the integration of overlapping memories by decoding mnemonic processing states during learning.

    Science.gov (United States)

    Richter, Franziska R; Chanales, Avi J H; Kuhl, Brice A

    2016-01-01

    The hippocampal memory system is thought to alternate between two opposing processing states: encoding and retrieval. When present experience overlaps with past experience, this creates a potential tradeoff between encoding the present and retrieving the past. This tradeoff may be resolved by memory integration-that is, by forming a mnemonic representation that links present experience with overlapping past experience. Here, we used fMRI decoding analyses to predict when - and establish how - past and present experiences become integrated in memory. In an initial experiment, we alternately instructed subjects to adopt encoding, retrieval or integration states during overlapping learning. We then trained across-subject pattern classifiers to 'read out' the instructed processing states from fMRI activity patterns. We show that an integration state was clearly dissociable from encoding or retrieval states. Moreover, trial-by-trial fluctuations in decoded evidence for an integration state during learning reliably predicted behavioral expressions of successful memory integration. Strikingly, the decoding algorithm also successfully predicted specific instances of spontaneous memory integration in an entirely independent sample of subjects for whom processing state instructions were not administered. Finally, we show that medial prefrontal cortex and hippocampus differentially contribute to encoding, retrieval, and integration states: whereas hippocampus signals the tradeoff between encoding vs. retrieval states, medial prefrontal cortex actively represents past experience in relation to new learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Turbo Processing for Joint Channel Estimation, Synchronization, and Decoding in Coded MIMO-OFDM Systems

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available This paper proposes a turbo joint channel estimation, synchronization, and decoding scheme for coded multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. The effects of carrier frequency offset (CFO), sampling frequency offset (SFO), and channel impulse responses (CIRs) on the received samples are analyzed and explored to develop the turbo decoding process and a vector recursive least squares (RLS) algorithm for joint CIR, CFO, and SFO tracking. For burst transmission, with initial estimates derived from the preamble, the proposed scheme can operate without the need for pilot tones during the data segment. Simulation results show that the proposed turbo joint channel estimation, synchronization, and decoding scheme offers fast convergence and low mean squared error (MSE) performance over quasistatic Rayleigh multipath fading channels. The proposed scheme can be used in a coded MIMO-OFDM transceiver in the presence of multipath fading, carrier frequency offset, and sampling frequency offset to provide a bit error rate (BER) performance comparable to that in an ideal case of perfect synchronization and channel estimation over a wide range of SFO values.

  20. Turbo Processing for Joint Channel Estimation, Synchronization, and Decoding in Coded MIMO-OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ko ChiChung

    2009-01-01

    Full Text Available This paper proposes a turbo joint channel estimation, synchronization, and decoding scheme for coded multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems. The effects of carrier frequency offset (CFO), sampling frequency offset (SFO), and channel impulse responses (CIRs) on the received samples are analyzed and explored to develop the turbo decoding process and a vector recursive least squares (RLS) algorithm for joint CIR, CFO, and SFO tracking. For burst transmission, with initial estimates derived from the preamble, the proposed scheme can operate without the need for pilot tones during the data segment. Simulation results show that the proposed turbo joint channel estimation, synchronization, and decoding scheme offers fast convergence and low mean squared error (MSE) performance over quasistatic Rayleigh multipath fading channels. The proposed scheme can be used in a coded MIMO-OFDM transceiver in the presence of multipath fading, carrier frequency offset, and sampling frequency offset to provide a bit error rate (BER) performance comparable to that in an ideal case of perfect synchronization and channel estimation over a wide range of SFO values.
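
    Both records above refer to a vector RLS tracker without giving its equations; the Python sketch below is a generic textbook recursive least squares estimator with exponential forgetting, included only to illustrate the kind of recursion involved. It is not the joint CIR/CFO/SFO estimator of the paper, and all parameter values are placeholders.

      import numpy as np

      class RLS:
          """Textbook recursive least squares with exponential forgetting."""

          def __init__(self, n_params: int, forgetting: float = 0.99, delta: float = 100.0):
              self.w = np.zeros(n_params)                 # parameter estimate
              self.P = np.eye(n_params) * delta           # inverse correlation matrix
              self.lam = forgetting

          def update(self, x: np.ndarray, d: float) -> float:
              """One RLS step: regressor x, desired sample d; returns the a priori error."""
              Px = self.P @ x
              k = Px / (self.lam + x @ Px)                # gain vector
              e = d - self.w @ x                          # a priori error
              self.w += k * e
              self.P = (self.P - np.outer(k, Px)) / self.lam
              return e

      if __name__ == '__main__':
          rng = np.random.default_rng(2)
          true_w = np.array([0.8, -0.3, 0.1])             # would vary slowly in practice
          rls = RLS(n_params=3)
          for _ in range(500):
              x = rng.normal(size=3)
              d = true_w @ x + 0.01 * rng.normal()
              rls.update(x, d)
          print(np.round(rls.w, 3))                       # converges close to true_w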

  1. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers...... side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...... information for (LDPCA) decoding. The proposed decoder side techniques for side information and noise learning (SING) are integrated in a TDWZ scheme. On test sequences, the proposed SING codec robustly improves the coding efficiency of TDWZ DVC. For WZ frames using a GOP size of 2, up to 4dB improvement...

  2. Role of gender and linguistic diversity in word decoding development

    NARCIS (Netherlands)

    Verhoeven, L.T.W.; Leeuwe, J.F.J. van

    2011-01-01

    The purpose of the present study was to investigate the role of gender and linguistic diversity in the growth of Dutch word decoding skills throughout elementary school for a representative sample of children living in the Netherlands. Following a longitudinal design, the children's decoding

  3. Does Linguistic Comprehension Support the Decoding Skills of Struggling Readers?

    Science.gov (United States)

    Blick, Michele; Nicholson, Tom; Chapman, James; Berman, Jeanette

    2017-01-01

    This study investigated the contribution of linguistic comprehension to the decoding skills of struggling readers. Participants were 36 children aged between eight and 12 years, all below average in decoding but differing in linguistic comprehension. The children read passages from the Neale Analysis of Reading Ability and their first 25 miscues…

  4. Decoding Information in the Human Hippocampus: A User's Guide

    Science.gov (United States)

    Chadwick, Martin J.; Bonnici, Heidi M.; Maguire, Eleanor A.

    2012-01-01

    Multi-voxel pattern analysis (MVPA), or "decoding", of fMRI activity has gained popularity in the neuroimaging community in recent years. MVPA differs from standard fMRI analyses by focusing on whether information relating to specific stimuli is encoded in patterns of activity across multiple voxels. If a stimulus can be predicted, or decoded,…

  5. Multilingual Speakers' Problems in Decoding in a Second Language.

    Science.gov (United States)

    Kaldor, Susan; Shell, Ruth

    Through an experiment investigating the processes used by several speakers of Asian languages to decode passages by speakers of Australian English, this paper seeks to establish and categorize the types of problems encountered by multilingual speakers when decoding the speech of monolingual speakers in one of their (the multilinguals') second…

  6. Joint Synchronization Of Viterbi And Reed-Solomon Decoders

    Science.gov (United States)

    Statman, Joseph I.; Chauvin, Todd H.; Cheung, Kar-Ming; Rabkin, Jay; Belongie, Mignon L.

    1995-01-01

    Synchronization times are reduced to limit the loss of data. The scheme for decoding a received doubly encoded binary data signal provides for joint synchronization of the two decoders. It applies to a concatenated error-correcting channel-coding communication system in which, at the transmitter, data are first encoded by an interleaved Reed-Solomon code (a block code), then by a convolutional code.

  7. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    and rather strong a priori information must be invoked. Recent work by Edelman et al. (2016) has demonstrated that the introduction of a spatially focal source-space representation can improve decoding of motor imagery. In this work we explore the generality of Edelman et al.'s hypothesis by considering decoding

  8. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K

    2010-01-01

    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  9. ALGORITHMS FOR AUTOMATIC RUNWAY DETECTION ON VIDEO SEQUENCES

    Directory of Open Access Journals (Sweden)

    A. I. Logvin

    2015-01-01

    Full Text Available The article discusses an algorithm for automatic runway detection on video sequences. The main stages of the algorithm are presented, and some methods to increase the reliability of recognition are described.

  10. Unifying the Video and Question Attentions for Open-Ended Video Question Answering.

    Science.gov (United States)

    Xue, Hongyang; Zhao, Zhou; Cai, Deng

    2017-08-29

    Video question answering is an important task for scene understanding and visual data retrieval. However, current visual question answering work mainly focuses on a single static image, which is distinct from the dynamic and sequential visual data of the real world, so those approaches cannot utilize the temporal information in videos. In this paper we introduce the task of free-form open-ended video question answering. Open-ended answers enable wider applications compared with the common multiple-choice tasks in Visual-QA. We first propose a dataset for open-ended Video-QA with automatic question generation approaches. Then we propose our sequential video attention and temporal question attention models. These two models apply the attention mechanism to videos and questions while preserving the sequential and temporal structures of the guides. The two models are integrated into a unified attention model. After the video and the question are encoded, the answers are generated word by word by a decoder. Finally, we evaluate our models on the proposed dataset. The experimental results demonstrate the effectiveness of our proposed model.

  11. Regular expressions for decoding of neural network outputs.

    Science.gov (United States)

    Strauß, Tobias; Leifert, Gundram; Grüning, Tobias; Labahn, Roger

    2016-07-01

    This article proposes a convenient tool for decoding the output of neural networks trained by Connectionist Temporal Classification (CTC) for handwritten text recognition. We use regular expressions to describe the complex structures expected in the writing. The corresponding finite automata are employed to build a decoder. We analyze theoretically which calculations are relevant and which can be avoided. A great speed-up results from an approximation. We conclude that the approximation most likely fails if the regular expression does not match the ground truth which is not harmful for many applications since the low probability will be even underestimated. The proposed decoder is very efficient compared to other decoding methods. The variety of applications reaches from information retrieval to full text recognition. We refer to applications where we integrated the proposed decoder successfully. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Frame-rate performance modeling of software MPEG decoder

    Science.gov (United States)

    Ramamoorthy, Victor

    1997-01-01

    A software MPEG decoder, though attractive in terms of performance and cost, opens up new technical challenges. The most critical questions are: When does a software decoder drop a frame? How can its timing performance be predicted well ahead of implementation? It is not easy to answer these questions without introducing a stochastic model of the decoding time. With a double-buffering scheme, fluctuations in decoding time can be smoothed out to a large extent; however, dropping of frames cannot be totally eliminated. New ideas of slip and asymptotic synchronous locking are shown to answer critical design questions for a software decoder. Beneath the troubled world of frame droppings lies the beauty and harmony of our stochastic formulation.

  13. FPGA Prototyping of RNN Decoder for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Salcic Zoran

    2006-01-01

    Full Text Available This paper presents the prototyping of a recurrent-type neural network (RNN) convolutional decoder using system-level design specification and a design flow that enables easy mapping to the target FPGA architecture. Implementation and performance measurement results have shown that an RNN decoder for hard-decision decoding, coupled with a simple hard-limiting neuron activation function, results in very low complexity, which easily fits into a standard Altera FPGA. Moreover, the design methodology allowed modeling of a complete testbed for prototyping RNN decoders in simulation and in a real-time environment (the same FPGA), thus enabling evaluation of the BER performance characteristics of the decoder for various communication channel conditions in real time.

  14. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach.

    Science.gov (United States)

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.

  15. A New profiling and pipelining approach for HEVC Decoder on ZedBoard Platform

    Directory of Open Access Journals (Sweden)

    Habib Smei

    2017-10-01

    Full Text Available New multimedia applications such as mobile video, high-quality Internet video and digital television require high-performance encoding of video signals to meet technical constraints such as runtime, bandwidth and latency. The video coding standard H.265/HEVC (High Efficiency Video Coding) was developed by JCT-VC to replace the MPEG-2, MPEG-4 and H.264 codecs and to respond to these new functional constraints. Currently, there are several implementations of this standard. Some implementations are based on software acceleration techniques, others on purely hardware acceleration techniques, and some combine the two. In software implementations, several techniques are used to decrease the video coding and decoding time, such as data parallelism, task parallelism and combined solutions. On the other hand, in order to fulfill the computational demands of the new standard, HEVC includes several coding tools that allow each picture to be divided into several partitions that can be processed in parallel, without degrading either the quality or the bitrate. In this paper, we adapt one of these approaches, the Tile coding tool, to propose a pipelined execution approach for the HEVC/H.265 decoder application in its HM Test Model version. This approach is based on fine-grained profiling using code injection techniques supported by standard profiling tools such as Gprof and Valgrind. Profiling allowed us to divide the functions into four groups according to three criteria: the first criterion is based on minimizing communication between the different function groups, so as to have minimal inter-group communication and maximum intra-group communication; the second criterion is load balancing between processors; the third criterion is parallelism between functions. The experiments carried out in this paper are based on the ZedBoard platform, which integrates a Xilinx Zynq chip with a dual-core ARM A9. We start with a purely

  16. Optimization and simulation of a New Low Density Parity-Check Decoder using the Syndrome Block

    Directory of Open Access Journals (Sweden)

    Lagrini Lakbir

    2016-02-01

    Full Text Available Low-Density Parity-Check (LDPC) codes are one of the hottest topics in coding theory nowadays. Equipped with very fast encoding and decoding algorithms, LDPC codes are very attractive both theoretically and practically. In this article, we present a simulation of a work that has been accepted in an international journal. The new algorithm allows us to correct errors quickly and without iterations. We show that the proposed algorithm simulation can be applied to both regular and irregular LDPC codes. First, we developed the design of the syndrome block; second, we generated and simulated the hardware description language source code using Quartus software tools; and finally, we show the low complexity of the approach compared to the basic algorithm.
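
    The article does not include its HDL source; purely to make the role of a syndrome block concrete, the following Python sketch computes s = H·r (mod 2) and corrects a single-bit error by matching the syndrome to a column of H. The (7,4) Hamming parity-check matrix is a stand-in chosen for brevity, not the LDPC code from the article.

      import numpy as np

      # Parity-check matrix of the (7,4) Hamming code, used here only to make the
      # syndrome computation concrete; LDPC matrices are much larger and sparse.
      H = np.array([
          [1, 0, 1, 0, 1, 0, 1],
          [0, 1, 1, 0, 0, 1, 1],
          [0, 0, 0, 1, 1, 1, 1],
      ])

      def syndrome(received: np.ndarray) -> np.ndarray:
          return (H @ received) % 2

      def correct_single_error(received: np.ndarray) -> np.ndarray:
          """Flip the unique bit whose H-column matches the syndrome, if any."""
          s = syndrome(received)
          fixed = received.copy()
          if s.any():
              for col in range(H.shape[1]):
                  if np.array_equal(H[:, col], s):
                      fixed[col] ^= 1
                      break
          return fixed

      if __name__ == '__main__':
          codeword = np.array([1, 0, 1, 1, 0, 1, 0])
          assert not syndrome(codeword).any()              # valid codeword
          noisy = codeword.copy()
          noisy[4] ^= 1                                    # single-bit channel error
          assert np.array_equal(correct_single_error(noisy), codeword)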

  17. Photonic circuits for iterative decoding of a class of low-density parity-check codes

    Science.gov (United States)

    Pavlichin, Dmitri S.; Mabuchi, Hideo

    2014-10-01

    Photonic circuits in which stateful components are coupled via guided electromagnetic fields are natural candidates for resource-efficient implementation of iterative stochastic algorithms based on the propagation of information around a graph. Conversely, such message-passing algorithms suggest novel circuit architectures for signal processing and computation that are well matched to nanophotonic device physics. Here, we construct and analyze a quantum optical model of a photonic circuit for iterative decoding of a class of low-density parity-check (LDPC) codes called expander codes. Our circuit can be understood as an open quantum system whose autonomous dynamics map straightforwardly onto the subroutines of an LDPC decoding scheme, with several attractive features: it can operate in the ultra-low-power regime of photonics in which quantum fluctuations become significant, it is robust to noise and component imperfections, it achieves performance comparable to known iterative algorithms for this class of codes, and it provides an instructive example of how nanophotonic cavity quantum electrodynamic components can enable useful new information technology even if the solid-state qubits on which they are based are heavily dephased and cannot support large-scale entanglement.

  18. Performance Analysis of Faulty Gallager-B Decoding of QC-LDPC Codes with Applications

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2014-06-01

    Full Text Available In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. Our analysis is restricted to LDPC codes constructed from circulant matrices (QC-LDPC codes). Using Monte Carlo simulation we investigate the effects of different code parameters on coding system performance, under a binary symmetric communication channel and an independent transient-faults model. One possible application of the presented analysis in designing memory architectures with unreliable components is considered.
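
    The paper evaluates Gallager-B under faulty message computation by Monte Carlo simulation; as background only, the Python sketch below is a plain hard-decision Gallager-B decoder over a binary symmetric channel on a toy parity-check matrix. The matrix, the flip threshold b and the majority-vote tie rule are assumptions chosen for illustration; the QC structure and the unreliable-gate model of the paper are not modelled.

      import numpy as np

      def gallager_b(H, r, max_iter=20, b=1):
          """Hard-decision Gallager-B decoding of the received bits r.
          H is an (m x n) parity-check matrix over GF(2); b is the number of
          disagreeing extrinsic check messages needed before a variable-to-check
          message is flipped (the usual choice depends on the variable degree)."""
          m, n = H.shape
          checks = [list(np.flatnonzero(H[i])) for i in range(m)]         # vars in check i
          var_checks = [list(np.flatnonzero(H[:, j])) for j in range(n)]  # checks on var j
          v2c = {(j, i): int(r[j]) for j in range(n) for i in var_checks[j]}
          decision = np.array(r, dtype=int)
          for _ in range(max_iter):
              # check-to-variable messages: parity of the other incoming messages
              c2v = {}
              for i in range(m):
                  total = sum(v2c[(j, i)] for j in checks[i]) % 2
                  for j in checks[i]:
                      c2v[(i, j)] = (total + v2c[(j, i)]) % 2
              # tentative decision: majority vote of channel bit and check messages
              decision = np.array([
                  int(int(r[j]) + sum(c2v[(i, j)] for i in var_checks[j])
                      > (len(var_checks[j]) + 1) / 2)
                  for j in range(n)
              ])
              if not ((H @ decision) % 2).any():
                  return decision                          # all checks satisfied
              # variable-to-check update: flip the channel bit only when at least b
              # of the *other* incoming check messages disagree with it
              for j in range(n):
                  for i in var_checks[j]:
                      others = [c2v[(k, j)] for k in var_checks[j] if k != i]
                      disagree = sum(msg != int(r[j]) for msg in others)
                      v2c[(j, i)] = 1 - int(r[j]) if disagree >= b else int(r[j])
          return decision

      if __name__ == '__main__':
          # Toy parity-check matrix (not one of the QC-LDPC codes studied in the paper).
          H = np.array([[1, 1, 0, 1, 0, 0],
                        [0, 1, 1, 0, 1, 0],
                        [1, 0, 1, 0, 0, 1],
                        [0, 0, 0, 1, 1, 1]])
          received = np.array([1, 0, 0, 0, 0, 0])   # all-zero codeword, bit 0 flipped
          assert not gallager_b(H, received).any()   # the single error is corrected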

  19. SVC VIDEO STREAM ALLOCATION AND ADAPTATION IN HETEROGENEOUS NETWORK

    Directory of Open Access Journals (Sweden)

    E. A. Pakulova

    2016-07-01

    Full Text Available The paper deals with video data transmission in the H.264/SVC standard format under QoS requirements. The Sender-Side Path Scheduling (SSPS) algorithm and the Sender-Side Video Adaptation (SSVA) algorithm were developed. The SSPS algorithm makes it possible to allocate video traffic among several interfaces, while the SSVA algorithm dynamically changes the quality of the video sequence in relation to the QoS requirements. It was shown that combined use of the two developed algorithms makes it possible to aggregate the throughput of access networks, improve Quality of Experience parameters and decrease losses in comparison with the Round Robin algorithm. To evaluate the proposed solution, a test set-up was built. Trace files with the throughput of existing public networks were used in the experiments; based on this information, the network throughputs were limited and path losses were set. The results of this research may be used for the study and transmission of video data in heterogeneous wireless networks.

  20. Lattice Sequential Decoder for Coded MIMO Channel: Performance and Complexity Analysis

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    In this paper, the performance limit of the lattice sequential decoder for the lattice space-time coded MIMO channel is analysed. We determine the rates achievable by lattice coding and sequential decoding applied to such a channel. The diversity-multiplexing tradeoff (DMT) under lattice sequential decoding is derived as a function of the decoder's parameter, the bias term. The bias parameter is critical for controlling the amount of computation required at the decoding stage. Achieving low decoding complexity requires increasing the value of the bias term; however, this comes at the expense of losing the optimal tradeoff of the channel. We show how such a decoder can bridge the gap between the lattice decoder and low-complexity decoders. Moreover, the computational complexity of the lattice sequential decoder is analysed. Specifically, we derive the tail distribution of the decoder's computational complexity in the high signal-to-noise ratio regime. Similar to the conventional sequential decoder used in discrete memoryless channels,...

  1. Instantly decodable network coding for real-time device-to-device communications

    KAUST Repository

    Douik, Ahmed

    2016-01-04

    This paper studies the delay reduction problem for instantly decodable network coding (IDNC)-based device-to-device (D2D) communication-enabled networks. Unlike conventional point-to-multipoint (PMP) systems, in which the wireless base station has sufficient computation abilities, D2D networks rely on battery-powered operation of the devices. Therefore, a particular emphasis on computational complexity needs to be addressed in the design of delay reduction algorithms for D2D networks. While most of the existing literature on IDNC directly extends the delay reduction PMP schemes, known to be NP-hard, to the D2D setting, this paper proposes to investigate and minimize the complexity of such algorithms for battery-powered devices. With delay minimization problems in IDNC-based systems being equivalent to maximum-weight clique problems in the IDNC graph, the algorithms presented in this paper can be applied to different delay aspects. This paper introduces and focuses on the reduction of the maximum value of the decoding delay, as it represents the most general solution. The complexity of the solution is reduced by first proposing efficient methods for the construction, update, and dimension reduction of the IDNC graph. The paper further shows that, under particular scenarios, the problem boils down to a maximum clique problem. Due to the complexity of discovering such a maximum clique, the paper presents a fast selection algorithm. Simulation results illustrate the performance of the proposed schemes and suggest that the proposed fast selection algorithm provides an appreciable complexity gain compared to the optimal selection algorithm, with a negligible degradation in performance. In addition, they indicate that the running time of the proposed solution is close to that of the random selection algorithm.
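
    The fast selection algorithm itself is not reproduced in the record; as a generic illustration of clique-based selection, the Python sketch below applies a greedy maximum-weight clique heuristic to a hypothetical weighted graph. The IDNC graph construction, the vertex weights and the paper's actual selection rule are not modelled, and the heuristic does not guarantee the maximum-weight clique.

      def greedy_max_weight_clique(vertices, weight, adjacent):
          """Greedy heuristic: repeatedly add the heaviest vertex that is compatible
          with every vertex already selected."""
          clique = []
          candidates = set(vertices)
          while candidates:
              v = max(candidates, key=weight)
              clique.append(v)
              candidates = {u for u in candidates if u != v and adjacent(u, v)}
          return clique

      if __name__ == '__main__':
          # Hypothetical toy graph: vertex weights and an undirected edge set.
          w = {'a': 3.0, 'b': 2.5, 'c': 2.0, 'd': 1.0}
          edges = {frozenset(p) for p in [('a', 'c'), ('a', 'd'), ('c', 'd'), ('b', 'd')]}
          adj = lambda u, v: frozenset((u, v)) in edges
          print(greedy_max_weight_clique(w, w.get, adj))   # -> ['a', 'c', 'd'] here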

  2. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and research communication. With digitalisation and the internet, however, new opportunities and challenges have emerged with regard to communicating and distributing research results to different target groups via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to what is being studied, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning...

  3. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The workshops were run on December 4, 2016, in Cancun, Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted for presentation at the workshops. The topics covered in the papers

  4. Encoding Concept Prototypes for Video Event Detection and Summarization

    NARCIS (Netherlands)

    Mazloom, M.; Habibian, A.; Liu, D.; Snoek, C.G.M.; Chang, S.F.

    2015-01-01

    This paper proposes a new semantic video representation for few and zero example event detection and unsupervised video event summarization. Different from existing works, which obtain a semantic representation by training concepts over images or entire video clips, we propose an algorithm that

  5. Graphics processing unit implementation and optimisation of a flexible maximum a-posteriori decoder for synchronisation correction

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this paper, the author presents an optimised parallel implementation of a flexible maximum a-posteriori decoder for synchronisation error-correcting codes, supporting a very wide range of code sizes and channel conditions. On mid-range GPUs, the author demonstrates decoding speedups of more than two orders of magnitude over a central processing unit implementation of the same optimised algorithm, and more than an order of magnitude over the author's earlier GPU implementation. The prominent challenge is to maintain high parallelisation efficiency over a wide range of code sizes, channel conditions, and execution hardware. The author ensures this with a dynamic strategy for choosing parallel execution parameters at run-time. A variant is also presented that trades off some decoding speed for a significantly reduced memory requirement, with no loss in the decoder's error-correction performance. The increased throughput of the implementation and its ability to work with less memory allow larger codes and poorer channel conditions to be analysed, and make practical use of such codes more feasible.

  6. Secure information display by use of multiple decoding masks

    Science.gov (United States)

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2004-10-01

    Security has become an important issue as information technology has become increasingly pervasive in our everyday lives. Security risks arise with a display that shows decrypted information. In this paper, we propose a secure information display technique by use of visual cryptography. Its decryption requires no special computing devices and is implemented using only human vision. To improve the security of the key used to decode the encrypted displayed image, multiple decoding masks have been constructed based on a visual secret sharing scheme. The proposed display appears as a random pattern to anyone who looks at it unless the person views the displayed image through the stacked multiple decoding masks. We have constructed code sets to share the secret image information among a displayed image and multiple decoding masks. The decoding process is a kind of optical processing based on spatial encoding and needs no computer calculation, which means that no decrypted data exists in a computer system. Thus, the proposed display technique is secure against theft of decrypted data and eavesdropping of display signals, and provides a limited viewing zone. The use of multiple decoding masks increases the security of the decoding key.
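
    The stacking idea can be illustrated with a minimal 2-out-of-2 visual secret sharing scheme; this is a generic textbook construction, not the authors' exact code sets, and the pixel patterns and array shapes below are illustrative assumptions. Each secret pixel is expanded into complementary subpixel patterns on the two shares, and "stacking" (a pixel-wise OR of opaque subpixels) reveals the secret without any computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_2oo2(secret):
    """Minimal 2-out-of-2 visual secret sharing sketch (illustrative only).

    secret: 2-D array of 0/1 (1 = black). Each pixel becomes a 1x2 block of
    subpixels on each share; 1 means an opaque (black) subpixel.
    """
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            pick = rng.integers(0, 2)                    # choose [1,0] or [0,1]
            a = np.array([1, 0]) if pick else np.array([0, 1])
            share1[y, 2 * x: 2 * x + 2] = a
            # white pixel -> identical patterns (stack stays half black)
            # black pixel -> complementary patterns (stack becomes fully black)
            share2[y, 2 * x: 2 * x + 2] = a if secret[y, x] == 0 else 1 - a
    return share1, share2

def stack(*shares):
    """Optical stacking = pixel-wise OR of opaque subpixels."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out |= s
    return out

secret = rng.integers(0, 2, size=(4, 8)).astype(np.uint8)
s1, s2 = encode_2oo2(secret)
# Black secret pixels stack to fully black 1x2 blocks; white ones stay half black.
print(stack(s1, s2))
```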

  7. Glycan microarrays for decoding the glycome

    Science.gov (United States)

    Rillahan, Cory D.; Paulson, James C.

    2011-01-01

    In the last decade glycan microarrays have revolutionized the analysis of the specificity of glycan binding proteins, providing information that simultaneously illuminates the biology mediated by them and decodes the information content of the glycome. Numerous methods have emerged for arraying glycans in a ‘chip’ format, and glycan libraries have been assembled that address the diversity of the human glycome. Such arrays have been successfully used for analysis of glycan binding proteins that mediate mammalian biology, host-pathogen interactions, immune recognition of glycans relevant to vaccine production and cancer antigens. This review covers the development of glycan microarrays and applications that have provided insights into the roles of mammalian and microbial glycan binding proteins. PMID:21469953

  8. Decoding the mechanisms of Antikythera astronomical device

    CERN Document Server

    Lin, Jian-Liang

    2016-01-01

    This book presents a systematic design methodology for decoding the interior structure of the Antikythera mechanism, an astronomical device from ancient Greece. The historical background, surviving evidence and reconstructions of the mechanism are introduced, and the historical development of astronomical achievements and various astronomical instruments are investigated. Pursuing an approach based on the conceptual design of modern mechanisms and bearing in mind the standards of science and technology at the time, all feasible designs of the six lost/incomplete/unclear subsystems are synthesized as illustrated examples, and 48 feasible designs of the complete interior structure are presented. This approach provides not only a logical tool for applying modern mechanical engineering knowledge to the reconstruction of the Antikythera mechanism, but also an innovative research direction for identifying the original structures of the mechanism in the future. In short, the book offers valuable new insights for all...

  9. Decoding speech for understanding and treating aphasia.

    Science.gov (United States)

    Pasley, Brian N; Knight, Robert T

    2013-01-01

    Aphasia is an acquired language disorder with a diverse set of symptoms that can affect virtually any linguistic modality across both the comprehension and production of spoken language. Partial recovery of language function after injury is common but typically incomplete. Rehabilitation strategies focus on behavioral training to induce plasticity in underlying neural circuits to maximize linguistic recovery. Understanding the different neural circuits underlying diverse language functions is a key to developing more effective treatment strategies. This chapter discusses a systems identification analytic approach to the study of linguistic neural representation. The focus of this framework is a quantitative, model-based characterization of speech and language neural representations that can be used to decode, or predict, speech representations from measured brain activity. Recent results of this approach are discussed in the context of applications to understanding the neural basis of aphasia symptoms and the potential to optimize plasticity during the rehabilitation process. © 2013 Elsevier B.V. All rights reserved.

  10. Academic Training - Bioinformatics: Decoding the Genome

    CERN Multimedia

    Chris Jones

    2006-01-01

    ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...

  11. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  12. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  13. Encoding and Decoding Models in Cognitive Electrophysiology

    Directory of Open Access Journals (Sweden)

    Christopher R. Holdgraf

    2017-09-01

    Full Text Available Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.
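
    The encoding/decoding distinction described above can be illustrated with a minimal regularized linear model in scikit-learn; this is a generic sketch, not code from the paper's notebooks, and the array sizes and feature names below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: 1000 time points, 20 stimulus features (e.g. spectrogram bands),
# 8 recorded neural channels.
n_t, n_feat, n_chan = 1000, 20, 8
stimulus = rng.standard_normal((n_t, n_feat))
true_weights = rng.standard_normal((n_feat, n_chan))
neural = stimulus @ true_weights + 0.5 * rng.standard_normal((n_t, n_chan))

# Encoding model: stimulus features -> brain activity (one receptive field per channel).
encoder = Ridge(alpha=1.0).fit(stimulus, neural)
receptive_fields = encoder.coef_          # shape (n_chan, n_feat)

# Decoding model: brain activity -> stimulus features (stimulus reconstruction).
decoder = Ridge(alpha=1.0).fit(neural, stimulus)
reconstructed = decoder.predict(neural)

print("encoding R^2:", encoder.score(stimulus, neural))
print("decoding R^2:", decoder.score(neural, stimulus))
```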

  14. Encoding and Decoding Models in Cognitive Electrophysiology

    Science.gov (United States)

    Holdgraf, Christopher R.; Rieger, Jochem W.; Micheli, Cristiano; Martin, Stephanie; Knight, Robert T.; Theunissen, Frederic E.

    2017-01-01

    Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses. PMID:29018336

  15. Locally decodable codes and private information retrieval schemes

    CERN Document Server

    Yekhanin, Sergey

    2010-01-01

    Locally decodable codes (LDCs) are codes that simultaneously provide efficient random access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary bit of a message by looking at only a small number of randomly chosen codeword bits. Local decodability comes with a certain loss in terms of efficiency - specifically, locally decodable codes require longer codeword lengths than their classical counterparts. Private information retrieval (PIR) schemes are cryptographic protocols designed to safeguard the privacy of database users. They allow clients to retrieve rec

  16. Turbo decoder architecture for beyond-4G applications

    CERN Document Server

    Wong, Cheng-Chi

    2013-01-01

    This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond 4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respec

  17. A Modified Multiview Video Streaming System Using 3-Tier Architecture

    Directory of Open Access Journals (Sweden)

    Mohamed M. Fouad

    2016-01-01

    Full Text Available In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of viewer's interactivity. When a viewer requests some view(s), our scheme leads to lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real data test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of the average transmission bit-rate and storage size of the decoded (i.e., transferred) data with comparable rate-distortion.

  18. General Video Game AI: Learning from Screen Capture

    OpenAIRE

    Kunanusont, Kamolwan; Lucas, Simon M.; Perez-Liebana, Diego

    2017-01-01

    General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for the General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, th...

  19. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames which are generated using scene change detection techniques as well as camera/object motion. Then, video features can be extracted from key frames. However, most of such research performs off-line video processing in which the whole video scope is known a priori, which allows multiple scans of the stored video files during video processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicasted as annotations or metadata over a separate channel to assist in content filtering such as those anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
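
    A much simpler stand-in for this kind of on-line key-frame extraction is a running histogram-difference detector; the threshold, bin counts, and file name below are arbitrary illustrative choices and not the authors' parameters.

```python
import cv2

def detect_scene_changes(path, threshold=0.4, bins=32):
    """Flag frames whose colour-histogram distance to the previous frame is large."""
    cap = cv2.VideoCapture(path)
    prev_hist, key_frames, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance in [0, 1]; a large value suggests a shot boundary.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                key_frames.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return key_frames

# Example (hypothetical file): print(detect_scene_changes("stream.mp4"))
```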

  20. FFT Algorithm for Binary Extension Finite Fields and Its Application to Reed–Solomon Codes

    KAUST Repository

    Lin, Sian Jheng

    2016-08-15

    Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm such that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z^+, with n-k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to the decoding procedure. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n-k) + (n-k) lg^2(n-k)), improving upon the best currently available decoding complexity O(n lg^2(n) lg lg(n)), and reaching the best known complexity bound that was established by Justesen in 1976. However, Justesen's approach is only for the codes over some specific fields, which can apply Cooley-Tukey FFTs. As revealed by the computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.

  1. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    numbers of video streams on a single server. The focus of the work is on using the information in coded video streams to reduce the computational complexity and memory requirements, which translates into reduced hardware requirements and costs. The devised algorithm detects and segments activity based...

  2. ENHANCEMENT OF ITERATIVE TURBO DECODING FOR HARQ SYSTEMS

    National Research Council Canada - National Science Library

    Imed Amamra; Nadir Derouiche

    2016-01-01

    .... The simulation results of the proposed stopping criterion show an interesting application in a hybrid automatic repeat request systems with turbo coding scheme, where the decoding complexity can be fairly reduced...

  3. Evolution of Neural Computations: Mantis Shrimp and Human Color Decoding

    Directory of Open Access Journals (Sweden)

    Qasim Zaidi

    2014-10-01

    Full Text Available Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding color identification.

  4. FPGA Prototyping of RNN Decoder for Convolutional Codes

    National Research Council Canada - National Science Library

    Salcic, Zoran; Berber, Stevan; Secker, Paul

    2006-01-01

    This paper presents prototyping of a recurrent type neural network (RNN) convolutional decoder using system-level design specification and design flow that enables easy mapping to the target FPGA architecture...

  5. Interleaved Convolutional Code and Its Viterbi Decoder Architecture

    National Research Council Canada - National Science Library

    Kong, Jun Jin; Parhi, Keshab K

    2003-01-01

    We propose an area-efficient high-speed interleaved Viterbi decoder architecture, which is based on the state-parallel architecture with register exchange path memory structure, for interleaved convolutional code...

  6. Impact of Decoding Work within a Professional Program

    Science.gov (United States)

    Yeo, Michelle; Lafave, Mark; Westbrook, Khatija; McAllister, Jenelle; Valdez, Dennis; Eubank, Breda

    2017-01-01

    This chapter demonstrates how Decoding work can be used productively within a curriculum change process to help make design decisions based on a more nuanced understanding of student learning and the relationship of a professional program to the field.

  7. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  8. Video Analytics

    DEFF Research Database (Denmark)

    include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition......This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...

  9. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    Science.gov (United States)

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
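
    Exemplar-based frame sampling of the kind mentioned above can be sketched with scikit-learn's AffinityPropagation, which selects exemplar frames automatically; the frame features here (flattened thumbnails drawn at random) are a hypothetical simplification of the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)

# Hypothetical frame features: 200 frames, each a 32x32 grey thumbnail flattened.
frames = rng.random((200, 32 * 32))

# Affinity propagation picks exemplar frames (cluster centres) without
# fixing the number of clusters in advance.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(frames)
exemplar_indices = ap.cluster_centers_indices_

print("number of key frames:", len(exemplar_indices))
print("selected key frames:", exemplar_indices)
```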

  10. Encoding and Decoding Procedures for Arrangements

    Directory of Open Access Journals (Sweden)

    Alexander A. Babaev

    2012-05-01

    Full Text Available This article discusses an algorithm based on an encoding procedure for representing a set of arrangement elements as a single number. The author also provides the procedure for the inverse transformation of the code back into arrangement elements. In addition, the article includes recommendations on the use of the above procedures in combinatorial optimization algorithms.
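
    One common way to map a permutation to a single integer and back, in the spirit of the procedures described above (the article's exact scheme may differ), is the Lehmer code / factorial number system:

```python
from math import factorial

def encode_permutation(perm):
    """Rank a permutation of 0..n-1 as a single integer via the Lehmer code."""
    n, rank = len(perm), 0
    remaining = list(range(n))
    for i, p in enumerate(perm):
        rank += remaining.index(p) * factorial(n - 1 - i)
        remaining.remove(p)
    return rank

def decode_permutation(rank, n):
    """Inverse transformation: recover the permutation from its rank."""
    remaining = list(range(n))
    perm = []
    for i in range(n):
        idx, rank = divmod(rank, factorial(n - 1 - i))
        perm.append(remaining.pop(idx))
    return perm

p = [2, 0, 3, 1]
code = encode_permutation(p)
assert decode_permutation(code, len(p)) == p
print(p, "<->", code)
```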

  11. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed....

  12. An Indirect Genetic Algorithm for a Nurse Scheduling Problem

    OpenAIRE

    Aickelin, Uwe; Dowsland, Kathryn

    2008-01-01

    This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from ...
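
    The "indirect" idea, evolving a permutation of nurses and letting a heuristic decoder build the actual schedule, can be sketched as follows; the problem size, preference costs, decoder rule and fitness below are hypothetical stand-ins for the paper's problem-specific ones.

```python
import random

random.seed(0)
N_NURSES, N_SHIFTS, CAPACITY = 12, 4, 3        # hypothetical problem size

# Hypothetical preference costs: pref[nurse][shift], lower is better.
pref = [[random.randint(0, 10) for _ in range(N_SHIFTS)] for _ in range(N_NURSES)]

def decode(permutation):
    """Heuristic decoder: take nurses in chromosome order and give each one the
    cheapest shift that still has free capacity."""
    remaining = [CAPACITY] * N_SHIFTS
    schedule = {}
    for nurse in permutation:
        feasible = [s for s in range(N_SHIFTS) if remaining[s] > 0]
        shift = min(feasible, key=lambda s: pref[nurse][s])
        remaining[shift] -= 1
        schedule[nurse] = shift
    return schedule

def fitness(permutation):
    schedule = decode(permutation)
    return -sum(pref[n][s] for n, s in schedule.items())   # higher is better

def order_crossover(a, b):
    """Order crossover (OX) keeps offspring valid permutations."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child[i:j]]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

population = [random.sample(range(N_NURSES), N_NURSES) for _ in range(40)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]
    population = elite + [order_crossover(random.choice(elite), random.choice(elite))
                          for _ in range(30)]

best = max(population, key=fitness)
print("best total preference cost:", -fitness(best))
print("schedule:", decode(best))
```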

  13. Stereo side information generation in low-delay distributed stereo video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Forchhammer, Søren

    2012-01-01

    Distributed Video Coding (DVC) is a technique that allows shifting the computational complexity from the encoder to the decoder. One of the core elements of the decoder is the creation of the Side Information (SI), which is a hypothesis of what the to-be-decoded frame looks like. Much work on DVC...... has been carried out: often the decoder can use future and past frames in order to obtain the SI exploiting the time redundancy. Other work has addressed a Multiview scenario; exploiting the frames coming from cameras close to the one we are decoding (usually a left and right camera) it is possible...... to create SI exploiting the inter-view spatial redundancy. A careful fusion of the two SI should be done in order to use the best part of each SI. In this work we study a Stereo Low-Delay scenario using only two views. Due to the delay constraint we use only past frames of the sequence we are decoding...

  14. Hardware architectures for real time processing of High Definition video sequences

    OpenAIRE

    Genovese, Mariangela

    2014-01-01

    Currently, application fields such as medicine, space exploration, surveillance, authentication, HDTV, and automated industry inspection require capturing, storing and processing continuous streams of video data. Consequently, different processing techniques (video enhancement, segmentation, object detection, or video compression, as examples) are involved in these applications. Such techniques often require a significant number of operations depending on the algorithm complexity and the video ...

  15. Neural decoding of expressive human movement from scalp electroencephalography (EEG

    Directory of Open Access Journals (Sweden)

    Zachery Ryan Hernandez

    2014-04-01

    Full Text Available Although efforts to characterize human movement through EEG have revealed neural activities unique to limb control that can be used to infer movement kinematics, it is still unknown the extent to which EEG can be used to discern the expressive qualities that influence such movements. In this study we used EEG and inertial sensors to record brain activity and movement of five skilled and certified Laban Movement Analysis (LMA) dancers. Each dancer performed whole body functional movements of three Action types: movements devoid of expressive qualities ('Neutral'), non-expressive movements while thinking about specific expressive qualities ('Think'), and enacted expressive movements ('Do'). The expressive movement qualities that were used in the 'Think' and 'Do' actions consisted of a sequence of eight Laban Efforts as defined by LMA - a notation system and language for describing, visualizing, interpreting and documenting all varieties of human movement. We used delta band (0.2 – 4 Hz) EEG as input to a machine learning algorithm that computed locality-preserving Fisher's discriminant analysis (LFDA) for dimensionality reduction followed by Gaussian mixture models (GMMs) to decode the type of Action. We also trained our LFDA-GMM models to classify all the possible combinations of Action Type and Laban Effort (giving a total of 17 classes). Classification accuracy rates were 59.4 ± 0.6% for Action Type and 88.2 ± 0.7% for Laban Effort Type. Ancillary analyses of the potential relations between the EEG and movement kinematics of the dancer's body indicated that motion-related artifacts did not significantly influence our classification results. In summary, this research demonstrates that EEG has valuable information about the expressive qualities of movement. These results may have applications for advancing the understanding of the neural basis of expressive movements and for the development of neuroprosthetics to restore movements.

  16. Decoding post-stroke motor function from structural brain imaging

    Directory of Open Access Journals (Sweden)

    Jane M. Rondina

    2016-01-01

    Full Text Available Clinical research based on neuroimaging data has benefited from machine learning methods, which have the ability to provide individualized predictions and to account for the interaction among units of information in the brain. Application of machine learning in structural imaging to investigate diseases that involve brain injury presents an additional challenge, especially in conditions like stroke, due to the high variability across patients regarding characteristics of the lesions. Extracting data from anatomical images in a way that translates brain damage information into features to be used as input to learning algorithms is still an open question. One of the most common approaches to capture regional information from brain injury is to obtain the lesion load per region (i.e. the proportion of voxels in anatomical structures that are considered to be damaged). However, no systematic evaluation has yet been performed to compare this approach with using patterns of voxels (i.e. considering each voxel as a single feature). In this paper we compared both approaches applying Gaussian Process Regression to decode motor scores in 50 chronic stroke patients based solely on data derived from structural MRI. For both approaches we compared different ways to delimit anatomical areas: regions of interest from an anatomical atlas, the corticospinal tract, a mask obtained from fMRI analysis with a motor task in healthy controls and regions selected using lesion-symptom mapping. Our analysis showed that extracting features through patterns of voxels that represent lesion probability produced better results than quantifying the lesion load per region. In particular, from the different ways to delimit anatomical areas compared, the best performance was obtained with a combination of a range of cortical and subcortical motor areas as well as the corticospinal tract. These results will inform the appropriate methodology for predicting long term motor outcomes
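
    A bare-bones version of the regression set-up described above (voxel-wise lesion features mapped to a continuous motor score) can be written with scikit-learn's Gaussian Process Regression; the data, feature counts and kernel choice here are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical data: 50 patients, 500 lesion-probability features (e.g. voxels
# within motor-related regions), and one continuous motor score per patient.
X = rng.random((50, 500))
y = X[:, :20].sum(axis=1) + 0.3 * rng.standard_normal(50)   # toy ground truth

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# Cross-validated predictions stand in for a leave-some-patients-out evaluation.
y_pred = cross_val_predict(gpr, X, y, cv=5)
print("correlation (predicted vs. true):", np.corrcoef(y_pred, y)[0, 1])
```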

  17. Neural decoding of expressive human movement from scalp electroencephalography (EEG)

    Science.gov (United States)

    Cruz-Garza, Jesus G.; Hernandez, Zachery R.; Nepaul, Sargoon; Bradley, Karen K.; Contreras-Vidal, Jose L.

    2014-01-01

    Although efforts to characterize human movement through electroencephalography (EEG) have revealed neural activities unique to limb control that can be used to infer movement kinematics, it is still unknown the extent to which EEG can be used to discern the expressive qualities that influence such movements. In this study we used EEG and inertial sensors to record brain activity and movement of five skilled and certified Laban Movement Analysis (LMA) dancers. Each dancer performed whole body movements of three Action types: movements devoid of expressive qualities (“Neutral”), non-expressive movements while thinking about specific expressive qualities (“Think”), and enacted expressive movements (“Do”). The expressive movement qualities that were used in the “Think” and “Do” actions consisted of a sequence of eight Laban Effort qualities as defined by LMA—a notation system and language for describing, visualizing, interpreting and documenting all varieties of human movement. We used delta band (0.2–4 Hz) EEG as input to a machine learning algorithm that computed locality-preserving Fisher's discriminant analysis (LFDA) for dimensionality reduction followed by Gaussian mixture models (GMMs) to decode the type of Action. We also trained our LFDA-GMM models to classify all the possible combinations of Action Type and Laban Effort quality (giving a total of 17 classes). Classification accuracy rates were 59.4 ± 0.6% for Action Type and 88.2 ± 0.7% for Laban Effort quality Type. Ancillary analyses of the potential relations between the EEG and movement kinematics of the dancer's body, indicated that motion-related artifacts did not significantly influence our classification results. In summary, this research demonstrates that EEG has valuable information about the expressive qualities of movement. These results may have applications for advancing the understanding of the neural basis of expressive movements and for the development of

  18. Neural decoding of expressive human movement from scalp electroencephalography (EEG).

    Science.gov (United States)

    Cruz-Garza, Jesus G; Hernandez, Zachery R; Nepaul, Sargoon; Bradley, Karen K; Contreras-Vidal, Jose L

    2014-01-01

    Although efforts to characterize human movement through electroencephalography (EEG) have revealed neural activities unique to limb control that can be used to infer movement kinematics, it is still unknown the extent to which EEG can be used to discern the expressive qualities that influence such movements. In this study we used EEG and inertial sensors to record brain activity and movement of five skilled and certified Laban Movement Analysis (LMA) dancers. Each dancer performed whole body movements of three Action types: movements devoid of expressive qualities ("Neutral"), non-expressive movements while thinking about specific expressive qualities ("Think"), and enacted expressive movements ("Do"). The expressive movement qualities that were used in the "Think" and "Do" actions consisted of a sequence of eight Laban Effort qualities as defined by LMA-a notation system and language for describing, visualizing, interpreting and documenting all varieties of human movement. We used delta band (0.2-4 Hz) EEG as input to a machine learning algorithm that computed locality-preserving Fisher's discriminant analysis (LFDA) for dimensionality reduction followed by Gaussian mixture models (GMMs) to decode the type of Action. We also trained our LFDA-GMM models to classify all the possible combinations of Action Type and Laban Effort quality (giving a total of 17 classes). Classification accuracy rates were 59.4 ± 0.6% for Action Type and 88.2 ± 0.7% for Laban Effort quality Type. Ancillary analyses of the potential relations between the EEG and movement kinematics of the dancer's body, indicated that motion-related artifacts did not significantly influence our classification results. In summary, this research demonstrates that EEG has valuable information about the expressive qualities of movement. These results may have applications for advancing the understanding of the neural basis of expressive movements and for the development of neuroprosthetics to restore

  19. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities such as fog, low and high density in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour and texture patterns of the smoke. We validated our algorithm using experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by defining every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
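
    The classification stage described above reduces to a standard supervised pipeline; the sketch below assumes that motion/colour/texture features have already been extracted per frame, and the arrays, labels and SVM hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical per-frame feature vectors (motion + colour + texture descriptors)
# and binary labels: 1 = smoke frame, 0 = non-smoke frame.
features = rng.random((2000, 60))
labels = (features[:, 0] + features[:, 1] > 1.0).astype(int)   # toy ground truth

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25,
                                          stratify=labels, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy:",    (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))   # correctly detected smoke frames
print("specificity:", tn / (tn + fp))   # correctly detected non-smoke frames
```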

  20. A decoding method of an n length binary BCH code through (n + 1)n length binary cyclic code

    Directory of Open Access Journals (Sweden)

    TARIQ SHAH

    2013-09-01

    Full Text Available For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C_(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C_(n+1)n. Accordingly, a higher code rate is attained through the binary cyclic code C_(n+1)n for a binary BCH code Cn. Furthermore, a proposed algorithm facilitates the decoding of a binary BCH code Cn through the decoding of the binary cyclic code C_(n+1)n, while the codes Cn and C_(n+1)n have the same minimum Hamming distance.

  1. Decoding the mechanical fingerprints of biomolecules.

    Science.gov (United States)

    Dudko, Olga K

    2016-01-01

    The capacity of biological macromolecules to act as exceedingly sophisticated and highly efficient cellular machines - switches, assembly factors, pumps, or motors - is realized through their conformational transitions, that is, their folding into distinct shapes and selective binding to other molecules. Conformational transitions can be induced, monitored, and manipulated by pulling individual macromolecules apart with an applied force. Pulling experiments reveal, for a given biomolecule, the relationship between applied force and molecular extension. Distinct signatures in the force-extension relationship identify a given biomolecule and thus serve as the molecule's 'mechanical fingerprints'. But, how can these fingerprints be decoded to uncover the energy barriers crossed by the molecule in the course of its conformational transition, as well as the associated timescales? This review summarizes a powerful class of approaches to interpreting single-molecule force spectroscopy measurements - namely, analytically tractable approaches. On the fundamental side, analytical theories have the power to reveal the unifying principles underneath the bewildering diversity of biomolecules and their behaviors. On the practical side, analytical expressions that result from these theories are particularly well suited for a direct fit to experimental data, yielding the important parameters that govern biological processes at the molecular level.

  2. Rate Aware Instantly Decodable Network Codes

    KAUST Repository

    Douik, Ahmed

    2016-02-26

    This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmissions strategies and the classical IDNC scheme. © 2015 IEEE.
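
    The final selection step, picking a message/rate combination as a maximum weight clique over the RA-IDNC graph, can be prototyped with NetworkX; the tiny graph, vertex labels and integer weights below are hypothetical and only illustrate the formulation, not the paper's construction.

```python
import networkx as nx

# Hypothetical RA-IDNC-style graph: each vertex is a (user, message, rate)
# transmission option; an edge joins two vertices whose transmissions can be served
# by the same coded packet at a compatible rate. Integer node weights stand in for
# the anticipated completion-time reduction of serving that vertex.
G = nx.Graph()
options = {
    ("u1", "m3", 2): 4,
    ("u2", "m1", 2): 3,
    ("u3", "m2", 1): 5,
    ("u4", "m3", 1): 2,
}
for v, w in options.items():
    G.add_node(v, weight=w)

# Compatible pairs (illustrative): these options can be combined in one transmission.
G.add_edges_from([
    (("u1", "m3", 2), ("u2", "m1", 2)),
    (("u2", "m1", 2), ("u3", "m2", 1)),
    (("u1", "m3", 2), ("u3", "m2", 1)),
])

clique, total_weight = nx.max_weight_clique(G, weight="weight")
print("serve together:", clique, "with total weight", total_weight)
```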

  3. fNIRS-based online deception decoding

    Science.gov (United States)

    Hu, Xiao-Su; Hong, Keum-Shik; Ge, Shuzhi Sam

    2012-04-01

    Deception involves complex neural processes in the brain. Different techniques have been used to study and understand brain mechanisms during deception. Moreover, efforts have been made to develop schemes that can detect and differentiate deception and truth-telling. In this paper, a functional near-infrared spectroscopy (fNIRS)-based online brain deception decoding framework is developed. Deploying dual-wavelength fNIRS, we interrogate 16 locations in the forehead when eight able-bodied adults perform deception and truth-telling scenarios separately. By combining preprocessed oxy-hemoglobin and deoxy-hemoglobin signals, we develop subject-specific classifiers using the support vector machine. Deception and truth-telling states are classified correctly in seven out of eight subjects. A control experiment is also conducted to verify the deception-related hemodynamic response. The average classification accuracy is over 83.44% from these seven subjects. The obtained result suggests that the applicability of fNIRS as a brain imaging technique for online deception detection is very promising.

  4. Fast mental states decoding in mixed reality.

    Science.gov (United States)

    De Massari, Daniele; Pacheco, Daniel; Malekshahi, Rahim; Betella, Alberto; Verschure, Paul F M J; Birbaumer, Niels; Caria, Andrea

    2014-01-01

    The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. Brain states discrimination during mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently reacts to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.

  5. Fast mental states decoding in mixed reality.

    Directory of Open Access Journals (Sweden)

    Daniele eDe Massari

    2014-11-01

    Full Text Available The combination of Brain-Computer Interface technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information of ongoing brain states acquired through BCI might be exploited for controlling data presentation in virtual environments. In this context, assessing to what extent brain states can be discriminated during mixed reality experience is critical for adapting specific data features to contingent brain activity. In this study we recorded EEG data while participants experienced a mixed reality scenario implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently reacts to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in mixed reality, using LDA and SVM classifiers. Dynamic single trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time-window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled mixed reality scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in mixed reality.

  6. Decoding reality the universe as quantum information

    CERN Document Server

    Vedral, Vlatko

    2010-01-01

    In Decoding Reality, Vlatko Vedral offers a mind-stretching look at the deepest questions about the universe--where everything comes from, why things are as they are, what everything is. The most fundamental definition of reality is not matter or energy, he writes, but information--and it is the processing of information that lies at the root of all physical, biological, economic, and social phenomena. This view allows Vedral to address a host of seemingly unrelated questions: Why does DNA bind like it does? What is the ideal diet for longevity? How do you make your first million dollars? We can unify all through the understanding that everything consists of bits of information, he writes, though that raises the question of where these bits come from. To find the answer, he takes us on a guided tour through the bizarre realm of quantum physics. At this sub-sub-subatomic level, we find such things as the interaction of separated quantum particles--what Einstein called "spooky action at a distance." In fact, V...

  7. Research on Agricultural Surveillance Video of Intelligent Tracking

    Science.gov (United States)

    Cai, Lecai; Xu, Jijia; Liangping, Jin; He, Zhiyong

    Intelligent video tracking technology is an important application field of digital video processing and analysis, with a wide range of civilian and military defense applications. In this paper, intelligent tracking in agricultural surveillance video is studied systematically, with particular attention to the target detection and tracking problem; we present a detection and tracking algorithm for moving targets in video sequences with a static background, a rapid detection and tracking algorithm for agricultural production targets, and a Mean Shift-based algorithm for tracking the translation and rotation of the target. Experimental results show that the system can effectively and accurately track targets in the surveillance video. The study of intelligent video surveillance tracking in agriculture is therefore meaningful, whether from the viewpoint of environmental protection, social security, or economic efficiency.
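
    A conventional colour-histogram Mean Shift tracker of the kind mentioned above can be put together in a few lines with OpenCV; this is the generic OpenCV recipe rather than the paper's exact algorithm, and the file name and initial bounding box are hypothetical.

```python
import cv2

cap = cv2.VideoCapture("field_camera.mp4")     # hypothetical surveillance clip
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 80                  # hypothetical initial target window

# Hue histogram of the target region serves as the appearance model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # Mean Shift moves the window towards the densest back-projection region;
    # cv2.CamShift could be used instead when rotation also needs to be tracked.
    _, window = cv2.meanShift(back_proj, window, term)
    x, y, w, h = window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cap.release()
```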

  8. An FPGA Implementation of -Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Parhi Keshab K

    2003-01-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC code decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed -regular LDPC code partly parallel decoder architecture based on which we implement a -bit, rate- -regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of Mbps and achieves BER at 2 dB over an AWGN channel while performing maximum decoding iterations.

  9. Spatio-temporal image inpainting for video applications

    Directory of Open Access Journals (Sweden)

    Voronin Viacheslav

    2017-01-01

    Full Text Available Video inpainting or completion is a vital video improvement technique used to repair or edit digital videos. This paper describes a framework for temporally consistent video completion. The proposed method allows removing dynamic objects or restoring missing or tainted regions present in a video sequence by utilizing spatial and temporal information from neighboring scenes. A masking algorithm is used for detection of scratches or damaged portions in video frames. The algorithm iteratively performs the following operations: acquire a frame; update the scene model; update positions of moving objects; replace parts of the frame occupied by the objects marked for removal by using a background model. In this paper, we extend an image inpainting algorithm based on texture and structure reconstruction by incorporating an improved strategy for video. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Experimental comparisons to state-of-the-art video completion methods demonstrate the effectiveness of the proposed approach. It is shown that the proposed spatio-temporal image inpainting method allows restoring missing blocks and removing text from scenes in videos.
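
    A heavily simplified, frame-by-frame stand-in for the completion pipeline above (ignoring the temporal-consistency machinery) is OpenCV's exemplar inpainting applied under a damage mask; the file names and the brightness-threshold masking step are hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("damaged.mp4")            # hypothetical input clip
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hypothetical mask step: mark very bright scratches / overlaid text as damaged.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))

    # Per-frame spatial inpainting; the paper's method additionally propagates
    # information from neighbouring frames for temporal consistency.
    restored = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

    if writer is None:
        h, w = restored.shape[:2]
        writer = cv2.VideoWriter("restored.avi",
                                 cv2.VideoWriter_fourcc(*"XVID"), 25, (w, h))
    writer.write(restored)

cap.release()
if writer is not None:
    writer.release()
```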

  10. Controlling Initial and Final Radii to Achieve a Low-Complexity Sphere Decoding Technique in MIMO Channels

    Directory of Open Access Journals (Sweden)

    Fatemeh Eshagh Hosseini

    2012-01-01

    Full Text Available In order to apply the sphere decoding algorithm in multiple-input multiple-output communication systems and make it feasible for real-time applications, its computational complexity should be decreased. To achieve this goal, this paper provides some useful insights into the effect of the initial and final sphere radii and into estimating them with little effort. It also discusses practical ways of initiating the algorithm properly and terminating it before the normal end of the process, as well as the cost of these methods. Besides, a novel algorithm is introduced which utilizes the presented techniques according to a threshold factor that is defined in terms of the number of transmit antennas and the noise variance. Simulation results show that the proposed algorithm offers a desirable performance and reasonable complexity satisfying practical constraints.
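
    A common low-cost way to pick an initial radius is to start from the Babai (zero-forcing) point and to bound it by a multiple of the expected noise energy; the sketch below uses that textbook heuristic on a random real-valued MIMO model, whereas the paper's threshold-based rule is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = n_rx = 4
sigma2 = 0.1                                   # noise variance (assumed known)

# Random real-valued MIMO model: y = H x + n, with BPSK symbols +-1.
H = rng.standard_normal((n_rx, n_tx))
x_true = rng.choice([-1.0, 1.0], n_tx)
y = H @ x_true + np.sqrt(sigma2) * rng.standard_normal(n_rx)

# Babai / zero-forcing point gives a cheap upper bound on the search radius.
x_zf = np.sign(np.linalg.pinv(H) @ y)
r2_babai = float(np.sum((y - H @ x_zf) ** 2))

# Noise-statistics-based radius: a small multiple of E[||n||^2] = n_rx * sigma^2.
r2_noise = 2.0 * sigma2 * n_rx

# Use the smaller of the two as the starting squared radius for the sphere search.
r2_init = min(r2_babai, r2_noise)
print("Babai radius^2:", r2_babai, " noise radius^2:", r2_noise, " chosen:", r2_init)
```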

  11. Transform Domain Unidirectional Distributed Video Coding Using Dynamic Parity Allocation

    Science.gov (United States)

    Badem, Murat B.; Fernando, Anil; Weerakkody, Rajitha; Arachchi, Hemantha K.; Kondoz, Ahmet M.

    DVC based video codecs proposed in the literature generally include a reverse (feedback) channel between the encoder and the decoder. This channel is used to communicate the dynamic parity bit request messages from the decoder to the encoder resulting in an optimum dynamic variable rate control implementation. However it is observed that this dynamic feedback mechanism is a practical hindrance in a number of practical consumer electronics applications. In this paper we proposed a novel transform domain Unidirectional Distributed Video Codec (UDVC) without a feedback channel. First, all Wyner-Ziv frames are divided into rectangular macroblocks. A simple metric is used for each block to represent the correlations between the corresponding blocks in the adjacent key frame and the Wyner-Ziv frame. Based on the value of this metric, parity is allocated dynamically for each block. These parities are either stored for offline processing or transmitted to the DVC decoder for on line processing. Simulation results show that the proposed codec outperforms the existing UDVC solutions by a significant margin.

  12. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...

  13. Movement decoding using neural synchronization and inter-hemispheric connectivity from deep brain local field potentials.

    Science.gov (United States)

    Mamun, K A; Mace, M; Lutman, M E; Stein, J; Liu, X; Aziz, T; Vaidyanathan, R; Wang, S

    2015-10-01

    Correlating electrical activity within the human brain to movement is essential for developing and refining interventions (e.g. deep brain stimulation (DBS)) to treat central nervous system disorders. It also serves as a basis for next generation brain-machine interfaces (BMIs). This study highlights a new decoding strategy for capturing movement and its corresponding laterality from deep brain local field potentials (LFPs). LFPs were recorded with surgically implanted electrodes from the subthalamic nucleus or globus pallidus interna in twelve patients with Parkinson's disease or dystonia during a visually cued finger-clicking task. We introduce a method to extract frequency dependent neural synchronization and inter-hemispheric connectivity features based upon wavelet packet transform (WPT) and Granger causality approaches. A novel weighted sequential feature selection algorithm has been developed to select optimal feature subsets through a feature contribution measure. This is particularly useful when faced with limited trials of high dimensionality data as it enables estimation of feature importance during the decoding process. This novel approach was able to accurately and informatively decode movement related behaviours from the recorded LFP activity. An average accuracy of 99.8% was achieved for movement identification, whilst subsequent laterality classification was 81.5%. Feature contribution analysis highlighted stronger contralateral causal driving between the basal ganglia hemispheres compared to ipsilateral driving, with causality measures considerably improving laterality discrimination. These findings demonstrate optimally selected neural synchronization alongside causality measures related to inter-hemispheric connectivity can provide an effective control signal for augmenting adaptive BMIs. In the case of DBS patients, acquiring such signals requires no additional surgery whilst providing a relatively stable and computationally inexpensive control
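
    The frequency-dependent synchronization features described above can be approximated with a wavelet packet decomposition of each LFP channel followed by per-band energies; the Granger-causality connectivity terms and the weighted feature selection are omitted here, and the sampling rate, channel data and wavelet choice are hypothetical.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs, n_samples = 512, 2048
lfp = rng.standard_normal((2, n_samples))       # hypothetical left/right LFP channels

def wavelet_packet_energies(signal, wavelet="db4", level=4):
    """Energy of each terminal wavelet-packet node = one frequency sub-band."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")    # sub-bands ordered by frequency
    return np.array([np.sum(node.data ** 2) for node in nodes])

# One feature vector per trial: concatenated sub-band energies of both hemispheres.
features = np.concatenate([wavelet_packet_energies(ch) for ch in lfp])
print("feature vector length:", features.shape[0])   # 2 channels x 16 sub-bands
```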

  14. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, such as in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames will be obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/threshold (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.

  15. Design and Implementation of Video Shot Detection on Field Programmable Gate Arrays

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-09-01

    Full Text Available Video has become an interactive medium of communication in everyday life. The sheer volume of video makes it extremely difficult to browse through and find the required data. Hence the extraction of key frames, which represent an abstract of the entire video, becomes necessary. The aim of video shot detection is to find the positions of the shot boundaries so that key frames can be selected from each shot for subsequent processing such as video summarization and indexing. For most surveillance applications, such as video summarization and face recognition, a hardware (real-time) implementation of these algorithms becomes necessary. In this paper we present an architecture for simultaneous access to consecutive frames, which are then used for the implementation of various video shot detection algorithms. We also present the real-time implementation of three video shot detection algorithms using this architecture on FPGAs (Field Programmable Gate Arrays).
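
    Shot boundary detectors of this kind compare consecutive frames and declare a boundary where the difference exceeds a threshold; the FPGA architecture in the paper feeds such comparisons with simultaneously accessed frame pairs. The sketch below shows one generic software variant (grey-level histogram difference) in Python; the metric, bin count, and threshold are illustrative and do not reproduce the three algorithms implemented in the paper.

        import numpy as np

        def histogram_difference(frame_a, frame_b, bins=64):
            """Normalized absolute difference between grey-level histograms of two frames."""
            ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255))
            hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255))
            return np.abs(ha - hb).sum() / frame_a.size

        def detect_shot_boundaries(frames, threshold=0.4):
            """Indices at which a new shot is assumed to start; key frames can then be picked per shot."""
            return [i for i in range(1, len(frames))
                    if histogram_difference(frames[i - 1], frames[i]) > threshold]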

  16. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved by utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
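
    The pipeline reduces to two steps: collapse per-frame macroblock statistics into a one-dimensional difference curve, then mark pronounced peaks of that curve as temporal unit boundaries. A minimal Python sketch of both steps is given below; the per-frame feature vectors and the peak criterion stand in for the macroblock statistics and detection rule used in the paper.

        import numpy as np

        def difference_curve(frame_features):
            """1D curve: distance between consecutive per-frame feature vectors (the 3D-to-1D transform)."""
            f = np.asarray(frame_features, dtype=float)
            return np.linalg.norm(np.diff(f, axis=0), axis=1)

        def temporal_boundaries(curve, k=3.0):
            """Local peaks exceeding mean + k * std are taken as scene change positions."""
            mu, sigma = curve.mean(), curve.std()
            return [i + 1 for i in range(1, len(curve) - 1)
                    if curve[i] > curve[i - 1] and curve[i] > curve[i + 1] and curve[i] > mu + k * sigma]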

  17. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data has made cloud computing increasingly popular in various fields. Video resources are very useful and important for education, security monitoring, and so on. However, their huge volume, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework that provides cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the various current algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resource management into a cloud computing environment. The design was tested by building a prototype simulation system.
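
    As a small illustration of the storage layer such an architecture rests on, the sketch below writes a video file to HDFS and reads it back from Python. It assumes pyarrow's HDFS binding and a configured libhdfs client; the host, port, and paths are placeholders, and this is not the paper's five-layer system.

        from pyarrow import fs

        # Connect to the HDFS namenode (placeholder host/port; requires libhdfs on the client machine).
        hdfs = fs.HadoopFileSystem(host="namenode.example.org", port=8020)

        # Upload a local video so that later processing jobs can read it from the cluster.
        with open("lecture_01.mp4", "rb") as local, hdfs.open_output_stream("/videos/lecture_01.mp4") as remote:
            remote.write(local.read())

        # Read the first bytes back, e.g. from an analysis or transcoding worker.
        with hdfs.open_input_stream("/videos/lecture_01.mp4") as remote:
            header = remote.read(1024)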

  18. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and ultimately degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by checking whether their orientations head toward regions of light sources. The experimental results show that the proposed algorithm achieves shadow removal and object extraction rates of more than 93.8% and 89.9%, respectively, for nighttime video sequences, and that it outperforms conventional shadow removal algorithms designed for daytime videos.
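
    The two core operations, splitting a foreground blob at valleys of its vertical (column-wise) histogram and keeping as shadow candidates only the parts oriented toward a light-source region, can be sketched in Python as follows; the valley rule, the orientation estimate, and the alignment threshold are illustrative assumptions rather than the criteria used in the paper.

        import numpy as np

        def split_by_vertical_histogram(mask):
            """Split a binary foreground mask into column ranges at valleys of its vertical histogram."""
            hist = mask.sum(axis=0)
            valleys = [x for x in range(1, len(hist) - 1)
                       if 0 < hist[x] < 0.3 * hist.max()
                       and hist[x] <= hist[x - 1] and hist[x] <= hist[x + 1]]
            cuts = [0] + valleys + [mask.shape[1]]
            return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1) if cuts[i + 1] > cuts[i]]

        def points_toward_light(part_mask, light_center, min_alignment=0.9):
            """Treat a part as a shadow candidate if its principal axis heads toward the light source."""
            ys, xs = np.nonzero(part_mask)
            if xs.size < 2:
                return False
            pts = np.stack([xs - xs.mean(), ys - ys.mean()]).astype(float)
            _, eigvecs = np.linalg.eigh(pts @ pts.T)
            direction = eigvecs[:, -1]                     # unit vector along the principal axis
            to_light = np.array(light_center, dtype=float) - np.array([xs.mean(), ys.mean()])
            alignment = abs(direction @ to_light) / (np.linalg.norm(to_light) + 1e-9)
            return alignment > min_alignment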

  19. Patent landscape for royalty-free video coding

    Science.gov (United States)

    Reader, Cliff

    2016-09-01

    Digital video coding is over 60 years old, and the first major video coding standard - H.261 - is over 25 years old, yet today there are more patents than ever related to, or evaluated as essential to, video coding standards. This paper examines the historical development of video coding standards from the perspective of when the significant contributions to video coding technology were made, what performance can be attributed to those contributions, and when the original patents were filed for those contributions. These patents have now expired, so the main video coding tools, which provide the significant majority of coding performance, are now royalty-free. The deployment of video coding tools in a standard involves several related developments. The tools themselves have evolved over time to become more adaptive, taking advantage of the increased complexity afforded by advances in semiconductor technology. In most cases, the improvement in performance for any given tool has been incremental, although significant improvement has occurred in aggregate across all tools. The adaptivity must be mirrored by the encoder and decoder, and advances have been made in reducing the overhead of signaling adaptive modes and parameters. Efficient syntax has been developed to provide such signaling. Furthermore, efficient ways of implementing the tools with limited-precision, simple mathematical operators have been developed. Correspondingly, categories of patents related to video coding can be defined. Without discussing active patents, this paper provides the timeline of the developments of video coding and lays out the landscape of patents related to video coding. This provides a foundation on which royalty-free video codec design can take place.

  20. Taste quality decoding parallels taste sensations.

    Science.gov (United States)

    Crouzet, Sébastien M; Busch, Niko A; Ohla, Kathrin

    2015-03-30

    In most species, the sense of taste is key to distinguishing potentially nutritious from harmful food constituents and thereby to the acceptance (or rejection) of food. Taste quality is encoded by specialized receptors on the tongue, which detect chemicals corresponding to each of the basic tastes (sweet, salty, sour, bitter, and savory [1]), before taste quality information is transmitted via segregated neuronal fibers [2], distributed coding across neuronal fibers [3], or dynamic firing patterns [4] to the gustatory cortex in the insula. In rodents, both hardwired coding by labeled lines [2] and flexible, learning-dependent representations [5] and broadly tuned neurons [6] seem to coexist. It is currently unknown how, when, and where taste quality representations are established in the cortex and whether these representations are used for perceptual decisions. Here, we show that neuronal response patterns allow us to decode which of four tastants (salty, sweet, sour, and bitter) participants tasted in a given trial by using time-resolved multivariate pattern analyses of large-scale electrophysiological brain responses. The onset of this prediction coincided with the earliest taste-evoked responses originating from the insula and opercular cortices, indicating that quality is among the first attributes of a taste represented in the central gustatory system. These response patterns correlated with perceptual decisions of taste quality: tastes that participants discriminated less accurately also evoked less discriminable brain response patterns. The results therefore provide the first evidence for a link between taste-related decision-making and the predictive value of these brain response patterns. Copyright © 2015 Elsevier Ltd. All rights reserved.
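
    Time-resolved multivariate pattern analysis of this kind fits and cross-validates a classifier independently at every time point, using the sensor values at that latency as the feature pattern; the earliest time point with above-chance accuracy marks when taste quality first becomes decodable. A minimal Python sketch assuming scikit-learn follows; the array shapes and choice of classifier are illustrative, not the authors' exact pipeline.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def time_resolved_decoding(epochs, labels, cv=5):
            """epochs: (n_trials, n_sensors, n_times); labels: taste quality per trial (salty/sweet/sour/bitter)."""
            n_trials, n_sensors, n_times = epochs.shape
            accuracy = np.empty(n_times)
            for t in range(n_times):
                pattern = epochs[:, :, t]                  # multivariate sensor pattern at one latency
                accuracy[t] = cross_val_score(
                    LinearDiscriminantAnalysis(), pattern, labels, cv=cv).mean()
            return accuracy                                # decoding accuracy as a function of time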