WorldWideScience

Sample records for Huffman coding compression

  1. Sequential adaptive compressed sampling via Huffman codes

    CERN Document Server

    Aldroubi, Akram; Zarringhalam, Kourosh

    2008-01-01

    There are two main approaches in compressed sensing: the geometric approach and the combinatorial approach. In this paper we introduce an information theoretic approach and use results from the theory of Huffman codes to construct a sequence of binary sampling vectors to determine a sparse signal. Unlike other approaches, our approach is adaptive in the sense that each sampling vector depends on the previous sample. The number of measurements we need for a k-sparse vector in n-dimensional space is no more than O(k log n) and the reconstruction is O(k).

  2. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit the required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes up a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format of a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of the decoding tables by up to 40%. Using this technique, we improve the final compression ratios, in comparison to the first technique, to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique further improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our evaluation, we used a representative set of applications and applied each technique to two major embedded processor architectures, ARM and MIPS.

  3. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and Huffman coding

    Science.gov (United States)

    Hakim, P. R.; Permala, R.

    2017-01-01

    LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper analyzes the Differential Pulse Code Modulation (DPCM) method and the Huffman coding used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research is that at least two neighboring pixels should be used in the DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
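
    The DPCM-plus-Huffman pipeline described above is easy to sketch. The following minimal example (ours, not the paper's code) predicts each pixel from its left neighbour and Huffman-codes the residuals of a toy 8-bit image; the one-neighbour predictor and all helper names are illustrative assumptions only.

```python
import heapq
from collections import Counter

def dpcm_residuals(image):
    """Predict each pixel from its left neighbour and return the residuals."""
    residuals = []
    for row in image:
        prev = 0
        for pixel in row:
            residuals.append(pixel - prev)
            prev = pixel
    return residuals

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from a list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol source
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, tie-breaker, {symbol: codeword-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

if __name__ == "__main__":
    image = [[10, 12, 13, 13], [11, 11, 14, 15]]        # toy 2x4 image
    residuals = dpcm_residuals(image)
    code = huffman_code(residuals)
    coded_bits = sum(len(code[r]) for r in residuals)
    print(f"raw: {8 * len(residuals)} bits, DPCM + Huffman: {coded_bits} bits")
```

    Because DPCM residuals cluster around zero, Huffman-coding them typically needs fewer bits than coding the raw pixel values, which is the effect the lossless pipeline above exploits.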

  4. An Upper Limit of AC Huffman Code Length in JPEG Compression

    OpenAIRE

    Horie, Kenichi

    2009-01-01

    A strategy for computing upper code-length limits of AC Huffman codes for an 8x8 block in JPEG Baseline coding is developed. The method is based on a geometric interpretation of the DCT, and the calculated limits are as close as 14% to the maximum code-lengths. The proposed strategy can be adapted to other transform coding methods, e.g., MPEG 2 and 4 video compressions, to calculate close upper code length limits for the respective processing blocks.

  5. Efficient Data Compression Scheme using Dynamic Huffman Code Applied on Arabic Language

    Directory of Open Access Journals (Sweden)

    Sameh Ghwanmeh

    2006-01-01

    The development of an efficient compression scheme for the Arabic language is a difficult task. This paper applies dynamic Huffman coding with variable-length bit codes to Arabic-language data compression. Experimental tests were performed on both Arabic and English text, and a comparison was made to measure the compression efficiency on each. A comparison was also made between the compression rate and the size of the file being compressed. It was found that as the file size increases, the compression ratio decreases for both Arabic and English text. The experimental results show that the average message length and the compression efficiency on Arabic text were better than on English text. The results also show that the main factor significantly affecting the compression ratio and average message length is the frequency of the symbols in the text.

  6. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    Science.gov (United States)

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used and no additional computers are needed for this purpose. After initial preprocessing such as removal of baseline noise, Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, thresholding techniques are applied after three stages of wavelet transform (db04). Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. Finally, the ECG signals are sent to a telemedicine center over the TCP/IP protocol to obtain a specialist diagnosis.

  7. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    Science.gov (United States)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. This latter signal is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal reconstruction, where the different ECG waves are recovered correctly.

  8. Bounds on Generalized Huffman Codes

    CERN Document Server

    Baer, Michael B

    2007-01-01

    New lower and upper bounds are obtained for the compression of optimal binary prefix codes according to various nonlinear codeword length objectives. Like the coding bounds for Huffman coding, which concern the traditional linear code objective of minimizing average codeword length, these are in terms of a form of entropy and the probability of the most probable input symbol. As in Huffman coding, some upper bounds can be found using sufficient conditions for the codeword corresponding to the most probable symbol being one bit long. Whereas having probability no less than 0.4 is a tight sufficient condition for this to be the case in Huffman coding, other penalties differ, some having a tighter condition, some a looser condition, and others having no such sufficient condition. The objectives explored here are ones for which optimal codes can be found using a generalized form of Huffman coding. These objectives include one related to queueing (an increasing exponential average), one related to single-shot c...

  9. Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression Standards

    CERN Document Server

    Shahbahrami, Asadollah; Rostami, Mobin Sabbaghi; Mobarhan, Mostafa Ayoubi

    2011-01-01

    Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia content. The transmission and storage of compressed multimedia data is much faster and more efficient than for the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding.

  10. Maximal codeword lengths in Huffman codes

    Science.gov (United States)

    Abu-Mostafa, Y. S.; Mceliece, R. J.

    1992-01-01

    The following question about Huffman coding, which is an important technique for compressing data from a discrete source, is considered. If p is the smallest source probability, how long, in terms of p, can the longest Huffman codeword be? It is shown that if p is in the range 0 < p <= 1/2, and if K is the unique index such that 1/F(K+3) < p <= 1/F(K+2), where F(K) denotes the Kth Fibonacci number, then the longest Huffman codeword for a source whose least probability is p is at most K, and no better bound is possible. Asymptotically, this implies the surprising fact that for small values of p, a Huffman code's longest codeword can be as much as 44 percent larger than that of the corresponding Shannon code.
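
    The Fibonacci condition above translates directly into a small computation. The snippet below (our illustration, not the paper's) finds the bound K for a given least source probability p by growing the Fibonacci sequence until it first exceeds 1/p.

```python
def huffman_length_bound(p):
    """Upper bound on the longest Huffman codeword length when the least
    source probability is p, using the condition F(K+2) <= 1/p < F(K+3)
    with F(1) = F(2) = 1."""
    assert 0.0 < p <= 0.5
    fib = [1, 1]                       # fib[i] = F(i + 1)
    while fib[-1] <= 1.0 / p:          # grow until F(K+3), the first value > 1/p
        fib.append(fib[-1] + fib[-2])
    return len(fib) - 3                # len(fib) = K + 3

if __name__ == "__main__":
    for p in (0.5, 0.25, 0.1, 0.01):
        print(f"least probability {p}: longest codeword <= {huffman_length_bound(p)}")
```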

  11. Huffman Coding and Applications in Compression for Vector Maps

    Institute of Scientific and Technical Information of China (English)

    刘兴科; 陈轲; 于晓光

    2014-01-01

    Huffman coding is a statistical coding method widely used in lossless compression. The principle and implementation of Huffman coding are studied, and compression of vector map data is implemented with Huffman coding. Considering the characteristics of vector map data, a detailed Huffman coding algorithm and the steps of compression and decompression are proposed, and the properties of the algorithm for vector map compression are discussed. The principle and process of Huffman coding are illustrated with an experiment, and experiments on a set of real vector maps demonstrate that the proposed algorithm is a lossless compression method with high efficiency, a high compression ratio and good generality.

  12. Estimating the size of Huffman code preambles

    Science.gov (United States)

    Mceliece, R. J.; Palmatier, T. H.

    1993-01-01

    Data compression via block-adaptive Huffman coding is considered. The compressor consecutively processes blocks of N data symbols, estimates source statistics by computing the relative frequencies of each source symbol in the block, and then synthesizes a Huffman code based on these estimates. In order to let the decompressor know which Huffman code is being used, the compressor must begin the transmission of each compressed block with a short preamble or header file. This file is an encoding of the list n = (n_1, n_2, ..., n_m), where n_i is the length of the Huffman codeword associated with the ith source symbol. A simple method of doing this encoding is to individually encode each n_i into a fixed-length binary word of length log2 l, where l is an a priori upper bound on the codeword length. This method produces a maximum preamble length of m log2 l bits. The object is to show that, in most cases, no substantially shorter header of any kind is possible.
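
    The fixed-width preamble described above is straightforward to sketch. In the snippet below (our own illustration; storing n_i - 1 so that lengths 1..l fit into ceil(log2 l) bits is an assumed packing convention, not the paper's), the header costs exactly m * ceil(log2 l) bits.

```python
import math

def encode_preamble(lengths, l):
    """Pack the codeword lengths n_1..n_m into m * ceil(log2 l) header bits."""
    width = math.ceil(math.log2(l))
    bits = "".join(format(n - 1, f"0{width}b") for n in lengths)
    return bits, width

def decode_preamble(bits, width):
    """Recover the codeword lengths from the fixed-width header."""
    return [int(bits[i:i + width], 2) + 1 for i in range(0, len(bits), width)]

if __name__ == "__main__":
    lengths = [2, 2, 3, 3, 4, 4, 4, 4]            # toy codeword lengths, l = 16
    header, width = encode_preamble(lengths, l=16)
    print(len(header), "header bits")              # 8 symbols * 4 bits = 32
    assert decode_preamble(header, width) == lengths
```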

  13. Short Huffman Codes Producing 1s Half of the Time

    CERN Document Server

    Altenbach, Fabian; Mathar, Rudolf

    2011-01-01

    The design of the channel part of a digital communication system (e.g., error correction, modulation) is heavily based on the assumption that the data to be transmitted forms a fair bit stream. However, simple source encoders such as short Huffman codes generate bit streams that poorly match this assumption. As a result, the channel input distribution does not match the original design criteria. In this work, a simple method called half Huffman coding (halfHc) is developed. halfHc transforms a Huffman code into a source code whose output is more similar to a fair bit stream. This is achieved by permuting the codewords such that the frequency of 1s at the output is close to 0.5. The permutations are such that the optimality in terms of achieved compression ratio is preserved. halfHc is applied in a practical example, and the resulting overall system performs better than when conventional Huffman coding is used.

  14. Difference-Huffman Coding of Multidimensional Databases

    CERN Document Server

    Szépkúti, István

    2011-01-01

    A new compression method called difference-Huffman coding (DHC) is introduced in this paper. It is verified empirically that DHC results in a smaller multidimensional physical representation than other previously published techniques (single count header compression, logical position compression, base-offset compression and difference sequence compression). The article examines how caching influences the expected retrieval time of the multidimensional and table representations of relations. A model is proposed for this, which is then verified with empirical data. Conclusions are drawn, based on the model and the experiment, about when one physical representation outperforms another in terms of retrieval time. Over the tested range of available memory, retrieval from the multidimensional representation was always much quicker than from the table representation.

  15. Design and application of an XML data compression algorithm based on Huffman coding

    Institute of Scientific and Technical Information of China (English)

    施鹏; 李敏; 于涛; 赵利强; 王建林

    2013-01-01

    An XML data compression method based on Huffman coding is proposed to address the low access rate of a production process report system for large data sources under limited network bandwidth. In this algorithm, a data processing class is constructed to extract the frequently repeated node units of an XML document. The specific unit words are encoded with Huffman coding, and the encoded document is then compressed with the LZMA algorithm. This approach removes the traditional XML compression algorithm's dependence on the document type definition and an XML parser, and yields a good compression effect. The resulting Huffman-LZMA compression algorithm was applied to the design of a production process report system. In practical use, the compression ratio for the report data source reached about 88%, effectively saving network bandwidth and storage space and improving the access rate of the report system.

  16. Research on a Data Compression Method Based on an Improved Huffman Coding Algorithm

    Institute of Scientific and Technical Information of China (English)

    张红军; 徐超

    2014-01-01

    As a lossless compression coding method, Huffman coding has many important applications in data compression. The classic algorithm derives the Huffman codes bottom-up from the Huffman tree. By analysing the Huffman algorithm, this paper presents an improved Huffman data compression algorithm that traverses the Huffman tree from the root node to the leaf nodes using a queue structure. In the coding process, every leaf node is scanned only once to obtain its Huffman code. Experimental results show that the improved algorithm not only achieves a higher compression ratio than the classic algorithm but also ensures the security and confidentiality of the resulting compressed file.
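
    A minimal sketch of the top-down idea described above, assuming the Huffman tree has already been built as nested tuples: a queue walks from the root towards the leaves so that each leaf is dequeued exactly once and receives its codeword on the way down. The tree layout and the bottom-up construction shown here are our own illustrative choices, not the paper's implementation.

```python
import heapq
from collections import Counter, deque

def build_tree(text):
    """Standard bottom-up Huffman tree: a leaf is (symbol,), an internal
    node is (left, right).  Returns the root node."""
    heap = [(w, i, (s,)) for i, (s, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, tie, (a, b)))
        tie += 1
    return heap[0][2]

def codes_by_queue(root):
    """Assign codewords top-down with a queue: each node is dequeued once and
    its children inherit the parent's prefix plus one extra bit."""
    codes = {}
    queue = deque([(root, "")])
    while queue:
        node, prefix = queue.popleft()
        if len(node) == 1:                     # leaf: (symbol,)
            codes[node[0]] = prefix or "0"
        else:                                  # internal: (left, right)
            queue.append((node[0], prefix + "0"))
            queue.append((node[1], prefix + "1"))
    return codes

if __name__ == "__main__":
    print(codes_by_queue(build_tree("abracadabra")))
```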

  17. Performance Improvement Of Bengali Text Compression Using Transliteration And Huffman Principle

    Directory of Open Access Journals (Sweden)

    Md. Mamun Hossain

    2016-09-01

    In this paper, we propose a new compression technique based on transliteration of Bengali text to English. Compared to Bengali, English is a less symbolic language, so transliterating Bengali text to English reduces the number of characters to be coded. Huffman coding is well known for producing optimal compression. When the Huffman principle is applied to the transliterated text, significant performance improvement is achieved in terms of decoding speed and space requirement compared to Unicode compression.

  18. A quantum analog of Huffman coding

    CERN Document Server

    Braunstein, S L; Gottesman, D; Lo, H K; Braunstein, Samuel L.; Fuchs, Christopher A.; Gottesman, Daniel; Lo, Hoi-Kwong

    1998-01-01

    We analyse a generalization of Huffman coding to the quantum case. In particular, we notice various difficulties in using instantaneous codes for quantum communication. However, for the storage of quantum information, we have succeeded in constructing a Huffman-coding inspired quantum scheme. The number of computational steps in the encoding and decoding processes of N quantum signals can be made to be polynomial in log N by a massively parallel implementation of a quantum gate array. This is to be compared with the N^3 computational steps required in the sequential implementation by Cleve and DiVincenzo of the well-known quantum noiseless block coding scheme by Schumacher. The powers and limitations in using this scheme in communication are also discussed.

  19. Using an Improved Huffman Code to Realize Compression and Decompression of Documents

    Institute of Scientific and Technical Information of China (English)

    卢冰; 刘兴海

    2013-01-01

    By analysing the idea of the Huffman algorithm, an improved Huffman data compression algorithm is proposed. To address the shortcomings of the classic Huffman algorithm, a heap sort is used to build the Huffman tree and obtain the Huffman codes, which reduces the number of memory reads and writes and improves system response time. Through a second mapping, every 8 bits of the encoded file are converted into one corresponding character, which improves the file compression ratio and ensures the security and confidentiality of the resulting compressed file. Finally, the improved Huffman algorithm was tested on three text files; the experiments show that its compression ratio is slightly better than that of the classic algorithm.
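
    The heap-based tree construction mentioned above is the usual priority-queue Huffman build (as in the sketches earlier in this listing); the snippet below illustrates only the "second mapping" step, packing every 8 bits of the Huffman bit stream into one byte. The toy code table and the way the pad length is carried alongside the data are our own assumptions for the example.

```python
def pack_bits(bits):
    """Second mapping: every 8 bits of the Huffman bit stream become one byte.
    The number of zero bits padding the last byte is returned alongside the
    data so the stream can be trimmed back on decompression."""
    pad = (-len(bits)) % 8
    bits += "0" * pad
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data, pad

def unpack_bits(data, pad):
    """Inverse of pack_bits: bytes back to the original bit string."""
    bits = "".join(format(byte, "08b") for byte in data)
    return bits[:len(bits) - pad] if pad else bits

if __name__ == "__main__":
    # Toy Huffman table; in the paper's scheme the codes come from the
    # heap-built Huffman tree.
    codes = {"a": "0", "b": "10", "r": "110", "c": "1110", "d": "1111"}
    bits = "".join(codes[ch] for ch in "abracadabra")
    packed, pad = pack_bits(bits)
    assert unpack_bits(packed, pad) == bits
    print(len("abracadabra"), "characters ->", len(packed), "bytes")
```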

  20. Joint compression and encryption using chaotically mutated Huffman trees

    Science.gov (United States)

    Hermassi, Houcemeddine; Rhouma, Rhouma; Belghith, Safya

    2010-10-01

    This paper introduces a new scheme for joint compression and encryption using the Huffman codec. A basic tree is first generated for a given message, and then, based on a keystream generated from a chaotic map and depending on the input message, the basic tree is mutated without changing the statistical model. Hence a symbol can be coded by more than one codeword having the same length. The security of the scheme is tested against the known-plaintext attack and the brute-force attack. Performance analysis including encryption/decryption speed, additional computational complexity and compression ratio is given.
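
    A rough sketch of the mutation idea, not the authors' exact scheme: a logistic-map keystream decides at each internal node of the Huffman tree whether its two children are swapped. Swapping never changes a codeword's length, so the statistical model and the compression ratio are preserved while the actual codewords depend on the chaotic key. The map parameters and tree layout are illustrative assumptions.

```python
def logistic_keystream(x0, r=3.99):
    """Chaotic bit generator from the logistic map (illustrative choice)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield 1 if x > 0.5 else 0

def mutated_codes(node, keystream, prefix=""):
    """Walk a Huffman tree given as nested tuples: a leaf is (symbol,), an
    internal node is (left, right).  At each internal node one keystream bit
    decides whether the 0/1 branches are swapped."""
    if len(node) == 1:
        return {node[0]: prefix or "0"}
    left, right = node
    if next(keystream):
        left, right = right, left           # swap children: lengths unchanged
    codes = mutated_codes(left, keystream, prefix + "0")
    codes.update(mutated_codes(right, keystream, prefix + "1"))
    return codes

if __name__ == "__main__":
    tree = (("a",), (("b",), ("c",)))       # toy tree with codeword lengths 1, 2, 2
    print(mutated_codes(tree, logistic_keystream(0.4321)))
    print(mutated_codes(tree, logistic_keystream(0.1234)))
```

    Running the toy example with two different initial values x0 yields two different code tables with identical codeword lengths, which is the property the scheme relies on.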

  1. Ternary Tree and Clustering Based Huffman Coding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2010-09-01

    In this study, the focus is on the use of a ternary tree instead of a binary tree. A new two-pass algorithm for encoding Huffman ternary tree codes was implemented, in which we determine the codeword length of each symbol. Huffman encoding is a two-pass problem: the first pass collects the letter frequencies, and that information is then used to create the Huffman tree. Note that char values range from -128 to 127, so they must be cast; the data were stored as unsigned chars, giving a range of 0 to 255. The output file is opened and the frequency table written to it; the input file is then read character by character, the codes are looked up, and the encoding is written to the output file. Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code. To reduce the memory size and speed up finding the codeword length of a symbol in a Huffman tree, we propose a memory-efficient data structure to represent the codeword lengths of a Huffman ternary tree.
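
    For readers unfamiliar with t-ary Huffman coding, the sketch below (ours, not the paper's two-pass algorithm or its memory-efficient length structure) shows the core difference from the binary case: the three lightest nodes are merged at every step, and dummy zero-weight leaves are added first so that the final merge has exactly three children.

```python
import heapq
from collections import Counter

def ternary_huffman(text):
    """Ternary Huffman code: merge the three lightest nodes each round.
    Assumes at least two distinct symbols.  Dummy zero-weight leaves are
    added so the leaf count is congruent to 1 modulo 2 (t-ary condition, t=3)."""
    freq = Counter(text)
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    while (len(heap) - 1) % 2 != 0:                  # pad so n = 1 (mod 2)
        heap.append((0, len(heap), {None: ""}))
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        merged, total = {}, 0
        for digit in "012":                          # take the three lightest nodes
            w, _, codes = heapq.heappop(heap)
            total += w
            merged.update({s: digit + c for s, c in codes.items()})
        heapq.heappush(heap, (total, tie, merged))
        tie += 1
    codes = heap[0][2]
    codes.pop(None, None)                            # drop the dummy padding leaf
    return codes

if __name__ == "__main__":
    print(ternary_huffman("abracadabra"))
```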

  2. Entropy-Based Bounds On Redundancies Of Huffman Codes

    Science.gov (United States)

    Smyth, Padhraic J.

    1992-01-01

    Report presents an extension of the theory of redundancy of binary prefix codes of Huffman type, including derivation of a variety of bounds expressed in terms of the entropy of the source and the size of the alphabet. Recent developments yielded bounds on the redundancy of Huffman codes in terms of the probabilities of various components of the source alphabet. In practice, redundancies of optimal prefix codes are often closer to 0 than to 1.

  3. Visually Improved Image Compression by Combining EZW Encoding with Texture Modeling using Huffman Encoder

    Directory of Open Access Journals (Sweden)

    Vinay U. Kale

    2010-05-01

    This paper proposes a technique for image compression which uses the Wavelet-based Image/Texture Coding Hybrid (WITCH) scheme [1] in combination with a Huffman encoder. It implements a hybrid coding approach while nevertheless preserving the features of progressive and lossless coding. The hybrid scheme encodes the structural image information with the Embedded Zerotree Wavelet (EZW) encoding algorithm [2] and the stochastic texture in a model-based manner, and the encoded data are then compressed using a Huffman encoder. The scheme proposed here achieves superior subjective quality while increasing the compression ratio by a factor of three or even four. With this technique, it is possible to achieve compression ratios as high as 10 to 12, but with some minor distortions in the encoded image.

  4. Applications of Dynamic Huffman Code Algorithms in the Data Compression of Power-Line Computer Networks

    Institute of Scientific and Technical Information of China (English)

    黄荣辉; 周明天; 曾家智

    2000-01-01

    This thesis analyzes the characteristics of data packets in power-line computer networks and discusses a data compression method currently studied abroad. After briefly describing the different Huffman code algorithms, it presents data compression results obtained by testing data packets in a power-line computer network. The results show that it is better to use the advanced dynamic Huffman code method in power-line computer networks. Finally, methods of improving its operation in engineering practice are proposed.

  5. Lossless DNA Solidity Using Huffman and Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Lakshmi Mythri Dasari

    2014-07-01

    The DNA sequences making up any bacterium comprise the blueprint of that bacterium, so understanding and analyzing the different genes within those sequences has become an exceptionally significant task. Researchers produce huge volumes of DNA sequences every day, which makes genome sequence catalogues grow exponentially. Databases such as GenBank hold millions of DNA sequences, filling many thousands of gigabytes of storage capacity. Compression of genomic sequences can decrease the storage requirements and increase transmission speed. In this paper we compare two lossless compression algorithms, Huffman and arithmetic coding. In Huffman coding, individual bases are coded and assigned specific binary codewords, whereas in arithmetic coding an entire DNA sequence is coded into a single fractional number to which a binary word is assigned. The compression ratio is compared for both methods, and we conclude that arithmetic coding is the better of the two.

  6. Design and performance of Huffman sequences in medical ultrasound coded excitation.

    Science.gov (United States)

    Polpetta, Alessandro; Banelli, Paolo

    2012-04-01

    This paper deals with coded-excitation techniques for ultrasound medical echography. Specifically, linear Huffman coding is proposed as an alternative approach to other widely established techniques, such as complementary Golay coding and linear frequency modulation. The code design is guided by an optimization procedure that boosts the signal-to-noise ratio gain (GSNR) and, interestingly, also makes the code robust in pulsed-Doppler applications. The paper capitalizes on a thorough analytical model that can be used to design any linear coded-excitation system. This model highlights that the performance in frequency-dependent attenuating media mostly depends on the pulse-shaping waveform when the codes are characterized by almost ideal (i.e., Kronecker delta) autocorrelation. In this framework, different pulse shapers and different code lengths are considered to identify coded signals that optimize the contrast resolution at the output of the receiver pulse compression. Computer simulations confirm that the proposed Huffman codes are particularly effective, and that there are scenarios in which they may be preferable to the other established approaches, both in attenuating and non-attenuating media. Specifically, for a single scatterer at 150 mm in a 0.7-dB/(MHz·cm) attenuating medium, the proposed Huffman design achieves a main-to-side lobe ratio (MSR) equal to 65 dB, whereas tapered linear frequency modulation and classical complementary Golay codes achieve 35 and 45 dB, respectively.

  7. Canonical Huffman code based full-text index

    Institute of Scientific and Technical Information of China (English)

    Yi Zhang; Zhili Pei; Jinhui Yang; Yanchun Liang

    2008-01-01

    Full-text indices are data structures that can be used to find any substring of a given string. Many full-text indices require space larger than the original string. In this paper, we introduce the canonical Huffman code to the wavelet tree of a string T[1...n]. Compared with a Huffman-code-based wavelet tree, the memory used to represent the shape of the wavelet tree is no longer needed; for a large alphabet, this part of the memory is not negligible. The operations on the wavelet tree are also simpler and more efficient thanks to the canonical Huffman code. Based on the resulting structure, the multi-key rank and select functions can be performed using at most nH0 + |X|(lg lg n + lg n - lg |Σ|) + O(nH0) bits and in O(H0) time on average, where H0 is the zeroth-order empirical entropy of T. Finally, we present an efficient construction algorithm for this index, which is on-line and linear.
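
    A short sketch of canonical Huffman code assignment in general (our illustration; the paper's wavelet-tree index is not reproduced): once only the codeword lengths are known, the codewords are assigned by a fixed rule, consecutive values within each length, so the shape of the tree never has to be stored.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords from {symbol: codeword length}.
    Symbols are sorted by (length, symbol); codewords are consecutive
    integers, left-shifted whenever the length increases."""
    code, prev_len, codes = 0, 0, {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)
        codes[sym] = format(code, f"0{length}b")
        code += 1
        prev_len = length
    return codes

if __name__ == "__main__":
    lengths = {"a": 1, "b": 3, "c": 3, "d": 3, "e": 3}   # lengths from any Huffman build
    print(canonical_codes(lengths))
    # {'a': '0', 'b': '100', 'c': '101', 'd': '110', 'e': '111'}
```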

  8. M-ary Anti-Uniform Huffman Codes for Infinite Sources With Geometric Distribution

    OpenAIRE

    Tarniceriu, Daniela; Munteanu, Valeriu; Zaharia, Gheorghe

    2013-01-01

    In this paper we consider the class of generalized anti-uniform Huffman (AUH) codes for sources with an infinite alphabet and geometric distribution. This distribution leads to infinite anti-uniform sources for some ranges of its parameters. Huffman coding of these sources results in AUH codes. We perform a generalization of binary Huffman encoding, using an M-letter code alphabet, and prove that as a result of this encoding, sources with memory are obtained. For these sour...

  9. A Dynamic Programming Approach To Length-Limited Huffman Coding

    CERN Document Server

    Golin, Mordecai

    2008-01-01

    The "state-of-the-art" in length-limited Huffman coding algorithms is the $\Theta(ND)$-time, $\Theta(N)$-space one of Hirschberg and Larmore, where $D \le N$ is the length restriction on the code. This is a very clever, very problem-specific technique. In this note we show that there is a simple dynamic programming (DP) method that solves the problem with the same time and space bounds. The fact that there was a $\Theta(ND)$-time DP algorithm was previously known; it is a straightforward DP with the Monge property (which permits an order-of-magnitude speedup). It was not interesting, though, because it also required $\Theta(ND)$ space. The main result of this paper is the technique developed for reducing the space. It is quite simple and applicable to many other problems modeled by DPs with the Monge property. We illustrate this with examples from web-proxy design and wireless mobile paging.

  10. Improved Huffman coding-based data transmission and compression method for agricultural machinery operation

    Institute of Scientific and Technical Information of China (English)

    杨敬锋; 张南峰; 李勇; 薛月菊; 吕伟; 何堃

    2014-01-01

    To solve the problem of transmitting agricultural machinery operation state data in poor communication environments caused by the unbalanced coverage of mobile communication base stations, a data filtering and compression method based on an improved Huffman coding technique is proposed to realize data selection, compression, transmission, parsing and extraction. First, the agricultural machinery operation data types, exchange mode and compression mode were defined; data collection and exchange were then realized on a Compass/GPS dual-mode state data collection terminal, and the improved Huffman coding technique was applied. Compression and decompression tests show that, for a data collection period of 5 s and a data length of 918.38 kb, the data compressed with the improved Huffman algorithm occupy 412.56 kb, 86 kb less than the 498.56 kb obtained with the traditional Huffman algorithm under the same conditions; the compression rate rises from 45.71% for the traditional algorithm to 55.08% for the improved one. The data transmission error rate and packet loss rate of the traditional Huffman algorithm were 2.47% and 4.18%, whereas under the same transmission requirements the filtered, compressed transmission reduces them to 2.06% and 0.78%. The method meets the requirements for compressing and transmitting agricultural machinery operation state data, achieves low error and packet-loss rates when individual packets carry little data and transmission times are short, and has low computational cost and high compression efficiency, making it suitable for data transmission in agricultural machinery operating areas.

  11. On constructing symmetrical reversible variable-length codes independent of the Huffman code

    Institute of Scientific and Technical Information of China (English)

    HUO Jun-yan; CHANG Yi-lin; MA Lin-hua; LUO Zhong

    2006-01-01

    Reversible variable-length codes (RVLCs) have received much attention due to their excellent error-resilience capabilities. In this paper, a novel construction algorithm for symmetrical RVLCs is proposed which is independent of the Huffman code. The proposed algorithm's codeword assignment is based only on symbol occurrence probabilities. It has many advantages over the available symmetrical construction algorithms, including easier realization and better code performance. In addition, the proposed algorithm simplifies the codeword selection mechanism dramatically.

  12. JOINT SOURCE-CHANNEL DECODING OF HUFFMAN CODES WITH LDPC CODES

    Institute of Scientific and Technical Information of China (English)

    Mei Zhonghui; Wu Lenan

    2006-01-01

    In this paper, we present a Joint Source-Channel Decoding algorithm (JSCD) for Low-Density Parity Check (LDPC) codes by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy, which results from the neighbouring Huffman coded bits. Simulations demonstrate that in the presence of source redundancy, the proposed algorithm gives better performance than the Separate Source and Channel Decoding algorithm (SSCD).

  13. PERFORMANCE COMPARISON OF HUFFMAN AND LEMPEL-ZIV WELCH DATA COMPRESSION FOR WIRELESS SENSOR NODE APPLICATION

    Directory of Open Access Journals (Sweden)

    Asral Bahari Jambek

    2014-01-01

    Wireless Sensor Networks (WSNs) are becoming important in today's technology for monitoring our surrounding environment. However, wireless sensor nodes are powered by a limited energy supply. To extend the lifetime of a device, energy consumption must be reduced, and data transmission is known to consume the largest amount of energy in a sensor node. Thus, one method to reduce the energy used is to compress the data before transmitting it. This study analyses the performance of the Huffman and Lempel-Ziv-Welch (LZW) algorithms when compressing data that are commonly used in WSNs. From the experimental results, the Huffman algorithm gives better performance than the LZW algorithm for this type of data: it is able to reduce the data size by 43% on average and is four times faster than the LZW algorithm.

  14. Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval

    Science.gov (United States)

    Kurasawa, Hisashi; Takasu, Atsuhiro; Adachi, Jun

    Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), the index is costly to construct and tends to be managed unfairly because of the unbalanced term frequency distribution. We devised a new distributed index, named Huffman-DHT, for P2P IR. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large space in the node ID space of the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. Huffman-DHT is the first approach to adapt concepts from coding theory and term frequency distributions to load balancing. We evaluated this approach in experiments using a document collection and assessed its load-balancing capabilities in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we prove that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrences from a small number of sample documents.

  15. AN APPLICATION OF PLANAR BINARY BITREES TO PREFIX AND HUFFMAN PREFIX CODE

    OpenAIRE

    Erjavec, Zlatko

    2004-01-01

    In this paper we construct a prefix code in which the use of planar binary trees is replaced by the use of planar binary bitrees. In addition, we apply the planar binary bitrees to the Huffman prefix code. Finally, we code the English alphabet in such a way that the characters have codewords different from the already established ones.

  16. Does an Arithmetic Coding Followed by Run-length Coding Enhance the Compression Ratio?

    Directory of Open Access Journals (Sweden)

    Mohammed Otair

    2015-07-01

    Compression is a technique to minimize the size of an image without excessively decreasing its quality; transmitting a compressed image is then much more efficient and rapid than transmitting the original. Arithmetic and Huffman coding are the most widely used techniques in entropy coding. This study tries to show that RLC may be added after arithmetic coding as an extra processing step so that the data can be coded more efficiently without any further degradation of the image quality. The main purpose of this study is therefore to answer the following question: which entropy coding, arithmetic with RLC or Huffman with RLC, is more suitable from the compression ratio perspective? Finally, experimental results show that arithmetic coding followed by RLC yields better compression performance than Huffman coding with RLC.

  17. A Compression & Encryption Algorithm on DNA Sequences Using Dynamic Look up Table and Modified Huffman Techniques

    Directory of Open Access Journals (Sweden)

    Syed Mahamud Hossein

    2013-09-01

    Storing, transmitting and securing DNA sequences are well-known research challenges, and the problem has been magnified by the increasing discovery and availability of DNA sequences. We present a DNA sequence compression algorithm based on a Dynamic Look-Up Table (DLUT) and a modified Huffman technique. The DLUT consists of 4^3 (64) substrings, each 3 bases long, and each substring is individually coded by a single ASCII character from 33 (!) to 96 (`) and vice versa. Encoding depends on an encryption key chosen by the user from the four bases {a, t, g, c}, and decoding likewise requires the decryption key provided by the encoding user, so decoding requires authenticated input. The substrings are combined into a DLUT-based pre-coding routine. The algorithm is tested on the reverse, complement and reverse-complement of DNA sequences, and also on artificial DNA sequences of equivalent length. Speed of encryption and level of security are two important measures for evaluating any encryption system. With the proliferation of ubiquitous computing systems in which digital content is accessible through resource-constrained biological databases, security is a very important issue, and much research has been devoted to finding encryption systems that can run effectively on such databases. Information security is the most challenging question in protecting data from unauthorized users. The proposed method may protect the data from hackers and can provide three tiers of security: tier one is the ASCII code, tier two is the nucleotide (a, t, g or c) chosen by the user, and tier three is the change of label or node position in the Huffman tree. Compression of genome sequences will help to increase the efficiency of their use. The greatest advantages of this algorithm are fast execution, small memory occupation and easy implementation. The program implementing the technique was originally written in the C language.

  18. Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme

    OpenAIRE

    Golin, Mordecai; Mathieu, Claire; Young, Neal E.

    2002-01-01

    We give a polynomial-time approximation scheme for the generalization of Huffman Coding in which codeword letters have non-uniform costs (as in Morse code, where the dash is twice as long as the dot). The algorithm computes a (1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is the input size.

  19. Conditional entropy coding of DCT coefficients for video compression

    Science.gov (United States)

    Sipitca, Mihai; Gillman, David W.

    2000-04-01

    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.

  20. Analysis and Research on Adaptive Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    彭文艺

    2012-01-01

    Huffman coding, as an efficient and simple variable-length code, is commonly used in source coding. However, the existing Huffman coding algorithm is not highly efficient and its application is somewhat limited. This paper therefore proposes an adaptive Huffman coding algorithm which, compared with other Huffman coding schemes, is more efficient and has a wider range of application.

  1. Tight Bounds on the Average Length, Entropy, and Redundancy of Anti-Uniform Huffman Codes

    CERN Document Server

    Mohajer, Soheil

    2007-01-01

    In this paper we consider the class of anti-uniform Huffman codes and derive tight lower and upper bounds on the average length, entropy, and redundancy of such codes in terms of the alphabet size of the source. The Fibonacci distributions are introduced which play a fundamental role in AUH codes. It is shown that such distributions maximize the average length and the entropy of the code for a given alphabet size. Another previously known bound on the entropy for given average length follows immediately from our results.

  2. Implementation of Huffman Decoder on FPGA

    Directory of Open Access Journals (Sweden)

    Safia Amir Dahri

    2016-01-01

    Lossless data compression algorithms are widely used in data transmission, reception and storage systems in order to increase the data rate and speed and to save space on storage devices. Nowadays, different algorithms are implemented in hardware to obtain the benefits of hardware realization. Hardware implementation of algorithms, such as digital signal processing algorithms and filter realizations, is done on programmable devices, i.e. FPGAs. Among lossless data compression algorithms, the Huffman algorithm is the most widely used because of its variable-length coding and many other benefits, and Huffman algorithms are used in many software applications, e.g. Zip and Unzip, and in communication. In this paper, a Huffman decoder is implemented on a Xilinx Spartan-3E board. The FPGA is programmed with the Xilinx tool Xilinx ISE 8.2i; the design is written in VHDL, and text data previously encoded by a Huffman algorithm is decoded by the Huffman decoder on the hardware board. In order to visualize the output clearly in waveforms, the same code is simulated in ModelSim v6.4. The Huffman decoder is also implemented in MATLAB for verification of its operation. The FPGA is a configurable device which is efficient in all these respects; Huffman algorithms are used in text applications, image processing, video streaming and many other applications.
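
    As a software companion to the hardware description above (our own Python model; the paper's implementation is VHDL on the Spartan-3E board), the core of a Huffman decoder can be modelled as a bit-serial walk of the code tree: follow one edge per input bit and emit a symbol whenever a leaf is reached.

```python
def build_decode_tree(codes):
    """Turn {symbol: bit string} into a nested-dict decoding tree."""
    root = {}
    for sym, word in codes.items():
        node = root
        for bit in word[:-1]:
            node = node.setdefault(bit, {})
        node[word[-1]] = sym
    return root

def decode(bits, root):
    """Bit-serial decoding: follow one edge per input bit, emit a symbol and
    restart from the root whenever a leaf is reached."""
    out, node = [], root
    for bit in bits:
        node = node[bit]
        if not isinstance(node, dict):       # leaf reached
            out.append(node)
            node = root
    return "".join(out)

if __name__ == "__main__":
    codes = {"a": "0", "b": "10", "c": "110", "d": "111"}    # toy code table
    tree = build_decode_tree(codes)
    encoded = "".join(codes[ch] for ch in "abcada")
    assert decode(encoded, tree) == "abcada"
    print(decode(encoded, tree))
```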

  3. The number of Huffman codes, compact trees, and sums of unit fractions

    CERN Document Server

    Elsholtz, Christian; Prodinger, Helmut

    2011-01-01

    The number of "nonequivalent" Huffman codes of length r over an alphabet of size t has been studied frequently. Equivalently, the number of "nonequivalent" complete t-ary trees has been examined. We first survey the literature, unifying several independent approaches to the problem. Then, improving on earlier work we prove a very precise asymptotic result on the counting function, consisting of two main terms and an error term.

  4. An Improved Huffman Coding to Increase the Information Capacity of QR Codes

    Institute of Scientific and Technical Information of China (English)

    邹敏; 张瑞林; 吴桐树; 王啸

    2015-01-01

    QR codes are used to store information but are easily limited by their storage capacity. To address the relatively low storage capacity of QR codes, this paper presents a method for expanding their information capacity using an improved Huffman code. First, the data to be encoded are sorted with Shell sort and a Huffman tree is constructed to generate the Huffman codes; the encoded data are then written into a QR code, yielding a data-compressed QR code. Second, when the QR code is scanned and decoded, the properties of the Huffman tree are used to decode the data and recover the original uncompressed content. The experimental results show that the algorithm can increase the information storage capacity of QR codes.

  5. Writing on the Facade of RWTH ICT Cubes: Cost Constrained Geometric Huffman Coding

    CERN Document Server

    Böcherer, Georg; Malsbender, Martina; Mathar, Rudolf

    2011-01-01

    In this work, a coding technique called cost constrained Geometric Huffman coding (ccGhc) is developed. ccGhc minimizes the Kullback-Leibler distance between a dyadic probability mass function (pmf) and a target pmf subject to an affine inequality constraint. An analytical proof is given that when ccGhc is applied to blocks of symbols, the optimum is asymptotically achieved when the blocklength goes to infinity. The derivation of ccGhc is motivated by the problem of encoding a text to a sequence of slats subject to architectural design criteria. For the considered architectural problem, for a blocklength of 3, the codes found by ccGhc match the design criteria. For communications channels with average cost constraints, ccGhc can be used to efficiently find prefix-free modulation codes that are provably capacity achieving.

  6. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    Science.gov (United States)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks on communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold-adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary, but no less than threshold-value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  7. Grassmannian Beamforming for MIMO-OFDM Systems with Frequency and Spatially Correlated Channels Using Huffman Coding

    CERN Document Server

    Gutman, Igor; Wulich, Dov

    2009-01-01

    Multiple input multiple output (MIMO) precoding is an efficient scheme that may significantly enhance the communication link. However, this enhancement comes with a cost. Many precoding schemes require channel knowledge at the transmitter that is obtained through feedback from the receiver. Focusing on the natural common fusion of orthogonal frequency division multiplexing (OFDM) and MIMO, we exploit the channel correlation in the frequency and spatial domain to reduce the required feedback rate in a frequency division duplex (FDD) system. The proposed feedback method is based on Huffman coding and is employed here for the single stream case. The method leads to a significant reduction in the required feedback rate, without any loss in performance. The proposed method may be extended to the multi-stream case.

  8. Wavelet based hierarchical coding scheme for radar image compression

    Science.gov (United States)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized into a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with the Huffman coding scheme. A demonstration system was developed, showing that, under real-time processing requirements, the compression ratio can be very high with no significant loss of the target signal in the restored radar image.

  9. Modified adaptive Huffman coding algorithm for wireless sensor networks

    Institute of Scientific and Technical Information of China (English)

    许磊; 李千目; 朱保平

    2013-01-01

    To reduce the volume of transmitted data, a modified adaptive Huffman coding algorithm is proposed for wireless sensor network (WSN) nodes with limited computational resources. Two groups of test data from Porcupines, provided with the tailored (pruned-tree) adaptive Huffman coding algorithm, were selected as the experimental data. Simulation tests of these data were performed on TOSSIM, the simulator provided by TinyOS, with the algorithm implemented in C++. The results show that, compared with the tailored adaptive Huffman coding algorithm, both use the same amount of memory, but the compression ratios of the proposed algorithm on the two data sets are improved by 8% and 12%, respectively.

  10. Ultraspectral sounder data compression using the Tunstall coding

    Science.gov (United States)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment, the compression of ultraspectral sounder data is vulnerable to error propagation. Tunstall coding is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to fixed-length codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding to reduce error propagation in ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
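
    A minimal sketch of the variable-to-fixed idea (our illustration; the paper's codec and its comparisons are not reproduced): starting from the single source symbols, the most probable dictionary word is repeatedly replaced by its one-symbol extensions until the dictionary would exceed 2^L entries, and each remaining word is then mapped to a fixed L-bit codeword.

```python
import heapq

def tunstall_dictionary(probs, codeword_bits):
    """Build a Tunstall parse dictionary for an i.i.d. source.

    probs: {symbol: probability}; codeword_bits: fixed output length L.
    Returns {source word (str): L-bit codeword (str)}."""
    max_words = 2 ** codeword_bits
    heap = [(-p, s) for s, p in probs.items()]      # max-heap via negated probabilities
    heapq.heapify(heap)
    size = len(heap)
    while size + len(probs) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)           # most probable parse word so far
        for s, p in probs.items():                  # replace it by its extensions
            heapq.heappush(heap, (neg_p * p, word + s))
        size += len(probs) - 1
    words = sorted(word for _, word in heap)
    return {w: format(i, f"0{codeword_bits}b") for i, w in enumerate(words)}

if __name__ == "__main__":
    table = tunstall_dictionary({"a": 0.7, "b": 0.2, "c": 0.1}, codeword_bits=3)
    for word, code in sorted(table.items()):
        print(word, "->", code)
```

    Because every output codeword has the same length, a corrupted codeword cannot shift the boundaries of the codewords that follow it, which is the resynchronization property the abstract contrasts with Huffman and arithmetic coding.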

  11. A novel DNA sequence similarity calculation based on simplified pulse-coupled neural network and Huffman coding

    Science.gov (United States)

    Jin, Xin; Nie, Rencan; Zhou, Dongming; Yao, Shaowen; Chen, Yanyan; Yu, Jiefu; Wang, Quan

    2016-11-01

    A novel method for the calculation of DNA sequence similarity is proposed based on simplified pulse-coupled neural network (S-PCNN) and Huffman coding. In this study, we propose a coding method based on Huffman coding, where the triplet code was used as a code bit to transform DNA sequence into numerical sequence. The proposed method uses the firing characters of S-PCNN neurons in DNA sequence to extract features. Besides, the proposed method can deal with different lengths of DNA sequences. First, according to the characteristics of S-PCNN and the DNA primary sequence, the latter is encoded using Huffman coding method, and then using the former, the oscillation time sequence (OTS) of the encoded DNA sequence is extracted. Simultaneously, relevant features are obtained, and finally the similarities or dissimilarities of the DNA sequences are determined by Euclidean distance. In order to verify the accuracy of this method, different data sets were used for testing. The experimental results show that the proposed method is effective.

  12. Compressing industrial computed tomography images by means of contour coding

    Science.gov (United States)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction, so the two steps of the traditional contour-based compression method are reduced to one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can achieve a good compression ratio while keeping satisfactory quality in the compressed images.

  13. An Improved Construction Algorithm for Huffman Trees and Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    刘帮涛; 罗敏

    2008-01-01

    By first sorting the data to be processed with the quicksort algorithm, the time complexity of the Huffman algorithm is reduced from O(n^2) to O(n log2 n). When the number of nodes used to construct the Huffman tree is large, this considerably improves the program's running time.

  14. Research on Packet Marking Algorithm Based on Huffman Code

    Institute of Scientific and Technical Information of China (English)

    李明珍; 覃运初; 唐凤仙

    2015-01-01

    The key to defending against DDoS attacks is locating the attack source, and packet marking is a hot topic in attack-source locating technology. Aiming at the problems of traditional probabilistic packet marking, an improved algorithm is proposed. The improved algorithm chooses the options field of the IPv4 datagram header as the marking area and uses Huffman coding to compress the marking information, reducing the number of marked packets needed during path reconstruction. When packets pass from an IPv4 network to an IPv6 network, an added copy operation transfers the marking information to the IPv6 hop-by-hop extension header, which widens the applicable scope of the algorithm. The experimental results show that the improved algorithm is fast, accurate and efficient: it can complete path reconstruction with only a single datagram and can be applied to both IPv4 and IPv6 networks.

  15. Channel Efficiency with Security Enhancement for Remote Condition Monitoring of Multi Machine System Using Hybrid Huffman Coding

    Science.gov (United States)

    Datta, Jinia; Chowdhuri, Sumana; Bera, Jitendranath

    2016-12-01

    This paper presents a novel scheme for remote condition monitoring of a multi-machine system in which secured and coded data on induction machines with different parameters are communicated between state-of-the-art dedicated hardware units (DHUs) installed at the machine terminals and centralized PC-based machine data management (MDM) software. The DHUs are built to acquire different parameters from their respective machines and are therefore placed at nearby panels so that the parameters can be acquired cost-effectively while the machines are running. The MDM software collects these data through a communication channel in which all the DHUs are networked using the RS485 protocol. Before transmission, the parameter data are compressed using differential pulse code modulation (DPCM) and Huffman coding, and further encrypted with a private key, with different keys used for different DHUs. In this way a data security scheme is applied while the data pass through the communication channel, to guard against third-party attacks. The hybrid combination of DPCM and Huffman coding is chosen to reduce the data packet length. A MATLAB based simulation and a practical implementation using DHUs at three machine terminals (one healthy three-phase, one healthy single-phase and one faulty three-phase machine) prove its efficacy and usefulness for condition-based maintenance of a multi-machine system. The data at the central control room are decrypted and decoded using the MDM software. In this work it is observed that channel efficiency with respect to the different parameter measurements increases considerably.
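
    The DPCM stage mentioned above is what shrinks the packets before Huffman coding and encryption: consecutive readings of a machine parameter change slowly, so their differences cluster near zero and entropy-code compactly. A minimal sketch of that stage, with invented sample values, follows.

        def dpcm_encode(samples):
            # replace each sample by its difference from the previous one
            prev, residuals = 0, []
            for x in samples:
                residuals.append(x - prev)
                prev = x
            return residuals

        def dpcm_decode(residuals):
            # exact inverse: a running sum of the residuals restores the samples
            prev, samples = 0, []
            for d in residuals:
                prev += d
                samples.append(prev)
            return samples

        readings = [512, 514, 515, 515, 513, 510]     # hypothetical parameter samples
        res = dpcm_encode(readings)                   # [512, 2, 1, 0, -2, -3]
        assert dpcm_decode(res) == readings           # lossless round trip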

  16. Research and Application of the Huffman Algorithm

    Institute of Scientific and Technical Information of China (English)

    张荣梅

    2013-01-01

    The Huffman algorithm is first analyzed in this paper and an implementation method is given. The applications of the Huffman algorithm to compression coding, decision trees and the optimal merge tree used in external file sorting are then discussed.

  17. Variable-Length Compressed Encoding Without a Huffman Tree Based on Probability Compensation

    Institute of Scientific and Technical Information of China (English)

    杨多星; 刘蕴红

    2011-01-01

    Most widely used compression encoding methods are implemented through a Huffman tree, so many operations revolve around constructing and traversing that tree. To simplify the encoding process, a variable-length optimal encoding method that requires no Huffman tree is proposed: through a probability compensation process, the optimal code length of every source symbol is obtained directly. Once the code lengths and probabilities are known, the final codewords can also be determined without a Huffman tree, and the result can be shown to satisfy the variable-length optimal coding theorem and the prefix condition. Tests show that the method obtains the variable-length optimal code quickly and effectively and simplifies the computation and storage involved in variable-length encoding.
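
    Once optimal code lengths are known, assigning the actual codewords indeed needs no Huffman tree. The sketch below shows the standard canonical-code assignment from lengths alone, as a reference point for the "no tree" claim; it is not the probability-compensation procedure itself.

        def canonical_codes(lengths):
            # lengths: symbol -> code length in bits; must satisfy the Kraft inequality,
            # which holds for any set of Huffman-optimal lengths
            code, prev_len, codes = 0, 0, {}
            # shorter codes first; ties broken by symbol for a deterministic table
            for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
                code <<= (length - prev_len)          # left-align to the new length
                codes[sym] = format(code, '0{}b'.format(length))
                code += 1
                prev_len = length
            return codes

        print(canonical_codes({'a': 1, 'b': 3, 'c': 3, 'd': 3, 'e': 4, 'f': 4}))
        # {'a': '0', 'b': '100', 'c': '101', 'd': '110', 'e': '1110', 'f': '1111'}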

  18. On Adaptive Huffman Coding Based on a Look-up Table

    Institute of Scientific and Technical Information of China (English)

    雒莎; 葛海波

    2011-01-01

    Considering that existing Huffman coding algorithms are not very efficient, an adaptive Huffman coding algorithm based on a look-up table is proposed, which encodes the data according to dynamically changing tables. In this algorithm, a character appearing for the first time is encoded with the codeword of a special "KEY" entry, after which "KEY" is moved down the table to await the next first-time character. Compared with other algorithms, the proposed algorithm makes Huffman coding run more efficiently.

  19. Second Generation Wavelet Applied to Lossless Image Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper, the second generation wavelet transform is applied to lossless image coding, exploiting its property of being a reversible integer wavelet transform. Unlike the first generation wavelet transform, it reconstructs the image without loss, while providing a higher compression ratio than Huffman coding. The experimental results show that the second generation wavelet transform achieves excellent performance in medical image compression coding.

  20. Soft-Decision Decoding Scheme for Convolutional Codes Combined with Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    郭东亮; 陈小蔷; 吴乐南

    2002-01-01

    This paper proposes a modification of the soft-output Viterbi decoding algorithm (SOVA) that combines convolutional codes with Huffman coding. The idea is to extract bit probability information from the Huffman code words and use it to compute a priori source information, which can be exploited when the channel conditions are bad. The suggested scheme requires no changes on the transmitter side. Compared with separate source and channel decoding, the gain in signal-to-noise ratio is about 0.5-1.0 dB with limited added complexity. Simulation results show that the suggested algorithm is effective.

  1. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    Science.gov (United States)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, whose probability mass functions have two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically using the concept of a reduced source, and the final codes are obtained as a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimate of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
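
    For intuition about codes matched to residuals with two geometrically decaying tails, the sketch below shows the closely related Golomb-Rice coding used in JPEG-LS: signed prediction errors are interleaved into non-negative integers and coded as a unary quotient plus k remainder bits. This is an illustration of the source model, not the analytic Huffman construction proposed in the paper.

        def signed_to_unsigned(e):
            # interleave signed errors: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
            return 2 * e if e >= 0 else -2 * e - 1

        def rice_encode(n, k):
            # Golomb-Rice code with parameter k >= 1: unary quotient, then k remainder bits
            q, r = n >> k, n & ((1 << k) - 1)
            return '1' * q + '0' + format(r, '0{}b'.format(k))

        residuals = [0, -1, 2, -3, 1, 0, 5]
        bits = ''.join(rice_encode(signed_to_unsigned(e), k=1) for e in residuals)
        print(bits)   # small-magnitude errors get short codes, rare large ones get long codes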

  2. Study of Run-Length and Huffman Coding for Binary Images and Implementation in Matlab

    Institute of Scientific and Technical Information of China (English)

    魏佳圆; 温媛媛; 周诠

    2015-01-01

    In this paper, a lossless compression algorithm for binary images combining run-length coding and Huffman coding is proposed. The algorithm is tested on different images, and the experimental results indicate that it performs well on images with clear blocks and little texture. Moreover, the algorithm is easy to implement and has practical value in binary image applications.
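
    The run-length stage of such a scheme is simple enough to sketch: each row of the binary image is reduced to its first pixel value plus the lengths of the alternating runs, and a Huffman code would then be built over those run lengths. The function below is an illustrative sketch, not the authors' Matlab implementation.

        def run_lengths(row):
            # row: non-empty list of 0/1 pixels; returns (first pixel, run lengths)
            runs, count = [], 1
            for prev, cur in zip(row, row[1:]):
                if cur == prev:
                    count += 1
                else:
                    runs.append(count)
                    count = 1
            runs.append(count)
            return row[0], runs

        print(run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))
        # (0, [3, 2, 4, 1]) -- the Huffman stage would then code these run lengths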

  3. Image Sensor Dark Current Elimination System Based on the DPCM-Huffman Compression Algorithm

    Institute of Scientific and Technical Information of China (English)

    钟晨峰; 李斌桥; 徐江涛

    2012-01-01

    To solve the data storage problem during dark current elimination in image sensors, a data compression dark current elimination system based on the DPCM-Huffman compression algorithm is presented and implemented in hardware. Before the system operates, the image sensor dark current data are compressed with the combined DPCM and Huffman algorithm and stored in flash memory. While the image sensor is working, the stored data are read back, Huffman and DPCM decoded, and used to eliminate the dark current. Experiments show that, for a CMOS image sensor with a resolution of 256×256, the system achieves a compression ratio of 3.12, reducing the data volume to about 32% of the original and raising the working speed by a factor of three. The proposed system therefore improves the data compression ratio, preserves data accuracy and increases the working speed of the image sensor, making it a compression scheme well suited to dark current elimination in CMOS image sensors.

  4. Lossless compression of medical images using Hilbert scan

    Science.gov (United States)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang

    2007-12-01

    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of the intensities, the pixels of a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan, and finally five coding schemes are applied: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by a Hilbert scan and then arithmetic coding gives the best compression result, and also indicate that the Hilbert scan enhances pixel locality and effectively increases the compression ratio.

  5. LDPC Codes for Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G; Vontobel, Pascal O

    2010-01-01

    We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, channel coding linear programming decoding (CC-LPD), which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, compressed sensing linear programming decoding (CS-LPD), also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an under-determined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of "good" channel codes can be used as provably "good" measurement ...

  6. A Study on Ways of Lossless Image Compression and Coding and Relevant Comparisons

    Institute of Scientific and Technical Information of China (English)

    冉晓娟

    2014-01-01

    This essay studies the principles of three lossless image compression methods, run-length coding, LZW coding and Huffman coding, and compares and analyzes them, which helps in choosing a suitable compression coding method for different types of images.

  7. A High-Capacity MP3 Steganography Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    严迪群; 王让定; 张力光

    2011-01-01

    A high-capacity steganography method for MP3 audio is proposed in this paper. According to the characteristics of Huffman coding, the codewords in the Huffman tables are first classified to ensure that the embedding operation does not change the bitstream structure defined by the MP3 standard. Secret data are then embedded by replacing the corresponding codewords, using an embedding strategy based on a mixed-radix (multiple-base) notation system. The bitstream structure and the size of the cover audio remain unchanged after embedding. The results show that the proposed method obtains higher hiding capacity and better efficiency than the binary variant, while imperceptibility is also well maintained.

  8. Application of grammar-based codes for lossless compression of digital mammograms

    Science.gov (United States)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation was proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and limited number of single-character grammar G variables. For the first issue, we discover a feature that can simplify the matching subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed and the processing time of the grammar code can be significantly reduced. For the second issue, we propose to use double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. By using the methods proposed, we show that the grammar code can outperform three other schemes: Lempel-Ziv-Welch (LZW), arithmetic, and Huffman on compression ratio, and has similar error tolerance capabilities as LZW coding under similar circumstances.

  9. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    Science.gov (United States)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present its numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC performs significantly better error containment than JPEG2000 Part 2.

  10. Research on Differential Coding Method for Satellite Remote Sensing Data Compression

    Science.gov (United States)

    Lin, Z. J.; Yao, N.; Deng, B.; Wang, C. Z.; Wang, J. H.

    2012-07-01

    Data compression, in the process of Satellite Earth data transmission, is of great concern to improve the efficiency of data transmission. Information amounts inherent to remote sensing images provide a foundation for data compression in terms of information theory. In particular, distinct degrees of uncertainty inherent to distinct land covers result in the different information amounts. This paper first proposes a lossless differential encoding method to improve compression rates. Then a district forecast differential encoding method is proposed to further improve the compression rates. Considering the stereo measurements in modern photogrammetry are basically accomplished by means of automatic stereo image matching, an edge protection operator is finally utilized to appropriately filter out high frequency noises which could help magnify the signals and further improve the compression rates. The three steps were applied to a Landsat TM multispectral image and a set of SPOT-5 panchromatic images of four typical land cover types (i.e., urban areas, farm lands, mountain areas and water bodies). Results revealed that the average code lengths obtained by the differential encoding method, compared with Huffman encoding, were more close to the information amounts inherent to remote sensing images. And the compression rates were improved to some extent. Furthermore, the compression rates of the four land cover images obtained by the district forecast differential encoding method were nearly doubled. As for the images with the edge features preserved, the compression rates are average four times as large as those of the original images.

  11. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compressing method for video teleconference applications is presented. Semantic-based coding based on human image feature is realized, where human features are adopted as parameters. Model-based coding and the concept of vector coding are combined with the work on image feature extraction to obtain the result.

  12. Ultraspectral sounder data compression using the non-exhaustive Tunstall coding

    Science.gov (United States)

    Wei, Shih-Chieh; Huang, Bormin

    2008-08-01

    With its bulky volume, the ultraspectral sounder data might still suffer a few bits of error after channel coding. Therefore it is beneficial to incorporate some mechanism in source coding for error containment. The Tunstall code is a variable-to-fixed length code which can reduce the error propagation encountered in fixed-to-variable length codes like Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree where internal nodes extend every symbol in branching. It might result in assignment of precious codewords to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed which grows an optimal non-exhaustive parse tree by assigning the complete codewords only to top probability nodes in the infinite tree. Comparison will be made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG-2000 in terms of compression ratio and percent error rate using the ultraspectral sounder data.

  13. Semantic Source Coding for Flexible Lossy Image Compression

    National Research Council Canada - National Science Library

    Phoha, Shashi; Schmiedekamp, Mendel

    2007-01-01

    Semantic Source Coding for Lossy Video Compression investigates methods for Mission-oriented lossy image compression, by developing methods to use different compression levels for different portions...

  14. LEMPEL-ZIV-WELCH & HUFFMAN - THE LOSSLESS COMPRESSION TECHNIQUES (IMPLEMENTATION ANALYSIS AND COMPARISON THEREOF)

    OpenAIRE

    Kapil Kapoor*, Dr. Abhay Sharma

    2016-01-01

    This paper is about the implementation analysis and comparison of the lossless compression techniques Lempel-Ziv-Welch and Huffman. The LZW technique assigns fixed-length code words and requires no prior information about the probability of occurrence of the symbols to be encoded. The basic idea in the Huffman technique is that different gray levels occur with different probabilities (a non-uniform histogram): it uses shorter code words for the more common gray levels and longer code words for the l...

  15. PERFORMANCE ANALYSIS OF IMAGE COMPRESSION USING FUZZY LOGIC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Rohit Kumar Gangwar

    2014-04-01

    Full Text Available With the increase in demand, multimedia content is growing fast and thus strains network bandwidth and memory storage. Image compression is therefore significant for reducing data redundancy, saving memory and saving transmission bandwidth. An efficient compression technique is proposed which combines fuzzy logic with Huffman coding. While normalizing the image pixels, each pixel value belonging to the image foreground is characterized and interpreted. The image is subdivided into pixels which are then characterized by a pair of approximation sets. Encoding is done with a Huffman code, whose statistically based codeword assignment produces an efficient code for compression, while decoding applies rough fuzzy logic to rebuild the image pixels. The method used here is the rough fuzzy logic with Huffman coding algorithm (RFHA). Different compression techniques are compared with Huffman coding, and fuzzy logic is applied to the Huffman-reconstructed image. Results show that high compression rates are achieved, with visually negligible differences between the compressed and original images.

  16. Video Coding Technique using MPEG Compression Standards

    Directory of Open Access Journals (Sweden)

    A. J. Falade

    2013-06-01

    Full Text Available Digital video compression technologies have become part of everyday life, shaping the way visual information is created, communicated and consumed. Several application areas of video compression focus on the problem of optimizing storage space and transmission bandwidth (BW). The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression and is used in the Moving Picture Experts Group (MPEG) encoding standards. Thus, several video compression algorithms have been developed to reduce the data quantity while providing an acceptable quality standard. In the proposed study, a Matlab Simulink Model (MSM) has been used for video coding/compression. The approach is modern and reduces error-related image distortion.

  17. An Empirical Evaluation of Coding Methods for Multi-Symbol Alphabets.

    Science.gov (United States)

    Moffat, Alistair; And Others

    1994-01-01

    Evaluates the performance of different methods of data compression coding in several situations. Huffman's code, arithmetic coding, fixed codes, fast approximations to arithmetic coding, and splay coding are discussed in terms of their speed, memory requirements, and proximity to optimal performance. Recommendations for the best methods of…

  18. A Fast Fractal Image Compression Coding Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. Searching for the best matched domain block is the most computation intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme implemented on a personal computer, based on matching within a range block's neighbours, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as that of Barnsley's fractal block coding.

  19. Ternary Tree and Memory-Efficient Huffman Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Pushpa R. Suri

    2011-01-01

    Full Text Available In this study, the focus was on the use of a ternary tree instead of a binary tree. A new one-pass algorithm for decoding adaptive Huffman ternary tree codes was implemented. To reduce the memory size and speed up the search for a symbol in a Huffman tree, we exploited the properties of the encoded symbols and proposed a memory-efficient data structure to represent the codeword lengths of the Huffman ternary tree. In the first algorithm we find the starting and ending address of a code in order to determine its length, and in the second algorithm we decode the ternary tree code using a binary search method.
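
    For readers unfamiliar with ternary Huffman trees, the sketch below shows how their code lengths arise: pad with zero-weight dummy symbols so that every merge can take exactly three nodes, then repeatedly merge the three lightest. The paper's decoding data structure (a codeword-length table searched with binary search) is not reproduced here; this only illustrates the tree the decoder exploits.

        import heapq

        def ternary_huffman_lengths(freqs):
            # freqs: symbol -> weight; returns each symbol's code length in ternary digits
            heap = [(w, [s]) for s, w in freqs.items()]
            # pad with zero-weight dummies so every merge can combine exactly 3 nodes
            while (len(heap) - 1) % 2 != 0:
                heap.append((0, []))
            heapq.heapify(heap)
            depth = {s: 0 for s in freqs}
            while len(heap) > 1:
                total, symbols = 0, []
                for _ in range(3):                     # merge the three lightest nodes
                    w, syms = heapq.heappop(heap)
                    total += w
                    symbols += syms
                for s in symbols:                      # one more ternary digit for each symbol below
                    depth[s] += 1
                heapq.heappush(heap, (total, symbols))
            return depth

        print(ternary_huffman_lengths({'a': 40, 'b': 25, 'c': 15, 'd': 10, 'e': 6, 'f': 4}))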

  20. GENERALIZED HUFFMAN TREE AND ITS APPLICATION IN CHINESE CHARACTER CODING

    Institute of Scientific and Technical Information of China (English)

    游洪跃; 汪建武; 陶郁

    2000-01-01

    The authors present the concept of a generalized Huffman tree (GHT), prove some pertinent theorems, and design an algorithm for constructing GHTs. In particular, they give its application in Chinese character coding.

  1. Lossless Compression of JPEG Coded Photo Collections.

    Science.gov (United States)

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  2. An efficient adaptive arithmetic coding image compression technology

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.

  3. VH-1: Multidimensional ideal compressible hydrodynamics code

    Science.gov (United States)

    Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric

    2012-04-01

    VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.

  4. Compressing subbanded image data with Lempel-Ziv-based coders

    Science.gov (United States)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.

  5. High-performance lossless and progressive image compression based on an improved integer lifting scheme and the Rice coding algorithm

    Science.gov (United States)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7% and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's and its time efficiency is improved by 162%; the decoder is about 12.3 times faster and its time efficiency is raised by about 148%. Instead of requiring the largest number of wavelet transform levels, the algorithm achieves high coding efficiency when the number of levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission coding and decoding.

  6. Research on compression and improvement of vertex chain code

    Science.gov (United States)

    Yu, Guofang; Zhang, Yujie

    2009-10-01

    Drawing on Huffman coding theory, the chain code symbol 2, which has the highest occurrence probability and run frequency, is represented by the single binary digit 0; runs of 1s and 3s, which have the next highest probability and run frequency, are represented by the two binary digits 10 followed by a frequency code whose length can be fixed beforehand or adapted automatically; and isolated 1 and 3 symbols, which have the lowest probability and run frequency, are represented by 110 and 111 respectively. Relative encoding efficiency and decoding efficiency are added to the existing performance evaluation system for chain codes. The new chain code is compared with a current chain code using a test system programmed in VC++; the results show that the basic performance of the new chain code is significantly improved, and the advantage grows with the size of the graphics.
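
    A much simplified reading of the per-symbol mapping above is sketched below: symbol 2 gets the one-bit code 0, and isolated 1 and 3 symbols get 110 and 111. The '10'-prefixed run codes with attached frequency fields are omitted, so this conveys only the flavour of the scheme, not the paper's full encoder.

        # simplified per-symbol codes; the paper's '10' + frequency-code for runs is omitted
        VCC_CODES = {2: '0', 1: '110', 3: '111'}

        def encode_vcc(chain):
            return ''.join(VCC_CODES[c] for c in chain)

        print(encode_vcc([2, 2, 2, 1, 2, 3, 2, 2]))   # '000110011100': 12 bits for 8 symbols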

  7. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond those of conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results prove that appropriate coding strategies can provide hundreds of times more sensing capacity. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  8. HUFFMAN-BASED GROUP KEY ESTABLISHMENT SCHEME WITH LOCATION-AWARE

    Institute of Scientific and Technical Information of China (English)

    Gu Xiaozhuo; Yang Jianzu; Lan Julong

    2009-01-01

    Time efficiency of key establishment and update is one of the major problems that contributory key management schemes strive to address. To achieve better time efficiency in key establishment, we propose a Location-based Huffman (L-Huffman) scheme. First, users are separated into several small groups to minimize communication cost when they are distributed over large networks. Second, both differences in users' computation capability and message transmission delays are taken into consideration when Huffman coding is employed to form the optimal key tree. Third, the combined weights in the Huffman tree are located higher in the key tree to reduce the variance of the average key generation time and to minimize the longest key generation time. Simulations demonstrate that L-Huffman performs much better in wide area networks and slightly better in local area networks than the Huffman scheme.

  9. An efficient medical image compression scheme.

    Science.gov (United States)

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen

    2005-01-01

    In this paper, a fast lossless compression scheme is presented for the medical image. This scheme consists of two stages. In the first stage, a Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, therefore increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme could reduce the cost for the Huffman coding table while achieving high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method for image can be obtained. At the same time, this method is quicker than the lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.

  10. Accelerating Lossless Data Compression with GPUs

    CERN Document Server

    Cloud, R L; Ward, H L; Skjellum, A; Bangalore, P

    2011-01-01

    Huffman compression is a statistical, lossless, data compression algorithm that compresses data by assigning variable length codes to symbols, with the more frequently appearing symbols given shorter codes than the less frequent ones. This work is a modification of the Huffman algorithm which permits uncompressed data to be decomposed into independently compressible and decompressible blocks, allowing for concurrent compression and decompression on multiple processors. We create implementations of this modified algorithm on a current NVIDIA GPU using the CUDA API as well as on a current Intel chip and the performance results are compared, showing favorable GPU performance for nearly all tests. Lastly, we discuss the necessity for high performance data compression in today's supercomputing ecosystem.

  11. Research on Multimedia Encryption Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    李莉萍; 吴蒙

    2011-01-01

    With the increasingly widespread use of multimedia information on mobile and handheld devices, researchers have begun to study multimedia encryption techniques with low complexity and modest hardware requirements. Many current audio and video file formats (such as MPEG4, JPEG and MP3) use Huffman coding, so low-complexity multimedia encryption based on Huffman coding has gradually come into the research spotlight. This paper first introduces the earliest such technique, encryption based on multiple Huffman code tables, and then analyzes its security against ciphertext-only, known-plaintext and chosen-plaintext attacks. Finally, in view of its security problems, a suggestion for an improved Huffman encryption scheme is proposed.

  12. External-Compression Supersonic Inlet Design Code

    Science.gov (United States)

    Slater, John W.

    2011-01-01

    A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.

  13. MP3 Steganalysis Based on Huffman Code Table Index

    Institute of Scientific and Technical Information of China (English)

    陈益如; 王让定; 严迪群

    2012-01-01

    MP3Stego is a typical steganographic algorithm for MP3 audio. By analysing the influence MP3Stego has on the inner loop of the MP3 encoder, it is found that the Huffman table index values change to varying degrees after embedding. In the proposed algorithm, the Huffman table index values are extracted from the decoder parameters, their second-order difference is used as the steganalysis feature, and an SVM is used to classify cover and stego MP3 audio. The experimental results show that the proposed algorithm is effective in detecting MP3Stego at different embedding rates.

  14. Modified 8×8 quantization table and Huffman encoding steganography

    Science.gov (United States)

    Guo, Yongning; Sun, Shuliang

    2014-10-01

    A new secure steganography, based on Huffman encoding and modified quantized discrete cosine transform (DCT) coefficients, is presented in this paper. First, the cover image is segmented into 8×8 blocks and a modified DCT transformation is applied to each block. Huffman encoding is applied to code the secret image before embedding. The DCT coefficients are quantized by a modified quantization table, and inverse DCT (IDCT) is conducted on each block. All the blocks are combined together and the stego image is finally obtained. The experiment shows that the proposed method is better than DCT and Mahender Singh's method in PSNR and capacity.

  15. Application of Embedded Zerotree Wavelet to the Compression of Infrared Spectra

    Institute of Scientific and Technical Information of China (English)

    Meng Long LI; Hua Yi QI; Fu Sheng NIE; Zhi Ning WEN; Bin KANG

    2003-01-01

    In this paper the embedded zerotree wavelet (EZW) method and Huffman coding are proposed to compress infrared (IR) spectra. We found that this technique is much better than others in terms of efficiently coding wavelet coefficients because the zerotree quantization is an effective way of exploiting the self-similarities of wavelet coefficients at various resolutions.

  16. On Using Goldbach G0 Codes and Even-Rodeh Codes for Text Compression

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-03-01

    This research aims to study the efficiency of two variants of variable-length codes (i.e., Goldbach G0 codes and Even-Rodeh codes) in compressing texts. The parameters being examined are the compression ratio, the space savings, and the bit rate. As a benchmark, all of the original (uncompressed) texts are assumed to be encoded in the American Standard Code for Information Interchange (ASCII). Several texts, including those derived from some corpora (the Artificial corpus, the Calgary corpus, the Canterbury corpus, the Large corpus, and the Miscellaneous corpus) are tested in the experiment. The overall result shows that the Even-Rodeh codes are consistently more efficient at compressing texts than the unoptimized Goldbach G0 codes.
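
    The three evaluation parameters named above are straightforward to compute; the sketch below uses the usual definitions (original size over compressed size, fraction of space saved, and compressed bits per source symbol), which may differ in detail from the paper's.

        def compression_metrics(original_bytes, compressed_bytes, n_symbols):
            ratio = original_bytes / compressed_bytes                 # e.g. 2.0 means halved
            space_savings = 1.0 - compressed_bytes / original_bytes   # fraction of space saved
            bit_rate = 8.0 * compressed_bytes / n_symbols             # bits per source symbol
            return ratio, space_savings, bit_rate

        # a text of 10,000 ASCII characters compressed to 6,200 bytes:
        print(compression_metrics(10_000, 6_200, 10_000))   # (~1.61, ~0.38, 4.96)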

  17. Huffman Codes for Message Compression

    Directory of Open Access Journals (Sweden)

    Erna Zuni Astuti

    2013-05-01

    Full Text Available In data communication, a message sent to someone is often very large and therefore requires a correspondingly large amount of storage space. A large message also takes longer to transmit than a relatively smaller one. Both problems can be addressed by encoding the message so that content which is actually large is represented as compactly as possible: transmission that would otherwise take a long time becomes relatively faster, and the large storage space is used relatively more efficiently than before encoding. From the experiments in applying and evaluating the Huffman code, it can be concluded that using the Huffman code can reduce the load, i.e. it can compress the data by more than 50%. Keywords: Huffman Code, Message Compression, Communication

  18. A SORT-ONCE AND DYNAMIC ENCODING (SODE) BASED HUFFMAN CODING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    刘燕清; 龚声蓉

    2009-01-01

    Huffman coding, as an efficient variable-length coding technique, is increasingly widely used in data compression, storage and communication for text, images, video and other data. To improve time and space efficiency and to simplify the coding idea and its operations, this paper first studies the traditional Huffman coding algorithm and its concrete procedure, and then proposes a Huffman coding algorithm based on sort-once dynamic encoding. Compared with the traditional Huffman algorithm and the improved algorithms proposed in recent domestic and foreign literature, this method simplifies tree construction into linear encoding. With a similar space complexity, it not only reduces the time complexity noticeably but also makes the coding steps and related operations more concise, which facilitates program implementation and porting. Experimental results verify the effectiveness of the algorithm.

  19. Lossless Compression Method for Medical Image Sequences Using Super-Spatial Structure Prediction and Inter-frame Coding

    Directory of Open Access Journals (Sweden)

    Mudassar Raza

    2012-08-01

    Full Text Available Space research organizations, hospitals and military air surveillance activities, among others, produce a huge amount of data in the form of images, so a large storage space is required to record this information. In hospitals, data produced during medical examination take the form of a sequence of highly correlated images; because these images have great importance, some kind of lossless image compression technique is needed. Moreover, these images often have to be transmitted over the network. Since the available storage and bandwidth are limited, a compression technique is required to reduce the number of bits needed to store these images and the time needed to transmit them. For this purpose there are many state-of-the-art lossless image compression algorithms such as CALIC, LOCO-I, JPEG-LS and JPEG2000; nevertheless, these compression algorithms take only a single file to compress and cannot exploit the correlation among the sequence frames of MRI or CE images. To exploit this correlation, a new algorithm is proposed in this paper. The primary goals of the proposed compression method are to minimize the memory resources during storage of the compressed data and to minimize the bandwidth requirement during its transmission. To achieve these goals, the proposed compression method combines the single image compression technique called super-spatial structure prediction with inter-frame coding to acquire a greater compression ratio. An efficient compression method requires elimination of data redundancy during compression; therefore, the super-spatial structure prediction algorithm is first applied with a fast block matching approach, and Huffman coding is then applied to reduce the number of bits required for transmitting and storing each pixel value. Also, to speed up the block-matching process during motion estimation, the proposed method compares those blocks

  20. Grayscale Image Compression Based on Min Max Block Truncating Coding

    Directory of Open Access Journals (Sweden)

    Hilal Almarabeh

    2011-11-01

    Full Text Available This paper presents an image compression technique based on block truncation coding. In this work, a min-max block truncation coding (MM_BTC) scheme is presented for grayscale image compression; it relies on dividing the image into non-overlapping blocks. MM_BTC differs from other block truncation coding schemes, such as standard block truncation coding (BTC), in the way the quantization levels are selected in order to remove redundancy. Objective measures such as bit rate (BR), mean square error (MSE), peak signal-to-noise ratio (PSNR), and redundancy (R) were used to present a detailed evaluation of MM_BTC in terms of image quality.
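
    Block truncation coding reduces each block to a one-bit-per-pixel bitmap plus two reconstruction levels. The sketch below implements one plausible min-max variant, thresholding at the midpoint of the block's minimum and maximum and reconstructing with those two extremes; the paper's exact rule for choosing the quantization levels may differ.

        import numpy as np

        def mm_btc_block(block):
            # compress one grayscale block into a bitmap plus two levels (min, max)
            lo, hi = int(block.min()), int(block.max())
            bitmap = block >= (lo + hi) / 2.0          # one bit per pixel
            return bitmap, lo, hi

        def mm_btc_reconstruct(bitmap, lo, hi):
            return np.where(bitmap, hi, lo).astype(np.uint8)

        block = np.array([[ 10,  12, 200, 210],
                          [ 11,  13, 205, 220],
                          [  9, 199, 201, 215],
                          [ 10, 198, 200, 212]], dtype=np.uint8)
        bitmap, lo, hi = mm_btc_block(block)
        print(mm_btc_reconstruct(bitmap, lo, hi))      # every pixel becomes either 9 or 220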

  1. Simulated performance results of the OMV video compression telemetry system

    Science.gov (United States)

    Ingels, Frank; Parker, Glenn; Thomas, Lee Ann

    The control system of NASA's Orbital Maneuvering Vehicle (OMV) will employ range/range-rate radar, a forward command link, and a compressed video return link. The video data is compressed by sampling every sixth frame of data; a rate of 5 frames/sec is adequate for the OMV docking speeds. Further axial compression is obtained, albeit at the expense of spatial resolution, by averaging adjacent pixels. The remaining compression is achieved on the basis of differential pulse-code modulation and Huffman run-length encoding. A concatenated error-correction coding system is used to protect the compressed video data stream from channel errors.

  2. Compressive imaging using fast transform coding

    Science.gov (United States)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  3. New Methods for Lossless Image Compression Using Arithmetic Coding.

    Science.gov (United States)

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  4. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    Science.gov (United States)

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.

  5. Error-free image compression algorithm using classifying-sequencing techniques.

    Science.gov (United States)

    He, J D; Dereniak, E L

    1992-05-10

    The development of a new error-free digital image compression algorithm is discussed. Without the help of any statistical information about the images being processed, this algorithm achieves average bits-per-word ratios near the entropy of the neighboring pixel differences. Because this algorithm does not involve statistical modeling, generation of a code book, or long-integer and floating-point arithmetic, it is simpler and, therefore, faster than the statistical codes studied, such as the Huffman code or the arithmetic code.

  6. Architecture for hardware compression/decompression of large images

    Science.gov (United States)

    Akil, Mohamed; Perroton, Laurent; Gailhard, Stephane; Denoulet, Julien; Bartier, Frederic

    2001-04-01

    In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture. The algorithm is lossless and applied to 'bi-level' images of large size. It ensures a minimum compression rate for the images we are considering. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes.

  7. Compression Algorithm for Radar Original Video Signal Based on DPCM and Adaptive Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    李灵芝; 江晶; 刘志高; 马晓岩

    2005-01-01

    To handle the transmission of large volumes of radar data and meet the requirement that the radar original video signal be compressed losslessly in real time, a compression scheme combining DPCM (differential pulse code modulation) with adaptive Huffman coding is presented, based on the characteristics of the radar original video signal. The effectiveness of the algorithm and its overflow problem are analyzed. Experiments show that, compared with conventional adaptive Huffman coding, this method improves real-time performance and increases the compression ratio.

  8. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  9. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders ... version that without sacrificing speed brings it close to the multi-pass coders in compression performance...

  10. Compressed Domain Packet Loss Concealment of Sinusoidally Coded Speech

    DEFF Research Database (Denmark)

    Rødbro, Christoffer A.; Christensen, Mads Græsbøll; Andersen, Søren Vang

    2003-01-01

    We consider the problem of packet loss concealment for voice over IP (VoIP). The speech signal is compressed at the transmitter using a sinusoidal coding scheme working at 8 kbit/s. At the receiver, packet loss concealment is carried out working directly on the quantized sinusoidal parameters......, based on time-scaling of the packets surrounding the missing ones. Subjective listening tests show promising results indicating the potential of sinusoidal speech coding for VoIP....

  11. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    Science.gov (United States)

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio compared to typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
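    The key idea in such hybrid coders is that any lossy codec can be made lossless by also storing the entropy-coded residual between the original and the lossy reconstruction. The sketch below illustrates just that residual step on a toy array; `lossy_roundtrip` is a placeholder standing in for the wavelet-fractal compress/decompress stage described in the paper.

```python
import numpy as np

def lossy_roundtrip(img):
    """Placeholder for the wavelet-fractal compress/decompress stage:
    here we simply quantize to multiples of 8 to create a lossy approximation."""
    return (np.round(img / 8.0) * 8).astype(np.int16)

def lossless_hybrid_encode(img):
    approx = lossy_roundtrip(img)
    residual = img.astype(np.int16) - approx   # small-amplitude, low-entropy data
    return approx, residual                    # residual would then be Huffman-coded

def lossless_hybrid_decode(approx, residual):
    return (approx + residual).astype(np.uint8)  # exact reconstruction

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
approx, residual = lossless_hybrid_encode(img)
assert np.array_equal(lossless_hybrid_decode(approx, residual), img)  # "infinite PSNR"
```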

  12. Embedded memory compression for video and graphics applications

    Science.gov (United States)

    Teng, Andy; Gokce, Dane; Aleksic, Mickey; Reznik, Yuriy A.

    2010-08-01

    We describe the design of a low-complexity lossless and near-lossless image compression system with random access, suitable for embedded memory compression applications. This system employs a block-based DPCM coder using variable-length encoding for the residual. As part of this design, we propose to use non-prefix (one-to-one) codes for coding of residuals, and show that they offer improvements in compression performance compared to conventional techniques, such as Golomb-Rice and Huffman codes.
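    For context, a Golomb-Rice code (one of the conventional baselines mentioned above) encodes a non-negative residual as a unary quotient followed by a fixed number of remainder bits. A minimal sketch, with the Rice parameter k fixed by hand rather than chosen adaptively:

```python
def rice_encode(value, k):
    """Golomb-Rice code: unary quotient, then k low-order remainder bits."""
    q, r = divmod(value, 1 << k)
    return "1" * q + "0" + format(r, "0{}b".format(k))

# mapped (signed -> unsigned) DPCM residuals, e.g. via a zig-zag mapping done elsewhere
for residual in [0, 1, 3, 7, 12]:
    print(residual, "->", rice_encode(residual, k=2))
```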

  13. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.

  14. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  15. Efficient image compression scheme based on differential coding

    Science.gov (United States)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree wavelet (EZW) coding and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J.M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, and then several improvements to the SPIHT algorithm, suggested by experiments, are introduced. 1) To reduce the redundancy among the coefficients in the wavelet domain, we propose a differential method applied during coding. 2) Based on the characteristic distribution of the coefficients in each subband, we adjust the sorting pass and optimize the differential coding in order to reduce redundant coding within each subband. 3) The image coding results, calculated at a given threshold, show that with differential coding the compression rate becomes higher and the quality of the reconstructed image is raised considerably; at bpp (bits per pixel) = 0.5, the PSNR (Peak Signal to Noise Ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.

  16. Improved zerotree coding algorithm for wavelet image compression

    Science.gov (United States)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. A new listless significance map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  17. Doppler Properties of Polyphase Coded Pulse Compression Waveforms.

    Science.gov (United States)

    1982-09-30

    phase structure. An example is shown in Fig. 12 for a PI code. Each code can be generated or compressed with the same standard FFT phase filter shown...in Fig. 13. The phase shifts used before and after the FFT phase filter depend on the particular code. One way to reduce the 4-dB cyclic variation of...could be achieved by the use of additional phase shifters and delay lines in the output ports of the FFT phase filter shown in Fig. 12.

  18. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is not applied by lifting the coefficients in the wavelet domain, but is instead realized through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error-bit spread and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.

  19. Simulation of the Huffman Codec for Text Files Based on Matlab

    Institute of Scientific and Technical Information of China (English)

    王向鸿

    2013-01-01

    Starting from the current state of data compression technology, this paper describes the variable-length Huffman encoding and decoding method. To illustrate the concrete process and characteristics of Huffman coding and decoding, a Matlab simulation is used: a priority queue is converted into a binary tree, from which the encoding and decoding tables are built. Huffman encoding and decoding of a random English text file is then simulated, yielding the probability of each letter, the codewords, the average information content (entropy), the average code length, the redundancy, and the encoded and decoded output sequences, which clearly demonstrate the compression behaviour.
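    A compact Python equivalent of the simulation described above is sketched below (the paper itself works in Matlab). The sample text, the use of `heapq` as the priority queue, and the printed statistics are illustrative choices only.

```python
import heapq
import math
from collections import Counter

def huffman_codebook(text):
    """Priority queue -> binary tree -> {symbol: codeword} table."""
    freq = Counter(text)
    heap = [[n, [ch, ""]] for ch, n in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

text = "this is a short english sample text for the huffman simulation"
code = huffman_codebook(text)
n = len(text)
probs = {ch: c / n for ch, c in Counter(text).items()}
entropy = -sum(p * math.log2(p) for p in probs.values())
avg_len = sum(probs[ch] * len(code[ch]) for ch in probs)
print("entropy     :", round(entropy, 3), "bits/symbol")
print("average len :", round(avg_len, 3), "bits/symbol")
print("redundancy  :", round(avg_len - entropy, 3), "bits/symbol")

encoded = "".join(code[ch] for ch in text)
decoded, buf, inverse = [], "", {v: k for k, v in code.items()}
for bit in encoded:                 # prefix-free, so greedy decoding is unambiguous
    buf += bit
    if buf in inverse:
        decoded.append(inverse[buf])
        buf = ""
print("lossless    :", "".join(decoded) == text)
```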

  20. An Efficient Image Compression Technique Based on Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Prof. Rajendra Kumar Patel

    2012-12-01

    The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition visual media, has increased the need for effective and standardized image compression techniques. Digital images play a very important role in conveying detailed information. The key obstacle for many applications is the vast amount of data required to represent a digital image directly. The various processes of digitizing images to obtain them in the best quality, for clearer and more accurate information, lead to the requirement for more storage space and better storage and access mechanisms in the form of hardware or software. In this paper we concentrate mainly on this problem, so that we reduce the storage requirement while keeping the best image quality. State-of-the-art techniques can compress typical images from 1/10 to 1/50 of their uncompressed size without visibly affecting image quality. From our study we observe that there is a need for a good image compression technique which provides better reduction in terms of storage and quality. Arithmetic coding is an effective way of reducing the encoded data, so in this paper we propose an arithmetic coding with Walsh transformation based image compression technique, which is an efficient way of achieving such reduction.

  1. Hybrid coding for split gray values in radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Krasner, Brian; Mun, Seong K.; Horii, Steven C.

    1992-05-01

    Digital techniques are used more often than ever in a variety of fields. Medical information management is one of the largest digital technology applications. It is desirable to have both a large data storage resource and extremely fast data transmission channels for communication. On the other hand, it is also essential to compress these data into an efficient form for storage and transmission. A variety of data compression techniques have been developed to tackle a diversity of situations. A digital value decomposition method using a splitting and remapping method has recently been proposed for image data compression. This method attempts to employ an error-free compression for one part of the digital value containing highly significant value and uses another method for the second part of the digital value. We have reported that the effect of this method is substantial for the vector quantization and other spatial encoding techniques. In conjunction with DCT type coding, however, the splitting method only showed a limited improvement when compared to the nonsplitting method. With the latter approach, we used a nonoptimized method for the images possessing only the top three-most-significant-bit value (3MSBV) and produced a compression ratio of approximately 10:1. Since the 3MSB images are highly correlated and the same values tend to aggregate together, the use of area or contour coding was investigated. In our experiment, we obtained an average error-free compression ratio of 30:1 and 12:1 for 3MSB and 4MSB images, respectively, with the alternate value contour coding. With this technique, we clearly verified that the splitting method is superior to the nonsplitting method for finely digitized radiographs.

  2. Tools for signal compression applications to speech and audio coding

    CERN Document Server

    Moreau, Nicolas

    2013-01-01

    This book presents tools and algorithms required to compress/uncompress signals such as speech and music. These algorithms are largely used in mobile phones, DVD players, HDTV sets, etc. In a first rather theoretical part, this book presents the standard tools used in compression systems: scalar and vector quantization, predictive quantization, transform quantization, entropy coding. In particular we show the consistency between these different tools. The second part explains how these tools are used in the latest speech and audio coders. The third part gives Matlab programs simulating t

  3. Pencil: Finite-difference Code for Compressible Hydrodynamic Flows

    Science.gov (United States)

    Brandenburg, Axel; Dobler, Wolfgang

    2010-10-01

    The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, like e.g. large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations regular viscosity and diffusion is being used. The code is written in well-commented Fortran90.

  4. Fast minimum-redundancy prefix coding for real-time space data compression

    Science.gov (United States)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array $l = \{l_1, \ldots, l_n\}$ of $n$ integer codeword lengths, given an array $f = \{f_1, \ldots, f_n\}$ of $n$ symbol occurrence frequencies, such that the Kraft-McMillan inequality $\sum_{i=1}^{n} 2^{-l_i} \le 1$ holds and the total number of coded bits $\sum_{i=1}^{n} f_i l_i$ is minimized. Previous minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve this problem in linear time if f is unsorted. It is suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules, namely, the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
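    To make the two expressions above concrete, the short sketch below takes a frequency array and a candidate length array and evaluates both the Kraft-McMillan sum and the total coded size. The numbers are made up for illustration and are not from the paper; the sketch checks the problem statement, it does not implement the paper's linear-time algorithm.

```python
from fractions import Fraction

def kraft_sum(lengths):
    """Left-hand side of the Kraft-McMillan inequality for a binary code."""
    return sum(Fraction(1, 2 ** l) for l in lengths)

def total_bits(freqs, lengths):
    """Objective to minimize: total number of coded bits."""
    return sum(f * l for f, l in zip(freqs, lengths))

freqs   = [45, 13, 12, 16, 9, 5]      # hypothetical symbol occurrence counts
lengths = [1, 3, 3, 3, 4, 4]          # a candidate codeword-length assignment

print("Kraft sum :", kraft_sum(lengths), "<= 1 :", kraft_sum(lengths) <= 1)
print("Total bits:", total_bits(freqs, lengths))
```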

  5. Image Compression using GSOM Algorithm

    Directory of Open Access Journals (Sweden)

    SHABBIR AHMAD

    2015-10-01

    Conventional techniques such as Huffman coding, the Shannon-Fano method, the LZ method, run-length coding and LZ-77 are commonly used methods for data compression. A traditional approach to reducing the large amount of data would be to discard some data redundancy and accept some noise after reconstruction. We present a neural-network-based Growing Self-Organizing Map technique that can be a reliable and efficient way to achieve vector quantization; a typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes which obtain better compression rates than classical methods such as JPEG without reducing image quality. The experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.

  6. A NOVEL MULTIDICTIONARY BASED TEXT COMPRESSION

    Directory of Open Access Journals (Sweden)

    Y. Venkataramani

    2012-01-01

    The amount of digital content grows at an ever faster speed, and so does the demand to communicate it. On the other hand, the amount of storage and bandwidth increases at a slower rate; thus powerful and efficient compression methods are required. The repetition of words and phrases makes the reordered text much more compressible than the original text. On the whole, the system is fast and achieves close to the best results on the test files. In this study a novel fast dictionary-based text compression technique, MBRH (Multidictionary with Burrows-Wheeler Transform, Run-Length coding and Huffman coding), is proposed for the purpose of obtaining improved performance on various document sizes. The MBRH algorithm comprises two stages: the first stage is concerned with converting the input text into a dictionary-based compressed form, and the second stage deals mainly with reducing the redundancy in the multidictionary-based compression by using BWT, RLE and Huffman coding. For the bib test file, with an input size of 111,261 bytes, the MBRH algorithm achieves a compression ratio of 0.192 and a bit rate of 1.538 at high speed. The algorithm attains a good compression ratio, a reduced bit rate and increased execution speed.

  7. Design and implementation of static Huffman encoding hardware using a parallel shifting algorithm

    CERN Document Server

    Tae Yeon Lee

    2004-01-01

    This paper discusses the implementation of static Huffman encoding hardware for real-time lossless compression for the electromagnetic calorimeter in the CMS experiment. The construction of the Huffman encoding hardware illustrates the implementation for optimizing the logic size. The number of logic gates in the parallel shift operation required for the hardware was examined. The experiment with a simulated environment and an FPGA shows that the real-time constraint has been fulfilled and the design of the buffer length is appropriate. (16 refs).

  8. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than lossless JPEG2000 does.

  9. Lossy Compression of Haptic Data by Using DCT

    Science.gov (United States)

    Tanaka, Hiroyuki; Ohnishi, Kouhei

    In this paper, lossy data compression of haptic data is presented and the results of its application to a motion copying system are described. Lossy data compression has been studied and practically applied in audio and image coding, but lossy compression of haptic data has not been studied extensively. Haptic data compression using the discrete cosine transform (DCT) and the modified DCT (MDCT) for haptic data storage is described in this paper. In the lossy compression, the calculated DCT/MDCT coefficients are quantized by a quantization vector. The quantized coefficients are further compressed by lossless coding based on Huffman coding. The compressed haptic data is applied to the motion copying system, and the results are provided.

  10. Information preserving image compression for archiving NMR images.

    Science.gov (United States)

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y

    1991-01-01

    This paper presents a result on information preserving compression of NMR images for the archiving purpose. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, the Lynch-Davisson coding with a block size of 64 as applied to prediction error sequences in the Gray code bit planes of each image gave an average compression ratio of 2.3:1 for 14 testing images. The predictive coding with a third order linear predictor and the Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is one step further toward the improvement, albeit small, of the information preserving image compression for medical applications.

  11. A Data Information Compression Algorithm for Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    许磊; 李千目; 戚湧

    2013-01-01

    A modified adaptive Huffman coding algorithm is proposed with the aim of reducing the volume of transmitted data; it is particularly suited to wireless sensor network nodes with limited memory and computational resources. The performance of the modified adaptive Huffman algorithm is analyzed and compared with the tree-pruning adaptive Huffman coding algorithm [1]. The results indicate that our algorithm uses memory more efficiently and provides a better compression ratio.

  12. Lossless quantum coding in many-letter spaces

    CERN Document Server

    Boström, K J

    2000-01-01

    Based on the concept of many-letter theory, a general characterization of quantum codes using the Kraus representation is given. Compression codes are defined by their property of decreasing the average information content of a given a priori message ensemble. Lossless quantum codes, in contrast to lossy codes, provide retrieval of the original input states with perfect fidelity. A general lossless coding scheme is given that translates between two quantum alphabets. It is shown that this scheme is never compressive. Furthermore, a lossless quantum coding scheme, analogous to the classical Huffman scheme but different from the Braunstein scheme, is implemented, which provides optimal compression. Motivated by the concept of lossless quantum compression, an observable is defined that measures the amount of compressible quantum information contained in a particular message with respect to a given a priori message ensemble. The average of this observable yields the von Neumann entropy, which is finally es...

  13. A new hybrid jpeg image compression scheme using symbol reduction technique

    CERN Document Server

    Kumar, Bheshaj; Sinha, G R

    2012-01-01

    Lossy JPEG compression is a widely used compression technique. The standard JPEG technique normally uses three processes: mapping, which reduces interpixel redundancy; quantization, which is a lossy process; and entropy encoding, which is considered a lossless process. In this paper, a new technique is proposed that combines the JPEG algorithm with a symbol reduction Huffman technique to achieve a higher compression ratio. The symbol reduction technique reduces the number of symbols by combining symbols together to form a new symbol. As a result, the number of Huffman codes to be generated is also reduced. It is simple, fast and easy to implement. The results show that the performance of the standard JPEG method can be improved by the proposed method. This hybrid approach achieves about 20% more compression ratio than standard JPEG.

  14. RESEARCH ON ADAPTIVE COMPRESSION CODING FOR NETWORK CODING IN WIRELESS SENSOR NETWORK

    Institute of Scientific and Technical Information of China (English)

    Liu Ying; Yang Zhen; Mei Zhonghui; Kong Yuanyuan

    2012-01-01

    Based on the sequence entropy of Shannon information theory, we study network coding technology in Wireless Sensor Networks (WSN). In this paper, we take into account the similarity of the transmission sequences at the network coding node in a multi-source, multi-receiver network in order to compress the data redundancy. Theoretical analysis and computer simulation results show that the proposed scheme not only further improves the efficiency of network transmission and enhances the throughput of the network, but also reduces the energy consumption of sensor nodes and extends the network life cycle.

  15. Wavelet transform in electrocardiography--data compression.

    Science.gov (United States)

    Provazník, I; Kozumplík, J

    1997-06-01

    An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of resting ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain, so the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on the signal characteristics. The resulting components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropy (Huffman) coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system, where the wavelet transform is computed by the fast linear filters described in the paper.

  16. IMAGE COMPRESSION BASED ON IWT, IWPT & DPCM-IWPT

    Directory of Open Access Journals (Sweden)

    SHILPA S. DHULAP

    2010-12-01

    In image compression, the aim is to reduce the number of bits required to represent an image by removing spatial and spectral redundancies. Recently, wavelet packets have emerged as a popular technique for image compression. This paper proposes a wavelet-based compression scheme that is able to operate in lossy as well as lossless mode. First we describe the integer wavelet transform (IWT) and the integer wavelet packet transform (IWPT) as applications of the lifting scheme (LS). After analyzing and implementing results for IWT and IWPT, another method combining DPCM and IWPT is implemented using Huffman coding for grey-scale images. We then propose to implement the same for color images using the Shannon source coding technique. We measure the level of compression by the compression ratio (CR) and compression factor (CF). Compared with IWT and IWPT, DPCM-IWPT shows better performance in image compression.

  17. Lossless compression of very large volume data with fast dynamic access

    Science.gov (United States)

    Zhao, Rongkai; Tao, Tao; Gabriel, Michael; Belford, Geneva

    2002-09-01

    The volumetric data set is important in many scientific and biomedical fields. Since such sets may be extremely large, a compression method is critical for storing and transmitting them. To achieve a high compression rate, most of the existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. We developed a new context-based non-linear prediction method to preprocess the volume data set in order to effectively lower the prediction entropy. The prediction error is further encoded using a Huffman code. Unlike conventional methods, the volume is divided into cubical blocks to take advantage of the data's spatial locality. Instead of building one Huffman tree for each block, we developed a novel binning algorithm that builds a Huffman tree for each group (bin) of blocks. Combining all the effects above, we achieved an excellent compression rate compared to other lossless volume compression methods. In addition, an auxiliary data structure, the Scalable Hyperspace File (SHSF), is used to index the huge volume so that we obtain many other benefits including parallel construction, on-the-fly access to compressed data without global decompression, fast previewing, efficient background compression, and scalability.

  18. P-adic arithmetic coding

    CERN Document Server

    Rodionov, Anatoly

    2007-01-01

    A new incremental algorithm for data compression is presented. For a sequence of input symbols, the algorithm incrementally constructs a p-adic integer as its output. The decoding process starts with the less significant part of the p-adic integer and incrementally reconstructs the sequence of input symbols. The algorithm is based on certain features of p-adic numbers and the p-adic norm. The p-adic coding algorithm may be considered a generalization of a popular compression technique, the arithmetic coding algorithm. It is shown that for p = 2 the algorithm works as an integer variant of arithmetic coding; for a special class of models it gives exactly the same codes as Huffman's algorithm, and for another special model and a specific alphabet it gives Golomb-Rice codes.

  19. Edge-Oriented Compression Coding on Image Sequence

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the structure of the sensitized region is approximated with linear segments. A rectangular belt is then constructed for each segment. Finally, the gray-value distribution in the region is fitted by normal-form polynomials. Model matching and motion analysis are also based on the structure of the sensitized region. For the smooth region we use run-length scanning and linear approximation. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bits per pel.

  20. A CMOS Imager with Focal Plane Compression using Predictive Coding

    Science.gov (United States)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.

  1. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    Science.gov (United States)

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul

    2015-02-01

    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demand a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution in this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and the combination of Run Length and Huffman coding, to increase compression ratio. The experimental results achieved show that the proposed method is able to improve the compression ratio by 400 % as compared to that of traditional methods.

  2. Performance analysis of reversible image compression techniques for high-resolution digital teleradiology.

    Science.gov (United States)

    Kuduvalli, G R; Rangayyan, R M

    1992-01-01

    The performances of a number of block-based, reversible, compression algorithms suitable for compression of very-large-format images (4096x4096 pixels or more) are compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using discrete Fourier-, discrete cosine-, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. The performances of these coding techniques for a few mammograms and chest radiographs digitized to sizes up to 4096x4096 10 b pixels are discussed. Compression from 10 b to 2.5-3.0 b/pixel on these images has been achieved without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation.

  3. Design and implementation for static Huffman encoding hardware with parallel shifting algorithm

    CERN Document Server

    Tae Yeon Lee

    2004-01-01

    This paper presents an implementation of static Huffman encoding hardware for real-time lossless compression in the ECAL of the CMS detector. The construction of the Huffman encoding hardware shows an implementation that optimizes its logic size. The number of logic gates of the parallel shift operation for the hardware is analyzed, and two implementation methods of the parallel shift operation are compared in terms of logic size. The experiment with the hardware in a simulated ECAL environment covering 99.9999% of the original distribution shows promising results: the compression rate was 4.0039 and the maximum length of the stored data in the input buffer was 44. (14 refs).

  4. Compression Technology Based on Huffman Coding Implemented in Java

    Institute of Scientific and Technical Information of China (English)

    陈旭辉; 范肖南; 巩天宁

    2008-01-01

    At present, two kinds of lossless compression techniques are in wide use: phrase-based (dictionary) compression and coding-based compression. This paper describes how file compression is implemented in the Java programming language using the Huffman algorithm, which is an instance of coding-based compression.

  5. DSP Lossless Image Compression System Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    邹文辉

    2014-01-01

    Today's society is in an era of big data, with an enormous amount of information; images and video surround us from the moment we open our eyes. People depend on images more and more and demand both high fidelity and small storage footprints, which places higher requirements on image compression. The system presented here is built on the TMS320DM6437 platform and uses Huffman coding to achieve lossless image compression, reaching a compression ratio of 1.77.

  6. A New Data Compression Algorithm Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    何昭青

    2008-01-01

    A new approach to file compression is explored: the file is treated as a binary stream of "0"s and "1"s, a fixed number of bits is defined as one "word", so that the file becomes a stream of such words; the occurrence probabilities of the distinct words are then collected and the Huffman algorithm is used to encode and compress them. The compression behaviour of various file types under different word sizes is discussed, and experimental results are given for each case.
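    A minimal sketch of that idea follows, assuming the file is split into fixed-width 4-bit "words" (nibbles); the word size, the toy file contents and the helper names are illustrative assumptions, not details from the paper. Only the Huffman code lengths are computed, which is enough to estimate the coded size.

```python
import heapq
from collections import Counter

def nibble_stream(data: bytes):
    """Split a byte string into 4-bit 'words' (high nibble, then low nibble)."""
    for b in data:
        yield b >> 4
        yield b & 0x0F

def huffman_lengths(counts):
    """Return {symbol: codeword length} by repeatedly merging the two rarest weights."""
    heap = [(c, i, [sym]) for i, (sym, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][2][0]: 1}
    lengths = Counter()
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, syms1 = heapq.heappop(heap)
        c2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:
            lengths[s] += 1              # every merge adds one bit to these symbols
        heapq.heappush(heap, (c1 + c2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return dict(lengths)

data = b"ABABABABCCCCDDDDDDDDAAAAAAAA"    # toy file contents
counts = Counter(nibble_stream(data))
lengths = huffman_lengths(counts)
coded_bits = sum(counts[s] * lengths[s] for s in counts)
print("original:", 8 * len(data), "bits,  Huffman-coded 4-bit words:", coded_bits, "bits")
```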

  7. 基于Huffman编码的图像压缩解压研究%Huffman-based Coding of Image Compression Decompression

    Institute of Scientific and Technical Information of China (English)

    饶兴

    2011-01-01

    Based on the characteristics of BMP images, a compression method built on Huffman coding is proposed. Compression and decompression programs are designed in two variants, one that codes the RGB channels jointly and one that codes them separately; compression and decompression experiments are then carried out on several images, and the experimental results are analyzed.

  8. A Method for File Compression Using Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    潘玮华

    2010-01-01

    The idea and method of file compression using Huffman coding are introduced. The design of the classes used by the method and the concrete design of the compression and decompression procedures are described in detail, and a complete program written in C++ is given.

  9. Segmentation-based CT image compression

    Science.gov (United States)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya

    2004-04-01

    The existing image compression standards like JPEG and JPEG 2000 compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, a more efficient coding scheme can be obtained. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts, which have distinctly different statistics; hence coding them separately results in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the first-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding has an overhead of one table per segment, but the overhead is minimal. Lossless compression based on segmentation reduced the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same predictive coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If the diagnostically irrelevant portions may be deleted, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.
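    For readers unfamiliar with it, an 8-connected chain code stores a closed boundary as a start point plus a sequence of direction symbols 0-7, and the differential variant stores successive direction changes modulo 8. The sketch below is a generic illustration with a hypothetical square boundary; it is not the exact code format used in the paper.

```python
# 8-connected directions, numbered 0..7 counter-clockwise starting at "east"
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Absolute 8-connected chain code of a closed boundary given as pixel coordinates."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

def differential(code):
    """Differential chain code: change of direction modulo 8."""
    return [(c1 - c0) % 8 for c0, c1 in zip(code, code[1:] + code[:1])]

# hypothetical 2x2 square boundary traversed counter-clockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(square))                 # [0, 2, 4, 6]
print(differential(chain_code(square)))   # [2, 2, 2, 2]
```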

  10. Prefix Codes: Equiprobable Words, Unequal Letter Costs

    OpenAIRE

    Golin, Mordecai; Young, Neal E.

    2002-01-01

    Describes a near-linear-time algorithm for a variant of Huffman coding, in which the letters may have non-uniform lengths (as in Morse code), but with the restriction that each word to be encoded has equal probability. [See also "Huffman Coding with Unequal Letter Costs" (2002).]

  11. Variable Quality Compression of Fluid Dynamical Data Sets Using a 3D DCT Technique

    Science.gov (United States)

    Loddoch, A.; Schmalzl, J.

    2005-12-01

    In this work we present a data compression scheme that is especially suited to the compression of data sets resulting from computational fluid dynamics (CFD). By adopting the concept of the JPEG compression standard and extending the approach of Schmalzl (Schmalzl, J. Using standard image compression algorithms to store data from computational fluid dynamics. Computers and Geosciences, 29, 1021-1031, 2003), we employ a three-dimensional discrete cosine transform of the data. The resulting frequency components are rearranged, quantized and finally stored using Huffman encoding and standard variable-length integer codes. The compression ratio and the introduced loss of accuracy can be adjusted by means of two compression parameters to give the desired compression profile. Using the proposed technique, compression ratios of more than 60:1 are possible with a mean error of the compressed data of less than 0.1%.
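    As a rough sketch of the transform-and-quantize stage described above, the snippet below applies a 3-D DCT to an 8x8x8 block, quantizes the coefficients with a single uniform step, and reconstructs the block. The block size and step are arbitrary illustrative values, and `scipy.fft.dctn`/`idctn` stand in for the authors' own transform code.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8, 8)                 # toy 8x8x8 chunk of a CFD field
coeffs = dctn(block, norm="ortho")              # 3-D discrete cosine transform
step = 0.05                                     # uniform quantization step (controls loss)
quantized = np.round(coeffs / step).astype(np.int32)   # integers to be entropy coded
restored = idctn(quantized * step, norm="ortho")
print("max abs error:", float(np.max(np.abs(restored - block))))
```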

  12. A Global-Scale Image Lossless Compression Method Based on QTM Pixels

    Institute of Scientific and Technical Information of China (English)

    SUN Wen-bin; ZHAO Xue-sheng

    2006-01-01

    In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels, and an algorithm for ascertaining the available QTM neighbors is also proposed. Then, a method for reducing the space complexity of predicting QTM pixel values is presented. Next, a structure for storing compressed QTM pixels is proposed. Finally, an experiment comparing the compression ratio of this method with other methods is carried out using three wave bands of 1 km resolution NOAA imagery of China. The results indicate that: 1) the compression method performs better than the others, such as run-length coding, arithmetic coding, Huffman coding, etc.; 2) the average size of the compressed three-wave-band data based on the neighbor-QTM-pixel predictive model is 31.58% of the original space requirement and 67.5% of that of arithmetic coding without the predictive model.

  13. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    Science.gov (United States)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are made on the basis of a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature across diverse classes of medical images.

  14. Lossless compression of VLSI layout image data.

    Science.gov (United States)

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  15. Ultraspectral sounder data compression using a novel marker-based error-resilient arithmetic coder

    Science.gov (United States)

    Huang, Bormin; Sriraja, Y.; Wei, Shih-Chieh

    2006-08-01

    Entropy coding techniques aim to achieve the entropy of the source data by assigning variable-length codewords to symbols with the code lengths linked to the corresponding symbol probabilities. Entropy coders (e.g. Huffman coding, arithmetic coding), in one form or the other, are commonly used as the last stage in various compression schemes. While these variable-length coders provide better compression than fixed-length coders, they are vulnerable to transmission errors. Even a single bit error in the transmission process can cause havoc in the subsequent decoded stream. To cope with it, this research proposes a marker-based sentinel mechanism in entropy coding for error detection and recovery. We use arithmetic coding as an example to demonstrate this error-resilient technique for entropy coding. Experimental results on ultraspectral sounder data indicate that the marker-based error-resilient arithmetic coder provides remarkable robustness to correct transmission errors without significantly compromising the compression gains.

  16. An Enhanced Static Data Compression Scheme Of Bengali Short Message

    CERN Document Server

    Arif, Abu Shamim Mohammod; Islam, Rashedul

    2009-01-01

    This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research is to establish a low-complexity compression scheme suitable for small devices having small memory and relatively low processing speed. The basic aim is not to compress text of any size to its maximum level without any constraint on space and time; rather, the main target is to compress short messages to an optimal level that needs minimum space, consumes less time and has a lower processor requirement. We have implemented character masking, dictionary matching, the associative rule of data mining and a hyphenation algorithm for syllable-based compression in hierarchical steps to achieve low-complexity lossless compression of text messages for mobile devices. The scheme for choosing the digrams is based on an extensive statistical model, and the static Huffman coding is done in the same context.

  17. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the constraint of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced so that the proposed algorithm achieves low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  18. An Improved Code Compression Algorithm for Low Power Embedded System Designs%一种改进算法的低功耗嵌入式系统代码压缩设计

    Institute of Scientific and Technical Information of China (English)

    张瑞峰; 马文杰

    2016-01-01

    A new code compression scheme is proposed to address the power consumption problem in embedded systems. After analyzing the characteristics of the instructions in the target program, instructions are combined and split, and the canonical Huffman algorithm is then used to encode the modified instructions and generate look-up tables. Finally, code is compressed and decompressed on the basis of the correspondence between instructions and codewords in the look-up tables. Measured by the compression ratio and power reduction rate of embedded benchmark programs compressed under the SimpleScalar simulator, the statistics show that the proposed algorithm can effectively save storage space and reduce the power consumption of the system.
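    Canonical Huffman codes, mentioned above, are attractive for hardware look-up tables because the codewords can be regenerated from the code lengths alone. The sketch below shows the standard length-to-codeword assignment rule; the example instruction-pattern lengths are hypothetical and only need to satisfy the Kraft condition, they are not taken from the paper.

```python
def canonical_codewords(lengths):
    """Assign canonical Huffman codewords from {symbol: code length}.
    Symbols are sorted by (length, symbol); each codeword is the previous
    one plus one, left-shifted whenever the length increases."""
    code = 0
    prev_len = 0
    table = {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)
        table[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return table

# hypothetical instruction-pattern code lengths (they satisfy the Kraft inequality)
lengths = {"ld": 2, "st": 2, "add": 2, "br": 3, "nop": 4, "mul": 4}
for sym, cw in canonical_codewords(lengths).items():
    print(sym, "->", cw)
```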

  19. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    Science.gov (United States)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  20. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    Science.gov (United States)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for bounded noise or noise bounded with high probability. In this paper we propose the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.

  1. Discussion and Improvement of the Huffman Algorithm

    Institute of Scientific and Technical Information of China (English)

    毕智超

    2011-01-01

    The optimal binary tree is a very important data structure. This paper first analyzes the optimal binary tree, i.e. the Huffman tree, and gives a description of the algorithm; the data to be sorted are then pre-sorted with the quicksort algorithm, which lowers the time complexity of the Huffman algorithm. Finally, based on the application of Huffman trees to coding problems, i.e. Huffman codes, a brief discussion is given of an improvement to the storage structure used for Huffman coding.

  2. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes.

    Science.gov (United States)

    Wilkinson, M H

    1994-04-01

    A run-length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as those used in the tag image file format. These schemes are Lempel-Ziv and Welch (LZW), Macintosh Packbits, and the CCITT Group 3 Facsimile 1-dimensional modified Huffman run-length code. In a set of 25 images consisting of full microscopic fields of view of bacterial slides, the method gave a 10.3-fold compression: 1.074 times better than LZW. In a second set of images of single areas of interest within each field of view, compression ratios of over 600 were obtained, 12.8 times that of LZW. The drawback of the system is its poor worst-case performance. The method could be used in any application requiring storage of binary images of relatively small objects with fairly large spaces in between.
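    As a reminder of how little machinery such a scheme needs, the sketch below run-length encodes one row of a binary image as alternating run counts, always starting with a (possibly empty) background run. It is a generic illustration of run-length coding, not the specific format used in the paper.

```python
from itertools import groupby

def rle_row(row, background=0):
    """Run-length encode a binary row as alternating run counts,
    always starting with a (possibly zero-length) background run."""
    lengths = []
    expected = background
    for value, group in groupby(row):
        count = sum(1 for _ in group)
        if value != expected:
            lengths.append(0)          # zero-length run keeps the alternation
        lengths.append(count)
        expected = 1 - value           # the next run must be the other colour
    return lengths

def rle_decode(lengths, background=0):
    row, value = [], background
    for count in lengths:
        row.extend([value] * count)
        value = 1 - value
    return row

row = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
runs = rle_row(row)
print(runs)                            # [3, 2, 4, 1, 2]
assert rle_decode(runs) == row
```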

  3. Improving the efficiency of the genetic code by varying the codon length--the perfect genetic code.

    Science.gov (United States)

    Doig, A J

    1997-10-07

    The function of DNA is to specify protein sequences. The four-base "alphabet" used in nucleic acids is translated to the 20 base alphabet of proteins (plus a stop signal) via the genetic code. The code is neither overlapping nor punctuated, but has mRNA sequences read in successive triplet codons until reaching a stop codon. The true genetic code uses three bases for every amino acid. The efficiency of the genetic code can be significantly increased if the requirement for a fixed codon length is dropped so that the more common amino acids have shorter codon lengths and rare amino acids have longer codon lengths. More efficient codes can be derived using the Shannon-Fano and Huffman coding algorithms. The compression achieved using a Huffman code cannot be improved upon. I have used these algorithms to derive efficient codes for representing protein sequences using both two and four bases. The length of DNA required to specify the complete set of protein sequences could be significantly shorter if transcription used a variable codon length. The restriction to a fixed codon length of three bases means that it takes 42% more DNA than the minimum necessary, and the genetic code is 70% efficient. One can think of many reasons why this maximally efficient code has not evolved: there is very little redundancy so almost any mutation causes an amino acid change. Many mutations will be potentially lethal frame-shift mutations, if the mutation leads to a change in codon length. It would be more difficult for the machinery of transcription to cope with a variable codon length. Nevertheless, in the strict and narrow sense of coding for protein sequences using the minimum length of DNA possible, the Huffman code derived here is perfect.

  4. A channel differential EZW coding scheme for EEG data compression.

    Science.gov (United States)

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.

  5. A Biblock Wavelet Zero Tree Coding for Hyperspectral Imagery Data Compression

    Institute of Scientific and Technical Information of China (English)

    YAN Jingwen; SHEN Guiming; HU Xiaoyi; XU Fang

    2001-01-01

    In this paper, a biblock zerotree compression coding (BBZTC) method, based on wavelet zerotree compression coding (ZTC), is used to exploit redundancy in hyperspectral imagery data. Because ZTC scans every wavelet zerotree coefficient with low efficiency, the BBZTC method outperforms ZTC in compression ratio, simplicity of real-time implementation, coding/decoding speed, and convenience of real-time transmission. The experimental results show that this method can obtain a compression ratio of 17-40 for the 224 spectral bands when KLT is used to remove the spectral correlation without any additional coding, and the overall compression performance of the KLT+BBZTC method is better than that of KLT combined with other one-dimensional transforms (such as the DCT) for removing the spectral correlation. Compared with the total compression ratio of the KLT+JPEG method and the SFCVQ method, this method reaches 180 at a PSNR of 33.6 dB. The method is superior to any other current method in compression ratio.

  6. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded-aperture compressive imaging system is proposed to reduce the required high-resolution coded mask and facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion-target detection and tracking algorithms, working directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and perform foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that the low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results; however, using random Gaussian and Toeplitz phase masks can achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  7. An Image Encryption Algorithm Using Chaos-based Weight Variation of Huffman Tree

    Institute of Scientific and Technical Information of China (English)

    龙敏; 谭丽

    2011-01-01

    Using chaos-based weight variation of a Huffman tree, an image/video encryption algorithm is proposed in this paper. In the entropy coding stage, the DC coefficients are encrypted by varying the weights (path values) of the Huffman tree, while leaving its structure unchanged, under the control of a double-coupled Logistic chaotic sequence, and the AC coefficients are encrypted by scrambling the codeword indexes with a second chaotic sequence. The security, computational complexity, and compression ratio of the algorithm are analyzed. Simulation results show that the algorithm has essentially no impact on compression efficiency and has low complexity, high security, and good real-time performance, so it is suitable for real-time image services on the network.

  8. Property study of integer wavelet transform lossless compression coding based on lifting scheme

    Science.gov (United States)

    Xie, Cheng Jun; Yan, Su; Xiang, Yang

    2006-01-01

    In this paper, the algorithms for integer wavelet transforms combined with SPIHT and arithmetic coding in lossless image compression, and improvements to them, are studied. The experimental results show that when the order of the vanishing moments of the low-pass filter is fixed, the improvement in compression is not evident, provided the integer wavelet transform is invertible and its energy-compaction property increases monotonically with the transform scale. For the same wavelet basis, the order of the vanishing moments of the low-pass filter is more important than that of the high-pass filter in improving image compression. Lifting-based integer wavelet transform lossless compression coding has no direct relation to the entropy of the image; the compression effect depends on the energy-compaction property of the image transform.

  9. Multiple Description Coding with Feedback Based Network Compression

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Østergaard, Jan; Popovski, Petar

    2010-01-01

    and an intermediate node, respectively. A trade-off exists between reducing the delay of the feedback by adapting in the vicinity of the receiver and increasing the gain from compression by adapting close to the source. The analysis shows that adaptation in the network provides a better trade-off than adaptation...

  10. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    Science.gov (United States)

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we make a comparative study of a new compression approach based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.

  11. Efficient data compression from statistical physics of codes over finite fields

    CERN Document Server

    Braunstein, Alfredo; Zecchina, Riccardo

    2011-01-01

    In this paper we discuss a novel data compression technique for binary symmetric sources based on the cavity method over a Galois Field of order q (GF(q)). We present a scheme of low complexity and near optimal empirical performance. The compression step is based on a reduction of sparse low density parity check codes over GF(q) and is done through the so called reinforced belief-propagation equations. These reduced codes appear to have a non-trivial geometrical modification of the space of codewords which makes such compression computationally feasible. The computational complexity is O(d.n.q.log(q)) per iteration, where d is the average degree of the check nodes and n is the number of bits. For our code ensemble, decompression can be done in a time linear in the code's length by a simple leaf-removal algorithm.

  12. Improved vector quantization scheme for grayscale image compression

    Science.gov (United States)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.

    2012-06-01

    This paper proposes an improved image coding scheme based on vector quantization. It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To address this problem, the mean value of the image block is taken as an alternative block encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping bit rates low.
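
    A minimal sketch of the alternative block-encoding rule mentioned above, under assumed parameters: a block is encoded by its nearest codevector unless even the best match is poor, in which case the block mean is transmitted instead. The distortion threshold and function names are illustrative, and the subsequent linear-prediction and Huffman stages are omitted.

    ```python
    import numpy as np

    def encode_block(block, codebook, T=100.0):
        """Return ('vq', index) for an acceptable codebook match, else ('mean', value) as fallback."""
        v = block.flatten().astype(float)
        d = np.sum((codebook - v) ** 2, axis=1)   # distortion against every codevector
        best = int(np.argmin(d))
        if d[best] <= T:
            return ("vq", best)                   # index is later entropy-coded with Huffman
        return ("mean", int(round(v.mean())))     # poor match: encode the block by its mean value
    ```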

  13. A dynamical systems proof of Kraft-McMillan inequality and its converse for prefix-free codes

    Science.gov (United States)

    Nagaraj, Nithin

    2009-03-01

    Uniquely decodable codes are central to lossless data compression in both classical and quantum communication systems. The Kraft-McMillan inequality is a basic result in information theory which gives a necessary and sufficient condition for a code to be uniquely decodable and also has a quantum analogue. In this letter, we provide a novel dynamical systems proof of this inequality and its converse for prefix-free codes (no codeword is a prefix of another—the popular Huffman codes are an example). For constrained sources, the problem is still open.
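
    As a concrete check of the inequality discussed above, the snippet below computes the Kraft sum for a set of binary codewords; the example codes are illustrative.

    ```python
    # Kraft-McMillan: a uniquely decodable binary code with lengths l_i must satisfy sum(2**-l_i) <= 1.
    def kraft_sum(codewords):
        return sum(2.0 ** -len(c) for c in codewords)

    prefix_free = ["0", "10", "110", "111"]   # a valid prefix-free code (Huffman-like)
    too_short   = ["0", "1", "10"]            # cannot be uniquely decodable
    print(kraft_sum(prefix_free))             # 1.0   -> satisfies the inequality
    print(kraft_sum(too_short))               # 1.25  -> violates it
    ```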

  14. A novel technique for image steganography based on Block-DCT and Huffman Encoding

    Directory of Open Access Journals (Sweden)

    A.Nag

    2010-06-01

    Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on block DCT, where the DCT is used to transform the original (cover) image blocks from the spatial domain to the frequency domain. First, a gray-level image of size M × N is divided into disjoint 8 × 8 blocks and a two-dimensional discrete cosine transform (2-D DCT) is performed on each of the P = MN/64 blocks. Huffman encoding is then applied to the secret message/image before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has high capacity and good invisibility. Moreover, the PSNR of the cover image versus the stego-image compares favorably with other existing steganography approaches. Furthermore, satisfactory security is maintained since the secret message/image cannot be extracted without knowing the decoding rules and the Huffman table.
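
    A hedged sketch of the embedding step this abstract describes: the cover block is transformed with a 2-D DCT and bits of the (already Huffman-coded) secret message overwrite the least significant bits of the quantized coefficients. The rounding of coefficients, the skipping of the DC term, and the function names are assumptions for illustration, not the authors' published procedure.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def embed_block(block8x8, bits):
        """Hide Huffman-coded secret bits in the LSBs of the AC DCT coefficients of one 8x8 block."""
        coeffs = np.rint(dctn(block8x8.astype(float), norm="ortho")).astype(int)
        flat = coeffs.flatten()
        for i, b in enumerate(bits[: flat.size - 1], start=1):  # index 0 is the DC term, left untouched
            flat[i] = (flat[i] & ~1) | b                        # overwrite the least significant bit
        stego = idctn(flat.reshape(8, 8).astype(float), norm="ortho")
        return np.clip(np.rint(stego), 0, 255).astype(np.uint8)
    ```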

  15. Design of vector quantizer for image compression using self-organizing feature map and surface fitting.

    Science.gov (United States)

    Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh

    2004-10-01

    We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. Index terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.

  16. Image compression with embedded wavelet coding via vector quantization

    Science.gov (United States)

    Katsavounidis, Ioannis; Kuo, C.-C. Jay

    1995-09-01

    In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands and also different codebook sizes, so that more bits are assigned to the subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve performance. We investigate the performance of the proposed method together with the 7-9 tap biorthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.

  17. Non-US data compression and coding research. FASAC Technical Assessment Report

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  18. Evaluation of peripheral compression and auditory nerve fiber intensity coding using auditory steady-state responses

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; M. Harte, James; Epp, Bastian

    2015-01-01

    cause auditory nerve fiber (ANF) deafferentation in predominantly low-spontaneous rate (SR) fibers. In the present study, auditory steadystate response (ASSR) level growth functions were measured to evaluate the applicability of ASSR to assess compression and the ability to code intensity fluctuations...... at high stimulus levels. Level growth functions were measured in normal-hearing adults at stimulus levels ranging from 20 to 90 dB SPL. To evaluate compression, ASSR were measured for multiple carrier frequencies simultaneously. To evaluate intensity coding at high intensities, ASSR were measured using....... The results indicate that the slope of the ASSR level growth function can be used to estimate peripheral compression simultaneously at four frequencies below 60 dB SPL, while the slope above 60 dB SPL may provide information about the integrity of intensity coding of low-SR fibers....

  19. SRComp: short read sequence compression using burstsort and Elias omega coding.

    Directory of Open Access Journals (Sweden)

    Jeremy John Selva

    Next-generation sequencing (NGS) technologies permit the rapid production of vast amounts of data at low cost. Economical data storage and transmission hence become an increasingly important challenge for NGS experiments. In this paper, we introduce a new non-reference based read sequence compression tool called SRComp. It works by first employing a fast string-sorting algorithm called burstsort to sort read sequences in lexicographical order and then Elias omega-based integer coding to encode the sorted read sequences. SRComp has been benchmarked on four large NGS datasets, where experimental results show that it can run 5-35 times faster than current state-of-the-art read sequence compression tools such as BEETL and SCALCE, while retaining comparable compression efficiency for large collections of short read sequences. SRComp is a read sequence compression tool that is particularly valuable in certain applications where compression time is of major concern.
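
    For reference, a minimal Elias omega encoder of the kind SRComp applies to the sorted reads is sketched below; the burstsort stage and the mapping from read sequences to integers are not shown, and this is an illustrative sketch rather than the tool's code.

    ```python
    def elias_omega(n):
        """Elias omega code of a positive integer, returned as a bit string."""
        assert n >= 1
        code = "0"
        while n > 1:
            b = bin(n)[2:]        # binary representation, most significant bit first
            code = b + code       # prepend, then recurse on the length
            n = len(b) - 1
        return code

    print(elias_omega(1), elias_omega(2), elias_omega(17))  # 0, 100, 10100100010
    ```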

  20. A Compression Algorithm for Raw Radar Video Signals Combining DPCM and Adaptive Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    李灵芝; 江晶

    2006-01-01

    To handle the transmission of high-volume radar data and satisfy the requirement for real-time, lossless handling of the raw radar video signal, a compression coding method combining DPCM (Differential Pulse Code Modulation) with adaptive Huffman coding is presented, based on the characteristics of the raw radar video signal. The effectiveness of the algorithm and its overflow problem are analyzed. Experiments show that, compared with conventional adaptive Huffman coding, the method improves real-time performance and increases the compression ratio.
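
    An illustrative sketch of the DPCM-plus-Huffman idea: each sample is replaced by its difference from the previous one, and the residuals are then entropy coded. The paper pairs DPCM with an adaptive Huffman coder; for brevity this sketch builds a static Huffman table from the residual histogram, and all names are assumptions.

    ```python
    import heapq
    from collections import Counter

    def dpcm_residuals(samples):
        """Differences between consecutive samples (the first sample is taken against 0)."""
        prev, out = 0, []
        for s in samples:
            out.append(s - prev)
            prev = s
        return out

    def huffman_table(symbols):
        """Static Huffman code built from the symbol histogram (classic heapq recipe)."""
        heap = [[w, [sym, ""]] for sym, w in Counter(symbols).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return {sym: code for sym, code in heap[0][1:]}

    residuals = dpcm_residuals([100, 102, 103, 103, 101, 100, 100])
    table = huffman_table(residuals)
    bitstream = "".join(table[r] for r in residuals)  # small residuals get short codewords
    ```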

  1. Finding maximum JPEG image block code size

    Science.gov (United States)

    Lakhani, Gopal

    2012-07-01

    We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since the DC coefficient is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraint is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.

  2. Optimization of Channel Coding for Transmitted Image Using Quincunx Wavelets Transforms Compression

    Directory of Open Access Journals (Sweden)

    Mustapha Khelifi

    2016-05-01

    Many images you see on the Internet today have undergone compression for various reasons. Image compression can benefit users by having pictures load faster and webpages use less space on a web host. Image compression does not reduce the physical size of an image but instead compresses the data that makes up the image into a smaller size. In the case of image transmission, channel noise degrades the quality of the received image, which obliges us to use channel coding techniques to protect the data. The Reed-Solomon code is one of the most popular channel coding techniques used to correct errors in many systems (wireless or mobile communications, satellite communications, digital television/DVB, high-speed modems such as ADSL and xDSL, etc.). Since there are many possible choices for the input parameters of the RS code, we are concerned with finding the optimum inputs that protect the data with the minimum number of redundant bits. In this paper we use a genetic algorithm to optimize the selection of the RS code input parameters according to the channel conditions, which reduces the number of bits needed to protect the data while maintaining high quality of the received image.

  3. Terminal Cancer: Malignant Spinal Cord Compression and Full Code Status

    Directory of Open Access Journals (Sweden)

    Yaseen Ali

    2014-09-01

    Background: Malignant spinal cord compression significantly increases hospitalization costs, and even with the best treatment approach the disease course remains relatively stable with dire outcomes. Case presentation: The patient was an 80-year-old male with a past medical history of hypertension and stroke with chronic right-sided weakness, recently diagnosed with non-squamous cell lung carcinoma stage T4N0Mx and undergoing outpatient chemotherapy with carboplatin and taxol, who presented to the emergency room with the chief complaint of right leg pain with weakness and chest pain for 1~2 days. On day 4 of the admission the patient complained of chest pain again, and a CT angiogram was ordered as part of the workup for chest pain based on a high probability of pulmonary embolus per the Wells score. The CT angiogram revealed a large soft-tissue mass centered at the T5 vertebral body with probable spinal canal invasion. Conclusion: A more favorable outcome requires the input of both a surgeon and a radiation oncologist to find the most effective approach depending on the area involved and the extent of the lesion, and the patient's choice of treatment must always be respected. Despite aggressive treatment the patient did not respond well and continued to deteriorate. Options were discussed with the patient, including the futility of care and the lack of response. The patient opted to return home with hospice care and was subsequently discharged home with family.

  4. Lossless Compression of Chemical Fingerprints Using Integer Entropy Codes Improves Storage and Retrieval

    Science.gov (United States)

    Baldi, Pierre; Benz, Ryan W.

    2008-01-01

    Many modern chemoinformatics systems for small molecules rely on large fingerprint vector representations, where the components of the vector record the presence or number of occurrences in the molecular graphs of particular combinatorial features, such as labeled paths or labeled trees. These large fingerprint vectors are often compressed to much shorter fingerprint vectors using a lossy compression scheme based on a simple modulo procedure. Here we combine statistical models of fingerprints with integer entropy codes, such as Golomb and Elias codes, to encode the indices or the run-lengths of the fingerprints. After reordering the fingerprint components in decreasing frequency order, the indices are monotone increasing and the run-lengths are quasi-monotone increasing, and both exhibit power-law distribution trends. We take advantage of these statistical properties to derive new efficient, lossless compression algorithms for monotone integer sequences: Monotone Value (MOV) Coding and Monotone Length (MOL) Coding. In contrast with lossy systems that use 1,024 or more bits of storage per molecule, we can achieve lossless compression of long chemical fingerprints based on circular substructures in slightly over 300 bits per molecule, close to the Shannon entropy limit, using a MOL Elias Gamma code for run-lengths. The improvement in storage comes at a modest computational cost. Furthermore, because the compression is lossless, uncompressed similarity (e.g., Tanimoto) between molecules can be computed exactly from their compressed representations, leading to significant improvements in retrieval performance, as shown on six benchmark datasets of drug-like molecules. PMID:17967006
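
    A small sketch of the run-length/Elias idea underlying the scheme above: the gaps between set bits of a sparse binary fingerprint are encoded with a plain Elias gamma code. The paper's MOV/MOL variants and the frequency-reordering step are not reproduced, and the example fingerprint is made up.

    ```python
    def elias_gamma(n):
        """Elias gamma code of a positive integer: (len-1) zeros followed by binary(n)."""
        assert n >= 1
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def run_lengths(bits):
        """Gaps (1-based) between successive set bits of a sparse 0/1 fingerprint."""
        runs, gap = [], 1
        for bit in bits:
            if bit:
                runs.append(gap)
                gap = 1
            else:
                gap += 1
        return runs

    fingerprint = [0, 0, 1, 0, 0, 0, 1, 1, 0, 1]
    print("".join(elias_gamma(r) for r in run_lengths(fingerprint)))  # run-lengths 3, 4, 1, 2
    ```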

  5. Visually Improved Image Compression by using Embedded Zero-tree Wavelet Coding

    Directory of Open Access Journals (Sweden)

    Janaki R

    2011-03-01

    Image compression is very important for efficient transmission and storage of images. The Embedded Zero-tree Wavelet (EZW) algorithm is a simple yet powerful algorithm with the property that the bits in the stream are generated in order of their importance. Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. For image compression it is desirable that the selected transform reduce the size of the resultant data set compared to the source data set. EZW is computationally very fast and is among the best image compression algorithms known today. This paper proposes a technique for image compression that uses wavelet-based image coding. A large number of experimental results show that this method saves a considerable number of bits in transmission and further enhances compression performance. This paper aims to determine the best threshold for compressing a still image at a particular decomposition level using an Embedded Zero-tree Wavelet encoder. The compression ratio (CR) and peak signal-to-noise ratio (PSNR) are determined for threshold values ranging from 6 to 60 at decomposition level 8.
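
    For context, the conventional EZW starting threshold is the largest power of two not exceeding the maximum coefficient magnitude, halved on each pass; the helper below sketches that rule. The number of passes is an illustrative parameter, whereas the paper instead sweeps fixed thresholds from 6 to 60.

    ```python
    import numpy as np

    def ezw_thresholds(coeffs, passes=6):
        """Initial EZW threshold T0 = 2**floor(log2(max|c|)) and its successive halvings."""
        T0 = 2 ** int(np.floor(np.log2(np.max(np.abs(coeffs)))))
        return [T0 / 2 ** k for k in range(passes)]

    print(ezw_thresholds(np.array([-57.0, 13.0, 6.5, -3.2, 49.0])))  # [32.0, 16.0, 8.0, 4.0, 2.0, 1.0]
    ```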

  6. Parameter optimization of pulse compression in ultrasound imaging systems with coded excitation.

    Science.gov (United States)

    Behar, Vera; Adam, Dan

    2004-08-01

    A linear array imaging system with coded excitation is considered, where the proposed excitation/compression scheme maximizes the signal-to-noise ratio (SNR) and minimizes sidelobes at the output of the compression filter. A pulse with linear frequency modulation (LFM) is used for coded excitation. The excitation/compression scheme is based on the fast digital mismatched filtering. The parameter optimization of the excitation/compression scheme includes (i) choice of an optimal filtering function for the mismatched filtering; (ii) choice of an optimal window function for tapering of the chirp amplitude; (iii) optimization of a chirp-to-transducer bandwidth ratio; (iv) choice of an appropriate n-bit quantizer. The simulation results show that the excitation/compression scheme can be implemented as a Dolph-Chebyshev filter including amplitude tapering of the chirp with a Lanczos window. An example of such an optimized system is given where the chirp bandwidth is chosen to be 2.5 times the transducer bandwidth and equals 6 MHz: The sidelobes are suppressed to -80 dB, for a central frequency of 4 MHz, and to -94 dB, for a central frequency of 8 MHz. The corresponding improvement of the SNR is 18 and 21 dB, respectively, when compared to a conventional short pulse imaging system. Simulation of B-mode images demonstrates the advantage of coded excitation systems of detecting regions with low contrast.

  7. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Science.gov (United States)

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.

  8. Should compression of coded waveforms be done before or after focusing

    DEFF Research Database (Denmark)

    Bjerngaard, R.T.; Jensen, Jørgen Arendt

    2002-01-01

    In medical ultrasound signal-to-noise ratio improvements of approximately 15-20 dB can be achieved by using coded waveforms. Exciting the transducer with an encoded waveform necessitates compression of the response which is computationally demanding. This paper investigates the possibility...

  9. DESIGN OF MODULATION AND COMPRESSION CODING IN UNDERWATER ACOUSTIC IMAGE TRANSMISSION

    Institute of Scientific and Technical Information of China (English)

    程恩; 余丽敏; 林耿超

    2002-01-01

    This paper describes the design of modulation, compression coding and transmission control in an underwater acoustic color image transmission system. The design adopts a special system of modulation and transmission control based on a DSP (Digital Signal Processing) chip, to cope with the complex underwater acoustic channel. The hardware block diagram and software flow chart are presented.

  11. Random wavelet transforms, algebraic geometric coding, and their applications in signal compression and de-noising

    Energy Technology Data Exchange (ETDEWEB)

    Bieleck, T.; Song, L.M.; Yau, S.S.T. [Univ. of Illinois, Chicago, IL (United States); Kwong, M.K. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-07-01

    The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.

  12. Complete Focal Plane Compression Based on CMOS Image Sensor Using Predictive Coding

    Institute of Scientific and Technical Information of China (English)

    Yao Suying; Yu Xiao; Gao Jing; Xu Jiangtao

    2015-01-01

    In this paper, a CMOS image sensor (CIS) is proposed which can accomplish both decorrelation and entropy coding of image compression directly on the focal plane. The design is based on predictive coding for image decorrelation. The predictions are performed in the analog domain by 2×2 pixel units. Both the prediction residuals and the original pixel values are quantized and encoded in parallel. Since the residuals have a peaked distribution around zero, the output codewords can be replaced by the valid part of the residuals' binary representation. The compressed bit stream is accessible directly at the output of the CIS without extra processing. Simulation results show that the proposed approach achieves a compression rate of 2.2 and a PSNR of 51 on different test images.

  13. Hierarchical prediction and context adaptive coding for lossless color image compression.

    Science.gov (United States)

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform and then Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined and the arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.

  14. A high capacity text steganography scheme based on LZW compression and color coding

    Directory of Open Access Journals (Sweden)

    Aruna Malik

    2017-02-01

    In this paper, the capacity and security issues of text steganography are addressed by employing the LZW compression technique and a color-coding based approach. The proposed technique uses the forward mail platform to hide the secret data. The algorithm first compresses the secret data and then hides the compressed data in the email addresses and in the cover message of the email. The secret data bits are embedded in the message (or cover text) by coloring it using a color coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.

  15. Data compression in wireless sensors network using MDCT and embedded harmonic coding.

    Science.gov (United States)

    Alsalaet, Jaafar K; Ali, Abduladhem A

    2015-05-01

    One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purpose of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression, which, in addition to increasing the effective sampling rate, saves valuable power in the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. EHCC is applied to exploit the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. The scheme is made suitable for the tiny hardware of wireless nodes and is shown to be fast and effective. The efficiency of the proposed scheme is investigated through several experimental tests.

  16. Lossy Source Compression of Non-Uniform Binary Sources Using GQ-LDGM Codes

    CERN Document Server

    Cappellari, Lorenzo

    2010-01-01

    In this paper, we study the use of GF(q)-quantized LDGM codes for binary source coding. By employing quantization, it is possible to obtain binary codewords with a non-uniform distribution. The resulting statistics are hence suitable for optimal, direct quantization of non-uniform Bernoulli sources. We employ a message-passing algorithm combined with a decimation procedure in order to perform compression. Experimental results based on GF(q)-LDGM codes with regular degree distributions yield performance quite close to the theoretical rate-distortion bounds.

  17. Some possible codes for encrypting data in DNA.

    Science.gov (United States)

    Smith, Geoff C; Fiddes, Ceridwyn C; Hawkins, Jonathan P; Cox, Jonathan P L

    2003-07-01

    Three codes are reported for storing written information in DNA. We refer to these codes as the Huffman code, the comma code and the alternating code. The Huffman code was devised using Huffman's algorithm for constructing economical codes. The comma code uses a single base to punctuate the message, creating an automatic reading frame and DNA which is obviously artificial. The alternating code comprises an alternating sequence of purines and pyrimidines, again creating DNA that is clearly artificial. The Huffman code would be useful for routine, short-term storage purposes, supposing--not unrealistically--that very fast methods for assembling and sequencing large pieces of DNA can be developed. The other two codes would be better suited to archiving data over long periods of time (hundreds to thousands of years).
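
    To give the flavor of a Huffman-style DNA code, the toy table below assigns shorter base strings to more frequent characters and is prefix-free over {A, C, G, T}, so decoding needs no separators. The table, the message, and the function names are hypothetical and are not the codes published in the paper.

    ```python
    # Hypothetical prefix-free code over DNA bases: frequent symbols get shorter codewords.
    CODE = {"e": "A", "t": "C", " ": "G", "a": "TA", "o": "TC", "n": "TG"}
    DECODE = {v: k for k, v in CODE.items()}

    def to_dna(text):
        return "".join(CODE[ch] for ch in text)

    def from_dna(dna):
        out, buf = [], ""
        for base in dna:
            buf += base
            if buf in DECODE:          # prefix-free, so the first match is the right one
                out.append(DECODE[buf])
                buf = ""
        return "".join(out)

    message = "ate a neat tea"
    assert from_dna(to_dna(message)) == message
    ```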

  18. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    Science.gov (United States)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented, and a new pseudorandom key stream generator is constructed from it. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system; the key sequence used for image encryption is related to the plaintext. The image data are compressed by means of the second-generation curvelet transform, run-length coding, and Huffman coding, so that compression and encryption are performed jointly in a single process. The security test results indicate that the proposed method has high security and good compression performance.

  19. Transference & Retrieval of Pulse-code modulation Audio over Short Messaging Service

    CERN Document Server

    Khan, Muhammad Fahad

    2012-01-01

    This paper presents a method for transferring PCM (Pulse-Code Modulation) audio messages through SMS (Short Message Service) over the GSM (Global System for Mobile Communications) network. Since SMS is a text-based service, it cannot carry voice directly. Our method enables voice transfer through SMS by converting PCM audio into characters. A Huffman coding compression technique is then applied to reduce the number of characters, which are subsequently set as the payload text of the SMS. To test the method, we developed an application on the J2ME platform.

  20. To Improvement in Image Compression ratio using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    Shabbir Ahmad

    2015-10-01

    Compression of data in any form is a large and active field as well as a big business. This paper presents a neural network based technique that may be applied to data compression. The technique breaks large images into smaller windows, eliminates redundant information, and uses a neural network trained by direct solution methods. Conventional techniques such as Huffman coding, the Shannon-Fano method, the LZ method, run-length coding, and LZ-77 are discussed, as well as more recent methods for data compression. Intelligent methods for data compression are reviewed, including the use of backpropagation and Kohonen neural networks. The proposed technique has been implemented in C on the SP2 and tested on digital mammograms and other images. The results obtained are presented in this paper.

  1. Low complexity efficient raw SAR data compression

    Science.gov (United States)

    Rane, Shantanu; Boufounos, Petros; Vetro, Anthony; Okada, Yu

    2011-06-01

    We present a low-complexity method for compression of raw Synthetic Aperture Radar (SAR) data. Raw SAR data is typically acquired using a satellite or airborne platform without sufficient computational capability to process the data and generate a SAR image on board. Hence, the raw data needs to be compressed and transmitted to the ground station, where SAR image formation can be carried out. To perform low-complexity compression, our method uses 1-dimensional transforms, followed by quantization and entropy coding. In contrast to previous approaches, which send uncompressed or Huffman-coded bits, we achieve more efficient entropy coding using an arithmetic coder that responds to a continuously updated probability distribution. We present experimental results on compression of raw Ku-SAR data, in which we evaluate the effect of the transform length on compression performance and demonstrate the advantages of the proposed framework over a state-of-the-art low-complexity scheme called Block Adaptive Quantization (BAQ).

  2. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-12-01

    The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three-dimensional information of a scene through two-dimensional (2D) coded projection measurements on a focal plane array (FPA). A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is made with respect to noise tolerance.

  3. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Compressed pattern matching (CPM) refers to the task of locating all occurrences of a pattern (or set of patterns) inside a body of compressed text. In this type of matching, the pattern itself may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in intrusion detection over networks, big data analytics, etc. Various solutions have been proposed in which the pattern is matched directly over uncompressed text; such solutions require a great deal of space and time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering that data sizes are increasing exponentially, CPM has become a desirable capability. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.

  4. Huffman Coding Used in Compression of the 1-bit Code Stream of a Beamformer Based on Sigma-delta ADCs

    Institute of Scientific and Technical Information of China (English)

    韩雪梅; 彭虎; 杜宏伟; 陈强; 冯焕清

    2005-01-01

    A beamformer based on oversampled sigma-delta ADCs performs high-quality focused delay-and-sum directly on the phase information contained in the 1-bit code stream produced by the oversampled sigma-delta ADC. However, the rate of this 1-bit stream is extremely high, so in general it cannot be sent directly over a USB interface to a computer for beamforming and other subsequent processing; it must first be losslessly compressed, i.e., the stream rate must be reduced while the phase information needed for beamforming is preserved. Huffman coding is used to compress the high-speed 1-bit stream. The results show that Huffman coding reduces the stream by more than half, making it possible to transfer the 1-bit code stream over a USB interface.
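
    A quick way to see why more than 2:1 lossless compression of the 1-bit stream is plausible is to group the bits into fixed-length symbols and measure their zeroth-order entropy, which bounds what a Huffman code over those symbols can approach. The byte-sized grouping below is an assumption for illustration, not the paper's exact scheme.

    ```python
    import numpy as np
    from collections import Counter

    def bytewise_entropy(bits):
        """Zeroth-order entropy (bits per symbol) of the stream grouped into 8-bit symbols."""
        symbols = [tuple(bits[i:i + 8]) for i in range(0, len(bits) - 7, 8)]
        counts = np.array(list(Counter(symbols).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # A Huffman code over these byte symbols needs roughly bytewise_entropy(bits)/8 of the raw
    # bit rate, so an entropy well below 4 bits/symbol is consistent with >2:1 compression.
    ```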

  5. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    Science.gov (United States)

    Kim, Dong-Sun; Kwon, Jin-San

    2014-09-18

    Research on real-time health systems has received great attention in recent years, and the need for high-quality multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of the residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding stage of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72%, compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel biosignal lossless data compressor.

  6. A mixed transform approach for efficient compression of medical images.

    Science.gov (United States)

    Ramaswamy, A; Mikhael, W B

    1996-01-01

    A novel technique is presented to compress medical data employing two or more mutually non-orthogonal transforms. Both lossy and lossless compression implementations are considered. The signal is first resolved into subsignals such that each subsignal is compactly represented in a particular transform domain. An efficient lossy representation of the signal is achieved by superimposing the dominant coefficients corresponding to each subsignal. The residual error, which is the difference between the original and the reconstructed signal, is properly formulated, and adaptive algorithms in conjunction with an optimization strategy are developed to minimize this error. Both two-dimensional (2-D) and three-dimensional (3-D) versions of the technique are developed. It is shown that for a given number of retained coefficients, the discrete cosine transform (DCT)-Walsh mixed transform representation yields a more compact representation than using the DCT or Walsh transform alone. This lossy technique is further extended to the lossless case. The coefficients are quantized and the signal is reconstructed; the reconstructed signal samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted using a lossless technique such as Huffman coding. It is shown that for a given number of retained coefficients, the mixed transforms again produce a smaller RMS modified residual error. The first-order entropy of the error is also smaller for the mixed-transform technique than for the DCT, resulting in shorter Huffman codes.

  7. A Compressible High-Order Unstructured Spectral Difference Code for Stratified Convection in Rotating Spherical Shells

    CERN Document Server

    Wang, Junfeng; Miesch, Mark S

    2015-01-01

    We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12,000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar co...

  8. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    Science.gov (United States)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  9. Low Complexity DCT-based DSC approach forHyperspectral Image Compression with Arithmetic Code

    Directory of Open Access Journals (Sweden)

    Meena Babu Vallakati

    2012-09-01

    This paper proposes a low-complexity codec for lossy compression of a sample hyperspectral image. Such images contain two kinds of redundancy: 1) spatial and 2) spectral. A discrete cosine transform (DCT) based distributed source coding (DSC) paradigm with arithmetic coding is introduced for low complexity. A set-partitioning based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure, since set partitioning normally operates on wavelet transforms, and to extract the sign, refinement, and significance bitplanes. The extracted refinement bits are arithmetic encoded, and a low-density parity-check (LDPC) based Slepian-Wolf coder is then applied to implement the DSC strategy. Experimental results on SAMSON (Spectroscopic Aerial Mapping System with Onboard Navigation) data show that the proposed scheme achieves good peak signal-to-noise ratio and compression for the water cube compared with the building, land, and forest cubes.

  10. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    Science.gov (United States)

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.

  11. Inferential multi-spectral image compression based on distributed source coding

    Science.gov (United States)

    Wu, Xian-yun; Li, Yun-song; Wu, Cheng-ke; Kong, Fan-qiang

    2008-08-01

    Based on analyses of interferential multispectral imagery (IMI), a new compression algorithm based on distributed source coding is proposed. There are apparent push motions between the IMI sequences, so the relative shift between two images is detected by a block matching algorithm at the encoder. Our algorithm estimates the rate of each bitplane using the estimated side-information frame, and then adopts a ROI coding algorithm in which a rate-distortion lifting procedure is carried out in the rate allocation stage. Using our algorithm, the FBC can be removed from the traditional scheme. The compression algorithm developed in this paper can obtain up to 3 dB gain compared with JPEG2000 and significantly reduces complexity and storage consumption compared with 3D-SPIHT, at the cost of a slight degradation in PSNR.

  12. LP Decoding meets LP Decoding: A Connection between Channel Coding and Compressed Sensing

    CERN Document Server

    Dimakis, Alexandros G

    2009-01-01

    This is a tale of two linear programming decoders, namely channel coding linear programming decoding (CC-LPD) and compressed sensing linear programming decoding (CS-LPD). So far, they have evolved quite independently. The aim of the present paper is to show that there is a tight connection between, on the one hand, CS-LPD based on a zero-one measurement matrix over the reals and, on the other hand, CC-LPD of the binary linear code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows one to translate performance guarantees from one setup to the other.

  13. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    Science.gov (United States)

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low-power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing the data readout speed. Compared to previous architectures, this system modulates the exposure of each individual photodiode electronically, without external optical components, and thus provides a reduction in size and power compared with previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps video from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps video.

  14. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 MB card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that makes more sense from a theoretical standpoint. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.

  15. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. [Los Alamos National Lab., NM (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
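
    The overall shape of the WSQ pipeline (wavelet decomposition, scalar quantization, then Huffman coding of the quantizer indices) can be sketched with PyWavelets as below. The 3-level 'bior4.4' decomposition and the single uniform step size are simplifying assumptions and do not reproduce the standard's 64-subband structure or its Huffman tables.

    ```python
    import numpy as np
    import pywt

    def wsq_like_indices(image, q=8.0, levels=3):
        """Wavelet transform followed by uniform scalar quantization; the returned
        integer indices are what would be entropy-coded with Huffman tables."""
        coeffs = pywt.wavedec2(image.astype(float), "bior4.4", level=levels)
        flat, _slices = pywt.coeffs_to_array(coeffs)
        return np.rint(flat / q).astype(np.int32)
    ```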

  17. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  18. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    G. Valenzise

    2009-02-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.

  19. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) the result remains a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
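
    A minimal sketch of the encoder-side measurement idea described above is given below, assuming a simple block-wise variant: each retained pixel is a +/-1-weighted sum of the block it replaces, standing in for the paper's local random binary convolution followed by polyphase down-sampling (the function name and block size are illustrative).

        import numpy as np

        def local_random_measurements(image, block=2, seed=0):
            # Replace the usual low-pass pre-filter with a local random binary
            # (+/-1) kernel, then down-sample: each output pixel is a random
            # weighted sum of the block x block neighbourhood it covers.
            rng = np.random.default_rng(seed)
            h, w = image.shape
            h, w = h - h % block, w - w % block          # crop to a multiple of the block size
            img = image[:h, :w].astype(float)
            kernel = rng.choice([-1.0, 1.0], size=(h, w))
            prod = img * kernel
            meas = prod.reshape(h // block, block, w // block, block).sum(axis=(1, 3))
            return meas, kernel                          # the kernel doubles as a description key

        img = np.arange(64, dtype=float).reshape(8, 8)
        meas, key = local_random_measurements(img, block=2)
        print(meas.shape)   # (4, 4): a quarter-size "image" of local random measurements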

  20. Relation between temporal envelope coding, pitch discrimination, and compression estimates in listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal

    2015-01-01

    Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding......, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude...... resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss

  1. Analysis of Doppler Effect on the Pulse Compression of Different Codes Emitted by an Ultrasonic LPS

    Directory of Open Access Journals (Sweden)

    Jorge Morera

    2011-11-01

    Full Text Available This work analyses the effect of the receiver movement on the detection by pulse compression of different families of codes characterizing the emissions of an Ultrasonic Local Positioning System. Three families of codes have been compared: Kasami, Complementary Sets of Sequences and Loosely Synchronous, considering in all cases three different lengths close to 64, 256 and 1,024 bits. This comparison is first carried out by using a system model in order to obtain a set of results that are then experimentally validated with the help of an electric slider that provides radial speeds up to 2 m/s. The performance of the codes under analysis has been characterized by means of the auto-correlation and cross-correlation bounds. The results derived from this study should be of interest to anyone performing matched filtering of ultrasonic signals with a moving emitter/receiver.

  2. Analysis of Doppler effect on the pulse compression of different codes emitted by an ultrasonic LPS.

    Science.gov (United States)

    Paredes, José A; Aguilera, Teodoro; Alvarez, Fernando J; Lozano, Jesús; Morera, Jorge

    2011-01-01

    This work analyses the effect of the receiver movement on the detection by pulse compression of different families of codes characterizing the emissions of an ultrasonic local positioning system. Three families of codes have been compared: Kasami, Complementary Sets of Sequences and Loosely Synchronous, considering in all cases three different lengths close to 64, 256 and 1,024 bits. This comparison is first carried out by using a system model in order to obtain a set of results that are then experimentally validated with the help of an electric slider that provides radial speeds up to 2 m/s. The performance of the codes under analysis has been characterized by means of the auto-correlation and cross-correlation bounds. The results derived from this study should be of interest to anyone performing matched filtering of ultrasonic signals with a moving emitter/receiver.

  3. CODEVECTOR MODELING USING LOCAL POLYNOMIAL REGRESSION FOR VECTOR QUANTIZATION BASED IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    P. Arockia Jansi Rani

    2010-08-01

    Full Text Available Image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. In this paper, a still image compression scheme driven by a Self-Organizing Map with polynomial regression modeling and entropy coding, employed within the wavelet framework, is presented. The image compressibility and interpretability are improved by incorporating noise reduction into the compression scheme. The implementation begins with the classical wavelet decomposition and quantization, followed by a Huffman encoder. The codebook for the quantization process is designed using an unsupervised learning algorithm and further modified using polynomial regression to control the amount of noise reduction. Simulation results show that the proposed method reduces the bit rate significantly and provides better perceptual quality than earlier methods.

  4. A Power Efficient Sensing/Communication Scheme: Joint Source-Channel-Network Coding by Using Compressive Sensing

    CERN Document Server

    Feizi, Soheil

    2011-01-01

    We propose a joint source-channel-network coding scheme, based on compressive sensing principles, for wireless networks with AWGN channels (that may include multiple access and broadcast), with sources exhibiting temporal and spatial dependencies. Our goal is to provide a reconstruction of sources within an allowed distortion level at each receiver. We perform joint source-channel coding at each source by randomly projecting source values to a lower dimensional space. We consider sources that satisfy the restricted eigenvalue (RE) condition as well as more general sources for which the randomness of the network allows a mapping to lower dimensional spaces. Our approach relies on using analog random linear network coding. The receiver uses compressive sensing decoders to reconstruct sources. Our key insight is the fact that, compressive sensing and analog network coding both preserve the source characteristics required for compressive sensing decoding.

  5. Compression Algorithm for PML Documents Based on the Internet of Things

    Institute of Scientific and Technical Information of China (English)

    李奕; 付晓梅; 卢毅; 戴居丰

    2012-01-01

    To address the massive volume of PML-format files in the Internet of Things, an improved compression algorithm based on lossless Huffman coding is proposed. Exploiting the characteristics of PML syntax, PML elements are separated from the data content, weights are assigned per element, and a Huffman tree is then built together with the data. Simulation results show that the compression ratio of the improved algorithm, about 6.0:1, is higher than the roughly 1.6:1 of standard Huffman coding, providing more efficient transmission for the Internet of Things.
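
    The core of the scheme is ordinary Huffman coding over a symbol set in which each PML element tag carries a single aggregate weight alongside the character-data symbols. A minimal Python sketch of that idea is shown below; the element names and weights are hypothetical, since the real algorithm parses PML syntax to obtain them.

        import heapq
        from collections import Counter
        from itertools import count

        def huffman_code(weights):
            # Build a Huffman code table from a {symbol: weight} mapping.
            tick = count()                      # tie-breaker so the heap never compares symbols
            heap = [[w, next(tick), [sym, ""]] for sym, w in weights.items()]
            heapq.heapify(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)
                hi = heapq.heappop(heap)
                for pair in lo[2:]:
                    pair[1] = "0" + pair[1]
                for pair in hi[2:]:
                    pair[1] = "1" + pair[1]
                heapq.heappush(heap, [lo[0] + hi[0], next(tick), *lo[2:], *hi[2:]])
            return dict(heap[0][2:])

        # Hypothetical example: character data plus two element tags, each tag
        # carrying one aggregate weight as in the scheme described above.
        weights = Counter("23.517 23.518 23.519")
        weights["<Observation>"] = 40
        weights["<value>"] = 25
        print(huffman_code(weights))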

  6. A novel technique for image steganography based on Block-DCT and Huffman Encoding

    CERN Document Server

    Nag, A; Sarkar, D; Sarkar, P P; 10.5121/ijcsit.2010.2308

    2010-01-01

    Image steganography is the art of hiding information in a cover image. This paper presents a novel technique for image steganography based on Block-DCT, where the DCT is used to transform the original (cover) image blocks from the spatial domain to the frequency domain. First, a gray-level image of size M x N is divided into disjoint 8 x 8 blocks and a two-dimensional Discrete Cosine Transform (2-d DCT) is performed on each of the P = MN / 64 blocks. Huffman encoding is then performed on the secret messages/images before embedding, and each bit of the Huffman code of the secret message/image is embedded in the frequency domain by altering the least significant bit of each of the DCT coefficients of the cover image blocks. The experimental results show that the algorithm has a high capacity and good invisibility. Moreover, the PSNR of the cover image with the stego-image shows better results in comparison with other existing steganography approaches. Furthermore, satisfactory security is maintained since the secret message/image ca...
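
    The embedding step described above can be sketched as follows. This is a simplified illustration, not the authors' code: the quantization step, the 8x8 orthonormal DCT matrix and the choice to skip the DC coefficient are assumptions, and the bit string stands in for the Huffman code of the secret message.

        import numpy as np

        def dct_matrix(n=8):
            # Orthonormal DCT-II matrix, so an 8x8 block transforms as C @ block @ C.T.
            k = np.arange(n)
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            c[0, :] = np.sqrt(1.0 / n)
            return c

        def embed_bits(cover, bits, q_step=16):
            # Write each bit into the least-significant bit of a quantized
            # block-DCT coefficient of the cover image (DC coefficient skipped).
            C = dct_matrix(8)
            img = cover.astype(float).copy()
            bit_iter = iter(bits)
            for r in range(0, img.shape[0] - 7, 8):
                for s in range(0, img.shape[1] - 7, 8):
                    q = np.round((C @ img[r:r+8, s:s+8] @ C.T) / q_step).astype(int)
                    flat = q.flatten()
                    for i in range(1, flat.size):
                        b = next(bit_iter, None)
                        if b is None:
                            break
                        flat[i] = (flat[i] & ~1) | int(b)
                    img[r:r+8, s:s+8] = C.T @ (flat.reshape(8, 8) * q_step) @ C
            return np.clip(np.round(img), 0, 255).astype(np.uint8)

        cover = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
        stego = embed_bits(cover, bits="1011001110001111")
        print(np.abs(stego.astype(int) - cover.astype(int)).max())  # distortion from quantization + embedding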

  7. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    Directory of Open Access Journals (Sweden)

    Hsieh Fushing

    Full Text Available High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.

  8. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    Science.gov (United States)

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to correspondingly vary as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors.

  9. Image Compression Technique Based on Discrete 2-D wavelet transforms with Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Deepika Sunoriya

    2012-06-01

    Full Text Available Digital images play a very important role in describing detailed information about man, money and machine in almost every field. The various processes of digitizing images to obtain them in the best quality, for clearer and more accurate information, lead to the requirement of more storage space and better storage and accessing mechanisms in the form of hardware or software. In this paper we apply a technique for image compression. Our proposed approach is the combination of several approaches to make the compression better than the previously used approach. In this technique we first apply the Walsh transformation and split all DC values from each transformed 8x8 block. After that we apply arithmetic coding to compress the image. In this paper we also present a brief survey of several image compression techniques.

  10. Spatio-temporal Compressed Sensing with Coded Apertures and Keyed Exposures

    CERN Document Server

    Harmany, Zachary T; Willett, Rebecca M

    2011-01-01

    Optical systems which measure independent random projections of a scene according to compressed sensing (CS) theory face a myriad of practical challenges related to the size of the physical platform, photon efficiency, the need for high temporal resolution, and fast reconstruction in video settings. This paper describes a coded aperture and keyed exposure approach to compressive measurement in optical systems. The proposed projections satisfy the Restricted Isometry Property for sufficiently sparse scenes, and hence are compatible with theoretical guarantees on the video reconstruction quality. These concepts can be implemented in both space and time via either amplitude modulation or phase shifting, and this paper describes the relative merits of the two approaches in terms of theoretical performance, noise and hardware considerations, and experimental results. Fast numerical algorithms which account for the nonnegativity of the projections and temporal correlations in a video sequence are developed and appl...

  11. Bit-Based Joint Source-Channel Decoding of Huffman Encoded Markov Multiple Sources

    Directory of Open Access Journals (Sweden)

    Weiwei Xiang

    2010-04-01

    Full Text Available Multimedia transmission over time-varying channels such as wireless channels has recently motivated research on joint source-channel techniques. In this paper, we present a method for joint source-channel soft decision decoding of Huffman encoded multiple sources. By exploiting the a priori bit probabilities in multiple sources, the decoding performance is greatly improved. Compared with the single source decoding scheme addressed by Marion Jeanne, the proposed technique is more practical in wideband wireless communications. Simulation results show our new method obtains substantial improvements with a minor increase in complexity. For two sources, the gain in SNR is around 1.5 dB by using convolutional codes when the symbol-error rate (SER) reaches 10^-2, and around 2 dB by using Turbo codes.

  12. Implementation of Huffman Decoder on Fpga

    OpenAIRE

    Safia Amir Dahri; Dr Abdul Fattah Chandio

    2016-01-01

    Lossless data compression algorithms are among the most widely used algorithms in data transmission, reception and storage systems, as they increase the data rate and speed and save a lot of space on storage devices. Nowadays, different algorithms are implemented in hardware to achieve the benefits of hardware realizations. Hardware implementation of algorithms, digital signal processing algorithms and filter realization is done on programmable devices, i.e. FPGAs. In lossless data compression algorith...

  13. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
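
    The measurement side of computational ghost imaging can be sketched as follows, assuming a basic intensity-correlation reconstruction rather than the CS decoder used in the paper; the pattern count, toy object and function names are illustrative.

        import numpy as np

        def ghost_imaging(obj, n_patterns=2000, seed=42):
            # Project random illumination patterns (the shared key) onto the object
            # and record one bucket (single-pixel) value per pattern, then form the
            # conventional GI estimate  <(B - <B>) * I(x, y)>.
            rng = np.random.default_rng(seed)
            h, w = obj.shape
            patterns = rng.random((n_patterns, h, w))
            buckets = (patterns * obj).sum(axis=(1, 2))
            recon = np.tensordot(buckets - buckets.mean(), patterns, axes=1) / n_patterns
            return recon

        # Toy binary "QR-like" object: a few bright modules on a dark background.
        obj = np.zeros((16, 16))
        obj[2:6, 2:6] = 1.0
        obj[10:14, 9:13] = 1.0
        recon = ghost_imaging(obj)
        print(np.corrcoef(recon.ravel(), obj.ravel())[0, 1])   # correlation with the object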

  14. Advanced low-complexity compression for maskless lithography data

    Science.gov (United States)

    Dai, Vito; Zakhor, Avideh

    2004-05-01

    A direct-write maskless lithography system using 25nm for 50nm feature sizes requires data rates of about 10 Tb/s to maintain a throughput of one wafer per minute per layer achieved by today's optical lithography systems. In a previous paper, we presented an architecture that achieves this data rate contingent on 25 to 1 compression of lithography data, and on implementation of a real-time decompressor fabricated on the same chip as a massively parallel array of lithography writers for 50 nm feature sizes. A number of compression techniques, including JBIG, ZIP, the novel 2D-LZ, and BZIP2, were demonstrated to achieve sufficiently high compression ratios on lithography data to make the architecture feasible, although no single technique could achieve this for all test layouts. In this paper we present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4) specifically tailored for lithography data. It successfully combines the advantages of context-based modeling in JBIG and copying in ZIP to achieve higher compression ratios across all test layouts. As part of C4, we have developed a low-complexity binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and 2D-LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for grey-pixel image data. The tradeoff between decoder buffer size, which directly affects implementation complexity, and compression ratio is examined. For the same buffer size, C4 achieves higher compression than LZ77, ZIP, and BZIP2.

  15. Sub-Nyquist sampling and detection in Costas coded pulse compression radars

    Science.gov (United States)

    Hanif, Adnan; Mansoor, Atif Bin; Imran, Ali Shariq

    2016-12-01

    Modern pulse compression radar involves digital signal processing of high bandwidth pulses modulated with different coding schemes. One of the limiting factors in the radar's design to achieve the desired target range and resolution is the need for high-rate analog-to-digital (A/D) conversion fulfilling the Nyquist sampling criterion. The high sampling rates necessitate huge storage capacity, more power consumption, and extra processing requirements. We introduce a new approach to sample wideband radar waveforms modulated with a Costas sequence at a sub-Nyquist rate based upon the concept of compressive sensing (CS). Sub-Nyquist measurements of the Costas sequence waveform are performed in an analog-to-information (A/I) converter based upon random demodulation, replacing the traditional A/D converter. The work presents an eighth-order Costas coded waveform with sub-Nyquist sampling and its reconstruction. The reconstructed waveform is compared with the conventionally sampled signal and shows high-quality signal recovery from the sub-Nyquist sampled signal. Furthermore, the performance of CS-based detection after reconstruction is evaluated in terms of receiver operating characteristic (ROC) curves and compared with the conventional Nyquist-rate matched filtering scheme.
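
    A minimal sketch of the random-demodulation front end referred to above (the analog-to-information idea) is shown below; the hop pattern, rates and decimation factor are illustrative, and the CS reconstruction step is omitted.

        import numpy as np

        def random_demodulator(x, decimation, seed=0):
            # Multiply the Nyquist-rate signal by a pseudorandom +/-1 chipping
            # sequence, then integrate-and-dump every 'decimation' samples,
            # giving measurements at a sub-Nyquist rate.
            rng = np.random.default_rng(seed)
            n = len(x) - len(x) % decimation
            chips = rng.choice([-1.0, 1.0], size=n)
            mixed = x[:n] * chips
            return mixed.reshape(-1, decimation).sum(axis=1), chips

        # Illustrative Costas-like stepped-frequency pulse (hypothetical hop pattern),
        # sampled at fs and then measured at fs/8.
        fs, hops = 8000.0, [3, 1, 4, 0, 6, 5, 2, 7]
        t = np.arange(0, len(hops) * 0.01, 1.0 / fs)
        seg = len(t) // len(hops)
        pulse = np.concatenate([np.cos(2 * np.pi * (400 + 300 * h) * t[:seg]) for h in hops])
        y, key = random_demodulator(pulse, decimation=8)
        print(len(pulse), "->", len(y), "sub-Nyquist measurements")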

  16. Simultaneous denoising and compression of multispectral images

    Science.gov (United States)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  17. Lossless Compression Performance of a Simple Counter-Based Entropy Coder

    OpenAIRE

    Armein Z. R. Langi

    2011-01-01

    This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless entropy coders, such as the Huffman coder and the arithmetic coder, are designed to perform well over a wide range of data entropy. As a result, these coders require significant computational resources that can be the bottleneck of a compression implementation's performance. In contrast, counter-based coders are designed to be optimal on a limited entro...

  18. Computationally efficient sub-band coding of ECG signals.

    Science.gov (United States)

    Husøy, J H; Gjerde, T

    1996-03-01

    A data compression technique is presented for the compression of discrete time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superiority in terms of computational efficiency. We conclude that the present scheme, which is suitable for real time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
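
    The back end of the coder described above (thresholding, uniform quantization, run-length coding, then Huffman coding) can be sketched on a single sub-band as follows; the threshold and step size are illustrative and the final Huffman stage is omitted.

        import numpy as np

        def subband_backend(subband, threshold, q_step):
            # Dead-zone thresholding, uniform quantization, then run-length coding
            # into (value, run) pairs; the pairs would then feed a Huffman coder.
            x = np.where(np.abs(subband) < threshold, 0.0, subband)
            q = np.round(x / q_step).astype(int)
            runs, i = [], 0
            while i < len(q):
                j = i
                while j < len(q) and q[j] == q[i]:
                    j += 1
                runs.append((int(q[i]), j - i))
                i = j
            return runs

        band = np.array([0.02, -0.01, 0.0, 0.0, 0.9, 1.1, 0.0, 0.0, 0.0, -0.7])
        print(subband_backend(band, threshold=0.1, q_step=0.25))
        # -> [(0, 4), (4, 2), (0, 3), (-3, 1)]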

  19. Oncologic image compression using both wavelet and masking techniques.

    Science.gov (United States)

    Yin, F F; Gao, Q

    1997-12-01

    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown.
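
    The masking step can be sketched as follows: coefficients whose location falls outside the region-of-interest are set to zero before quantization and entropy coding, so they cost almost nothing to code. Block-wise down-sampling of the mask to the subband resolution is a simplification here; a real implementation would account for the wavelet filters' support.

        import numpy as np

        def mask_subband(coeffs, roi_mask):
            # Zero the wavelet coefficients of one subband outside the ROI mask.
            # The full-resolution boolean mask is block-downsampled to the subband size.
            fy = roi_mask.shape[0] // coeffs.shape[0]
            fx = roi_mask.shape[1] // coeffs.shape[1]
            small = roi_mask.reshape(coeffs.shape[0], fy, coeffs.shape[1], fx).any(axis=(1, 3))
            return np.where(small, coeffs, 0.0)

        # Toy example: 8x8 subband of a 16x16 image whose ROI is the upper-left quadrant.
        mask = np.zeros((16, 16), dtype=bool)
        mask[:8, :8] = True
        sub = np.ones((8, 8))
        print(mask_subband(sub, mask).sum())   # only the masked-in 4x4 corner survives -> 16.0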

  20. N-Square Approach for the Erection of Redundancy Codes

    Directory of Open Access Journals (Sweden)

    G. Srinivas,

    2010-04-01

    Full Text Available This paper addresses the area of data compression, which is an application of image processing. Several lossy and lossless coding techniques have been developed throughout the last two decades. Although very high compression can be achieved with lossy compression techniques, they cannot recover the original image exactly, whereas lossless compression techniques do. In applications related to medical imaging, lossless techniques are required, as the loss of information is unacceptable. The objective of image compression is to represent an image with as few bits as possible while preserving the quality required for the given application. In this paper we introduce a new lossless compression technique which further reduces the entropy, and thereby the average number of bits, through non-binary Huffman coding using an N-Square approach. Our extensive experimental results demonstrate that the proposed scheme is very competitive, and it addresses the limitations of the D value in the existing system by proposing a pattern called the N-Square approach for it. The newly proposed algorithm provides a good means for lossless image compression.

  1. Wave Mode Discrimination of Coded Ultrasonic Guided Waves Using Two-Dimensional Compressed Pulse Analysis.

    Science.gov (United States)

    Malo, Sergio; Fateri, Sina; Livadas, Makis; Mares, Cristinel; Gan, Tat-Hean

    2017-07-01

    Ultrasonic guided waves testing is a technique successfully used in many industrial scenarios worldwide. For many complex applications, the dispersive nature and multimode behavior of the technique still poses a challenge for correct defect detection capabilities. In order to improve the performance of the guided waves, a 2-D compressed pulse analysis is presented in this paper. This novel technique combines the use of pulse compression and dispersion compensation in order to improve the signal-to-noise ratio (SNR) and temporal-spatial resolution of the signals. The ability of the technique to discriminate different wave modes is also highlighted. In addition, an iterative algorithm is developed to identify the wave modes of interest using adaptive peak detection to enable automatic wave mode discrimination. The employed algorithm is developed in order to pave the way for further in situ applications. The performance of Barker-coded and chirp waveforms is studied in a multimodal scenario where longitudinal and flexural wave packets are superposed. The technique is tested in both synthetic and experimental conditions. The enhancements in SNR and temporal resolution are quantified as well as their ability to accurately calculate the propagation distance for different wave modes.

  2. Hyperspectral image compression using 3D discrete cosine transform and entropy-constrained trellis-coded quantization

    Science.gov (United States)

    Abousleman, Glen P.; Marcellin, Michael W.; Hunt, Bobby R.

    1994-07-01

    A system is presented for compression of hyperspectral imagery which utilizes trellis coded quantization (TCQ). Specifically, TCQ is used to encode transform coefficients resulting from the application of an 8X8X8 discrete cosine transform. Side information and rate allocation strategies are discussed. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This entropy constrained system achieves a compression ratio of greater than 70:1 with an average PSNR of the coded hyperspectral sequence exceeding 40.5 dB.

  3. IC image compression technology based on energy entropy distribution gradient and Huffman coding

    Institute of Scientific and Technical Information of China (English)

    梁忠伟; 张春良; 叶邦彦; 江帆; 胡晓

    2009-01-01

    The remote online monitoring of IC chip manufacturing places ever increasing demands on the storage, processing and transmission of chip image information. An IC image compression technique based on the energy entropy distribution gradient and Huffman coding is proposed. By establishing the energy entropy distribution gradient, a feature plane reflecting the details of the chip image can be extracted, and Huffman coding is then applied to encode and compress the image, so that the detail features of the image can be described at a high compression ratio. Programming implementation and image decompression experiments show that the method obtains fairly stable compression results and clear decompressed images, providing a basis for online remote monitoring of chip manufacturing.

  4. Partial Encryption of Entropy-Coded Video Compression Using Coupled Chaotic Maps

    Directory of Open Access Journals (Sweden)

    Fadi Almasalha

    2014-10-01

    Full Text Available Due to pervasive communication infrastructures, a plethora of enabling technologies is being developed over mobile and wired networks. Among these, video streaming services over IP are the most challenging in terms of quality, real-time requirements and security. In this paper, we propose a novel scheme to efficiently secure variable length coded (VLC multimedia bit streams, such as H.264. It is based on code word error diffusion and variable size segment shuffling. The codeword diffusion and the shuffling mechanisms are based on random operations from a secure and computationally efficient chaos-based pseudo-random number generator. The proposed scheme is ubiquitous to the end users and can be deployed at any node in the network. It provides different levels of security, with encrypted data volume fluctuating between 5.5–17%. It works on the compressed bit stream without requiring any decoding. It provides excellent encryption speeds on different platforms, including mobile devices. It is 200% faster and 150% more power efficient when compared with AES software-based full encryption schemes. Regarding security, the scheme is robust to well-known attacks in the literature, such as brute force and known/chosen plain text attacks.

  5. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [UNIV OF AZ; Vasic, Bane [UNIV OF AZ

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error-patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output is different from the actual error-pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error-vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 x 120 matrix, which outputs the shortest instanton (error vector) pattern of length 11.

  6. APPLICATION OF IMPROVED HYBRID COMPRESSION ALGORITHM IN GPS DATA COMPRESSION

    Institute of Scientific and Technical Information of China (English)

    周桂宇; 马宪民; 李卫斌

    2013-01-01

    In this paper we introduce a hybrid compression algorithm, combining the Huffman algorithm and the RLE algorithm, for compressing GPS data. The algorithm acquires the statistical characteristics of GPS data according to the NMEA 0183 protocol and mixes the Huffman algorithm with the RLE algorithm to compress GPS data, improving the coding efficiency and restraining data expansion. The Huffman algorithm has a high compression rate on repeated single-byte data, while the RLE algorithm has a high compression rate on repeated code segments. A flag bit is added in the encoding process for the classified processing of GPS data, so that the outputs of the two algorithms can be effectively identified when decoding and the compressed data can be completely decoded. This improved hybrid compression algorithm is applied to local storage and 3G remote transmission of vehicle terminal GPS data; results show that the algorithm clearly improves the compression performance for GPS data.
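
    The flag-bit idea described above can be sketched as a segmentation pass: runs of identical bytes above a threshold are tagged for the RLE coder and everything else is tagged for the Huffman coder, so the decoder can tell the two outputs apart. The flag convention, run threshold and sample sentence are illustrative, and the Huffman stage itself is omitted.

        def hybrid_segments(data: bytes, min_run: int = 4):
            # Split a byte stream into flagged segments:
            #   ("RLE", byte_value, run_length) for runs of >= min_run identical bytes,
            #   ("HUF", chunk)                  for everything else (fed to a Huffman coder).
            segments, literal, i = [], bytearray(), 0
            while i < len(data):
                j = i
                while j < len(data) and data[j] == data[i]:
                    j += 1
                run = j - i
                if run >= min_run:
                    if literal:
                        segments.append(("HUF", bytes(literal)))
                        literal = bytearray()
                    segments.append(("RLE", data[i], run))
                else:
                    literal += data[i:j]
                i = j
            if literal:
                segments.append(("HUF", bytes(literal)))
            return segments

        # Illustrative NMEA-style sentence (empty fields give runs of commas and zeros).
        sentence = b"$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,,,,0000*47"
        print(hybrid_segments(sentence))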

  7. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    Science.gov (United States)

    Matsuoka, R.

    2014-05-01

    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of target centers by the intensity-weighted centroid method. I utilized six images of a white sheet with 30 by 20 black filled circles in the experiment. The images were acquired by a digital camera, a Canon EOS 20D. The image data were compressed using two compression parameter sets (a downsampling ratio, a quantization table and a Huffman code table) utilized in the EOS 20D. The experiment results clearly indicate that lossy JPEG compression of an image with chromatic aberrations produces a significant effect on the measurement accuracy of target centers by the intensity-weighted centroid method. The maximum displacements of the red, green and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed image data and compressed image data. In conclusion, since the author considers that displacements caused by lossy JPEG compression cannot be corrected, the author recommends that lossy JPEG compression should not be executed before recording an image in a digital camera in the case of highly precise image measurement using color images acquired by a non-metric digital camera.
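
    The intensity-weighted centroid measurement used in the experiment can be sketched as below; inverting the intensities so that the dark target carries the weight is an assumption about the convention, and the toy window is synthetic.

        import numpy as np

        def intensity_weighted_centroid(window, invert=True):
            # Intensity-weighted centroid of a target inside a small image window.
            # For dark targets on a bright sheet the intensities are inverted first
            # so that the target pixels carry the weight.
            w = window.astype(float)
            if invert:
                w = w.max() - w
            ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
            return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

        # Toy target: a dark disc centred near (10.3, 9.7) on a bright background.
        yy, xx = np.mgrid[0:21, 0:21]
        window = np.where((yy - 10.3) ** 2 + (xx - 9.7) ** 2 <= 36, 20, 230).astype(np.uint8)
        print(intensity_weighted_centroid(window))   # close to (10.3, 9.7) up to discretization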

  8. A novel wavelet based approach for near lossless image compression using modified duplicate free run length coding

    Directory of Open Access Journals (Sweden)

    Pacha Sreenivasulu

    2014-12-01

    Full Text Available In this paper we present a three-stage near-lossless image compression scheme. It belongs to the class of lossless coding and consists of a wavelet-based decomposition followed by modified duplicate-free run-length coding. We select the optimum bit rate to guarantee minimum MSE (mean square error) and high PSNR (peak signal to noise ratio), and also ensure that the time required for computation is very low, unlike other compression schemes. Hence we propose 'a wavelet based novel approach for near lossless image compression', which is very useful for real-time applications. The proposed method is compared with EZW, SPIHT and SOFM, and it outperforms them.

  9. Improving quality of medical image compression using biorthogonal CDF wavelet based on lifting scheme and SPIHT coding

    Directory of Open Access Journals (Sweden)

    Beladgham Mohammed

    2011-01-01

    Full Text Available As the coming era is that of digitized medical information, an important challenge to deal with is the storage and transmission requirements of enormous data, including medical images. Compression is one of the indispensable techniques to solve this problem. In this work, we propose an algorithm for medical image compression based on the biorthogonal wavelet transform CDF 9/7 coupled with the SPIHT coding algorithm, to which we apply the lifting structure to overcome the drawbacks of the conventional wavelet transform. In order to enhance the compression achieved by our algorithm, we have compared the results obtained with wavelet-based filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested images. Our algorithm provides very high PSNR and MSSIM values for MRI images.

  10. Terrain Data Hybrid Entropy Coding Compression Based on Lifting Wavelet and Real-time Rendering

    Institute of Scientific and Technical Information of China (English)

    郭浩然; 庞建民

    2012-01-01

    High-resolution terrain Digital Elevation Models (DEM) and orthophotos impose a heavy load in terms of data storage, scheduling and real-time rendering. A high-performance terrain data compression method is proposed based on the lifting wavelet transform and a parallel hybrid entropy codec, combined with Graphics Processing Unit (GPU) ray-casting to achieve large-scale 3D terrain visualization. First, the multi-resolution wavelet transform model of a terrain tile is constructed to map the refinement and simplification operations. Then the multi-resolution quadtrees of the DEM and the terrain texture are built separately based on the lifting wavelet transform, and the sparse wavelet coefficients generated by quantization are compressed by a hybrid entropy codec that combines parallel run-length coding and variable-length Huffman coding. The compressed data are organized into a progressive stream for real-time decoding and rendering. The lifting wavelet transform and hybrid entropy codec are implemented on the GPU with the Compute Unified Device Architecture (CUDA). Experimental results show that the method achieves an effective data compression ratio, PSNR and encode-decode throughput, and the high frame rate (FPS) in real-time rendering satisfies the demands of interactive visualization.

  11. Remotely sensed image compression based on wavelet transform

    Science.gov (United States)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length-encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with the LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal to noise ratio) and classification capability.

  12. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Rik Van de Walle

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  13. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Directory of Open Access Journals (Sweden)

    Lerouge Sam

    2007-01-01

    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.

  14. Lossless image compression technique for infrared thermal images

    Science.gov (United States)

    Allred, Lloyd G.; Kelly, Gary E.

    1992-07-01

    The authors have achieved a 6.5-to-one image compression technique for thermal images (640 X 480, 1024 colors deep). Using a combination of new and more traditional techniques, the combined algorithm is computationally simple, enabling 'on-the-fly' compression and storage of an image in less time than it takes to transcribe the original image to or from a magnetic medium. Similar compression has been achieved on visual images by virtue of the feature that all optical devices possess a modulation transfer function. As a consequence of this property, the difference in color between adjacent pixels is usually a small number, often between -1 and +1 graduations for a meaningful color scheme. By differentiating adjacent rows and columns, the original image can be expressed in terms of these small numbers. A simple compression algorithm for these small numbers achieves a four-to-one image compression. By piggy-backing this technique with an LZW compression or a fixed Huffman coding, an additional 35% image compression is obtained, resulting in a 6.5-to-one lossless image compression. Because traditional noise-removal operators tend to minimize the color graduations between adjacent pixels, an additional 20% reduction can be obtained by preprocessing the image with a noise-removal operator. Although noise removal operators are not lossless, their application may prove crucial in applications requiring high compression, such as the storage or transmission of a large number of images. The authors are working with the Air Force Photonics Technology Application Program Management office to apply this technique to transmission of optical images from satellites.
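
    The row/column differencing idea described above is easy to sketch: keep the first column and replace every other pixel by its difference with its left neighbour (modulo 256, which keeps the residual in one byte and makes the transform exactly reversible); most residuals of a smooth image then fall in the small range that the simple small-number coder exploits. The example below is a simplified 8-bit, column-only illustration of that front end, not the authors' implementation.

        import numpy as np

        def diff_transform(img):
            # Reversible horizontal differencing: keep the first column, replace
            # every other column by its difference with the previous one (mod 256).
            d = img.astype(np.int16)
            d[:, 1:] = d[:, 1:] - d[:, :-1]
            return (d % 256).astype(np.uint8)

        def inverse_diff_transform(d):
            return (np.cumsum(d.astype(np.int64), axis=1) % 256).astype(np.uint8)

        img = np.linspace(0, 255, 64 * 64).reshape(64, 64).astype(np.uint8)
        residual = diff_transform(img)
        assert np.array_equal(inverse_diff_transform(residual), img)   # lossless round trip
        small = np.isin(residual[:, 1:].astype(np.int8), [-1, 0, 1]).mean()
        print(f"{small:.1%} of the differences fall in -1..+1")        # smooth images: nearly all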

  15. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.

  16. Huffman decoding module based on hardware and software co-design

    Institute of Scientific and Technical Information of China (English)

    刘华; 刘卫东; 邢文峰

    2011-01-01

    With the rapid development of multimedia technology, digital audio technology has also developed rapidly. MP3 is a lossy audio compression format with a high compression rate; at present it is widely used in many fields and has good market prospects. This paper implements the Huffman decoding module of MP3 based on a hardware and software co-design approach. The solution proposed in this paper can not only efficiently realize the Huffman decoding module of MP3, but can also be applied to the Huffman decoding modules of WMA, AAC and other audio formats; the approach ensures efficiency while taking the module's versatility into account.

  17. ADAPTIVE TCHEBICHEF MOMENT TRANSFORM IMAGE COMPRESSION USING PSYCHOVISUAL MODEL

    Directory of Open Access Journals (Sweden)

    Ferda Ernawan

    2013-01-01

    Full Text Available An extension of standard JPEG image compression known as JPEG-3 allows rescaling of the quantization matrix to achieve a certain image output quality. Recently, the Tchebichef Moment Transform (TMT) has been introduced in the field of image compression and has been shown to perform better than standard JPEG image compression. This study presents an adaptive TMT image compression. This is achieved by generating custom quantization tables for low, medium and high image output quality levels based on a psychovisual model. A psychovisual model is developed to approximate the visual threshold on Tchebichef moments from the image reconstruction error. The contribution of each moment is investigated and analyzed in a quantitative experiment. The sensitivity of the TMT basis functions can be measured by evaluating their contributions to image reconstruction for each moment order. The psychovisual threshold model allows a developer to design several custom TMT quantization tables for a user to choose from according to his or her target output preference. Consequently, these quantization tables produce a lower average Huffman code length while still retaining higher image quality than the extended JPEG scaling scheme.

  18. Finite element stress analysis of a compression mold. Final report. [Using SASL and WILSON codes

    Energy Technology Data Exchange (ETDEWEB)

    Watterson, C.E.

    1980-03-01

    Thermally induced stresses occurring in a compression mold during production molding were evaluated using finite element analysis. A complementary experimental stress analysis, including strain gages and thermocouple arrays, verified the finite element model under typical loading conditions.

  19. An innovative lossless compression method for discrete-color images.

    Science.gov (United States)

    Alzahir, Saif; Borici, Arber

    2015-01-01

    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, GIS, as well as binary images. This method comprises two main components. The first is a fixed-size codebook encompassing 8×8 bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images, which are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images; and 2) binary images. The results show that our method compresses images from both categories (discrete-color and binary images) by about 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average.

  20. Tonal Language Speech Compression Based on a Bitrate Scalable Multi-Pulse Based Code Excited Linear Prediction Coder

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Full Text Available Problem statement: Speech compression is an important issue in modern digital speech communication. The functionality of bitrate scalability also plays a significant role, since the capacity of a communication system varies all the time. When considering tonal speech, such as Thai, tone plays an important role in the naturalness and the intelligibility of the speech, so it must be treated appropriately. These issues are therefore taken into account in this study. Approach: This study proposes a modification of a flexible Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder with bitrate scalability for tonal language speech in multimedia applications. The coder consists of a core coder and bitrate scalable tools. High pitch-delay resolutions are applied to the adaptive codebook of the core coder to improve tonal language speech quality. The bitrate scalable tool employs multi-stage excitation coding based on an embedded-coding approach. The multi-pulse excitation codebook at each stage is adaptively produced depending on the selected excitation signal at the previous stage. Results: The experimental results show that the speech quality of the proposed coder is improved over that of the conventional coder without pitch-resolution adaptation. Conclusion: The proposed approach improves speech compression quality for tonal languages, and the functionality of bitrate scalability is also provided.

  1. ECG signal compression by multi-iteration EZW coding for different wavelets and thresholds.

    Science.gov (United States)

    Tohumoglu, Gülay; Sezgin, K Erbil

    2007-02-01

    The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was originally derived for image compression based on Shapiro's EZW algorithm. It is shown that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions in order to specify which transformed coefficients are significant. Thus, two different threshold definitions, namely percentage and dyadic thresholds, are used, and they are applied to different wavelet types in the biorthogonal and orthogonal classes. The MEZW and EZW results are quantitatively compared in terms of the compression ratio (CR) and percentage root mean square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and an original ECG signal. It is observed that the MEZW algorithm shows a clear advantage in the CR achieved for a given PRD over the traditional EZW, and it gives better results for the biorthogonal wavelets than for the orthogonal wavelets.
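
    The two figures of merit quoted in this record, compression ratio (CR) and percentage root-mean-square difference (PRD), can be computed as in the following Python sketch; the ECG samples and bit counts in the example are synthetic stand-ins, not MIT-BIH data.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR = size of the original bit stream over size of the compressed bit stream."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percentage root-mean-square difference between a signal and its reconstruction."""
    x = np.asarray(x, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

if __name__ == "__main__":
    ecg = np.sin(np.linspace(0, 8 * np.pi, 1000))          # toy stand-in for an ECG record
    ecg_rec = ecg + np.random.normal(0, 0.01, ecg.shape)   # pretend reconstruction error
    print("CR  =", compression_ratio(1000 * 11, 2750))     # hypothetical bit counts
    print("PRD =", prd(ecg, ecg_rec))
```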

  2. Lattice codes for the Gaussian relay channel: Decode-and-Forward and Compress-and-Forward

    CERN Document Server

    Song, Yiwei

    2011-01-01

    Lattice codes are known to achieve capacity in the Gaussian point-to-point channel, thereby achieving the same rates as random Gaussian codebooks. Lattice codes are also known to outperform random codes for certain channel models that are able to exploit their linearity. In this work, we show that lattice codes may be used to achieve the same performance as known Gaussian random coding techniques for the Gaussian relay channel, and show several examples of how this may be combined with the linearity of lattice codes in multi-source relay networks. In particular, we present a nested lattice list decoding technique by which lattice codes are shown to achieve the Decode-and-Forward (DF) rate of single source, single destination Gaussian relay channels with one or more relays. We next present a few examples of how this DF scheme may be combined with the linearity of lattice codes to achieve rates which may outperform analogous Gaussian random coding techniques in multi-source relay channels such as the two-way...

  3. Soft and Joint Source-Channel Decoding of Quasi-Arithmetic Codes

    Science.gov (United States)

    Guionnet, Thomas; Guillemot, Christine

    2004-12-01

    The issue of robust and joint source-channel decoding of quasi-arithmetic codes is addressed. Quasi-arithmetic coding is a reduced-precision and reduced-complexity implementation of arithmetic coding, which amounts to approximating the distribution of the source. The approximation of the source distribution leads to the introduction of redundancy that can be exploited for robust decoding in the presence of transmission errors. Hence, this approximation controls both the trade-off between compression efficiency and complexity and, at the same time, the redundancy (excess rate) introduced by this suboptimality. This paper first provides a state model of a quasi-arithmetic coder and decoder for binary and M-ary sources. The design of an error-resilient soft decoding algorithm follows quite naturally. The compression efficiency of quasi-arithmetic codes makes it possible to add extra redundancy in the form of markers designed specifically to prevent desynchronization. The algorithm is directly amenable to iterative source-channel decoding in the spirit of serial turbo codes. The coding and decoding algorithms have been tested for a wide range of channel signal-to-noise ratios (SNRs). Experimental results reveal improved symbol error rate (SER) and SNR performances against Huffman and optimal arithmetic codes.

  4. Data compression on the sphere

    CERN Document Server

    McEwen, J D; Eyers, D M; 10.1051/0004-6361/201015728

    2011-01-01

    Large data-sets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the anisotropies of the cosmic microwave background (CMB) made on the celestial sphere contain approximately three and fifty mega-pixels respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an energy compression stage to reduce the entropy of the data, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance on simulated CMB data, Earth topography data and environmental illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 20% if a small cosmological information loss is tolerated. For the topographic and il...
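
    The energy-compaction-then-entropy-coding pipeline described here uses a Haar wavelet transform on the sphere; as a hedged, purely one-dimensional sketch of the same idea, the following Python code applies one level of a 1D Haar transform, thresholds the small coefficients, and run-length encodes the result (the spherical transform and the Huffman stage are omitted).

```python
import numpy as np

def haar_1d(x):
    """One level of the (orthonormal) 1D Haar transform: averages followed by details."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    det = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([avg, det])

def run_length_encode(symbols):
    """Encode a sequence as (value, run length) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(v, n) for v, n in runs]

if __name__ == "__main__":
    signal = np.repeat([3.0, 3.2, 7.0, 7.1], 64)       # piecewise-smooth toy data
    coeffs = haar_1d(signal)
    quantized = np.round(coeffs / 0.5).astype(int)      # coarse uniform quantizer
    quantized[np.abs(quantized) < 1] = 0                # threshold small details
    print("nonzero coefficients:", np.count_nonzero(quantized), "of", quantized.size)
    print("first runs:", run_length_encode(quantized.tolist())[:5])
```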

  5. Progressive encoding with non-linear source codes for compression of low-entropy sources

    OpenAIRE

    Ramírez Javega, Francisco; Lamarca Orozco, M. Meritxell; García Frías, Javier

    2010-01-01

    We propose a novel scheme for source coding of non-uniform memoryless binary sources based on progressively encoding the input sequence with non-linear encoders. At each stage, a number of source bits is perfectly recovered, and these bits are thus not encoded in the next stage. The last stage consists of an LDPC code acting as a source encoder over the bits that have not been recovered in the previous stages.

  6. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    Science.gov (United States)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large acoustic impedance mismatch at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) of the received signal, so signal-processing techniques are particularly valuable in this kind of non-destructive testing. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to increase the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels of the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are analysed to obtain a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrate that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a promising tool for evaluating the integrity of composite materials with high ultrasound attenuation using ACUT.
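
    The pulse-compression stage relies on cross-correlating the received trace with a Barker code; a minimal Python sketch of that matched-filter step (omitting the wavelet filtering stage, and using a synthetic noisy trace rather than real ACUT data) is given below.

```python
import numpy as np

# 13-bit Barker code, the longest known Barker sequence (peak-to-sidelobe ratio 13:1).
BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(received, code):
    """Matched-filter (cross-correlation) pulse compression of a received trace."""
    return np.correlate(received, code, mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = rng.normal(0.0, 1.0, 512)           # noisy A-scan with an echo starting at sample 200
    trace[200:200 + BARKER_13.size] += 2.0 * BARKER_13
    out = pulse_compress(trace, BARKER_13)
    print("matched-filter peak at sample", int(np.argmax(np.abs(out))))
```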

  7. Review of design codes of concrete encased steel short columns under axial compression

    Directory of Open Access Journals (Sweden)

    K.Z. Soliman

    2013-08-01

    In recent years, the use of concrete encased steel columns has increased significantly in medium-rise and high-rise buildings. The aim of the present investigation is to assess experimentally the current methods and codes for evaluating the ultimate load behavior of concrete encased steel short columns. The current design provisions for composite columns from the Egyptian codes ECP203-2007 and ECP-SC-LRFD-2012, as well as the American Institute of Steel Construction AISC-LRFD-2010, the American Concrete Institute ACI-318-2008, and British Standard BS-5400-5, were reviewed. The portion of the axial capacity carried by the encased steel section and by the concrete section was also studied according to these codes. Ten concrete encased steel columns were investigated experimentally to study the effect of concrete confinement and of different types of encased steel sections. The measured axial capacity of the ten tested composite columns was compared with the values calculated by the above-mentioned codes. It is concluded that non-negligible discrepancies exist between the codes and the experimental results, as the confinement effect is not considered in predicting either the strength or the ductility of the concrete. The confining effect was clearly influenced by the shape of the encased steel section: the tube-shaped steel section leads to better confinement than the SIB section. Among the codes used, ECP-SC-LRFD-2012 led to the most conservative results.

  8. Real-time distortionless high-factor compression scheme.

    Science.gov (United States)

    Liénard, J

    1989-01-01

    Nowadays, digital subtraction angiography systems must be able to sustain real-time acquisition (30 frames per second) of 512 x 512 x 8 bit images and store several sequences of such images on low cost and general-purpose mass memories. Concretely, that means a 7.8 Mbytes per second rate and about 780 Mbytes disk space to hold a 100-s cardiac examination. To fulfill these requirements at competitive cost, a distortionless compressor/decompressor system can be designed: during acquisition, the real-time compressor transforms the input images into a lower quantity of coded information through a predictive coder and a variable-length Huffman code. The process is fully reversible because during review, the real-time decompressor exactly recovers the acquired images from the stored compressed data. Test results on many raw images demonstrate that real-time compression is feasible and takes place with absolutely no loss of information. The designed system indifferently works on 512 or 1024 formats, and 256 or 1024 gray levels.
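
    The front end of such a distortionless compressor is a predictive (DPCM) coder whose residuals are then Huffman coded; the following Python sketch, using a synthetic scan line rather than angiographic data, shows how previous-sample prediction lowers the empirical entropy that a subsequent variable-length code would have to approach.

```python
import numpy as np

def dpcm_residuals(row):
    """Previous-sample predictor: transmit the first sample, then the differences."""
    row = np.asarray(row, dtype=np.int32)
    res = np.empty_like(row)
    res[0] = row[0]
    res[1:] = row[1:] - row[:-1]
    return res

def entropy_bits_per_symbol(values):
    """Empirical zeroth-order entropy, a lower bound on the rate of any symbol code."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    # Smooth synthetic scan line standing in for one row of an angiographic frame.
    x = np.linspace(0, 4 * np.pi, 512)
    row = np.clip(128 + 60 * np.sin(x) + np.random.normal(0, 1, x.size), 0, 255).astype(np.int32)
    res = dpcm_residuals(row)
    print("raw entropy      :", round(entropy_bits_per_symbol(row), 2), "bits/pixel")
    print("residual entropy :", round(entropy_bits_per_symbol(res), 2), "bits/pixel")
```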

  9. Response to the Critique of the Huffman (2014) Article, "Reading Rate Gains during a One-Semester Extensive Reading Course"

    Science.gov (United States)

    Huffman, Jeffrey

    2016-01-01

    In his critique of the Huffman (2014) article, McLean (2016) undertakes an important reflective exercise that is too often missing in the field of second language acquisition and in the social sciences in general: questioning whether the claims made by researchers are warranted by their results. In this article, Jeffrey Huffman says that McLean…

  10. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    Science.gov (United States)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.

  11. Characterization of coded random access with compressive sensing based multi user detection

    DEFF Research Database (Denmark)

    Ji, Yalei; Stefanovic, Cedomir; Bockelmann, Carsten

    2014-01-01

    The emergence of Machine-to-Machine (M2M) communication requires new Medium Access Control (MAC) schemes and physical (PHY) layer concepts to support a massive number of access requests. The concept of coded random access, introduced recently, greatly outperforms other random access methods...

  12. Video compression using lapped transforms for motion estimation/compensation and coding

    Science.gov (United States)

    Young, Robert W.; Kingsbury, Nick G.

    1992-11-01

    Many conventional video coding schemes, such as the CCITT H.261 recommendation, are based on the independent processing of non-overlapping image blocks. An important disadvantage with this approach is that blocking artifacts may be visible in the decoded frames. In this paper, we propose a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects. Motion estimation and compensation are both performed in the frequency domain using a complex lapped transform (CLT), which may be viewed as a complex extension of the lapped orthogonal transform (LOT). The motion compensation algorithm is equivalent to overlapped compensation in the spatial domain, but also allows image interpolation for sub-pel displacements and sophisticated loop filters to be conveniently applied in the frequency domain. For inter- and intra-frame coding, we define the modified fast lapped transform (MFLT). This is a modified form of the LOT, which entirely eliminates blocking artifacts in the reconstructed data. The transform is applied in a hierarchical structure, and performs better than the discrete cosine transform (DCT) for both coding modes. The proposed coder is compared with the H.261 scheme, and is found to have significantly improved performance.

  13. Video compression using lapped transforms for motion estimation compensation and coding

    Science.gov (United States)

    Young, Robert W.; Kingsbury, Nick G.

    1993-07-01

    Many conventional video coding schemes, such as the CCITT H.261 recommendation, are based on the independent processing of nonoverlapping image blocks. An important disadvantage with this approach is that blocking artifacts may be visible in the decoded frames. We propose a coding scheme based entirely on the processing of overlapping, windowed data blocks, thus eliminating blocking effects. Motion estimation and, in part, compensation are performed in the frequency domain using a complex lapped transform (CLT), which can be viewed as a complex extension of the lapped orthogonal transform (LOT). The motion compensation algorithm is equivalent to overlapped compensation in the spatial domain, but also allows image interpolation for subpixel displacements and sophisticated loop filters to be conveniently applied in the frequency domain. For inter- and intraframe coding, we define the modified fast lapped transform (MFLT). This is a modified form of the LOT that entirely eliminates blocking artifacts in the reconstructed data. The transform is applied in a hierarchical structure, and performs better than the discrete cosine transform (DCT) for both coding modes. The proposed coder is compared with the H.261 scheme and is found to have significantly improved performance.

  14. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    Science.gov (United States)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  15. Lossless Compression Performance of a Simple Counter-Based Entropy Coder

    Directory of Open Access Journals (Sweden)

    Armein Z.R. Langi

    2013-09-01

    This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless data compression coders, such as the Huffman coder and the arithmetic coder, are designed to perform well over a wide range of data entropy. As a result, the coders require significant computational resources that could be the bottleneck of a compression implementation's performance. In contrast, counter-based coders are designed to be optimal on a limited entropy range only. This paper shows that the encoding and decoding process of a counter-based coder can be simple and fast, very suitable for hardware and software implementations. It also reports that the performance of the designed coder is comparable to that of a much more complex Huffman coder.

  16. Lossless Compression Performance of a Simple Counter-Based Entropy Coder

    Directory of Open Access Journals (Sweden)

    Armein Z R Langi

    2011-12-01

    This paper describes the performance of a simple counter-based entropy coder, as compared to other entropy coders, especially the Huffman coder. Lossless data compression coders, such as the Huffman coder and the arithmetic coder, are designed to perform well over a wide range of data entropy. As a result, the coders require significant computational resources that could be the bottleneck of a compression implementation's performance. In contrast, counter-based coders are designed to be optimal on a limited entropy range only. This paper shows that the encoding and decoding process of a counter-based coder can be simple and fast, very suitable for hardware and software implementations. It also reports that the performance of the designed coder is comparable to that of a much more complex Huffman coder.
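
    For reference, a Huffman coder of the kind used as the comparison baseline in this record can be built in a few lines; the following Python sketch constructs the code with a heap and compares its average codeword length with the source entropy (the counter-based coder itself is not reproduced here).

```python
import heapq
from collections import Counter
from math import log2

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bit string) from a symbol -> count mapping."""
    if len(freqs) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freqs)): "0"}
    heap = [(count, i, {sym: ""}) for i, (sym, count) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        c1, _, code1 = heapq.heappop(heap)
        c2, _, code2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in code1.items()}
        merged.update({s: "1" + c for s, c in code2.items()})
        heapq.heappush(heap, (c1 + c2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

if __name__ == "__main__":
    text = "abracadabra" * 20
    freqs = Counter(text)
    code = huffman_code(freqs)
    total = sum(freqs.values())
    avg_len = sum(freqs[s] * len(code[s]) for s in freqs) / total
    entropy = -sum((n / total) * log2(n / total) for n in freqs.values())
    print("entropy        :", round(entropy, 3), "bits/symbol")
    print("Huffman length :", round(avg_len, 3), "bits/symbol")
```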

  17. Terminal Cancer:Malignant Spinal Cord Compression and Full Code Status

    Institute of Scientific and Technical Information of China (English)

    Yaseen Ali; Amila M. Parekh; Rahul K. Rao; Mirza R. Baig

    2014-01-01

    Background: Malignant spinal cord compression has significantly increased hospitalization costs, and even with the best treatment approach the disease course remains relatively stable with dire outcomes. Case presentation: The patient was an 80-year-old male with a past medical history of hypertension and stroke with chronic right-sided weakness, recently diagnosed with non-squamous cell lung carcinoma stage T4N0Mx and undergoing outpatient chemotherapy with carboplatin and taxol, who presented to the emergency room with the chief complaint of right leg pain with weakness and chest pain for 1-2 days. On day 4 of the admission the patient complained of chest pain again and a CT angiogram was ordered as part of the work-up, based on a high probability for a pulmonary embolus per the Wells score. The CT angiogram revealed a large soft tissue mass centered at the T5 vertebral body with probable spinal canal invasion. Conclusion: A more favorable outcome requires the input of both a surgeon and a radiation oncologist to find the most effective approach depending on the area involved and the extent of the lesion, and the patient's choice of treatment must always be respected as well. Despite aggressive treatment the patient did not respond well and was deteriorating. Options were discussed with the patient, including the futility of care and lack of response. The patient opted to return home with hospice care and was subsequently discharged home with family.

  18. Irregular Segmented Region Compression Coding Based on Pulse Coupled Neural Network

    Institute of Scientific and Technical Information of China (English)

    MA Yi-de; QI Chun-liang; QIAN Zhi-bai; SHI Fei; ZHANG Bei-dou

    2006-01-01

    An irregular segmented region coding algorithm based on a pulse coupled neural network (PCNN) is presented. PCNN has the properties of pulse coupling and a changeable threshold, through which adjacent pixels with similar gray values can be activated simultaneously. One can conclude that PCNN has the advantage of realizing regional segmentation: the details of the original image can be recovered by adjusting the parameters of the segmented images while trivial segmented regions are avoided. For a better approximation of the irregular segmented regions, the Gram-Schmidt method, by which a group of orthonormal basis functions is constructed from a group of linearly independent initial basis functions, is adopted. Because of the orthonormal reconstruction method, the quality of the reconstructed image can be greatly improved and progressive image transmission also becomes possible.
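
    The Gram-Schmidt step mentioned above turns linearly independent initial basis functions into an orthonormal set; a minimal Python/numpy sketch of classical Gram-Schmidt on plain vectors (not on the irregular-region basis functions of the paper) is shown below.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent basis vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w -= np.dot(w, q) * q      # remove the component along each previous direction
        norm = np.linalg.norm(w)
        if norm > 1e-12:               # skip (near-)dependent vectors
            basis.append(w / norm)
    return np.array(basis)

if __name__ == "__main__":
    initial = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
    Q = gram_schmidt(initial)
    print(np.round(Q @ Q.T, 6))        # identity matrix confirms orthonormality
```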

  19. Spatial super-resolution in coded aperture-based optical compressive hyperspectral imaging systems

    Directory of Open Access Journals (Sweden)

    Hoover Fabian Rueda Chacón

    2013-01-01

    The coded aperture snapshot spectral imaging (CASSI) system is a remarkable optical architecture that captures the spectral information of a scene using coded two-dimensional projections. The projections in CASSI are arranged such that each measurement contains only spectral information specific to one region of the data cube. The spatial resolution of the CASSI system depends strongly on the resolution of the detector used; thus, high-resolution images require high-resolution detectors, which in turn are costly. As a solution to this problem, this article proposes a super-resolution optical model, called SR-CASSI, for improving the spatial resolution of hyperspectral images. Spatial super-resolution is achieved by solving an inverse problem with a compressive sensing (CS) algorithm whose input is the captured low-resolution coded measurements. This model allows the reconstruction of super-resolved hyperspectral data cubes whose spatial resolution is significantly increased. Simulation results show an improvement of more than 8 dB in PSNR when the proposed model is used.

  20. Data Compression on Zero Suppressed High Energy Physics Data

    CERN Document Server

    Schindler, M; CERN. Geneva

    1996-01-01

    Future High Energy Physics experiments will produce unprecedented data volumes (up to 1 GB/s [1]). In most cases it will be impossible to analyse these data in real time and they will have to be stored on durable, mostly magnetic linear media (e.g. tapes) for later analysis. This threatens to become a major cost factor for the running of these experiments. Here we present some ideas, developed together with the Institute of Computer Graphics, Department for Algorithms and Programming, on how this volume and the related cost can be reduced significantly. The algorithms presented are not general ones but are aimed in particular at physics experiment data. Taking advantage of knowledge of the data, they are highly superior to general ones (Huffman, LZW, arithmetic coding) both in compression rate and, more importantly, in speed, so as to keep up with the output rate of modern tape drives. The above standard algorithms are, however, used after the data have been transferred into a more 'compressible' data space. These algorithm...

  1. Rate-adaptive Constellation Shaping for Near-capacity Achieving Turbo Coded BICM

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.

    2014-01-01

    In this paper the problem of constellation shaping is considered. Mapping functions are designed for a many- to-one signal shaping strategy, combined with a turbo coded Bit-interleaved Coded Modulation (BICM), based on symmetric Huffman codes with binary reflected Gray-like properties. An algorit...

  2. Optimal source codes for geometrically distributed integer alphabets

    Science.gov (United States)

    Gallager, R. G.; Van Voorhis, D. C.

    1975-01-01

    An approach is shown for using the Huffman algorithm indirectly to prove the optimality of a code for an infinite alphabet if an estimate concerning the nature of the code can be made. Attention is given to nonnegative integers with a geometric probability assignment. The particular distribution considered arises in run-length coding and in encoding protocol information in data networks. Questions of redundancy of the optimal code are also investigated.
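
    The optimal codes for geometrically distributed nonnegative integers discussed in this record are the Golomb codes; a hedged Python sketch of Golomb encoding (unary quotient plus truncated-binary remainder, assuming a parameter m >= 2 chosen to match the source) follows.

```python
def golomb_encode(n, m):
    """Golomb code of a nonnegative integer n with parameter m >= 2."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"                   # unary part: q ones terminated by a zero
    # b = ceil(log2 m); remainders below the cutoff use b-1 bits (truncated binary)
    b = m.bit_length() if m & (m - 1) else m.bit_length() - 1
    cutoff = (1 << b) - m
    if r < cutoff:
        bits += format(r, "0{}b".format(b - 1)) if b > 1 else ""
    else:
        bits += format(r + cutoff, "0{}b".format(b))
    return bits

if __name__ == "__main__":
    # For a geometric source P(n) = (1-p) p^n, a Golomb code with a suitable m is optimal,
    # which is the classical result this record refers to.
    for n in range(8):
        print(n, golomb_encode(n, m=3))
```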

  3. Speckle Reduction for Ultrasonic Imaging Using Frequency Compounding and Despeckling Filters along with Coded Excitation and Pulse Compression

    Directory of Open Access Journals (Sweden)

    Joshua S. Ullom

    2012-01-01

    A method for improving the contrast-to-noise ratio (CNR) while maintaining the −6 dB axial resolution of ultrasonic B-mode images is proposed. The technique proposed is known as eREC-FC, which enhances a recently developed REC-FC technique. REC-FC is a combination of the coded excitation technique known as resolution enhancement compression (REC) and the speckle-reduction technique frequency compounding (FC). In REC-FC, image CNR is improved but at the expense of a reduction in axial resolution. However, by compounding various REC-FC images made from various subband widths, the tradeoff between axial resolution and CNR enhancement can be extended. Further improvements in CNR can be obtained by applying postprocessing despeckling filters to the eREC-FC B-mode images. The despeckling filters evaluated were the following: median, Lee, homogeneous mask area, geometric, and speckle-reducing anisotropic diffusion (SRAD). Simulations and experimental measurements were conducted with a single-element transducer (f/2.66) having a center frequency of 2.25 MHz and a −3 dB bandwidth of 50%. In simulations and experiments, the eREC-FC technique resulted in the same axial resolution that would be typically observed with conventional excitation with a pulse. Moreover, increases in CNR of 348% were obtained in experiments when comparing eREC-FC with a Lee filter to conventional pulsing methods.

  4. Compression of WAVE sound files

    OpenAIRE

    BAKLI, Meriem

    2014-01-01

    This final-year project concerns a comparative study of the compression of a sound file. Compression is the action used to reduce the physical size of a block of information. Several compression algorithms exist, such as Huffman, etc. We compressed an uncompressed WAVE sound file to a compressed MP3 file with different coding formats and different frames, for both mono and stereo files. From ...

  5. Compressive Sensing Over Networks

    CERN Document Server

    Feizi, Soheil; Effros, Michelle

    2010-01-01

    In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information theoretic approaches is at their decoding side. Although optimal decoders to recover the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multi...

  6. Reserved-Length Prefix Coding

    CERN Document Server

    Baer, Michael B

    2008-01-01

    Huffman coding finds an optimal prefix code for a given probability mass function. Consider situations in which one wishes to find an optimal code with the restriction that all codewords have lengths that lie in a user-specified set of lengths (or, equivalently, no codewords have lengths that lie in a complementary set). This paper introduces a polynomial-time dynamic programming algorithm that finds optimal codes for this reserved-length prefix coding problem. This has applications to quickly encoding and decoding lossless codes. In addition, one modification of the approach solves any quasiarithmetic prefix coding problem, while another finds optimal codes restricted to the set of codes with g codeword lengths for user-specified g (e.g., g=2).

  7. Analysis of a Huffman and S-DES Mixed Encryption Algorithm

    Institute of Scientific and Technical Information of China (English)

    郑静; 王腾

    2014-01-01

    In contrast to existing common encryption software and classical cryptographic algorithms, and in view of the present situation and development trends of text encryption, this paper combines dynamic Huffman coding with the S-DES algorithm, compensating for the shortcomings of each and achieving a strong encryption and decryption effect for text information.

  8. Secure LZW coding algorithm and its application in GIF image encryption

    Institute of Scientific and Technical Information of China (English)

    向涛; 王安

    2012-01-01

    This paper proposes a Secure LZW (SLZW) coding algorithm in which encryption is embedded into an improved LZW coding process, so that SLZW accomplishes compression and encryption in a single step. In the SLZW algorithm, a dynamic Huffman tree is used to code the LZW dictionary, and the initialization and updating of the Huffman tree are controlled by a keystream generated by a Coupled Map Lattice (CML). The code words are further XORed with the keystream to produce the ciphertext. SLZW is applied to GIF image encryption. The experimental results and their analyses indicate that the proposed SLZW algorithm not only has good security but also improves the compression ratio of standard LZW by about 10%, so it can find wide application in practice.
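
    The SLZW scheme above additionally codes the dictionary with a dynamic Huffman tree and XORs the output with a CML-generated keystream; the sketch below shows only the baseline LZW dictionary stage in Python, on a toy byte string.

```python
def lzw_encode(data):
    """Plain LZW: grow a dictionary of seen strings and emit dictionary indices."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)   # register the new string
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

if __name__ == "__main__":
    message = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_encode(message)
    print(len(message), "input bytes ->", len(codes), "dictionary indices:", codes)
```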

  9. Wavelet-Based Mixed-Resolution Coding Approach Incorporating with SPT for the Stereo Image

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    With the advances of display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of producing 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. The proposed techniques generally use block-based disparity compensation. In order to get a higher compression ratio, this paper employs a wavelet-based mixed-resolution coding technique combined with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique that is achieved by presenting one eye with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that stereo image pairs with one high-resolution image and one low-resolution image provide almost the same stereo depth as a stereo pair with two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, for the low-resolution right image and the low-resolution left image, a subspace projection technique using fixed-block-size disparity compensation estimation is used. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, which show that our scheme achieves a PSNR gain (about 0.92 dB) compared to current block-based disparity compensation coding techniques.

  10. Modified Huffman Code and Its Applications

    Institute of Scientific and Technical Information of China (English)

    武善玉; 晏振鸣

    2009-01-01

    This paper discusses JPEG compression technology, focusing on the problem that the "shape" of the optimal binary tree in Huffman coding is not unique, and proposes a new method based on a "simplicity principle". With the Huffman coding improved by this method, the Huffman code of each corresponding value or character in JPEG is unique. Compared with the traditional Huffman algorithm and the improved algorithms proposed in the recent domestic and international literature, the coding steps and related operations of this method are more concise, which makes the program easier to implement and port. Finally, an example is given to show the practicality of the method.

  11. A Note on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    林嘉宇; 刘荧

    2003-01-01

    Huffman coding is an important method in lossless compression and is widely used in data compression, audio coding and image coding. Besides compression efficiency, Huffman coding, as a variable-length code, can be judged by other criteria such as code variance and error resilience. This paper discusses the probabilities with which the symbols 0 and 1 appear (in the binary case) in the bit stream after Huffman coding. The results show that the usual classical Huffman code maximizes the difference between the probabilities of 0 and 1 and therefore performs worst under a probability-balance criterion. The paper develops a rigorous mathematical model and gives an algorithm that makes the distribution of 0 and 1 in the coded bit stream (tend to be) equal; moreover, the algorithm can be integrated into the original Huffman coding with very little additional computation. The paper ends with an experimental verification.

  12. An Alternative Huffman Coding Algorithm

    Institute of Scientific and Technical Information of China (English)

    王敏; 刘洋

    2006-01-01

    Starting from the "original" construction of the Huffman tree and its coding algorithm, this paper analyzes the factors that affect the performance of the algorithm and introduces canonical Huffman coding. From the perspective of improving algorithm performance, the "original" algorithm is improved using the canonical Huffman coding rules, and a new algorithm is proposed together with an example.
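
    Canonical Huffman coding assigns codewords from the code lengths alone, so the decoder needs only the lengths rather than the tree; a minimal Python sketch with hypothetical code lengths is given below.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codewords given each symbol's code length."""
    # Sort by (length, symbol); each codeword is the previous one plus 1,
    # left-shifted whenever the length increases, so the codes are
    # recoverable from the lengths alone.
    code = 0
    prev_len = 0
    out = {}
    for length, sym in sorted((l, s) for s, l in lengths.items()):
        code <<= (length - prev_len)
        out[sym] = format(code, "0{}b".format(length))
        code += 1
        prev_len = length
    return out

if __name__ == "__main__":
    # Hypothetical code lengths, e.g. produced by an ordinary Huffman construction.
    lengths = {"a": 1, "b": 2, "c": 3, "d": 3}
    print(canonical_codes(lengths))   # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```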

  13. Implementation of Huffman Codes by Perl Programming

    Institute of Scientific and Technical Information of China (English)

    刘学军

    2006-01-01

    Perl is a powerful programming language, and Huffman coding is a commonly used algorithm for file compression. This paper uses Perl to generate Huffman codes and explains the basic ideas of writing this program in Perl and the techniques for using its data types. Finally, based on the output of the program, it briefly discusses and analyzes how the compression ratio achieved by the Huffman algorithm on a file varies with the number of distinct characters and their frequencies.

  14. Implementation of Huffman Coding in Matlab

    Institute of Scientific and Technical Information of China (English)

    吴记群; 李双科

    2006-01-01

    Linked lists, as used in C, are simulated in Matlab, and complex-number arithmetic is used to associate each character with its probability. At each step, the indices of the two characters with the smallest probabilities are found and recorded in turn, and finally the Huffman code is produced according to the parity of the recorded codes. The algorithm is novel and distinctive, and is easy to understand and to program.

  15. Study on methods for improving the compressibility of the 4-direction Freeman chain code

    Institute of Scientific and Technical Information of China (English)

    李灵华; 刘勇奎

    2013-01-01

    To study methods for improving the compression efficiency of the 4-direction Freeman chain code, methods based on existing Freeman direction chain codes are studied through a large number of experiments. Extensive experiments, comparisons and analyses are carried out from different viewpoints, such as redefining the meaning of the code values and applying Huffman coding to them, and applying arithmetic coding to the code values with the highest probability of occurrence. Finally, a new method based on the 4-direction Freeman chain code, called arithmetic-encoded variable-length relative 4-direction Freeman chain code (AVRF4), is put forward. The experimental results show that the compression ratio of AVRF4 is 26% higher than that of the 8-direction Freeman chain code and 15% higher than that of the original 4-direction Freeman chain code.

  16. Embedded-Platform Huffman Decoding Optimization Based on a Quad-Tree

    Institute of Scientific and Technical Information of China (English)

    鲁云飞; 何明华

    2012-01-01

    Considering the limited resources of embedded devices, a Huffman decoding optimization algorithm based on a quad-tree is proposed. In the decoding process, the Huffman code table is first expressed as a quad-tree structure, from which a one-dimensional array is rebuilt, and numerical calculation is used as much as possible instead of comparison and jump operations. To test the decoding performance, the method is applied to embedded real-time MP3 decoding. The results show that the algorithm has a small memory footprint, a fast decoding speed and low complexity; compared with other optimization algorithms, it is more suitable for use in embedded devices.

  17. Lossless data compression studies for NOAA hyperspectral environmental suite using 3D integer wavelet transforms with 3D embedded zerotree coding

    Science.gov (United States)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2003-09-01

    Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Therefore compression of these data sets should be lossless or near lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to accept 3D satellite data of any size, whose dimensions need not be divisible by 2^N, where N is the number of levels of wavelet decomposition performed. The compression ratios of various kinds of wavelet transforms are presented, along with a comparison with the JPEG2000 codec.

  18. A Lossless Compression Algorithm Based on Multiple Parameters

    Institute of Scientific and Technical Information of China (English)

    高怀远; 陈英豪

    2014-01-01

    Based on a study and analysis of the Huffman coding method, a lossless compression algorithm based on multiple parameters is proposed. The elements of the original data set are counted and sorted, and then merged repeatedly so that the merged data satisfy the requirement for optimal Huffman encoding, thereby generating a small data-merging table; each original datum is encoded as a unary instantaneous code (prefix) plus a distinguishing code (suffix). The starting points of the successive merges are the multiple parameters of the method: using these parameters together with the prefix and suffix codes, the original data can be determined uniquely. Decoding requires neither bit-by-bit matching nor the generation of a code table. Compared with the traditional method, the proposed multi-parameter lossless compression method has a simple coding structure, low computational overhead and high encoding and decoding efficiency.

  19. Verification of the FBR fuel bundle–duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    Energy Technology Data Exchange (ETDEWEB)

    Uwaba, Tomoyuki, E-mail: uwaba.tomoyuki@jaea.go.jp [Japan Atomic Energy Agency, 4002, Narita-cho, Oarai-machi, Ibaraki 311-1393 (Japan); Ito, Masahiro; Nemoto, Junichi [Japan Atomic Energy Agency, 4002, Narita-cho, Oarai-machi, Ibaraki 311-1393 (Japan); Ichikawa, Shoichi [Japan Atomic Energy Agency, 2-1, Shiraki, Tsuruga-shi, Fukui 919-1279 (Japan); Katsuyama, Kozo [Japan Atomic Energy Agency, 4002, Narita-cho, Oarai-machi, Ibaraki 311-1393 (Japan)

    2014-09-15

    The BAMBOO computer code was verified by results for the out-of-pile bundle compression test with large diameter pin bundle deformation under the bundle–duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculation results of bundle deformation obtained by the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.

  20. An Efficient Huffman Coding Algorithm without Constructing the Huffman Tree (NHTC)

    Institute of Scientific and Technical Information of China (English)

    李伟生; 李域; 王涛

    2005-01-01

    As an efficient variable-length coding technique, Huffman coding is increasingly widely used in text, image and video compression as well as in communications and cryptography. In order to use memory more efficiently and to simplify the coding steps and related operations, this paper first studies the information needed to rebuild the Huffman tree and proposes a method for obtaining this information by operating on a class of one-dimensional structure arrays. Using this information, together with the coding properties of the proposed canonical Huffman tree, the Huffman codes can be obtained directly. Compared with the traditional Huffman algorithm and the improved algorithms proposed in the recent domestic and international literature, this method does not need to construct a Huffman tree, which greatly reduces memory requirements and makes the coding steps and related operations more concise, so the program is easier to implement and port. More importantly, this approach provides a new avenue for the study and development of Huffman algorithms.

  1. How to Construct a Unique Huffman Tree and Unique Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    王森

    2003-01-01

    This paper discusses how, in certain special cases, to construct a Huffman tree and make that tree unique, and how to construct Huffman codes from the unique Huffman tree so that each Huffman code represents a unique information unit.

  2. Joint space-time Huffman limited feedback precoding for spatially and temporally correlated MIMO channels

    Institute of Scientific and Technical Information of China (English)

    居美艳; 葛欣; 李岳衡; 谭国平

    2013-01-01

    For MIMO channels with spatial and temporal correlation, a novel joint space-time Huffman limited feedback precoding scheme is proposed which improves system performance and reduces the amount of feedback. Based on the spatial correlation, the precoding structure under the zero-forcing (ZF) criterion is derived and a rotating quantization codebook is designed, which reduces the effect of spatial correlation on system performance. In addition, in view of the temporal correlation of the channel, the scheme reduces the feedback of channel state information (CSI) in slow fading channels by using neighborhood-based limited feedback. Since the codewords in the neighborhood are selected with different probabilities, Huffman coding is adopted to further reduce the amount of feedback.

  3. Rate-adaptive Constellation Shaping for Near-capacity Achieving Turbo Coded BICM

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.;

    2014-01-01

    In this paper the problem of constellation shaping is considered. Mapping functions are designed for a many-to-one signal shaping strategy, combined with a turbo coded Bit-interleaved Coded Modulation (BICM), based on symmetric Huffman codes with binary reflected Gray-like properties. An algorithm is derived for finding the Huffman code with such properties for a variety of alphabet sizes, and near-capacity performance is achieved for a wide SNR region by dynamically choosing the optimal code rate, constellation size and mapping function based on the operating SNR point and assuming perfect channel quality estimation. Gains of more than 1 dB are observed for high SNR compared to conventional turbo coded BICM, and it is shown that the mapping functions designed here significantly outperform current state-of-the-art Turbo-Trellis Coded Modulation and other existing constellation shaping methods...

  4. Evaluation of compressive strength in cement mortars, according to the dosage established by the colombian seismic resistance code. Case study

    Directory of Open Access Journals (Sweden)

    Sergio Giovanny Valbuena Porras

    2016-06-01

    Context: In a masonry wall, the mortar accounts for between 10 and 20% of the total volume of the system, yet its effect on the behavior of the wall is significantly higher than this percentage indicates. Objective: The purpose of this research was to evaluate the resistance to compression of two types of mortar paste (A and B), prepared with natural sand from the town of Usme in Bogotá, in accordance with the proportions set by the Colombian Standard for Earthquake-Resistant Construction (NSR-10). Method: Two types of mortar paste were prepared, according to the proportions of cement and sand established in NSR-10, section D.3.4-1 (Table 1); these proportions were calculated using a 0.0028 m3 container for measuring unit weight. Rock sand was used for type A mortar and river sand for type B mortar. Results: The resistance to compression for type A mortars at the end of the study was on average 84% of the expected resistance, whereas for type B mortars it averaged 64% above the expected resistance. Conclusion: Mortar mixes made with crushed rock sand (type A) do not reach the compressive strength demanded by regulatory standards, despite complying with the dosage established in NSR-10 and with NTC quality criteria, while those made with natural river sand (type B) meet these standards.

  5. Investigation on effect of equivalence ratio and engine speed on homogeneous charge compression ignition combustion using chemistry based CFD code

    Directory of Open Access Journals (Sweden)

    Ghafouri Jafar

    2014-01-01

    Combustion in a large-bore natural-gas-fuelled diesel engine operating under Homogeneous Charge Compression Ignition mode at various operating conditions is investigated in the present paper. A Computational Fluid Dynamics model with an integrated chemistry solver is utilized and methane is used as a surrogate of natural gas fuel. A detailed chemical kinetics mechanism is used for simulation of methane combustion. The model results are validated using experimental data by Aceves, et al. (2000), conducted on the single cylinder Volvo TD100 engine operating at Homogeneous Charge Compression Ignition conditions. After verification of the model predictions using in-cylinder pressure histories, the effect of varying equivalence ratio and engine speed on combustion parameters of the engine is studied. Results indicate that increasing engine speed provides a shorter time for combustion at the same equivalence ratio, such that at higher engine speeds, with constant equivalence ratio, combustion misfires. At lower engine speed, the ignition delay is shortened and combustion advances. It was observed that increasing the equivalence ratio retards the combustion due to the compressive heating effect in one of the test cases at lower initial pressure. The peak pressure magnitude is increased at higher equivalence ratios due to the higher energy input.

  6. Review on Lossless Image Compression Techniques for Welding Radiographic Images

    Directory of Open Access Journals (Sweden)

    B. Karthikeyan

    2013-01-01

    Recent developments in image processing allow it to be applied in different domains. Radiographic imaging of weld joints is one area where image processing techniques can be applied: it can be used to identify the quality of the weld joint. For this, the image has to be stored and processed later in the labs, and compression is required in order to optimize the use of disk space. The aim of this study is to find a suitable and efficient lossless compression technique for radiographic weld images. Image compression is a technique by which the amount of data required to represent information is reduced; hence image compression is effectively carried out by removing redundant data. This study compares different ways of compressing the radiographic images using combinations of different lossless compression techniques such as RLE and Huffman coding.

  7. Word-Based Text Compression

    CERN Document Server

    Platos, Jan

    2008-01-01

    Today there are many universal compression algorithms, but in most cases a specific algorithm works better for specific data - JPEG for images, MPEG for movies, etc. For textual documents there are special methods based on the PPM algorithm or methods with non-character access, e.g. word-based compression. In the past, several papers describing variants of word-based compression using Huffman encoding or the LZW method were published. The subject of this paper is the description of a word-based compression variant based on the LZ77 algorithm. The LZ77 algorithm and its modifications are described in this paper. Moreover, various ways of implementing the sliding window and various possibilities for output encoding are described as well. This paper also includes the implementation of an experimental application, testing of its efficiency and finding the best combination of all parts of the LZ77 coder. This is done to achieve the best compression ratio. In conclusion there is a comparison of this implemented application wi...
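
    A minimal greedy LZ77 encoder over characters (the word-based variant in this record operates on words instead, and the window and look-ahead sizes here are arbitrary) can be sketched in Python as follows.

```python
def lz77_encode(data, window=255, lookahead=15):
    """Greedy LZ77: emit (offset, length, next_symbol) triples over a sliding window."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        start = max(0, i - window)
        for j in range(start, i):
            length = 0
            while (length < lookahead and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len] if i + best_len < len(data) else ""
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

if __name__ == "__main__":
    print(lz77_encode("abracadabra abracadabra"))
```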

  8. JPEG coding and decoding design based on FPGA

    Institute of Scientific and Technical Information of China (English)

    张绪珩; 王淑仙

    2014-01-01

    JPEG (Joint Photographic Experts Group), as a basic image compression method, has been widely used, and more and more devices encode and decode JPEG files on FPGAs because of their parallel-processing capability. This paper gives an overall view of the basic JPEG encoding and decoding algorithms and pays particular attention to the DCT/IDCT and Huffman modules. In the DCT/IDCT module, the parallel processing of the FPGA is fully exploited to improve processing speed; in the Huffman decoding module, a look-up table augmented with the code bit width is used, and it consumes fewer resources because the synthesis tool places the look-up table in on-chip memory.

  9. Modified symmetrical reversible variable length code and its theoretical bounds

    Science.gov (United States)

    Tsai, Chien-Wu; Wu, Ja-Ling; Liu, Shu-Wei

    2000-04-01

    Reversible variable length codes (RVLCs) have been adopted in the emerging video coding standards H.263+ and MPEG-4 to enhance their error-resilience capability, which is important and essential in error-prone environments. The most appealing advantage of symmetrical RVLCs compared with asymmetrical RVLCs is that only one code table is required for both forward and backward decoding, whereas two code tables are required for asymmetrical RVLCs. In this paper, we propose a simple and efficient algorithm that can produce a symmetrical RVLC from a given Huffman code, and we also discuss theoretical bounds of the proposed symmetrical RVLCs.

  10. New speech coding scheme based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    许佳佳

    2016-01-01

    Exploiting the sparsity of speech signals, this paper applies compressed sensing theory to speech signal processing and proposes a new speech coding scheme. At the encoder, a random Gaussian matrix is used to measure the speech signal, yielding a small number of observations, which are then further compressed with vector quantization. At the decoder, the observations are recovered by vector quantization decoding and, based on the sparsity of the speech signal in the discrete cosine domain, the speech signal is reconstructed with the orthogonal matching pursuit algorithm. The goal is to reduce computational complexity and delay while preserving the quality of the reconstructed speech. Experimental results show that a mono speech signal with a sampling rate of 44100 Hz, 16-bit quantization and a bit rate of 705.6 kbps can be compressed to around 100 kbps while retaining good voice quality, and the algorithm has a low time delay.
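
    The reconstruction step described here is orthogonal matching pursuit; the following Python sketch recovers a synthetic sparse vector from random Gaussian measurements (the record's scheme works on speech that is sparse in the discrete cosine domain and adds vector quantization, which are omitted here).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 5                           # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
    A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))    # random Gaussian measurement matrix
    y = A @ x_true
    x_hat = omp(A, y, k)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```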

  11. The Permutation Groups and the Equivalence of Cyclic and Quasi-Cyclic Codes

    CERN Document Server

    Guenda, Kenza

    2010-01-01

    We give the class of finite groups which arise as the permutation groups of cyclic codes over finite fields. Furthermore, we extend the results of Brand and Huffman et al. and we find the properties of the set of permutations by which two cyclic codes of length p^r can be equivalent. We also find the set of permutations by which two quasi-cyclic codes can be equivalent.

  12. The Optimal Fix-Free Code for Anti-Uniform Sources

    Directory of Open Access Journals (Sweden)

    Ali Zaghian

    2015-03-01

    An \(n\)-symbol source which has a Huffman code with codelength vector \(L_{n}=(1,2,3,\cdots,n-2,n-1,n-1)\) is called an anti-uniform source. In this paper, it is shown that for this class of sources, the optimal fix-free code and symmetric fix-free code is \(C_{n}^{*}=(0,11,101,1001,\cdots,1\overbrace{0\cdots0}^{n-2}1)\).

  13. A Lossless Image Compression Algorithm Based on 3-PVLC

    Institute of Scientific and Technical Information of China (English)

    高健; 饶珺; 孙瑞鹏

    2013-01-01

    Based on a study and analysis of the Huffman coding method, this paper proposes a lossless image compression algorithm built on 3-parameter variable-length coding (3-PVLC). The image data are first converted to hybrid differential data, which are encoded with 3-PVLC in a first coding pass; an adaptive run-length reduction method is then applied in a second pass to compress the resulting binary stream. The coding and decoding processes are flexible and simple: depending on requirements, either the first 3-PVLC pass alone or both passes can be used, and the method achieves a relatively high compression ratio.

  14. Characteristic compression strength of a brickwork masonry starting from the strength of its components. Experimental verification of analytical equations of European codes

    Directory of Open Access Journals (Sweden)

    Rolando, A.

    2006-09-01

    In this paper the compressive strength of clay brickwork masonry bound with cement mortar is analyzed. The target is to obtain the characteristic compressive strength of unreinforced brickwork masonry. This research tests the validity of the analytical equations in European codes, comparing the experimental strength with that obtained analytically from the strength of the components (clay brick and cement mortar).

  15. Walsh Transform and DCT Transform in Image Compression Coding

    Institute of Scientific and Technical Information of China (English)

    龙清

    2011-01-01

    The image transform is the foundation of image processing and the first step in image compression. Because of its good transform performance, the DCT has been widely adopted and has become the most common transform in image compression coding, while the Walsh transform has not yet seen wide use. By analyzing the algorithms of the two transforms and comparing MATLAB simulation experiments and peak signal-to-noise ratios, the results show that the Walsh transform is simpler than the DCT and easier to implement, that its performance is not inferior to the DCT, and that at some quantization levels it is even superior. The Walsh transform therefore has broad application prospects.

  16. A joint application of optimal threshold based discrete cosine transform and ASCII encoding for ECG data compression with its inherent encryption.

    Science.gov (United States)

    Pandey, Anukul; Singh, Butta; Saini, Barjinder Singh; Sood, Neetu

    2016-12-01

    In this paper, a joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM) based quantization is presented for predefined quality-controlled electrocardiogram (ECG) data compression. The formulated approach exploits the energy compaction property in the transformed domain. DPCM quantization is applied to zero-sequence-grouped DCT coefficients that were optimally thresholded via the Regula-Falsi method. The generated sequence is encoded using Huffman coding, and this encoded series is further converted to a valid ASCII code using the standard codebook for transmission; such a coded series possesses inherent encryption capability. The proposed technique is validated on all 48 records of the standard MIT-BIH database using different measures for compression and encryption. The acquisition time is taken in accordance with that used in the literature for a fair comparison with contemporary state-of-the-art approaches. The chosen measures are (1) compression ratio (CR), (2) percent root mean square difference (PRD), (3) percent root mean square difference without base (PRD1), (4) percent root mean square difference normalized (PRDN), (5) root mean square (RMS) error, (6) signal-to-noise ratio (SNR), (7) quality score (QS), (8) entropy, (9) entropy score (ES), and (10) correlation coefficient (r_xy). Prominently, the average values of CR, PRD and QS were 18.03, 1.06, and 17.57 respectively; similarly, the mean encryption metrics, i.e. entropy, ES and r_xy, were 7.9692, 0.9962 and 0.0113 respectively. The benefit of combining the approaches is well justified by the values of these metrics, which compare favourably with the contemporary counterparts.
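
    A simplified sketch of the energy-compaction step described above: DCT the signal and zero all coefficients below a threshold chosen to meet a target reconstruction quality. The threshold search below uses plain bisection in place of the paper's Regula-Falsi method, and the ECG-like test signal is a placeholder for real MIT-BIH records.

    import numpy as np
    from scipy.fftpack import dct, idct

    def prd(x, x_hat):
        """Percent root-mean-square difference between original and reconstruction."""
        return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

    def threshold_dct(x, target_prd, iters=40):
        """Zero all DCT coefficients below a threshold chosen (by bisection, in place
        of the Regula-Falsi search of the paper) so that the PRD stays near target_prd."""
        c = dct(x, norm='ortho')
        lo, hi = 0.0, np.max(np.abs(c))
        for _ in range(iters):
            thr = 0.5 * (lo + hi)
            c_t = np.where(np.abs(c) >= thr, c, 0.0)
            if prd(x, idct(c_t, norm='ortho')) < target_prd:
                lo = thr          # can afford a higher threshold
            else:
                hi = thr          # too much distortion, lower it
        thr = lo
        c_t = np.where(np.abs(c) >= thr, c, 0.0)
        return c_t, thr

    # Hypothetical ECG-like test signal (a real system would use MIT-BIH records).
    t = np.linspace(0, 2, 2000)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 15 * t) ** 9

    coeffs, thr = threshold_dct(ecg, target_prd=1.0)
    kept = np.count_nonzero(coeffs)
    print(f"threshold={thr:.4f}, kept {kept}/{coeffs.size} coefficients, "
          f"PRD={prd(ecg, idct(coeffs, norm='ortho')):.3f}%")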

  17. On the optimality of code options for a universal noiseless coder

    Science.gov (United States)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.

  18. Image data compression investigation

    Science.gov (United States)

    Myrie, Carlos

    1989-01-01

    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
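
    A minimal sketch (not NASA's system) of the DPCM idea mentioned above: predict each pixel from its left neighbour and keep only the prediction residuals, which cluster near zero and are therefore cheaper to entropy-code. The synthetic gradient image is an assumption for illustration.

    import numpy as np

    def dpcm_encode_rows(img):
        """DPCM along rows: first column kept as-is, remaining pixels replaced
        by the difference from their left neighbour."""
        img = img.astype(np.int16)
        res = img.copy()
        res[:, 1:] = img[:, 1:] - img[:, :-1]
        return res

    def dpcm_decode_rows(res):
        """Invert the row-wise DPCM by cumulative summation along rows."""
        return np.cumsum(res, axis=1).astype(np.int16)

    def entropy(a):
        """Zero-order empirical entropy in bits per sample."""
        _, counts = np.unique(a, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    # Placeholder 8-bit image: a smooth gradient plus a little noise.
    rng = np.random.default_rng(1)
    img = (np.linspace(0, 200, 64)[None, :] + rng.integers(0, 8, (64, 64))).astype(np.uint8)

    residuals = dpcm_encode_rows(img)
    assert np.array_equal(dpcm_decode_rows(residuals), img.astype(np.int16))

    # Residuals cluster near zero, so an entropy coder (e.g. Huffman) compresses them well.
    print(f"entropy: raw {entropy(img):.2f} bits/pixel, "
          f"DPCM residuals {entropy(residuals):.2f} bits/pixel")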

  19. Coding Long Contour Shapes of Binary Objects

    Science.gov (United States)

    Sánchez-Cruz, Hermilo; Rodríguez-Díaz, Mario A.

    This is an extension of the paper that appeared in [15]. This time, we compare four methods: arithmetic coding applied to the 3OT chain code (Arith-3OT), arithmetic coding applied to DFCCE (Arith-DFCCE), Huffman coding applied to the DFCCE chain code (Huff-DFCCE), and, to measure the efficiency of the chain codes, we propose to compare the methods with JBIG, which constitutes an international standard. With the aim of finding a suitable and better representation of contour shapes, our experiments suggest that 3OT is a sound method for representing contour shapes, because arithmetic coding applied to it gives the best results relative to JBIG, independently of the perimeter of the contour shapes.

  20. A Generic Top-Down Dynamic-Programming Approach to Prefix-Free Coding

    CERN Document Server

    Golin, Mordecai; Yu, Jiajin

    2008-01-01

    Given a probability distribution over a set of n words to be transmitted, the Huffman Coding problem is to find a minimal-cost prefix free code for transmitting those words. The basic Huffman coding problem can be solved in O(n log n) time but variations are more difficult. One of the standard techniques for solving these variations utilizes a top-down dynamic programming approach. In this paper we show that this approach is amenable to dynamic programming speedup techniques, permitting a speedup of an order of magnitude for many algorithms in the literature for such variations as mixed radix, reserved length and one-ended coding. These speedups are immediate implications of a general structural property that permits batching together the calculation of many DP entries.

  1. Channel coding for compressed sensing measurement matrix

    Institute of Scientific and Technical Information of China (English)

    董小亮; 杨良龙; 赵生妹; 郑宝玉

    2013-01-01

    Compressed sensing is a signal processing theory that has emerged in recent years for sparse and compressible signals. The measurement matrix is a vital link in compressed sensing theory, and it has an important impact on signal sampling and on the reconstruction algorithm. Although traditional random measurement matrices reconstruct signals quite well, they are difficult to implement in hardware and require a large amount of storage space, among other drawbacks; deterministic measurement matrices make up for these shortcomings. Using the advantages of channel-coding parity-check matrices, a way of constructing deterministic measurement matrices that meet the restricted isometry property is proposed: the column vectors of a parity-check matrix are normalized and extended to linear combinations of the columns of square permutation matrices, yielding a deterministic measurement matrix. With this method, the measurement matrix can be produced easily once the parity-check matrix of a channel code is available. Numerical results show that, under the same reconstruction algorithm and compression ratio, the performance of this method is close to that of random measurement matrices and in some cases improved. At the same time, it costs less time, since the construction needs to be run only once, which can meet real-time requirements; it thus provides an effective measurement-matrix construction method for practical compressed sensing applications.

  2. Critical Data Compression

    CERN Document Server

    Scoville, John

    2011-01-01

    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do man...

  3. Coding technique with progressive reconstruction based on VQ and entropy coding applied to medical images

    Science.gov (United States)

    Martin-Fernandez, Marcos; Alberola-Lopez, Carlos; Guerrero-Rodriguez, David; Ruiz-Alzola, Juan

    2000-12-01

    In this paper we propose a novel lossless coding scheme for medical images that allows the final user to switch between a lossy and a lossless mode. This is done by means of a progressive reconstruction philosophy (which can be interrupted at will), so we believe that our scheme gives a way to trade off between the accuracy needed for medical diagnosis and the information reduction needed for storage and transmission. We combine vector quantization, run-length bit-plane coding, and entropy coding. Specifically, the first step is a vector quantization procedure; the centroid codes are Huffman-coded making use of a set of probabilities that are calculated in the learning phase. The image is reconstructed at the coder in order to obtain the error image; this second image is divided into bit planes, which are then run-length and Huffman coded. A second statistical analysis is performed during the learning phase to obtain the parameters needed in this final stage. Our coder is currently trained for hand radiographs and fetal echographies. We compare our results for these two types of images with classical results on bit-plane coding and the JPEG standard. Our coder turns out to outperform both of them.

  4. Difference and dynamic binarization of binary arithmetic coding

    Institute of Scientific and Technical Information of China (English)

    吴江铭

    2013-01-01

    This paper gives an overview of CABAC, the high-efficiency compression method carried over from H.264 into the HEVC draft published by JCT-VC, which refines the binarization process. It then optimizes the binarization step of binary arithmetic coding by means of dynamic Huffman coding and applies a differencing step to the data before binarization. Finally, experiments on compressing Java files show that difference and dynamic binarization of binary arithmetic coding noticeably improves the compression ratio, exceeding both PAQ and CABAC.

  5. A speech coding algorithm based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    王茂林; 黄文明; 王菊娇

    2012-01-01

    A new speech coding algorithm based on compressed sensing is presented, using the sparse representation of speech signals in the discrete cosine transform (DCT) domain. The algorithm uses a Gaussian random matrix to measure the speech waveform directly, and the measurements are quantized independently by a uniform quantizer. Saturated measurements are simply discarded at the decoder, and the speech signal is then reconstructed from the remaining measurements by the Lasso algorithm. Experimental results show that the new algorithm has good reconstruction performance.

  6. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  7. Simulation of the Intake and Compression Strokes of a Motored 4-Valve SI Engine with a Finite Element Code

    Directory of Open Access Journals (Sweden)

    Bailly O.

    2006-12-01

    Full Text Available A CFD code, using a mixed finite volumes - finite elements method on tetrahedrons, is now available for engine simulations. The code takes into account the displacement of moving walls such as the piston and valves in a fully automatic way: a single mesh is used for a full computation and no intervention of the user is necessary. A fourth-order implicit spatial scheme and a first-order implicit temporal scheme are used. The work presented in this paper is part of a larger program for the validation of this new numerical tool for engine applications. Here, comparisons between computation and experiment for the intake and compression strokes of a four-valve engine were carried out. The experimental investigations were conducted on a single-cylinder four-valve optical research engine. The turbulence intensity, mean velocity components, and tumble and swirl ratios in the combustion chamber are deduced from LDV measurements. The comparisons between computations and experiments are made on the mean velocity flow field at different locations inside the chamber and for different crank angles, together with some global comparisons (swirl and tumble ratios). The simulation shows excellent agreement between computations and experiments.

  8. The OMV Data Compression System Science Data Compression Workshop

    Science.gov (United States)

    Lewis, Garton H., Jr.

    1989-01-01

    The Video Compression Unit (VCU), Video Reconstruction Unit (VRU), theory and algorithms for implementation of Orbital Maneuvering Vehicle (OMV) source coding, docking mode, channel coding, error containment, and video tape preprocessed space imagery are presented in viewgraph format.

  9. Simulation Study on Pulse Compression by Bi-phase Coded Excitation with Carrier Modulation

    Institute of Scientific and Technical Information of China (English)

    吴何珍; 刘政一

    2013-01-01

    In order to introduce digital coded-excitation technology into seismic detection, coded excitation was studied by numerical simulation on the basis of its basic principles, in order to grasp the rules and characteristics of coded excitation. Pulse compression of the received signal was then simulated for modulated transmission of the coded excitation signal. Furthermore, pulse compression and power spectrum analysis were carried out for a 13-bit Barker-coded signal exciting the transducer with one to five carrier periods per code element, and the influence of the transducer characteristics on the different excitation signals and their pulse compression results was analyzed.

  10. Noisy Network Coding

    CERN Document Server

    Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young

    2010-01-01

    A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress-forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward sch...

  11. Enhanced motion coding in MC-EZBC

    Science.gov (United States)

    Chen, Junhua; Zhang, Wenjun; Wang, Yingkun

    2005-07-01

    Since hierarchical variable-size block matching and bidirectional motion compensation are used in motion-compensated embedded zero block coding (MC-EZBC), the motion information consists of the motion vector quadtree map and the motion vectors. In the conventional motion coding scheme, the quadtree structure is coded directly, the motion vector modes are coded with Huffman codes, and the motion vector differences are coded by an m-ary arithmetic coder with 0-order models. In this paper we propose a new motion coding scheme which uses an extension of the CABAC algorithm and new context modeling for quadtree structure coding and mode coding. In addition, we use a new scalable motion coding method which scales the motion vector quadtrees according to the rate-distortion slope of the tree nodes. Experimental results show that the new coding scheme increases the efficiency of the motion coding by more than 25%. The performance of the system is improved accordingly, especially at low bit rates. Moreover, with the scalable motion coding, the subjective and objective coding performance is further enhanced in low bit rate scenarios.

  12. Minimum Redundancy Coding for Uncertain Sources

    CERN Document Server

    Baer, Michael B; Charalambous, Charalambos D

    2011-01-01

    Consider the set of source distributions within a fixed maximum relative entropy with respect to a given nominal distribution. Lossless source coding over this relative entropy ball can be approached in more than one way. A problem previously considered is finding a minimax average length source code. The minimizing players are the codeword lengths (real numbers for arithmetic codes, integers for prefix codes), while the maximizing players are the uncertain source distributions. Another traditional minimizing objective is the first one considered here, maximum (average) redundancy. This problem reduces to an extension of an exponential Huffman objective treated in the literature but heretofore without direct practical application. In addition to these, this paper examines the related problem of maximal minimax pointwise redundancy and the problem considered by Gawrychowski and Gagie, which, for a sufficiently small relative entropy ball, is equivalent to minimax redundancy. One can consider both Shannon-...

  13. An Adaptive Predictive Coding Based on Image Segmentation for Lossless Compression of Ultrasonic Well Logging Images

    Institute of Scientific and Technical Information of China (English)

    骆长江; 俞能海; 周亮

    2001-01-01

    In recent years, image acquisition equipment has been widely adopted in the field of well logging. However, the data transfer rate of the logging system is limited by the transmission cables, so data compression is necessary; common compression schemes were found to be not ideal for well logging images, which have unique properties. In this paper, the properties of typical ultrasonic well logging images were studied and a suitable compression algorithm was proposed. Row and column correlation was found to be the major characteristic of well logging images, while 2-D correlation was not significant; some subimages showed mainly row correlation and others mainly column correlation. According to this observation, an adaptive predictive lossless image compression coding based on image segmentation was proposed. An image is decomposed into blocks, and row or column prediction is adaptively selected for every block to perform DPCM coding. An improved LZW algorithm is then used to encode the prediction error. Experiments showed that this coding scheme achieves higher compression ratios than lossless JPEG and JPEG-LS for ultrasonic well logging images, while the complexity is comparable. The algorithm is self-adaptive, so no code table is needed, and since every block is processed independently, the error propagation problem associated with normal DPCM coding schemes is avoided. It is therefore especially suitable for real-time telemetry systems.
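
    A small sketch of the block-adaptive selection step (the improved-LZW stage of the paper is omitted): for each block, compute row-wise and column-wise DPCM residuals and keep whichever has the lower empirical entropy. The block size and the synthetic test image are illustrative assumptions.

    import numpy as np

    def entropy(a):
        _, counts = np.unique(a, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def predict(block, axis):
        """DPCM residuals along rows (axis=1, left neighbour) or columns (axis=0, upper neighbour)."""
        b = block.astype(np.int16)
        res = b.copy()
        if axis == 1:
            res[:, 1:] = b[:, 1:] - b[:, :-1]
        else:
            res[1:, :] = b[1:, :] - b[:-1, :]
        return res

    def encode_blocks(img, bs=16):
        """For every bs-by-bs block choose row or column prediction, whichever
        yields lower-entropy residuals; return the chosen modes and residuals."""
        modes, residuals = [], []
        for i in range(0, img.shape[0], bs):
            for j in range(0, img.shape[1], bs):
                block = img[i:i + bs, j:j + bs]
                row_res, col_res = predict(block, 1), predict(block, 0)
                if entropy(row_res) <= entropy(col_res):
                    modes.append('row'); residuals.append(row_res)
                else:
                    modes.append('col'); residuals.append(col_res)
        return modes, residuals

    # Placeholder image: left half varies only with the column (column prediction wins),
    # right half varies only with the row (row prediction wins).
    rng = np.random.default_rng(0)
    col_pattern = rng.integers(0, 256, 32)
    row_pattern = rng.integers(0, 256, 64)
    img = np.zeros((64, 64), dtype=np.int16)
    img[:, :32] = col_pattern[None, :]
    img[:, 32:] = row_pattern[:, None]

    modes, residuals = encode_blocks(img)
    print(modes)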

  14. Wavelet image compression

    CERN Document Server

    Pearlman, William A

    2013-01-01

    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S

  15. Design of Huffman Coding Based on MATLAB

    Institute of Scientific and Technical Information of China (English)

    林寿光

    2010-01-01

    Using the principles and methods of Huffman compression coding, compression coding programs for two images were designed with the MATLAB software, producing the compressed data and the Huffman code table; the pixel data of the compressed images and the compression ratios were then analyzed. The results show that Huffman coding is a lossless compression coding method.
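
    The MATLAB program itself is not reproduced in the record; a compact Python equivalent of the core step, building a Huffman code table from symbol frequencies, is sketched below for illustration.

    import heapq
    from collections import Counter

    def huffman_code(freqs):
        """Build a Huffman code table {symbol: bitstring} from a {symbol: count} map."""
        # Each heap item: (total count, tie-breaker, {symbol: partial codeword}).
        heap = [(count, i, {sym: ''}) for i, (sym, count) in enumerate(freqs.items())]
        heapq.heapify(heap)
        if len(heap) == 1:                       # degenerate single-symbol source
            return {sym: '0' for sym in freqs}
        while len(heap) > 1:
            c1, _, t1 = heapq.heappop(heap)
            c2, i2, t2 = heapq.heappop(heap)
            merged = {s: '0' + c for s, c in t1.items()}
            merged.update({s: '1' + c for s, c in t2.items()})
            heapq.heappush(heap, (c1 + c2, i2, merged))
        return heap[0][2]

    # Toy example: code the pixel values of a tiny "image" given as a byte string.
    data = b"aaaaabbbbcccdde"
    table = huffman_code(Counter(data))
    encoded = ''.join(table[b] for b in data)
    avg_len = len(encoded) / len(data)
    print(table)
    print(f"{avg_len:.2f} bits/symbol vs 8 bits/symbol uncompressed")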

  16. Research and Implementation of Huffman Coding for Speech PCM

    Institute of Scientific and Technical Information of China (English)

    邓翔宇

    2010-01-01

    Conventional analog speech PCM uses fixed-length folded binary coding, which has a relatively high bit rate and requires considerable system resources for transmission and processing. Starting from the probability distribution of the speech sample values, and building on the non-uniform quantization of PCM coding, this paper applies variable-length coding to the 13-segment A-law companding characteristic, reducing the entropy redundancy of the source and achieving compression coding with no change in the speech MOS score. In addition, EDA techniques are used to carry out a CPLD-based hardware design of the compression circuit.

  17. New IP traceback scheme based on Huffman codes

    Institute of Scientific and Technical Information of China (English)

    罗莉莉; 谢冬青; 占勇军; 周再红

    2007-01-01

    To counter DDoS attacks, researchers have proposed various IP traceback techniques for finding the true source IP address of attack packets, but existing traceback schemes suffer from problems such as the space required during packet marking, the accuracy of the traced source, and the number of packets needed for traceback. A new traceback scheme based on Huffman codes is proposed. It saves a large amount of storage space and improves space efficiency, and it can react quickly when DoS (denial of service) and DDoS attacks occur: the attack path can be reconstructed and the exact attack source located from a single received attack packet, reducing the damage and loss caused by the attack to a minimum.

  18. Research on Huffman Coding of Radar Video

    Institute of Scientific and Technical Information of China (English)

    韩菲

    2004-01-01

    This paper discusses the data compression algorithms used in radar video transmission and describes the use of Huffman codes to encode and decode radar data, in order to handle the transmission of large volumes of radar data and to meet the requirements of real-time, high-speed, lossless transmission of radar video image data.

  19. An Improved Algorithm for Finding Huffman Codes

    Institute of Scientific and Technical Information of China (English)

    徐凤生; 钱爱增; 李海军; 李天志

    2007-01-01

    The optimal binary tree is a very important data structure with wide applications in communications, engineering, software development, and other fields. Building on a discussion of optimal binary trees, this paper improves the storage structures of the optimal binary tree and of the Huffman code and proposes an algorithm for computing Huffman codes. The effectiveness of the algorithm is verified by a corresponding C-language program.

  20. Predictions for the drive capabilities of the RancheroS Flux Compression Generator into various load inductances using the Eulerian AMR Code Roxane

    Energy Technology Data Exchange (ETDEWEB)

    Watt, Robert Gregory [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-06

    The Ranchero Magnetic Flux Compression Generator (FCG) has been used to create current pulses in the 10-100 MA range for driving both "static" low inductance (0.5 nH) loads1 for generator demonstration purposes and high inductance (10-20 nH) imploding liner loads2 for ultimate use in physics experiments at very high energy density. Simulations of the standard Ranchero generator have recently shown that it had a design issue that could lead to flux trapping in the generator, and a non-robust predictability in its use in high energy density experiments. A re-examination of the design concept for the standard Ranchero generator, prompted by the possible appearance of an aneurism at the output glide plane, has led to a new generation of Ranchero generators designated the RancheroS (for swooped). This generator has removed the problematic output glide plane and replaced it with a region of constantly increasing diameter in the output end of the FCG cavity in which the armature is driven outward under the influence of an additional HE load not present in the original Ranchero. The resultant RancheroS generator, to be tested in LA43S-L13, probably in early FY17, has a significantly increased initial inductance and may be able to drive a somewhat higher load inductance than the standard Ranchero. This report will use the Eulerian AMR code Roxane to study the ability of the new design to drive static loads, with a goal of providing a database corresponding to the load inductances for which the generator might be used and the anticipated peak currents such loads might produce in physics experiments. Such a database, combined with a simple analytic model of an ideal generator, where d(LI)/dt = 0, and supplemented by earlier estimates of losses in actual use of the standard Ranchero, scaled to estimate the increase in losses due to the longer current carrying perimeter in the RancheroS, can then be used to bound the expectations for the current drive one may

  1. Hardware-specific image compression techniques for the animation of CFD data

    Science.gov (United States)

    Jones, Stephen C.; Moorhead, Robert J., II

    1992-06-01

    consecutive frames. If no change has occurred within a block, a zero is recorded; otherwise the entire block containing the 12-bit indices of the colormap is retained. The resulting block differences of the sequential frames in each segment will be saved after Huffman coding and run-length encoding. Playback of an animation will avoid much of the computation involved in rendering the original scene by decoding and loading the video RAM through the pixel bus. The algorithms will be written to take advantage of the system's hardware, specifically the Silicon Graphics VGX graphics adapter.
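
    A rough sketch of the frame-differencing idea described above: flag each block that changed between consecutive frames and run-length encode the flag sequence (the Huffman stage and the 12-bit colormap handling are omitted). The frame size, block size and synthetic frames are assumptions for illustration.

    import numpy as np

    def changed_block_flags(prev, curr, bs=8):
        """Return a flat array of 0/1 flags, one per bs-by-bs block: 1 if the
        block changed between the two frames, else 0."""
        flags = []
        for i in range(0, prev.shape[0], bs):
            for j in range(0, prev.shape[1], bs):
                same = np.array_equal(prev[i:i + bs, j:j + bs], curr[i:i + bs, j:j + bs])
                flags.append(0 if same else 1)
        return np.array(flags, dtype=np.uint8)

    def run_length_encode(bits):
        """Run-length encode a 0/1 sequence as (value, run length) pairs."""
        runs, count = [], 1
        for prev_bit, bit in zip(bits[:-1], bits[1:]):
            if bit == prev_bit:
                count += 1
            else:
                runs.append((int(prev_bit), count))
                count = 1
        runs.append((int(bits[-1]), count))
        return runs

    # Two hypothetical 64x64 frames differing only in one small region.
    rng = np.random.default_rng(2)
    frame0 = rng.integers(0, 4096, (64, 64))     # stand-in for 12-bit colormap indices
    frame1 = frame0.copy()
    frame1[40:44, 8:12] += 1                     # a small moving feature

    flags = changed_block_flags(frame0, frame1)
    print(run_length_encode(flags))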

  2. Lossless Compression Schemes for ECG Signals Using Neural Network Predictors

    Directory of Open Access Journals (Sweden)

    C. Eswaran

    2007-01-01

    Full Text Available This paper presents lossless compression schemes for ECG signals based on neural network predictors and entropy encoders. Decorrelation is achieved by nonlinear prediction in the first stage and encoding of the residues is done by using lossless entropy encoders in the second stage. Different types of lossless encoders, such as Huffman, arithmetic, and run-length encoders, are used. The performances of the proposed neural network predictor-based compression schemes are evaluated using standard distortion and compression efficiency measures. Selected records from the MIT-BIH arrhythmia database are used for performance evaluation. The proposed compression schemes are compared with linear predictor-based compression schemes and it is shown that about 11% improvement in compression efficiency can be achieved for neural network predictor-based schemes with the same quality and similar setup. They are also compared with other known ECG compression methods and the experimental results show that superior performances in terms of the distortion parameters of the reconstructed signals can be achieved with the proposed schemes.

  3. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    coding techniques are equally applicable to any voice signal whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or equivalently the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.

  4. Encoding of multi-alphabet sources by binary arithmetic coding

    Science.gov (United States)

    Guo, Muling; Oka, Takahumi; Kato, Shigeo; Kajiwara, Hiroshi; Kawamura, Naoto

    1998-12-01

    When encoding a multi-alphabet source, the symbol sequence can be encoded directly by a multi-alphabet arithmetic encoder, or it can first be converted into several binary sequences, each of which is then encoded by a binary arithmetic encoder such as the L-R arithmetic coder. Arithmetic coding, however, requires arithmetic operations for each symbol and is computationally heavy. In this paper, a binary representation method using a Huffman tree is introduced to reduce the number of arithmetic operations, and a new probability approximation for L-R arithmetic coding is further proposed to improve the coding efficiency when the probability of the LPS (least probable symbol) is near 0.5. Simulation results show that the proposed scheme has high coding efficiency and can reduce the number of coding symbols.

  5. Ultraspectral sounder data compression review

    Institute of Scientific and Technical Information of China (English)

    Bormin HUANG; Hunglung HUANG

    2008-01-01

    Ultraspectral sounders provide an enormous amount of measurements to advance our knowledge of weather and climate applications. The use of robust data compression techniques will be beneficial for ultraspectral data transfer and archiving. This paper reviews the progress in lossless compression of ultraspectral sounder data. Various transform-based, prediction-based, and clustering-based compression methods are covered. Also studied is a preprocessing scheme for data reordering to improve compression gains. All the coding experiments are performed on the ultraspectral compression benchmark dataset collected from the NASA Atmospheric Infrared Sounder (AIRS) observations.

  6. DCT-based Image Compression and Its MATLAB Implementation

    Institute of Scientific and Technical Information of China (English)

    罗晨

    2011-01-01

    This paper introduces the JPEG image compression algorithm and, from an experimental point of view using MATLAB, discusses the application of DCT-based compression in JPEG in a fairly intuitive way. Simulation experiments show that implementing DCT-based image compression with MATLAB is simple, fast, and has small error, greatly improving the efficiency and precision of the compression.
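
    A minimal JPEG-style sketch of the experiment described above, written in Python rather than MATLAB: 8-by-8 block DCT, uniform quantization, inverse DCT, and the mean square error of the reconstruction. The quantization step and the synthetic image are illustrative assumptions.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(block):
        return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def compress_blocks(img, q=20):
        """Quantize the DCT of each 8x8 block with a uniform step q and reconstruct."""
        out = np.zeros_like(img, dtype=float)
        for i in range(0, img.shape[0], 8):
            for j in range(0, img.shape[1], 8):
                block = img[i:i + 8, j:j + 8].astype(float) - 128.0
                coeffs = np.round(dct2(block) / q) * q      # uniform quantization
                out[i:i + 8, j:j + 8] = idct2(coeffs) + 128.0
        return np.clip(out, 0, 255)

    # Synthetic 64x64 test image (a smooth ramp plus a sinusoidal texture).
    x = np.arange(64)
    img = (x[None, :] * 2 + 40 * np.sin(x[:, None] / 5) + 80).clip(0, 255)

    rec = compress_blocks(img, q=20)
    mse = np.mean((img - rec) ** 2)
    print(f"MSE = {mse:.2f}")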

  7. File Compression and Expansion of the Genetic Code by the use of the Yin/Yang Directions to find its Sphered Cube.

    Science.gov (United States)

    Castro-Chavez, Fernando

    2014-07-01

    The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient "Book of Changes" or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64 hexagrams of the ancient I Ching, as if they were the 64 codons or words of the genetic code. In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids to the directionality of their resulting blocks of codons, grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from humans versus Neanderthal, as well as by relating it to Negadi's work on the importance of the number 384 within the genetic code. Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus Neanderthal compatible to Neanderthal), while further exploring the wondrous biodiversity of nature for educational purposes.

  8. An extensive Markov system for ECG exact coding.

    Science.gov (United States)

    Tai, S C

    1995-02-01

    In this paper, an extensive Markov process, which considers both the coding redundancy and the intersample redundancy, is presented to measure the entropy value of an ECG signal more accurately. It utilizes the intersample correlations by predicting the incoming n samples based on the previous m samples which constitute an extensive Markov process state. Theories of the extensive Markov process and conventional n repeated applications of m-th order Markov process are studied first in this paper. After that, they are realized for ECG exact coding. Results show that a better performance can be achieved by our system. The average code length for the extensive Markov system on the second difference signals was 2.512 b/sample, while the average Huffman code length for the second difference signals was 3.326 b/sample.
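
    A small illustration of the kind of measurement reported above, namely the zero-order empirical entropy (in bits per sample) of a quantized signal and of its first and second differences; the extensive Markov modelling itself is not reproduced, and the ECG-like trace is synthetic.

    import numpy as np

    def empirical_entropy(symbols):
        """Zero-order empirical entropy in bits per sample."""
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Synthetic ECG-like trace quantized to integers (a stand-in for real records).
    t = np.linspace(0, 4, 4000)
    x = np.round(200 * np.sin(2 * np.pi * 1.2 * t) ** 15
                 + 30 * np.sin(2 * np.pi * 0.3 * t)).astype(int)

    d1 = np.diff(x)          # first difference
    d2 = np.diff(d1)         # second difference
    for name, s in (("raw", x), ("1st diff", d1), ("2nd diff", d2)):
        print(f"{name:8s}: {empirical_entropy(s):.3f} bits/sample")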

  9. On Real-Time and Causal Secure Source Coding

    CERN Document Server

    Kaspi, Yonatan

    2012-01-01

    We investigate two source coding problems with secrecy constraints. In the first problem we consider real-time fully secure transmission of a memoryless source. We show that although classical variable-rate coding is not an option since the lengths of the codewords leak information on the source, the key rate can be as low as the average Huffman codeword length of the source. In the second problem we consider causal source coding with a fidelity criterion and side information at the decoder and the eavesdropper. We show that when the eavesdropper has degraded side information, it is optimal to first use a causal rate distortion code and then encrypt its output with a key.

  10. A Comparative Study of Compression Methods and the Development of CODEC Program of Biological Signal for Emergency Telemedicine Service

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, T.S.; Kim, J.S. [Changwon National University, Changwon (Korea); Lim, Y.H. [Visionite Co., Ltd., Seoul (Korea); Yoo, S.K. [Yonsei University, Seoul (Korea)

    2003-05-01

    In an emergency telemedicine system such as the High-quality Multimedia based Real-time Emergency Telemedicine (HMRET) service, it is very important to examine the status of the patient continuously using multimedia data including the biological signals (ECG, BP, respiration, SpO2) of the patient. In order to transmit these data in real time through communication channels of limited transmission capacity, it is also necessary to compress the biological data besides the other multimedia data. For this purpose, we investigate and compare ECG compression techniques in the time domain and in the wavelet transform domain, and present an effective lossless compression method for the biological signals using the JPEG Huffman table for an emergency telemedicine system. For the HMRET service, we developed a lossless compression and reconstruction program for the biological signals in MSVC++ 6.0 using the DPCM method and the JPEG Huffman table, and tested it in an internet environment. (author). 15 refs., 17 figs., 7 tabs.

  11. LZW Data Compression

    Directory of Open Access Journals (Sweden)

    Dheemanth H N

    2016-07-01

    Full Text Available Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW compression is one of the adaptive dictionary techniques: the dictionary is created while the data are being encoded, so encoding can be done on the fly, and the dictionary need not be transmitted since it can be built up at the receiving end on the fly. If the dictionary overflows, the dictionary is reinitialized and a bit is added to each of the code words; choosing a large dictionary size avoids overflow but spoils compression. A codebook or dictionary containing the source symbols is constructed; for 8-bit monochrome images, the first 256 words of the dictionary are assigned to the gray levels 0-255, and the remaining part of the dictionary is filled with sequences of gray levels. LZW compression works best when applied to monochrome images and text files that contain repetitive text/patterns.
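
    A compact sketch of the LZW encoder described above, byte-oriented and with the dictionary initialized to the 256 single-byte strings; the dictionary-reset-on-overflow step mentioned in the abstract is omitted for brevity.

    def lzw_encode(data: bytes):
        """LZW-encode a byte string; returns a list of integer codes."""
        dictionary = {bytes([i]): i for i in range(256)}   # single-byte entries 0..255
        next_code = 256
        w = b""
        codes = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc                       # keep extending the current match
            else:
                codes.append(dictionary[w])  # emit code for the longest match
                dictionary[wc] = next_code   # add the new string to the dictionary
                next_code += 1
                w = bytes([byte])
        if w:
            codes.append(dictionary[w])
        return codes

    text = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_encode(text)
    print(codes)                             # repetitive input collapses to few codes
    print(len(text), "bytes ->", len(codes), "codes")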

  12. Digital Poly-phase Filtering and Pulse Compression of PN-code Fuze

    Institute of Scientific and Technical Information of China (English)

    徐利平; 谢锡海; 曹志谦; 郑小燕

    2009-01-01

    In order to overcome the shortcomings of the time-domain correlation processing used in classical PN-code fuzes when handling signals with many points and large compression ratios, a method of digital quadrature detection based on poly-phase filtering and frequency-domain digital pulse compression, drawn from software radio technology, is proposed. The poly-phase filtering digital quadrature detector and the frequency-domain digital pulse compression are implemented with FPGA and DSP tools, respectively. The FPGA and DSP simulation results differ only minimally from a MATLAB simulation of the whole process, which verifies the correctness of the approach. With its stable performance, strong anti-interference ability, flexible control, and miniaturized hardware, digital pulse compression is gradually replacing the earlier analog pulse compression techniques and represents the trend of modern pulse compression technology.

  13. Multiple description video coding research based on complementary segmentation in compressed domain

    Institute of Scientific and Technical Information of China (English)

    江玉珍; 朱映辉; 欧阳春娟

    2009-01-01

    Due to the information correlation of digital video in the spatial and temporal domains, a new multiple description video coding method integrated with the video compression process is presented. Based on the H.263 standard, the algorithm is realized by partitioning the DCT-domain coefficients of image blocks and motion estimation. By redistributing a few important coefficients, the receiving quality of a single description is guaranteed, while the diagonal partition of the coefficient domain improves the signal complementarity between descriptions. Experimental results show that the algorithm has good MDC characteristics such as low redundancy, fast encoding and decoding, and a high compression ratio, and that it is an effective approach to ensuring reliability and real-time performance in video communication.

  14. "Compressed" Compressed Sensing

    CERN Document Server

    Reeves, Galen

    2010-01-01

    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g. discrete valued vectors or large distortions) the number of samples can be decreased. Interestingly though, it is also shown that in many cases no reduction is possible.

  15. A New Scheme of Speech Coding Based on Compressed Sensing and Sinusoidal Dictionary

    Institute of Scientific and Technical Information of China (English)

    李尚靖; 朱琦; 朱俊华

    2015-01-01

    A novel speech coding method based on compressed sensing is proposed in this paper. Based on compressed sensing theory, a row-echelon measurement matrix retains part of the time-domain features of the speech in the measurements, and a sinusoidal dictionary together with matching pursuit is used to model the measurement sequence. The model parameters are encoded with methods appropriate to their respective characteristics. At the decoder, the basis pursuit algorithm uses the decoded measurements to reconstruct the synthesized speech, and a rear low-pass filter is adopted to improve the auditory quality. Simulation results show that the average MOS scores of the synthesized speech are between 2.81 and 3.23 at low bit rates (2.8-5.7 kbps), which is a good coding result within the compressed sensing framework.

  16. Position index preserving compression of text data

    OpenAIRE

    Akhtar, Nasim; Rashid, Mamunur; Islam, Shafiqul; Kashem, Mohammod Abul; Kolybanov, Cyrll Y.

    2011-01-01

    Data compression offers an attractive approach to reducing communication cost by using available bandwidth effectively. It also secures data during transmission due to its encoded form. In this paper an index-based, position-oriented lossless text compression called PIPC (Position Index Preserving Compression) is developed. In PIPC the position of the input word is denoted by an ASCII code. The basic philosophy of the secure compression is to preprocess the text and transform it into some intermedia...

  17. Wavelet and wavelet packet compression of electrocardiograms.

    Science.gov (United States)

    Hilton, M L

    1997-05-01

    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.

  18. Advanced GF(32) nonbinary LDPC coded modulation with non-uniform 9-QAM outperforming star 8-QAM.

    Science.gov (United States)

    Liu, Tao; Lin, Changyu; Djordjevic, Ivan B

    2016-06-27

    In this paper, we first describe a 9-symbol non-uniform signaling scheme based on a Huffman code, in which different symbols are transmitted with different probabilities. By using the Huffman procedure, a prefix code is designed to approach the optimal performance. Then, we introduce an algorithm to determine the optimal signal constellation sets for our proposed non-uniform scheme with the criterion of maximizing the constellation figure of merit (CFM). The proposed non-uniform polarization multiplexed signaling 9-QAM scheme has the same spectral efficiency as the conventional 8-QAM. Additionally, we propose a specially designed GF(32) nonbinary quasi-cyclic LDPC code for the coded modulation system based on the 9-QAM non-uniform scheme. Further, we study the efficiency of our proposed non-uniform 9-QAM, combined with nonbinary LDPC coding, and demonstrate by Monte Carlo simulation that the proposed GF(32) nonbinary LDPC coded 9-QAM scheme outperforms nonbinary LDPC coded uniform 8-QAM by at least 0.8 dB.

  19. Research on Imaging Technology for Reflective Compressive Coded Aperture Super-resolution

    Institute of Scientific and Technical Information of China (English)

    毕祥丽

    2016-01-01

    Compressive sensing theory is introduced into super-resolution imaging, taking advantage of the general sparsity of most images, and an all-phase reflective compressive coded-aperture optical imaging system based on a 4f optical system is proposed. A reflective spatial light modulator is used to perform the optical imaging experiments, and the system is simulated with a MATLAB program; a high-resolution image is obtained by decoding and reconstructing the collected single low-resolution frame. Experimental results show that the best reconstruction is obtained only when the pixel size of the CCD camera exactly matches that of the spatial light modulator.

  20. High Resolution Range Imaging Method for Frequency-coded Pulse Radar Based on Compressive Sensing

    Institute of Scientific and Technical Information of China (English)

    贺亚鹏; 庄珊娜; 李洪涛; 朱晓华

    2011-01-01

    A novel compressive sensing (CS) based high-resolution target range imaging method for frequency-coded pulse radar (FCPR) is proposed in this paper. Considering the spatial sparsity of the target scene, an FCPR target sparse signal model is derived and a CS-based coherent pulse synthesis processing method for FCPR is presented. The target frequency-domain response is sampled with only a few FCPR sub-pulses, from which the target's high-resolution range information is reconstructed exactly. A dynamic construction of a reduced-dimension sensing matrix, based on target velocity pre-estimation using the FFT, is proposed; this reduces the computational complexity of the CS recovery algorithms and speeds up the CS-based coherent pulse synthesis. Computer simulations show that the presented method performs better than the traditional IFFT coherent pulse synthesis algorithm, with smaller magnitude estimation error for strong target scattering centers and better robustness against velocity estimation error and noise.

  1. New low bit rate speech coding scheme based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    叶蕾; 杨震; 孙林慧

    2011-01-01

    Utilizing the sparsity of the high-frequency wavelet coefficients of speech signals and the theory of compressed sensing, a new low bit rate speech coding scheme based on compressed sensing is proposed. The reconstruction of the high-frequency wavelet coefficients is achieved by l1-norm optimization and by codebook-prediction reconstruction, respectively. The l1 reconstruction works well for large-amplitude samples and suits both speech and music, an advantage that traditional linear prediction coding cannot match; codebook-prediction reconstruction estimates the locations of the sparse coefficients well and avoids the basis pursuit or matching pursuit algorithms commonly used in compressed sensing reconstruction, thus reducing the amount of computation. Combining the two reconstruction methods brings the advantages of both and further improves the quality of the reconstructed speech.

  2. Compressing molecular dynamics trajectories: breaking the one-bit-per-sample barrier

    CERN Document Server

    Huwald, Jan; Dittrich, Peter

    2016-01-01

    Molecular dynamics simulations yield large amounts of trajectory data. For their durable storage and accessibility an efficient compression algorithm is paramount. State of the art domain-specific algorithms combine quantization, Huffman encoding and occasionally domain knowledge. We propose the high resolution trajectory compression scheme (HRTC) that relies on piecewise linear functions to approximate quantized trajectories. By splitting the error budget between quantization and approximation, our approach beats the current state of the art by several orders of magnitude given the same error tolerance. It allows storing samples at far less than one bit per sample. It is simple and fast enough to be integrated into the inner simulation loop, store every time step, and become the primary representation of trajectory data.
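
    A rough sketch of the piecewise-linear idea behind HRTC (not the published algorithm): greedily extend each linear segment over a quantized 1-D track for as long as every sample stays within the error tolerance, and store only the breakpoints. The track and tolerance are illustrative assumptions.

    import numpy as np

    def piecewise_linear_segments(x, tol):
        """Greedy piecewise-linear approximation of a 1-D sequence x: return
        breakpoint indices such that, between consecutive breakpoints, the
        straight line through the end-points stays within tol of every sample."""
        breaks = [0]
        start = 0
        for end in range(2, len(x)):
            seg = np.linspace(x[start], x[end], end - start + 1)
            if np.max(np.abs(seg - x[start:end + 1])) > tol:
                breaks.append(end - 1)       # previous sample closes the segment
                start = end - 1
        breaks.append(len(x) - 1)
        return breaks

    # Hypothetical quantized coordinate track of one atom over time.
    t = np.arange(2000)
    track = np.round((5 * np.sin(t / 150.0) + 0.01 * t) / 0.01)   # quantization step 0.01

    breaks = piecewise_linear_segments(track, tol=3.0)
    print(f"{len(track)} samples -> {len(breaks)} breakpoints "
          f"({len(breaks) / len(track):.3f} of original)")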

  3. Image Compression Algorithms Using Dct

    Directory of Open Access Journals (Sweden)

    Er. Abhishek Kaushik

    2014-04-01

    Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented in MATLAB code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, and varying numbers of coefficients were retained to show the resulting image and the error image relative to the original. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image, with the inverse DCT performed on a subset of the DCT coefficients. The error image (the difference between the original and the reconstructed image) is displayed, and the error value for each image is calculated over the various numbers of DCT coefficients selected by the user, to assess the accuracy and compression of the resulting image; the resulting performance is reported in terms of the MSE, i.e. the mean square error.

  4. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  5. Joint Source, Channel Coding, and Secrecy

    Directory of Open Access Journals (Sweden)

    Magli Enrico

    2007-01-01

    Full Text Available We introduce the concept of joint source coding, channel coding, and secrecy. In particular, we propose two practical joint schemes: the first one is based on error-correcting randomized arithmetic codes, while the second one employs turbo codes with compression, error protection, and securization capabilities. We provide simulation results on ideal binary data showing that the proposed schemes achieve satisfactory performance; they also eliminate the need for external compression and ciphering blocks with a significant potential computational advantage.

  6. Joint Source, Channel Coding, and Secrecy

    Directory of Open Access Journals (Sweden)

    Enrico Magli

    2007-09-01

    Full Text Available We introduce the concept of joint source coding, channel coding, and secrecy. In particular, we propose two practical joint schemes: the first one is based on error-correcting randomized arithmetic codes, while the second one employs turbo codes with compression, error protection, and securization capabilities. We provide simulation results on ideal binary data showing that the proposed schemes achieve satisfactory performance; they also eliminate the need for external compression and ciphering blocks with a significant potential computational advantage.

  7. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  8. Pulse Compression Technique of Radio Fuze

    Institute of Scientific and Technical Information of China (English)

    HU Xiu-juan; DENG Jia-hao; SANG Hui-ping

    2006-01-01

    The advantages of using the phase-coded pulse compression technique for radio fuze systems are evaluated. By building mathematical models, a matched filter has been implemented successfully. Various simulations of pulse compression waveform coding were performed to evaluate the performance of the fuze system in a noisy environment. The results of the simulation and the data analysis show that phase-coded pulse compression with a matched filter gives good signal identification for the radio fuze. In addition, a suitable sidelobe suppression filter is established by simulation; the suppressed sidelobe level is acceptable for radio fuze applications.
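
    A tiny numerical illustration of the matched-filter idea described above (not the authors' models; the Barker-13 phase code, delay and noise level are arbitrary assumptions): correlating the noisy echo with the transmitted code compresses the pulse into a sharp peak whose position gives the target delay.

        # Phase-coded pulse compression with a matched filter (illustrative only).
        import numpy as np

        barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

        rng = np.random.default_rng(1)
        echo = np.zeros(200)
        echo[80:80 + 13] += barker13                 # target return starting at sample 80
        echo += 0.5 * rng.standard_normal(200)       # receiver noise

        # Matched filter = correlation with the transmitted phase code.
        mf_out = np.correlate(echo, barker13, mode="same")

        # Peak near sample 86 (delay 80 plus half the code length); sidelobes ~1/13 of the peak.
        print(int(np.argmax(np.abs(mf_out))))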

  9. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in patient monitoring. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, a bit rate control (BRC) or an error control (EC) criterion, were set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT arrhythmia data and with 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
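
    A minimal sketch of the PCA stage only (synthetic beats and an arbitrary component count, not the paper's data or its BRC/EC logic): beats are stacked as rows, a few principal components are retained, and the resulting scores and eigenvectors are what a delta/Huffman encoder would subsequently compress.

        # PCA-based beat compression sketch with placeholder data.
        import numpy as np

        rng = np.random.default_rng(2)
        beats = rng.standard_normal((120, 300))      # 120 extracted beats, 300 samples each (synthetic)

        mean = beats.mean(axis=0)
        centered = beats - mean
        U, S, Vt = np.linalg.svd(centered, full_matrices=False)

        k = 8                                        # principal components retained (assumed)
        scores = centered @ Vt[:k].T                 # (120, k) coefficients passed to the encoder
        reconstructed = scores @ Vt[:k] + mean

        prd = np.linalg.norm(beats - reconstructed) / np.linalg.norm(beats) * 100
        print(f"retained {k} PCs, PRD = {prd:.2f} %")   # PRD is large here because the data are random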

  10. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    Science.gov (United States)

    Yao, Juncai; Liu, Guizhong

    2017-03-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
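
    The per-block pipeline can be pictured with the short sketch below (a stand-in only: the quantization matrix here is an arbitrary ramp, whereas the paper derives its three matrices from HVS contrast sensitivity, and the final Huffman stage is omitted): each 8x8 sub-block is DCT-transformed, divided by the quantization matrix and rounded, and the resulting integers are what the Huffman coder would encode.

        # Block DCT + quantization sketch (scipy.fft.dctn/idctn, placeholder quantization matrix).
        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(3)
        block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one luminance sub-block (synthetic)

        # Placeholder matrix: steps grow with spatial frequency, mimicking an HVS-weighted design.
        q = 8 + 4 * np.add.outer(np.arange(8), np.arange(8))

        coeffs = dctn(block - 128, norm="ortho")
        quantized = np.round(coeffs / q)                  # integers handed to the Huffman coder
        reconstructed = idctn(quantized * q, norm="ortho") + 128

        print(np.abs(block - reconstructed).max())        # distortion introduced by quantization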

  11. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    Science.gov (United States)

    Yao, Juncai; Liu, Guizhong

    2016-07-01

    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) at approximately the same compression ratio could be increased by 2.78% and 5.48%, respectively, compared with joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.

  12. Recent advances in coding theory for near error-free communications

    Science.gov (United States)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  13. Source-channel optimized trellis codes for bitonal image transmission over AWGN channels.

    Science.gov (United States)

    Kroll, J M; Phamdo, N

    1999-01-01

    We consider the design of trellis codes for transmission of binary images over additive white Gaussian noise (AWGN) channels. We first model the image as a binary asymmetric Markov source (BAMS) and then design source-channel optimized (SCO) trellis codes for the BAMS and AWGN channel. The SCO codes are shown to be superior to Ungerboeck's codes by approximately 1.1 dB (64-state code, 10^-5 bit error probability). We also show that a simple "mapping conversion" method can be used to improve the performance of Ungerboeck's codes by approximately 0.4 dB (also 64-state code and 10^-5 bit error probability). We compare the proposed SCO system with a traditional tandem system consisting of a Huffman code, a convolutional code, an interleaver, and an Ungerboeck trellis code. The SCO system significantly outperforms the tandem system. Finally, using a facsimile image, we compare the image quality of an SCO code, an Ungerboeck code, and the tandem code. The SCO code yields the best reconstructed image quality at 4-5 dB channel SNR.

  14. Filtering, Coding, and Compression with Malvar Wavelets

    Science.gov (United States)

    1993-12-01

    [No abstract is available for this record; the indexed text consists only of list-of-figures fragments from the report, including "The Malvar Wavelet Represented in Polyphase Form", "Real and Imaginary Parts of the Complex ...", and reconstructions of a spoken sentence ("... Sleeping Pill") using 1-point and 50% (128-point) overlap.]

  15. Study on Embedded Coding Compression Algorithm Based on Lapped Biorthogonal Transform%一种基于双正交重叠变换的嵌入式编码算法研究

    Institute of Scientific and Technical Information of China (English)

    黄臣; 田昕; 李涛; 田金文

    2012-01-01

    The lapped biorthogonal transform (LBT) is a low-complexity block-based transform that, compared with the traditional discrete cosine transform, reduces blocking artifacts to some extent, and it has been adopted in JPEG XR, the newest static image coding standard of the Joint Photographic Experts Group. Its main advantages are ease of hardware implementation and low complexity in both memory usage and computation time, but it offers no direct control of the compressed stream length. After a detailed study of the JPEG XR coding techniques, this paper proposes an encoding algorithm for a fixed compression ratio: the original quantization step is replaced by embedded bit-plane coding of the transformed coefficients, so that the length of the compressed stream can be controlled accurately.
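
    The rate-control idea, replacing quantization with an embedded bit-plane code that can be cut at an exact budget, can be sketched as follows (a simplified stand-in, not the JPEG XR entropy coder; the coefficient values and the bit budget are arbitrary assumptions).

        # Embedded bit-plane coding with an exact bit budget (illustrative).
        import numpy as np

        def bitplane_stream(coeffs, budget_bits):
            """Emit sign bits, then magnitude bit-planes MSB-first, until the budget is reached."""
            mags = np.abs(coeffs).astype(np.int64)
            bits = [int(c < 0) for c in coeffs]                     # sign plane
            for plane in range(int(mags.max()).bit_length() - 1, -1, -1):
                bits.extend(int(b) for b in (mags >> plane) & 1)
                if len(bits) >= budget_bits:                        # exact rate-control point
                    break
            return bits[:budget_bits]

        coeffs = np.array([37, -5, 12, 0, -1, 3, 0, 0])             # e.g. transformed block coefficients
        stream = bitplane_stream(coeffs, budget_bits=30)
        print(len(stream), stream)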

  16. Distributed multiple description coding

    CERN Document Server

    Bai, Huihui; Zhao, Yao

    2011-01-01

    This book examines distributed video coding (DVC) and multiple description coding (MDC), two novel techniques designed to address the problems of conventional image and video compression coding. Covering all fundamental concepts and core technologies, the chapters can also be read as independent and self-sufficient, describing each methodology in sufficient detail to enable readers to repeat the corresponding experiments easily. Topics and features: provides a broad overview of DVC and MDC, from the basic principles to the latest research; covers sub-sampling based MDC, quantization based MDC,

  17. On Network Functional Compression

    CERN Document Server

    Feizi, Soheil

    2010-01-01

    In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...

  18. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes a few pages of source code, is scaleable in memory size, and may be useful in sensor or cellular networks to spare bandwidth. As we demonstrate, the method allows for battery savings when applied to mobile phones.

  19. Coded continuous wave meteor radar

    OpenAIRE

    2015-01-01

    The concept of coded continuous wave meteor radar is introduced. The radar uses a continuously transmitted pseudo-random waveform, which has several advantages: coding avoids range aliased echoes, which are often seen with commonly used pulsed specular meteor radars (SMRs); continuous transmissions maximize pulse compression gain, allowing operation with significantly lower peak transmit power; the temporal resolution can be changed after ...

  20. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, thereby regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  1. Compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Mosegaard, Klaus

    2014-01-01

    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  2. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I (Fundamentals): Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III). Part II (Still Image Compression): Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding. Part III (Motion Estimation and Compensation): Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation. Part IV (Video Compression): Fundam

  3. Combustion chamber analysis code

    Science.gov (United States)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-05-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  4. Preprocessing of compressed digital video

    Science.gov (United States)

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.

    2000-12-01

    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends both on the content of an image and on the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.

  5. Image Compression based on DCT and BPSO for MRI and Standard Images

    Directory of Open Access Journals (Sweden)

    D.J. Ashpin Pabi

    2016-11-01

    Full Text Available Nowadays, digital image compression has become a crucial component of modern telecommunication systems. Image compression is the process of reducing the total number of bits required to represent an image by reducing redundancies while preserving the image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging and medical imaging, use image compression in order to store and transmit images in an efficient manner. Selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm used for finding an optimal solution from a set of possible values. The advantages of BPSO over other optimization techniques are its higher convergence rate, search ability and overall performance. The proposed technique divides the input image into 8×8 blocks. The Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients. Then, threshold values are obtained from BPSO, and the coefficient values are modified based on this threshold. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
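
    The thresholding step that BPSO optimizes can be illustrated in isolation (a fixed placeholder threshold stands in for the BPSO search, and the quantization and Huffman stages are omitted): coefficients of an 8x8 DCT block whose magnitude falls below the threshold are zeroed before encoding.

        # DCT-coefficient thresholding sketch (the threshold would come from BPSO).
        import numpy as np
        from scipy.fft import dctn

        rng = np.random.default_rng(4)
        block = rng.integers(0, 256, size=(8, 8)).astype(float)   # synthetic 8x8 block

        coeffs = dctn(block, norm="ortho")
        threshold = 10.0                                          # placeholder; chosen per image by BPSO
        kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

        print(int((kept != 0).sum()), "of 64 coefficients survive the threshold")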

  6. IPv6下基于Huffman编码的路径回溯算法研究%Research of IPv6 path reconstruction algorithm based on Huffman code

    Institute of Scientific and Technical Information of China (English)

    胡清钟; 张斌

    2013-01-01

    Packet marking is a commonly used IP traceback technique in which path information is written into the marking field of the IP header; the attack path can then be reconstructed from the marks carried by marked packets, tracing back to the source of the attack. Because the marking space is limited, the marking information is limited as well, and several marked packets are usually needed to reconstruct a single attack path, so path reconstruction algorithms have high complexity and low efficiency and accuracy. To solve this problem, a path reconstruction (traceback) algorithm based on Huffman coding is proposed: the link information associated with the previous-hop router is written into the marking field as a Huffman code, and no marking information needs to be stored at intermediate nodes. The algorithm is suited to IPv6 networks and can accurately reconstruct the attack path from a single marked packet. Experimental results show that the proposed algorithm reconstructs paths quickly, with high efficiency and accuracy.

  7. A Multi-coordinate Linkage Interpolation Method with Huffman Code Tree%用Huffman树实现的多坐标联动插补算法

    Institute of Scientific and Technical Information of China (English)

    李志勇; 赵万生; 张勇

    2003-01-01

    The relative movement values of the coordinates in a multi-axis linkage interpolation command are taken as the node weights, and an interpolation tree is built with the Huffman algorithm; in each interpolation calculation the tree is traversed once using the point-by-point comparison method. The coordinate grouping based on the dynamic Huffman coding tree is optimal and yields the fastest interpolation computation. Taking the number of linked axes as the input when examining interpolation speed, the time complexity of the algorithm is logarithmic. The algorithm is applied in electrical discharge machining (EDM) machine tools for machining integral shrouded turbine blades for aviation and rocket engines.

  8. 基于Huffman编码的文本信息隐藏算法%Algorithm of Text Information Hiding Based on Huffman Coding

    Institute of Scientific and Technical Information of China (English)

    戴祖旭; 洪帆; 董洁

    2007-01-01

    A natural language sentence can be transformed into a part-of-speech tag sequence, i.e. a sentence pattern. This paper proposes an information hiding algorithm based on Huffman coding of sentence patterns: a Huffman code is constructed according to the distribution of sentence patterns, and the secret message is decoded into a sequence of sentence patterns. The positions of these sentence patterns in the cover text serve as the key, and the secret message is extracted by Huffman-compressing the sentence patterns. A formula for the information hiding capacity is given. The algorithm does not require any modification of the cover text.

  9. Design of Experiment Teaching Platform for Huffman coding Based on MATLAB%基于MATLAB的Huffman编码实验教学平台设计

    Institute of Scientific and Technical Information of China (English)

    李荣

    2015-01-01

    To address the calculation problems involved in the experimental teaching of Huffman coding, this paper uses the MATLAB graphical user interface to design and develop a simple and practical experiment teaching platform. The platform combines theory with experiment and provides an effective tool for the experimental teaching of Huffman coding.
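
    The kind of calculation such a platform automates can be reproduced in a few lines (Python is used here merely as a stand-in for the MATLAB GUI; the symbol probabilities are an arbitrary example): build the Huffman code with a priority queue and compare the average codeword length with the source entropy.

        # Huffman code construction plus average-length / entropy comparison (illustrative).
        import heapq
        import math

        def huffman_code(probs):
            heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
            heapq.heapify(heap)
            counter = len(heap)                       # tie-breaker so dicts are never compared
            while len(heap) > 1:
                p1, _, c1 = heapq.heappop(heap)
                p2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + w for s, w in c1.items()}
                merged.update({s: "1" + w for s, w in c2.items()})
                heapq.heappush(heap, (p1 + p2, counter, merged))
                counter += 1
            return heap[0][2]

        probs = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}
        code = huffman_code(probs)
        avg_len = sum(probs[s] * len(w) for s, w in code.items())
        entropy = -sum(p * math.log2(p) for p in probs.values())
        print(code)
        print(f"average length = {avg_len:.2f} bits/symbol, entropy = {entropy:.2f} bits/symbol")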

  10. 比较应用STL实现Huffman编码的两种方法%Comparing Two Ways about Programming of Huffman Coding with STL

    Institute of Scientific and Technical Information of China (English)

    孙宏; 章小莉; 赵越

    2010-01-01

    As a lossless compression method, Huffman coding is widely used in modern communications, multimedia technology and other fields, so studying how to implement the Huffman coding algorithm with the C++ Standard Template Library (STL) is of practical significance. This paper discusses implementing the Huffman coding algorithm using the STL vector container and the heap facilities, compares the performance of the two implementations, and points out issues that require attention when using STL resources.

  11. Efficient Huffman Codes-based Symmetrical-key Cryptography%基于Huffman编码的高效对称密码体制研究

    Institute of Scientific and Technical Information of China (English)

    魏茜; 龙冬阳

    2010-01-01

    The need to store and transmit large-scale data in today's networks has drawn increasing attention to research combining data compression with encryption. Although a code sequence obtained by compressing data with Huffman coding is extremely difficult to break as long as the probability mass function (PMF) of the source symbols is kept secret, the PMF used as the key has poor security and is hard to store and transmit, so the method is difficult to apply in practice. To solve this problem, this paper proposes a highly secure one-time-pad symmetric cryptosystem based on Huffman coding. The scheme generates keys using a polynomial-time Huffman tree reconstruction algorithm together with finite-field interpolation, which keeps the key very short while keeping the cryptosystem hard to break even when part of the key is obtained. The paper also proves the effectiveness and security of the scheme and gives an application example.

  12. LOB Data Exchange Based on Huffman Coding and XML%基于Huffman编码与XML的大对象数据交换

    Institute of Scientific and Technical Information of China (English)

    贾长云; 朱跃龙; 朱敏

    2006-01-01

    XML has been widely used in data exchange platforms as a standard format for heterogeneous data exchange. Because of their large size, multimedia data are usually stored as large objects (LOBs) in databases, so heterogeneous data exchange inevitably involves the exchange of large object data. This paper discusses the principle of Huffman coding, proposes a method for exchanging large object data based on XML using Huffman coding, and designs a corresponding implementation model, which provides a useful reference for implementing large object data exchange between heterogeneous databases.

  13. The MP3 Steganography Algorithm Based on Huffman Coding%基于Huffman编码的MP3隐写算法

    Institute of Scientific and Technical Information of China (English)

    高海英

    2007-01-01

    Considering the coding characteristics of MP3 audio, an audio steganography algorithm based on Huffman codeword substitution is proposed. Compared with previous MP3 steganography algorithms, this algorithm embeds the hidden information directly in the Huffman codewords of the MP3 frame data stream without any partial decoding, and it features high transparency, large embedding capacity and low computational cost. Experiments analyze the algorithm in terms of transparency, embedding capacity and the statistical characteristics of the codewords.

  14. The Demo Animation Design of Huffman Coding Process Based on Flash%基于Flash的Huffman编码过程的演示动画设计

    Institute of Scientific and Technical Information of China (English)

    魏三强

    2013-01-01

    Huffman coding is an important topic in data compression, and it is well worth teaching with the best modern instructional tools. A demonstration animation courseware produced with Flash and its ActionScript programming builds a new visual culture and provides a fairly realistic demonstration of the Huffman coding process. Being intuitive, vivid and easy to learn, it helps improve the efficiency of both teaching and learning the Huffman coding topic.

  15. Efficient coding and decoding algorithm based on generalized Huffman tree%基于广义规范Huffman树的高效编解码算法

    Institute of Scientific and Technical Information of China (English)

    郭建光; 张卫杰; 杨健; 安文韬; 熊涛

    2009-01-01

    To reduce the time and space consumed during encoding and thus suit real-time processing, an efficient data compression algorithm based on a generalized canonical Huffman tree is proposed. The algorithm uses level order and the order of the probability table to guarantee unique encoding and decoding; it replaces searching with an ordered move-and-sort operation, builds an index table to simplify the sorting, and incorporates the idea of balanced coding. A corresponding decoding algorithm is also derived from the encoding idea. Validation on real data shows that, compared with the traditional Huffman algorithm, the proposed algorithm improves time and space efficiency to some extent and produces more balanced codewords.
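
    A simplified piece of the canonical-code idea used above (only the standard canonical assignment, not the paper's generalized scheme with index tables and balanced coding): once code lengths are fixed, codewords can be assigned in (length, symbol) order, so the decoder needs only the lengths.

        # Canonical Huffman codeword assignment from code lengths (illustrative).
        def canonical_codes(lengths):
            order = sorted(lengths, key=lambda s: (lengths[s], s))
            codes, code, prev_len = {}, 0, 0
            for s in order:
                code <<= lengths[s] - prev_len          # append zeros when the length grows
                codes[s] = format(code, "0{}b".format(lengths[s]))
                code += 1
                prev_len = lengths[s]
            return codes

        print(canonical_codes({"a": 1, "b": 3, "c": 3, "d": 3, "e": 3}))
        # -> {'a': '0', 'b': '100', 'c': '101', 'd': '110', 'e': '111'}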

  16. Scalable motion vector coding

    Science.gov (United States)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
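
    The median prediction at the heart of the proposed MVC can be illustrated with a toy motion-vector field (the layout, neighbour choice and values are assumptions for the sketch, not the paper's exact context): each vector is predicted component-wise from the left, top and top-right neighbours, and only the residual would be entropy-coded.

        # Median-based motion vector prediction (toy example).
        import numpy as np

        def median_predict(mv_field, r, c):
            neighbours = []
            if c > 0:
                neighbours.append(mv_field[r, c - 1])                    # left
            if r > 0:
                neighbours.append(mv_field[r - 1, c])                    # top
            if r > 0 and c + 1 < mv_field.shape[1]:
                neighbours.append(mv_field[r - 1, c + 1])                # top-right
            if not neighbours:
                return np.zeros(2, dtype=int)
            return np.median(np.stack(neighbours), axis=0).astype(int)   # component-wise median

        mv_field = np.array([[[1, 0], [2, 1], [2, 0]],
                             [[1, 1], [3, 1], [2, 1]]])                  # 2x3 grid of (dx, dy) vectors
        pred = median_predict(mv_field, 1, 1)
        residual = mv_field[1, 1] - pred
        print(pred, residual)                                            # [2 1] [1 0]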

  17. PARAMETRIC EVALUATION ON THE PERFORMANCE OF VARIOUS IMAGE COMPRESSION ALGORITHMS

    OpenAIRE

    V. Sutha Jebakumari; P. Arockia Jansi Rani

    2011-01-01

    Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms such as block truncation coding, EZW and SPIHT are studied and analyzed; their algorithmic ideas and steps are given. The parameters for all these algorithms are analyzed and the best parameter for each of these compression algorithms is found.

  18. PARAMETRIC EVALUATION ON THE PERFORMANCE OF VARIOUS IMAGE COMPRESSION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    V. Sutha Jebakumari

    2011-05-01

    Full Text Available Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms such as block truncation coding, EZW and SPIHT are studied and analyzed; their algorithmic ideas and steps are given. The parameters for all these algorithms are analyzed and the best parameter for each of these compression algorithms is found.

  19. The Research for Compression Algorithm of Aerial Imagery

    Directory of Open Access Journals (Sweden)

    Zhiyong Peng

    2013-06-01

    Full Text Available In this study, a new method combining the JPEG image compression algorithm with predictive coding is proposed, which effectively eliminates redundant information within sub-blocks and between neighbouring sub-blocks. It achieves a higher compression ratio than the JPEG compression algorithm while maintaining good image quality.

  20. Shock compression of nitrobenzene

    Science.gov (United States)

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi

    1999-06-01

    The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode, in spite of the fact that its calculated heat of detonation is similar to that of TNT, about 1 kcal/g. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so it is expected that nitrobenzene detonates in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparing the Hugoniot and the isotherm, nitrobenzene is in the liquid phase under the shock conditions of these experiments. From the expected phase diagram, shocked nitrobenzene seems to remain a metastable liquid in the solid-phase region of that diagram.

  1. Adaptive Remote Sensing Texture Compression on GPU

    Directory of Open Access Journals (Sweden)

    Xiao-Xia Lu

    2010-11-01

    Full Text Available Considering the properties of remote sensing texture, such as strong randomness and weak local correlation, a novel adaptive compression method based on vector quantization is presented and implemented on the GPU. Utilizing properties of the Human Visual System (HVS), a new similarity measurement function is designed instead of the Euclidean distance. The correlation threshold between blocks can be obtained adaptively according to the properties of different images, without manual intervention. Furthermore, a self-adaptive threshold adjustment during compression is designed to improve the reconstruction quality. Experiments show that the method can handle images of various resolutions adaptively and can achieve a satisfactory compression rate and reconstruction quality at the same time. The index is coded to further increase the compression rate, and the coding is designed to still allow random access to the index. Finally, the compression and decompression process is sped up on the GPU, taking advantage of its parallelism.

  2. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes a few pages of source code, is scaleable in memory size, and may be useful in sensor or cellular networks to spare bandwidth. As we demonstrate, the method allows for battery savings when applied to mobile phones.

  3. Fast, efficient lossless data compression

    Science.gov (United States)

    Ross, Douglas

    1991-01-01

    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  4. Multiple descriptions based wavelet image coding

    Institute of Scientific and Technical Information of China (English)

    陈海林; 杨宇航

    2004-01-01

    We present a simple and efficient scheme that combines multiple description coding with the wavelet transform under the JPEG2000 image coding architecture. To reduce the impact of packet losses, controlled amounts of redundancy are added to the wavelet transform coefficients during the compression process to produce a multiple-description bit-stream of the compressed image. Even if a receiver gets only some of the descriptions (the others being lost), it can still reconstruct the image with acceptable quality. Specifically, the scheme uses not only a high-performance wavelet transform to improve compression efficiency, but also the multiple description technique to enhance the robustness of the compressed image when it is transmitted through unreliable network channels.

  5. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  6. Compression and Encryption of Search Survey Gamma Spectra using Compressive Sensing

    CERN Document Server

    Heifetz, Alexander

    2014-01-01

    We have investigated the application of Compressive Sensing (CS) computational method to simultaneous compression and encryption of gamma spectra measured with NaI(Tl) detector during wide area search survey applications. Our numerical experiments have demonstrated secure encryption and nearly lossless recovery of gamma spectra coded and decoded with CS routines.

  7. DSP accelerator for the wavelet compression/decompression of high- resolution images

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 x 1024 8-bit display, 64 Mbytes of random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  8. Comparative study of lossy and lossless data compression in distributed optical fiber sensing systems

    Science.gov (United States)

    Atubga, David; Wu, Huijuan; Lu, Lidong; Sun, Xiaoyan

    2017-02-01

    Typical fully distributed optical fiber sensors (DOFS) spanning dozens of kilometers are equivalent to tens of thousands of point sensors along the whole monitoring line, which means tens of thousands of data points are generated for each pulse launching period. Therefore, in all-day nonstop monitoring, large volumes of data are created, triggering the demand for large storage space and high-speed data transmission. In addition, when the monitoring length and the number of channels increase, the data volume increases extensively as well. Mitigating the accumulation of large volumes of data, the required storage capacity, and the high-speed data transmission burden is, therefore, the aim of this paper. To demonstrate our idea, we carried out a comparative study of two lossless methods, Huffman and Lempel-Ziv-Welch (LZW), and a lossy data compression algorithm, the fast wavelet transform (FWT), on three distinctive types of DOFS sensing data: Φ-OTDR, P-OTDR, and B-OTDA. Our results demonstrated that FWT yielded the best compression ratio with good computation time, irrespective of errors in signal reconstruction for the three DOFS data types. These outcomes indicate the promising potential of FWT, which makes it suitable, reliable, and convenient for real-time compression of DOFS data. Finally, it was observed that differences in the DOFS data structure have some influence on both the compression ratio and the computational cost.

  9. Lossless wavelet compression on medical image

    Science.gov (United States)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Systems such as clinical image archiving and communication systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data, and efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1, or even more), they do not allow exact reconstruction of the original version of the input data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1, up to 4:1. In our paper, we use a lifting scheme to generate truly lossless integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
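
    The reversible integer-to-integer building block mentioned above can be shown in its simplest form (the Haar / S-transform lifting step; the input samples are arbitrary and this is only one level of a full wavelet pyramid, not the paper's complete scheme): the forward and inverse steps use the same integer operations, so reconstruction is exact.

        # One level of the integer Haar (S-) transform via lifting: perfect reconstruction.
        import numpy as np

        def s_transform(x):
            a, b = x[0::2].astype(int), x[1::2].astype(int)
            d = b - a                      # detail (prediction step)
            s = a + (d >> 1)               # approximation (update step), integer floor
            return s, d

        def inverse_s_transform(s, d):
            a = s - (d >> 1)
            b = d + a
            x = np.empty(2 * len(s), dtype=int)
            x[0::2], x[1::2] = a, b
            return x

        x = np.array([12, 15, 7, 7, 200, 3, 91, 92])
        s, d = s_transform(x)
        assert np.array_equal(inverse_s_transform(s, d), x)    # lossless round trip
        print(s, d)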

  10. Joint Encryption and Compression of Correlated Sources with Side Information

    Directory of Open Access Journals (Sweden)

    Haleem MA

    2007-01-01

    Full Text Available We propose a joint encryption and compression (JEC scheme with emphasis on application to video data. The proposed JEC scheme uses the philosophy of distributed source coding with side information to reduce the complexity of the compression process and at the same time uses cryptographic principles to ensure that security is built into the scheme. The joint distributed compression and encryption is achieved using a special class of codes called high-diffusion (HD codes that were proposed recently in the context of joint error correction and encryption. By using the duality between channel codes and Slepian-Wolf coding, we construct a joint compression and encryption scheme that uses these codes in the diffusion layer. We adapt this cipher to MJPEG2000 with the inclusion of minimal amount of joint processing of video frames at the encoder.

  11. 哈夫曼树Huffman构成原理应用及其数学证明%Application of Huffman Tree Principle and its Mathematical Proof

    Institute of Scientific and Technical Information of China (English)

    江忠

    2016-01-01

    A Huffman tree, also known as an optimal binary tree, is a binary tree constructed to minimize the weighted path length. The weighted path length of a tree is the sum, over all leaf nodes, of each leaf's weight multiplied by its path length to the root (if the root node is at level 0, the path length from a leaf to the root equals the leaf's level). The weighted path length of a binary tree can be written as WPL = W1*L1 + W2*L2 + W3*L3 + ... + Wn*Ln, where the n weights Wi (i = 1, 2, ..., n) belong to a binary tree with n leaf nodes and Li (i = 1, 2, ..., n) is the path length of the corresponding leaf. It can be proved that the WPL of the Huffman tree is the smallest.
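
    The WPL formula can be checked numerically (a generic textbook-style computation, not taken from the article; the weights below are an example): the weighted path length of the Huffman tree equals the sum of the weights of all internal nodes created while merging, which the short routine accumulates.

        # Weighted path length (WPL) of the Huffman tree built from a set of weights.
        import heapq

        def huffman_wpl(weights):
            heap = list(weights)
            heapq.heapify(heap)
            wpl = 0
            while len(heap) > 1:
                a, b = heapq.heappop(heap), heapq.heappop(heap)
                wpl += a + b                  # each merge contributes its combined weight once
                heapq.heappush(heap, a + b)
            return wpl

        weights = [5, 29, 7, 8, 14, 23, 3, 11]
        print(huffman_wpl(weights))           # 271, the minimum WPL for these example weights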

  12. Morphological Transform for Image Compression

    Directory of Open Access Journals (Sweden)

    Luis Pastor Sanchez Fernandez

    2008-05-01

    Full Text Available A new method for image compression based on morphological associative memories (MAMs is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered as a subclass of morphological neural networks. The morphological transform (MT presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using some transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is the processing speed, whereas the compression rate and the signal-to-noise ratio are competitive to conventional transforms.

  13. Embedded foveation image coding.

    Science.gov (United States)

    Wang, Z; Bovik, A C

    2001-01-01

    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.

  14. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive...

  15. Lossless Compression on MRI Images Using SWT.

    Science.gov (United States)

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  16. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  17. Multiple Encryption-based Algorithm of Agricultural Product Trace Code

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    To establish a sound traceability system for agricultural products and guarantee their security, an algorithm is proposed to encrypt the trace code of agricultural products. The original trace code consists of 34 digits indicating such information as place of origin, name of product, date of production and authentication. An area code is used to indicate enterprise information, and the encryption algorithm is designed because of the increasing code length: coding techniques such as number-system (base) conversion and section division are applied for the encrypted conversion of the place-of-origin code and the production date code; moreover, the section identification code and the authentication code are permutated and combined to produce a check code. Through multiple encryption and code length compression, the 34 digits are compressed to 20 while keeping the coding information complete; the shorter code length and better encryption enable the public to obtain information about agricultural products without consulting a professional database.
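
    The base-conversion idea used to shorten the code can be illustrated with a generic sketch (not the published scheme; the alphabet, the example digit string and the choice of base 36 are assumptions): converting a long decimal string to a higher base cuts the number of characters while remaining exactly reversible.

        # Shortening a numeric code by base conversion (illustrative, base 36).
        ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def to_base36(decimal_str):
            n = int(decimal_str)
            out = ""
            while n:
                n, r = divmod(n, 36)
                out = ALPHABET[r] + out
            return out or "0"

        def from_base36(code):
            return str(int(code, 36))

        trace = "2012051612345678901234567890123456"      # 34-digit example code (made up)
        short = to_base36(trace)
        print(len(trace), "->", len(short), short)
        assert from_base36(short) == trace                 # conversion is lossless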

  18. Data compression of scanned halftone images

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Kim S.

    1994-01-01

    A new method for coding scanned halftone images is proposed. It is information-lossy, but still preserves the image quality; compression rates of 16-35 have been achieved for a typical test image scanned on a high resolution scanner. The bi-level halftone images are filtered, in phase with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized...

  19. Is "Compressed Sensing" compressive? Can it beat the Nyquist Sampling Approach?

    CERN Document Server

    Yaroslavsky, L

    2015-01-01

    Measurement redundancy required for sampling and restoration of signals/images using "Compressed sensing (sampling)" techniques is compared with that of their more traditional alternatives. It is shown that "Compressed sensing" is not more compressive than the conventional sampling and that it is inferior in this respect to other available methods of sampling with reduced redundancy such as DPCM coding or random sparse sampling and restoration of image band-limited approximations. It is also shown that assertions that "Compressed sensing" can beat the Nyquist sampling approach are rooted in misinterpretation of the sampling theory.

  20. Predictive depth coding of wavelet transformed images

    Science.gov (United States)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by the universal scalar quantization and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and it does not require any extra memory, the compression results are relatively good.

  1. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  2. Compression-based Similarity

    CERN Document Server

    Vitanyi, Paul M B

    2011-01-01

    First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.

  3. Adaptively Compressed Exchange Operator

    CERN Document Server

    Lin, Lin

    2016-01-01

    The Fock exchange operator plays a central role in modern quantum chemistry. The large computational cost associated with the Fock exchange operator hinders Hartree-Fock calculations and Kohn-Sham density functional theory calculations with hybrid exchange-correlation functionals, even for systems consisting of hundreds of atoms. We develop the adaptively compressed exchange operator (ACE) formulation, which greatly reduces the computational cost associated with the Fock exchange operator without loss of accuracy. The ACE formulation does not depend on the size of the band gap, and thus can be applied to insulating, semiconducting as well as metallic systems. In an iterative framework for solving Hartree-Fock-like systems, the ACE formulation only requires moderate modification of the code, and can be potentially beneficial for all electronic structure software packages involving exchange calculations. Numerical results indicate that the ACE formulation can become advantageous even for small systems with tens...

  4. Content Progressive Coding of Limited Bits/pixel Images

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Forchhammer, Søren

    1999-01-01

    A new lossless context based method for content progressive coding of limited bits/pixel images is proposed. Progressive coding is achieved by separating the image into content layers. Digital maps are compressed up to 3 times better than GIF.

  5. Compressive light field sensing.

    Science.gov (United States)

    Babacan, S Derin; Ansorge, Reto; Luessi, Martin; Matarán, Pablo Ruiz; Molina, Rafael; Katsaggelos, Aggelos K

    2012-12-01

    We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise-ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.

  6. Compression limits in cascaded quadratic soliton compression

    DEFF Research Database (Denmark)

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw;

    2008-01-01

    Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  7. Satellite data compression

    CERN Document Server

    Huang, Bormin

    2011-01-01

    Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultra spectral data. A survey of recent advances in the fields of satellite communications, remote sensing and geographical information systems is included. Satellite Data Compression, contributed by leaders in this field, is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments in several space agencies. Case studies are presented on recent advances in satellite data compression techniques via various prediction-

  8. Lossless Video Sequence Compression Using Adaptive Prediction

    Science.gov (United States)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  9. Improved lossless intra coding for next generation video coding

    Science.gov (United States)

    Vanam, Rahul; He, Yuwen; Ye, Yan

    2016-09-01

    Recently, there have been efforts by the ITU-T VCEG and ISO/IEC MPEG to further improve the compression performance of the High Efficiency Video Coding (HEVC) standard for developing a potential next generation video coding standard. The exploratory codec software of this potential standard includes new coding tools for inter and intra coding. In this paper, we present a new intra prediction mode for lossless intra coding. Our new intra mode derives a prediction filter for each input pixel using its neighboring reconstructed pixels, and applies this filter to the nearest neighboring reconstructed pixels to generate a prediction pixel. The proposed intra mode is demonstrated to improve the performance of the exploratory software for lossless intra coding, yielding a maximum and average bitrate savings of 4.4% and 2.11%, respectively.

  10. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows the "unique decipherability" to be recovered at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. The same problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  11. JND measurements and wavelet-based image coding

    Science.gov (United States)

    Shen, Day-Fann; Yan, Loon-Shan

    1998-06-01

    Two major issues in image coding are the effective incorporation of human visual system (HVS) properties and an effective objective quality measure (OQM) for evaluating images. In this paper, we treat the two issues in an integrated fashion. We build a JND model based on measurements of the JND (Just Noticeable Difference) property of the HVS. We found that the JND not only depends on the background intensity but is also a function of both spatial frequency and pattern direction. The wavelet transform, due to its excellent simultaneous time (space)/frequency resolution, is the best choice for applying the JND model. We mathematically derive an OQM called JND_PSNR that is based on the JND property and wavelet-decomposed subbands. JND_PSNR is more consistent with human perception and is recommended as an alternative to the PSNR or SNR. With the JND_PSNR in mind, we proceed to propose a wavelet- and JND-based codec called JZW. JZW quantizes coefficients in each subband with a step size chosen according to the subband's importance to human perception. Many characteristics of JZW are discussed, its performance evaluated and compared with other well-known algorithms such as EZW, SPIHT and TCCVQ. Our algorithm has a 1-1.5 dB gain over SPIHT even when we use simple Huffman coding rather than the more efficient adaptive arithmetic coding.

  12. MAP decoding of variable length codes over noisy channels

    Science.gov (United States)

    Yao, Lei; Cao, Lei; Chen, Chang Wen

    2005-10-01

    In this paper, we discuss the maximum a-posteriori probability (MAP) decoding of variable length codes (VLCs) and propose a novel decoding scheme for Huffman VLC coded data in the presence of noise. First, we provide some simulation results of VLC MAP decoding and highlight some features that have not yet been discussed in existing work. We show that the improvement of MAP decoding over conventional VLC decoding comes mostly from the memory information in the source, and give some observations regarding the advantage of soft VLC MAP decoding over hard VLC MAP decoding when an AWGN channel is considered. Second, recognizing that the difficulty in VLC MAP decoding is the lack of synchronization between the symbol sequence and the coded bit sequence, which makes parsing from the latter to the former extremely complex, we propose a new MAP decoding algorithm that integrates the information of self-synchronization strings (SSSs), an important feature of the codeword structure, into conventional MAP decoding. A consistent performance improvement and decoding complexity reduction over conventional VLC MAP decoding is achieved with the new scheme.

  13. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
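
    As an illustration of the run-length stage only (a minimal sketch, not the flight implementation, and without the line skipping or convolutional error protection), a binary scan line can be encoded as alternating (value, run length) pairs:

        def run_length_encode(line):
            """Encode a sequence of 0/1 pixels as (value, run_length) pairs."""
            runs = []
            prev, count = line[0], 1
            for pixel in line[1:]:
                if pixel == prev:
                    count += 1
                else:
                    runs.append((prev, count))
                    prev, count = pixel, 1
            runs.append((prev, count))
            return runs

        def run_length_decode(runs):
            """Expand (value, run_length) pairs back to the original pixel sequence."""
            out = []
            for value, count in runs:
                out.extend([value] * count)
            return out

        line = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
        assert run_length_decode(run_length_encode(line)) == line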

  14. Lossless compression of hyperspectral images using hybrid context prediction.

    Science.gov (United States)

    Liang, Yuan; Li, Jianping; Guo, Ke

    2012-03-26

    In this letter a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband predictions. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction. The hybrid context prediction is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of hybrid context prediction is coded by the arithmetic coding. We compare the proposed lossless compression algorithm with some of the existing algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), JPEG-LS. The performance of the proposed lossless compression algorithm is evaluated. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.
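
    The intraband median prediction referred to above is commonly the median edge detector (MED) used in JPEG-LS; a minimal sketch follows (the interband hybrid context prediction and the arithmetic coder are omitted).

        def med_predict(west, north, northwest):
            """Median edge detector (MED) prediction of a pixel from its
               W (left), N (above) and NW (above-left) neighbours."""
            if northwest >= max(west, north):
                return min(west, north)          # likely edge above or to the left
            if northwest <= min(west, north):
                return max(west, north)
            return west + north - northwest      # smooth region: planar prediction

        # The prediction residual is what the entropy coder actually sees.
        x, w, n, nw = 120, 118, 121, 119
        residual = x - med_predict(w, n, nw)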

  15. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds...... for the modern paradigm of data compression based on a modelling and a coding stage. One advantage of contexts is their flexibility, e.g. choosing a two-dimensional (2-D) context facilitates efficient image coding. The area of image coding has greatly been influenced by context adaptive coding, applied e.g. in the lossless JBIG bi-level image coding standard, and in the entropy coding of contemporary lossless and lossy image and video coding standards and schemes. The theoretical work and analysis of universal context based coding has addressed sequences of data and finite memory models as Markov chains and sources...

  16. Tree Coding of Bilevel Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional...... probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels, the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can...... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...

  17. Experimental Study of Fractal Image Compression Algorithm

    Directory of Open Access Journals (Sweden)

    Chetan R. Dudhagara

    2012-08-01

    Full Text Available Image compression applications have been increasing in recent years. Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. In this paper, a study on fractal-based image compression with fixed-size partitioning will be made, analyzed for performance and compared with JPEG, a standard frequency-domain image compression method. Sample images will be used to perform compression and decompression. Performance metrics such as compression ratio, compression time and decompression time will be measured in both cases. The phenomenon of resolution/scale independence will also be studied and described with examples. Fractal algorithms convert image parts into mathematical data called "fractal codes" which are used to recreate the encoded image. Fractal encoding is a mathematical process used to encode bitmaps containing a real-world image as a set of mathematical data that describes the fractal properties of the image. Fractal encoding relies on the fact that all natural, and most artificial, objects contain redundant information in the form of similar, repeating patterns called fractals.

  18. Coding of Depth Images for 3DTV

    DEFF Research Database (Denmark)

    Zamarin, Marco; Forchhammer, Søren

    In this short paper a brief overview of the topic of coding and compression of depth images for multi-view image and video coding is provided. Depth images represent a convenient way to describe distances in the 3D scene, useful for 3D video processing purposes. Standard approaches...

  19. Optimality Of Variable-Length Codes

    Science.gov (United States)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
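
    The variable-length option codes in the Rice family are built from Golomb-Rice codes; a minimal sketch of Rice coding of a nonnegative integer (parameter k >= 1, chosen by the caller) is shown below for illustration.

        def rice_encode(n, k):
            """Encode a nonnegative integer n with Rice parameter k:
               unary-coded quotient (q ones, then a zero) + k-bit remainder."""
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, "0%db" % k)

        def rice_decode(bits, k):
            q = bits.index("0")                         # length of the unary quotient
            return (q << k) | int(bits[q + 1:q + 1 + k], 2)

        assert all(rice_decode(rice_encode(n, 3), 3) == n for n in range(50))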

  20. Wavelet-based Image Compression using Subband Threshold

    Science.gov (United States)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2002-11-01

    Wavelet based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on modification of the original EZW coding. In this lossy technique, we try to discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum weight subband in each level, which contributes the least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to see the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.

  1. Joint Compression-Segmentation of functional MRI Data

    DEFF Research Database (Denmark)

    Zhang, N.; Wu, Mo; Forchhammer, Søren

    2005-01-01

    Functional Magnetic Resonance Imaging (fMRI) data sets are four dimensional (4D) and very large in size. Compression can enhance system performance in terms of storage and transmission capacities. Two approaches are investigated: adaptive DPCM and integer wavelets. In the DPCM approach, each voxel...... information. Each voxel time sequence is DPCM coded using a quantized autoregressive model. The prediction residuals are coded by simple Rice coding for high decoder throughput. In the wavelet approach, the 4D fMRI data set is mapped to a 3D data set, with the 3D volume at each time instance being laid out...... into a 2D plane as a slice mosaic. 3D integer wavelet packets are used for lossless compression of fMRI data. The wavelet coefficients are compressed by 3D context-based adaptive arithmetic coding. An object-oriented compression mode is also introduced in the wavelet codec. An elliptic mask combined...

  2. Lossless/Lossy Compression of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    We present a general and robust method for lossless/lossy coding of bi-level images. The compression and decompression method is analogous to JBIG, the current international standard for bi-level image compression, and is based on arithmetic coding and a template to determine the coding state. Loss......-too-low rate. The current flipping algorithm is intended for relatively fast encoding and moderate latency. By this method, many halftones can be compressed at perceptually lossless quality at a rate which is half of what can be achieved with (lossless) JBIG. The (de)coding method is proposed as part of JBIG-2......, an emerging international standard for lossless/lossy compression of bi-level images....

  3. Holographic codes

    CERN Document Server

    Latorre, Jose I

    2015-01-01

    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and get classified in three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code which is highly non local and finally we compute the topological entanglement entropy of the H-code.

  4. Sharing code

    OpenAIRE

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  5. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  6. Compressing bitmap indexes for faster search operations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
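
    A simplified sketch of the word-aligned idea (31-bit groups packed into 32-bit literal or fill words, loosely modeled on the WAH description above; not the authors' implementation) is given below.

        def wah_encode(bits):
            """Simplified word-aligned encoding: split the bitmap into 31-bit groups;
               a run of identical all-0 or all-1 groups becomes one fill word
               (MSB=1, next bit = fill value, low 30 bits = run length in groups),
               anything else becomes a literal word (MSB=0, low 31 bits = the group)."""
            groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
            if groups and len(groups[-1]) < 31:
                groups[-1] = groups[-1] + [0] * (31 - len(groups[-1]))   # pad last group
            words, i = [], 0
            while i < len(groups):
                g = groups[i]
                if all(b == g[0] for b in g):                 # candidate fill group
                    j = i
                    while j < len(groups) and groups[j] == g:
                        j += 1
                    words.append((1 << 31) | (g[0] << 30) | (j - i))   # fill word
                    i = j
                else:
                    value = 0
                    for b in g:
                        value = (value << 1) | b
                    words.append(value)                       # literal word, MSB stays 0
                    i += 1
            return words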

  7. Improved Techniques for Video Compression and Communication

    Science.gov (United States)

    Chen, Haoming

    2016-01-01

    Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…

  8. Compressive Sensing for Spread Spectrum Receivers

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Jensen, Tobias Lindstrøm; Larsen, Torben

    2013-01-01

    With the advent of ubiquitous computing there are two design parameters of wireless communication devices that become very important: power efficiency and production cost. Compressive sensing enables the receiver in such devices to sample below the Shannon-Nyquist sampling rate, which may lead...... to a decrease in the two design parameters. This paper investigates the use of Compressive Sensing (CS) in a general Code Division Multiple Access (CDMA) receiver. We show that when using spread spectrum codes in the signal domain, the CS measurement matrix may be simplified. This measurement scheme, named...... Compressive Spread Spectrum (CSS), allows for a simple, effective receiver design. Furthermore, we numerically evaluate the proposed receiver in terms of bit error rate under different signal to noise ratio conditions and compare it with other receiver structures. These numerical experiments show that though...

  9. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes

    NARCIS (Netherlands)

    Wilkinson, M.H.F.

    1994-01-01

    A run length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as are used in the tag image file format. These schemes are Lempel-Ziv and Welch (LZW), Macintosh Packbits, and CCIT

  10. Application of one-dimensional modified Huffman code in meteorological facsimile chart coding

    Institute of Scientific and Technical Information of China (English)

    刘惠敏; 刘繁明; 张琳琳

    2008-01-01

    Meteorological facsimile charts carry a very large amount of information. Compressing them not only allows more images to be stored in limited space, but also effectively reduces transmission time, which greatly helps ships at sea obtain weather information promptly and reduce weather-related risk. Here, a one-dimensional modified Huffman code is used to compress meteorological facsimile charts, and a table-lookup method is used to decompress them. Experiments show that the method meets the compression-ratio and compression-speed requirements of meteorological facsimile charts and is therefore feasible.
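
    For illustration, a minimal sketch of classic frequency-driven Huffman code construction is given below; it is not the one-dimensional modified Huffman table of the facsimile standard, and the example symbols are invented.

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            """Build a Huffman code (symbol -> bit string) from a list of symbols."""
            freq = Counter(symbols)
            if len(freq) == 1:                      # degenerate single-symbol input
                return {next(iter(freq)): "0"}
            heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(freq.items())]
            heapq.heapify(heap)
            while len(heap) > 1:
                lo = heapq.heappop(heap)            # two least frequent subtrees
                hi = heapq.heappop(heap)
                lo[2] = {s: "0" + c for s, c in lo[2].items()}
                hi[2] = {s: "1" + c for s, c in hi[2].items()}
                heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
            return heap[0][2]

        data = list("aaaaaabbbccd")                 # toy symbol stream
        code = huffman_code(data)
        encoded = "".join(code[s] for s in data)    # frequent symbols get shorter codes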

  11. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  12. Polar Codes

    Science.gov (United States)

    2014-12-01

    [Front matter of a technical report; the table of contents and list of figures are omitted here.] Polar codes were introduced by E. Arikan in [1]. Under authority of C. A. Wilgenbusch, Head ISR Division. EXECUTIVE SUMMARY: This report describes the results of the project "More reliable wireless...

  13. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    complexity, difficult but natural material is compressed up to 20% better than with coding using lossless JPEG-LS. More complex schemes lower the bit rate even further. A real-time implementation of JPEG-LS may be carried out in a DSP environment or a FPGA environment. Conservative analysis supported...... with actual measurements on a DSP suggests that a real-time implementation may be carried out using about 5 DSPs. An FPGA based solution is estimated to demand 4 or 6 FPGAs (each 40,000 gate equivalent)...

  14. 3D MHD Simulations of Spheromak Compression

    Science.gov (United States)

    Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team

    2015-11-01

    The adiabatic compression of compact tori could lead to a compact and hence low cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100km/s. We show that strong rotational shear (10km/s over 1cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.

  15. Data compression for local correlation tracking of solar granulation

    CERN Document Server

    Löptien, Björn; Duvall, Tom L; Gizon, Laurent; Schou, Jesper

    2015-01-01

    Context. Several upcoming and proposed space missions, such as Solar Orbiter, will be limited in telemetry and thus require data compression. Aims. We test the impact of data compression on local correlation tracking (LCT) of time-series of continuum intensity images. We evaluate the effect of several lossy compression methods (quantization, JPEG compression, and a reduced number of continuum images) on measurements of solar differential rotation with LCT. Methods. We apply the different compression methods to tracked and remapped continuum intensity maps obtained by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory. We derive 2D vector velocities using the local correlation tracking code FLCT and determine the additional bias and noise introduced by compression to differential rotation. Results. We find that probing differential rotation with LCT is very robust to lossy data compression when using quantization. Our results are severely affected by systematic errors of the LCT ...

  16. Squish: Near-Optimal Compression for Archival of Relational Datasets

    Science.gov (United States)

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.

  17. Focus on Compression Stockings

    Science.gov (United States)

    ... the stocking every other day with a mild soap. Do not use Woolite™ detergent. Use warm water ... compression clothing will lose its elasticity and its effectiveness. Compression stockings last for about 4-6 months ...

  18. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  19. Object-based wavelet compression using coefficient selection

    Science.gov (United States)

    Zhao, Lifeng; Kassim, Ashraf A.

    1998-12-01

    In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.

  20. Microbunching and RF Compression

    Energy Technology Data Exchange (ETDEWEB)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  1. MAFCO: a compression tool for MAF files.

    Directory of Open Access Journals (Sweden)

    Luís M O Matos

    Full Text Available In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, general-purpose tools were not specifically designed to compress this kind of data, and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, containing alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source-code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco.

  2. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time

  3. Compressed Encoding for Rank Modulation

    CERN Document Server

    Gad, Eyal En; Jiang,; Bruck, Jehoshua

    2011-01-01

    Rank modulation has been recently proposed as a scheme for storing information in flash memories. While rank modulation has advantages in improving write speed and endurance, the current encoding approach is based on the "push to the top" operation that is not efficient in the general case. We propose a new encoding procedure where a cell level is raised to be higher than the minimal necessary subset - instead of all - of the other cell levels. This new procedure leads to a significantly more compressed (lower charge levels) encoding. We derive an upper bound for a family of codes that utilize the proposed encoding procedure, and consider code constructions that achieve that bound for several special cases.

  4. Hyperspectral data compression

    CERN Document Server

    Motta, Giovanni; Storer, James A

    2006-01-01

    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  5. Compressed gas manifold

    Science.gov (United States)

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  6. Compressing Binary Decision Diagrams

    DEFF Research Database (Denmark)

    Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter

    2008-01-01

    The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...

  9. AN EFFICIENT BTC IMAGE COMPRESSION ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Block truncation coding (BTC) is a simple and fast image compression technique suitable for realtime image transmission, and it has high channel error resisting capability and good reconstructed image quality. The main shortcoming of the original BTC algorithm is the high bit rate (normally 2 bits/pixel). In order to reduce the bit rate, an efficient BTC image compression algorithm was presented in this paper. In the proposed algorithm, a simple look-up-table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane with some extra distortion. The test results prove the effectiveness of the proposed algorithm.
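
    As background, a minimal sketch of the classic two-level BTC step for a single block is shown below (the look-up-table coding of the two means and the bit-plane prediction proposed in the paper are not reproduced).

        import numpy as np

        def btc_block(block):
            """Classic BTC for one image block: threshold at the block mean, then keep
               two reconstruction levels that preserve the first two sample moments."""
            mean = block.mean()
            std = block.std()
            bitplane = block >= mean
            q = bitplane.sum()                 # number of pixels at or above the mean
            m = block.size
            if q in (0, m):                    # flat block: a single level suffices
                return bitplane, mean, mean
            low = mean - std * np.sqrt(q / (m - q))
            high = mean + std * np.sqrt((m - q) / q)
            return bitplane, low, high

        block = np.array([[121, 114,  56,  47],
                          [ 37, 200, 247, 255],
                          [ 16,   0,  12, 169],
                          [ 43,   5,   7, 251]], dtype=float)
        bitplane, low, high = btc_block(block)
        reconstructed = np.where(bitplane, high, low)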

  10. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context based method is presented for content progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on other predefined information. Information from...... already coded layers is used when coding subsequent layers. This approach is combined with efficient template based context bi-level coding, context collapsing methods for multi-level images and arithmetic coding. Relative pixel patterns are used to collapse contexts. The number of contexts is analyzed.... The new methods outperform existing coding schemes coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images....

  11. Compressed Image Transmission Based on Systematic Raptor Codes with Unequal Error Protection

    Institute of Scientific and Technical Information of China (English)

    刘国; 于文慧; 吴家骥; 白宝明

    2013-01-01

    A scheme for image transmission over wireless channels is proposed. Being rateless, fountain codes reduce system complexity and require no feedback channel. Traditional fountain codes, however, have low decoding efficiency, and the quality of the recovered information is sensitive to noise. Based on systematic Raptor codes, the proposed method improves decoding efficiency, since no decoding is needed at all when the channel is ideal. By introducing Unequal Error Protection (UEP), the scheme optimizes the bit rate according to the importance of the information, so that better stability is achieved under different channel conditions. Experimental results show that, compared with traditional error-correcting codes, LT codes, and Raptor codes with Equal Error Protection (EEP), the scheme greatly improves transmission reliability and achieves better reconstructed image quality over the Binary Erasure Channel (BEC).
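
    The rateless property can be illustrated with a toy LT-style fountain encoder: each output symbol XORs a randomly chosen subset of source symbols, and symbols can be generated indefinitely. This sketch is not the systematic Raptor code or the UEP rate optimization of the paper, and its degree distribution is arbitrary.

        import random

        def lt_encode_symbol(source, rng):
            """Produce one fountain-coded symbol: choose a degree d, pick d distinct
               source symbols at random, and XOR them together. The chosen indices
               must accompany the symbol (or be re-derivable from a shared seed)."""
            d = rng.choice([1, 2, 3, 4])                # toy degree distribution
            idx = rng.sample(range(len(source)), d)
            value = 0
            for i in idx:
                value ^= source[i]
            return idx, value

        rng = random.Random(42)
        source = [0x3A, 0x7F, 0x01, 0x9C, 0x55, 0xE2]   # hypothetical source bytes
        stream = [lt_encode_symbol(source, rng) for _ in range(10)]   # rateless: any length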

  12. Wavelet-based embedded zerotree extension to color coding

    Science.gov (United States)

    Franques, Victoria T.

    1998-03-01

    Recently, a new image compression algorithm was developed which employs the wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coder, the Embedded Zerotree Wavelet (EZW), provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all published coding results related to this technique have been on monochrome images. In this paper the author has enhanced the original coding algorithm to yield a better compression ratio, and has extended wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards employ de-correlated components, such as YIQ or Y, CB, CR, and subsampling of the 'chroma' components; such a coding technique is employed here. Results of the coding, including reconstructed images and coding performance, will be presented.

  13. Ultrasound imaging using coded signals

    DEFF Research Database (Denmark)

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate...... methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. On the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how...... coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...

  14. Hybrid Prediction and Fractal Hyperspectral Image Compression

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available The data size of hyperspectral images is too large for storage and transmission, and it has become a bottleneck restricting their applications. It is therefore necessary to study highly efficient compression methods for hyperspectral images. Prediction encoding is easy to realize and has been studied widely in the hyperspectral image compression field. Fractal coding has the advantages of a high compression ratio, resolution independence, and fast decoding, but its application to hyperspectral image compression is not yet common. In this paper, we propose a novel algorithm for hyperspectral image compression based on hybrid prediction and fractal coding. Intraband prediction is applied to the first band and all the remaining bands are encoded by a modified fractal coding algorithm. The proposed algorithm can effectively exploit the spectral correlation in hyperspectral images, since each range block is approximated by the domain block in the adjacent band, which is of the same size as the range block. Experimental results indicate that the proposed algorithm provides very promising performance at low bitrates. Compared to other algorithms, the encoding complexity is lower, the decoded quality is greatly enhanced, and the PSNR can be increased by about 5 dB to 10 dB.

  15. Multiple descriptions based wavelet image coding

    Institute of Scientific and Technical Information of China (English)

    CHEN Hai-lin(陈海林); YANG Yu-hang(杨宇航)

    2004-01-01

    We present a simple and efficient scheme that combines multiple description coding with the wavelet transform under the JPEG2000 image coding architecture. To combat packet losses, controlled amounts of redundancy are added to the wavelet transform coefficients during compression to produce multiple description bit-streams of the compressed image. Even if a receiver gets only some of the descriptions (the others being lost), it can still reconstruct the image with acceptable quality. Specifically, the scheme uses not only a high-performance wavelet transform to improve compression efficiency, but also the multiple description technique to enhance the robustness of the compressed image when transmitted over unreliable network channels.

  16. Optimal context quantization in lossless compression of image data sequences

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl

    2004-01-01

    In image compression context-based entropy coding is commonly used. A critical issue to the performance of context-based image coding is how to resolve the conflict of a desire for large templates to model high-order statistic dependency of the pixels and the problem of context dilution due to in...

  17. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated for lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction...... and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion. Besides descreening and construction of the dictionary, we address graceful degradation and artifact removal....

  18. Coding of hyperspectral imagery using adaptive classification and trellis-coded quantization

    Science.gov (United States)

    Abousleman, Glen P.

    1997-08-01

    A system is presented for compression of hyperspectral imagery. Specifically, DPCM is used for spectral decorrelation, while an adaptive 2D discrete cosine transform coding scheme is used for spatial decorrelation. Trellis coded quantization is used to encode the transform coefficients. Side information and rate allocation strategies are discussed. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This entropy constrained system achieves a compression ratio of greater than 70:1 with an average PSNR of the coded hyperspectral sequence approaching 41 dB.
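
    The spectral DPCM stage can be sketched as predicting each band from the previous one and keeping only the residual; a minimal version follows (the adaptive 2D DCT, trellis coded quantization and rate allocation are omitted).

        import numpy as np

        def spectral_dpcm(cube):
            """Interband DPCM on a hyperspectral cube of shape (bands, rows, cols):
               the first band is kept as-is, each later band is replaced by its
               difference from the previous band."""
            residuals = np.empty_like(cube)
            residuals[0] = cube[0]
            residuals[1:] = cube[1:] - cube[:-1]
            return residuals

        def spectral_dpcm_inverse(residuals):
            # Cumulative sum along the band axis undoes the differencing exactly.
            return np.cumsum(residuals, axis=0)

        cube = np.random.default_rng(1).integers(0, 4096, size=(5, 8, 8))
        assert np.array_equal(spectral_dpcm_inverse(spectral_dpcm(cube)), cube)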

  19. Data Mining Un-Compressed Images from cloud with Clustering Compression technique using Lempel-Ziv-Welch

    Directory of Open Access Journals (Sweden)

    C. Parthasarathy

    2013-07-01

    Full Text Available Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies and organizations want to explore the possibilities and benefits of incorporating such cloud computing services in their business, as well as the possibilities of offering their own cloud services. We mine uncompressed images from the cloud, group them using k-means clustering, and compress them with the Lempel-Ziv-Welch (LZW) coding technique, so that the uncompressed images receive error-free (lossless) compression that exploits their spatial redundancies.
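
    For reference, a minimal LZW encoder over bytes is sketched below; it illustrates the dictionary-based coding stage only and is not tied to the k-means grouping described in the abstract.

        def lzw_encode(data):
            """LZW compression of a byte string into a list of dictionary codes."""
            dictionary = {bytes([i]): i for i in range(256)}   # single-byte seed entries
            next_code = 256
            w = b""
            out = []
            for byte in data:
                wc = w + bytes([byte])
                if wc in dictionary:
                    w = wc                                     # extend the current phrase
                else:
                    out.append(dictionary[w])                  # emit code for known prefix
                    dictionary[wc] = next_code                 # learn the new phrase
                    next_code += 1
                    w = bytes([byte])
            if w:
                out.append(dictionary[w])
            return out

        codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")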

  20. Future trends in image coding

    Science.gov (United States)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  1. On-line structure-lossless digital mammogram image compression

    Science.gov (United States)

    Wang, Jun; Huang, H. K.

    1996-04-01

    This paper proposes a novel on-line structure lossless compression method for digital mammograms during the film digitization process. The structure-lossless compression segments the breast and the background, compresses the former with a predictive lossless coding method and discards the latter. This compression scheme is carried out during the film digitization process and no additional time is required for the compression. Digital mammograms are compressed on-the-fly while they are created. During digitization, lines of scanned data are first acquired into a small temporary buffer in the scanner, then they are transferred to a large image buffer in an acquisition computer which is connected to the scanner. The compression process, running concurrently with the digitization process in the acquisition computer, constantly checks the image buffer and compresses any newly arrived data. Since compression is faster than digitization, data compression is completed as soon as digitization is finished. On-line compression during digitization does not increase overall digitizing time. Additionally, it reduces the mammogram image size by a factor of 3 to 9 with no loss of information. This algorithm has been implemented in a film digitizer. Statistics were obtained based on digitizing 46 mammograms at four sampling distances from 50 to 200 microns.

  2. Wavelet-based image compression using fixed residual value

    Science.gov (United States)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2000-12-01

    Wavelet based compression is getting popular due to its promising compaction properties at low bitrate. Zerotree wavelet image coding scheme efficiently exploits multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to original EZW coder. Contrary to four symbols present in basic EZW scheme, modified algorithm uses eight symbols to generate fewer bits for a given data. Subordinate pass of EZW is eliminated and replaced with fixed residual value transmission for easy implementation. This modification simplifies the coding technique as well and speeds up the process, retaining the property of embeddedness.

  3. Enhancements to MPEG4 MVC for depth compression

    Science.gov (United States)

    Iyer, Kiran Nanjunda; Maiti, Kausik; Navathe, Bilva Bhalchandra; Sharma, Anshul; Bopardikar, Ajit

    2010-07-01

    Depth map is expected to be an essential component of upcoming 3D video formats. In a multiview scenario, along with color (texture), amount of depth information will also increase linearly with the number of views. Therefore various techniques are being explored in the research community to efficiently compress the depth data. In this paper, we propose novel methods of depth compression based on MPEG4 Multiview Video Coding standard (MVC) without any substantial increase in computational complexity. Our aim is to improve depth coding gain with minimal modification to the standard. We present experimental results which indicate a considerable coding gain when compared with MVC.

  4. Three-Dimensional Image Compression With Integer Wavelet Transforms

    Science.gov (United States)

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.

    2000-04-01

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
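
    The reversible integer wavelet idea can be illustrated with the simplest lifting pair, the S transform (integer Haar): integer averages and differences that invert exactly, which is what allows lossless decoding from the same bit stream. The sketch below is one-dimensional; the 3-D extension, zerotree and arithmetic coding are not shown.

        def s_transform_forward(x):
            """1-D S transform (integer Haar via lifting): returns (lowpass, highpass)."""
            low, high = [], []
            for i in range(0, len(x) - 1, 2):
                d = x[i + 1] - x[i]            # detail (highpass)
                a = x[i] + (d >> 1)            # integer average (lowpass)
                low.append(a)
                high.append(d)
            return low, high

        def s_transform_inverse(low, high):
            x = []
            for a, d in zip(low, high):
                x0 = a - (d >> 1)
                x.extend([x0, x0 + d])
            return x

        samples = [10, 12, 9, 7, 30, 31, 4, 0]          # even-length integer signal
        low, high = s_transform_forward(samples)
        assert s_transform_inverse(low, high) == samples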

  5. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    ; alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free...... development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech...... expression in the public realm. The book’s line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing....

  6. Issues in multiview autostereoscopic image compression

    Science.gov (United States)

    Shah, Druti; Dodgson, Neil A.

    2001-06-01

    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two- year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three- dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
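
    The disparity estimation step can be sketched as one-dimensional block matching; the minimal version below uses a sum-of-squared-differences criterion on a synthetic pair of views (the correlation measure and the DPCM combination used in the study are not reproduced).

        import numpy as np

        def block_disparity(left, right, y, x, block=8, max_disp=16):
            """Find the horizontal shift of a block of `left` at (y, x) that best
               matches `right`, by minimising the sum of squared differences (SSD)."""
            ref = left[y:y + block, x:x + block].astype(float)
            best_d, best_err = 0, np.inf
            for d in range(0, max_disp + 1):
                if x - d < 0:
                    break
                cand = right[y:y + block, x - d:x - d + block].astype(float)
                err = np.sum((ref - cand) ** 2)
                if err < best_err:
                    best_d, best_err = d, err
            return best_d

        rng = np.random.default_rng(0)
        right = rng.integers(0, 256, size=(64, 64))
        left = np.roll(right, 5, axis=1)               # synthetic 5-pixel disparity
        d = block_disparity(left, right, y=16, x=32)   # expected to return 5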

  7. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context based arithmetic coding laid the grounds […] e.g. in the lossless JBIG bi-level image coding standard, and in the entropy coding of contemporary lossless and lossy image and video coding standards and schemes. The theoretical work and analysis of universal context based coding has addressed sequences of data and finite memory models as Markov chains and sources […]
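
    To make the notion of context-based modelling concrete, the sketch below forms a small causal context from neighbouring pixels of a bi-level image and collects adaptive per-context counts of the kind an arithmetic coder would use. The three-pixel template and all names are illustrative assumptions; standards such as JBIG use larger templates.

        from collections import defaultdict

        def context_of(img, r, c):
            """Pack three causal neighbours (left, above, above-left) into an integer."""
            def px(rr, cc):
                return img[rr][cc] if rr >= 0 and cc >= 0 else 0
            return (px(r, c - 1) << 2) | (px(r - 1, c) << 1) | px(r - 1, c - 1)

        def model_image(img):
            """Count, per context, how often the current pixel is 1."""
            counts = defaultdict(lambda: [1, 1])       # Laplace-smoothed [zeros, ones]
            for r in range(len(img)):
                for c in range(len(img[0])):
                    ctx = context_of(img, r, c)
                    counts[ctx][img[r][c]] += 1
            return counts

        img = [[0, 0, 1, 1],
               [0, 0, 1, 1],
               [0, 0, 1, 1]]
        for ctx, (n0, n1) in sorted(model_image(img).items()):
            print(f"context {ctx:03b}: P(pixel=1) ~= {n1 / (n0 + n1):.2f}")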

  8. Lossless Digital Image Compression Method for Bitmap Images

    CERN Document Server

    Meyyappan, Dr T; Nachiaban, N M Jeya; 10.5121/ijma.2011.3407

    2011-01-01

    In this research paper, the authors propose a new approach to digital image compression using crack coding. The method starts with the original image and develops crack codes recursively, marking pixels that were visited earlier and expanding in four directions. The proposed method is evaluated on sample bitmap images and the results are tabulated. The method is implemented on a uniprocessor machine in C.
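
    The recursive, four-direction traversal with visited-pixel marking that the abstract describes can be sketched roughly as follows. This is a generic region-tracing illustration in Python, not the authors' crack-coding algorithm, and all names are hypothetical.

        import sys
        sys.setrecursionlimit(10000)

        DIRS = {"R": (0, 1), "D": (1, 0), "L": (0, -1), "U": (-1, 0)}

        def trace(img, r, c, visited, moves):
            """Recursively expand in four directions over pixels of the same value."""
            visited.add((r, c))
            for name, (dr, dc) in DIRS.items():
                nr, nc = r + dr, c + dc
                if (0 <= nr < len(img) and 0 <= nc < len(img[0])
                        and (nr, nc) not in visited
                        and img[nr][nc] == img[r][c]):
                    moves.append(name)
                    trace(img, nr, nc, visited, moves)

        img = [[1, 1, 0],
               [1, 0, 0],
               [1, 1, 1]]
        visited, moves = set(), []
        trace(img, 0, 0, visited, moves)
        print(moves)   # a short sequence of direction symbols for the traced region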

  9. Lossless Medical Image Compression

    Directory of Open Access Journals (Sweden)

    Nagashree G

    2014-06-01

    Image compression has become an important process in today's world of information exchange, and it helps in effective utilization of high-speed network resources. Medical image compression in particular is important for efficient archiving and transmission of images. In this paper, two different approaches for lossless image compression are proposed. One uses a combination of the 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the integer wavelet transform (IWT). To show the effectiveness of the methodology, different image quality parameters are measured and a comparison of the two approaches is presented. We observed increased compression ratios and higher PSNR values.
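
    The lossless property of integer wavelet transforms that the second approach relies on can be illustrated with the Haar S-transform, which maps an integer pair to an integer approximation/detail pair and is exactly invertible, so no information is lost. The sketch below is a generic illustration in Python; the paper's exact IWT and prediction algorithm are not reproduced here.

        def s_transform_pair(a, b):
            d = a - b                # integer detail
            s = b + (d >> 1)         # integer approximation (floor division by 2)
            return s, d

        def inverse_s_transform_pair(s, d):
            b = s - (d >> 1)
            a = b + d
            return a, b

        for a, b in [(100, 98), (7, 12), (-3, 5)]:
            s, d = s_transform_pair(a, b)
            assert inverse_s_transform_pair(s, d) == (a, b)   # perfectly reversible
            print(f"({a}, {b}) -> approx {s}, detail {d}")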

  10. Experiments of cylindrical isentropic compression by ultrahigh magnetic field

    Directory of Open Access Journals (Sweden)

    Gu Zhuowei

    2015-01-01

    The high-explosive magnetic flux implosion compression generator (EMFICG) is a unique high-energy-density dynamic technique, characterized by ultrahigh pressure and low temperature rise, which makes it suitable as a tool for cylindrical isentropic compression. The Institute of Fluid Physics, Chinese Academy of Engineering Physics (IFP, CAEP) has developed the EMFICG technique and realized cylindrical isentropic compression. In the experiments, a seed magnetic field of 5–6 Tesla was first established and then compressed by a stainless steel liner driven by high explosive. The inner free-surface velocity of the sample was measured by PDV. Isentropic compression of a copper sample was verified, with an isentropic pressure of over 100 GPa. The cylindrical isentropic compression process was numerically simulated with a 1D MHD code, and the simulation results were compared with the experiments. Compared with traditional flash X-ray radiography measurements, this method is likely to improve data accuracy.

  11. CODEVECTOR MODELING USING LOCAL POLYNOMIAL REGRESSION FOR VECTOR QUANTIZATION BASED IMAGE COMPRESSION

    OpenAIRE

    P. Arockia Jansi Rani; V. Sadasivam

    2010-01-01

    Image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. In this paper, a still image compression scheme driven by Self-Organizing Map with polynomial regression modeling and entropy coding, employed within the wavelet framework is presented. The image compressibility and interpretability are improved by incorporating noise reduction into the compression scheme. The implementation begins with the classical wavelet decomposition, q...
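
    The encoding step of a vector-quantization scheme of this kind can be sketched as a nearest-codevector search: each image block is replaced by the index of its closest codebook entry, and the decoder looks the block back up in the same codebook. The tiny codebook and block values below are hypothetical, and the SOM training and polynomial-regression modelling of the paper are not shown.

        import numpy as np

        # Hypothetical 3-entry codebook of 4-sample blocks (dark / mid / bright).
        codebook = np.array([[0, 0, 0, 0],
                             [128, 128, 128, 128],
                             [255, 255, 255, 255]], dtype=float)

        def quantize(block):
            """Return the index of the codevector closest to the block (L2 distance)."""
            distances = np.sum((codebook - block) ** 2, axis=1)
            return int(np.argmin(distances))

        blocks = np.array([[10, 5, 0, 3], [120, 130, 126, 131], [250, 255, 251, 254]], dtype=float)
        indices = [quantize(b) for b in blocks]
        reconstructed = codebook[indices]      # what the decoder would rebuild
        print(indices)                         # only the indices need to be transmitted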

  12. Celiac Artery Compression Syndrome

    Directory of Open Access Journals (Sweden)

    Mohammed Muqeetadnan

    2013-01-01

    Celiac artery compression syndrome is a rare disorder characterized by episodic abdominal pain and weight loss. It is the result of external compression of the celiac artery by the median arcuate ligament. We present a case of celiac artery compression syndrome in a 57-year-old male with severe postprandial abdominal pain and a 30-pound weight loss. The patient eventually responded well to laparoscopic surgical division of the median arcuate ligament.

  13. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricité de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); fracture mechanics (G energy release rate, energy release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  14. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app…
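
    The core observation, that a sparse signal can be recovered from far fewer measurements than its length, can be illustrated with a textbook orthogonal matching pursuit (OMP) routine. The sketch below uses a random Gaussian sensing matrix and hypothetical dimensions; it is a generic illustration, not an algorithm taken from the book.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 64, 24, 3                            # signal length, measurements, sparsity
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        y = A @ x                                      # m << n compressed measurements

        def omp(A, y, k):
            """Greedily pick k columns of A that best explain the residual."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x_hat = np.zeros(A.shape[1])
            x_hat[support] = coef
            return x_hat

        x_hat = omp(A, y, k)
        print(np.max(np.abs(x_hat - x)))               # typically close to zero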

  15. Stiffness of compression devices

    Directory of Open Access Journals (Sweden)

    Giovanni Mosti

    2013-03-01

    This issue of Veins and Lymphatics collects papers from the International Compression Club (ICC) Meeting on Stiffness of Compression Devices, which took place in Vienna in May 2012. Several studies have demonstrated that the stiffness of compression products plays a major role in their hemodynamic efficacy. According to the European Committee for Standardization (CEN), stiffness is defined as the pressure increase produced by medical compression hosiery (MCH) per 1 cm increase in leg circumference [1]. In other words, stiffness could be defined as the ability of the bandage/stocking to oppose muscle expansion during contraction.

  16. A Unique Perspective on Data Coding and Decoding

    Directory of Open Access Journals (Sweden)

    Wen-Yan Wang

    2010-12-01

    The concept of a lossless data compression coding method is proposed, followed by a detailed description of each of its steps. Using the Calgary Corpus and Wikipedia data as experimental samples and comparing with existing algorithms such as PAQ or PPMstr, the new coding method can not only compress the source data, but also further re-compress the data produced by other compression algorithms. The final files are smaller, and compared with the original compression ratio, at least 1% of redundancy can be eliminated. The new method is simple and easy to realize. Its theoretical foundation is currently under study. The corresponding Matlab source code is provided in the Appendix.

  17. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  18. Dyadic wavelet for image coding implementation on a Xilinx MicroBlaze processor: application to neutron radiography.

    Science.gov (United States)

    Saadi, Slami; Touiza, Maamar; Kharfi, Fayçal; Guessoum, Abderrezak

    2013-12-01

    In this work, we present a mixed software/hardware implementation of a 2-D signal encoder/decoder using the dyadic discrete wavelet transform (DWT) based on quadrature mirror filters (QMF) and Mallat's fast wavelet algorithm. The work is designed and compiled with the embedded development kit EDK 6.3i and the synthesis software ISE 6.3i, which is available with the Xilinx Virtex-II V2MB1000 FPGA. A Huffman coding scheme is used to encode the wavelet coefficients so that they can be transmitted progressively through an Ethernet TCP/IP-based connection. The possible reconfiguration can be exploited to attain higher performance. The design will be integrated with the neutron radiography system that is used with the Es-Salem research reactor.
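
    A Huffman coding stage of the kind mentioned above can be sketched as follows: symbol frequencies of the (mostly zero-valued) quantized wavelet coefficients drive the construction of a prefix code, so frequent symbols receive short codewords. This is a plain Python sketch with hypothetical coefficient values, not the article's MicroBlaze implementation.

        import heapq
        from collections import Counter

        def huffman_code(symbols):
            """Build a prefix code from symbol frequencies; returns {symbol: bitstring}."""
            counts = Counter(symbols)
            if len(counts) == 1:                       # degenerate single-symbol case
                return {next(iter(counts)): "0"}
            # Heap items: (frequency, tie-breaker, {symbol: code-so-far})
            heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(counts.items())]
            heapq.heapify(heap)
            next_id = len(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)        # two least-frequent subtrees
                f2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + code for s, code in c1.items()}
                merged.update({s: "1" + code for s, code in c2.items()})
                heapq.heappush(heap, (f1 + f2, next_id, merged))
                next_id += 1
            return heap[0][2]

        coefficients = [0, 0, 0, 0, 1, 0, -1, 0, 2, 0, 0, 1]   # mostly-zero wavelet data
        code = huffman_code(coefficients)
        bitstream = "".join(code[c] for c in coefficients)
        print(code, len(bitstream), "bits")            # frequent symbols get short codes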

  19. Identification of Radar Pull-off Jamming Based on Huffman Tree and Backward Cloud Model

    Institute of Scientific and Technical Information of China (English)

    李芳; 熊英; 唐斌

    2013-01-01

    To address the low correct-identification rate of radar jamming in noisy environments, a new method for identifying radar pull-off jamming based on a Huffman tree and the backward cloud model is presented. First, a library of effective identification feature parameters is extracted from the jamming database, and an identification model based on a Huffman tree is then established. At each node of the tree, membership degrees computed with the backward cloud model are used to classify the jamming under test. Simulation results show that, compared with traditional identification methods, the proposed method copes well with the randomness and fuzziness of jamming caused by noise and identifies jamming effectively even when the parameter value ranges partially overlap.

  20. Coded continuous wave meteor radar

    Science.gov (United States)

    Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter

    2016-03-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
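
    The pulse-compression gain of a pseudorandom phase code can be illustrated with a short correlation sketch: a weak, delayed copy of the known code is buried in noise, and cross-correlating the received samples with the code recovers the delay with a processing gain of roughly the code length. The parameters below are hypothetical and the sketch is generic, not the paper's processing chain.

        import numpy as np

        rng = np.random.default_rng(1)
        code_len, true_delay = 1000, 137
        code = rng.choice([-1.0, 1.0], size=code_len)         # binary phase code

        # Received signal: the code delayed by `true_delay` samples, buried in noise.
        rx = np.zeros(code_len + 200)
        rx[true_delay:true_delay + code_len] += 0.2 * code    # weak echo
        rx += rng.standard_normal(rx.size)                    # strong noise

        # Cross-correlate the received signal with the known transmit code; the peak
        # position gives the echo delay, and the coherent gain is roughly code_len.
        corr = np.correlate(rx, code, mode="valid")
        print(int(np.argmax(np.abs(corr))))                   # expected to be ~137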