Sample records for halftone image compression

  1. Data compression of scanned halftone images

    Forchhammer, Søren; Jensen, Kim S.


    A new method for coding scanned halftone images is proposed. It is information-lossy but preserves image quality; compression rates of 16-35 have been achieved for a typical test image scanned on a high-resolution scanner. The bi-level halftone images are filtered, in phase with the halftone grid, and converted to a gray-level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray-level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high-resolution scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods, giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized…
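
    The descreening step described above (filtering the bi-level halftone in phase with the grid and converting it to gray levels) can be illustrated with a minimal sketch; the simple cell-averaging filter and the 4-pixel grid period are simplifying assumptions, not the authors' grid description:

```python
import numpy as np

def descreen(halftone, period=4):
    """Convert a bi-level halftone to gray by averaging ink coverage
    over each period x period halftone cell (a crude stand-in for
    filtering in phase with the halftone grid)."""
    h, w = halftone.shape
    h -= h % period
    w -= w % period
    cells = halftone[:h, :w].reshape(h // period, period, w // period, period)
    # Mean ink coverage per cell -> gray level in [0, 1]
    return cells.mean(axis=(1, 3))

# Toy halftone: an 8x8 bitmap whose left half is fully inked
ht = np.zeros((8, 8), dtype=float)
ht[:, :4] = 1.0
gray = descreen(ht, period=4)
```

Each halftone cell collapses to one gray sample, which is what makes the subsequent gray-level coding scheme applicable.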

  2. Microscopic Halftone Image Segmentation

    WANG Yong-gang; YANG Jie; DING Yong-sheng


    Microscopic halftone image recognition and analysis can provide quantitative evidence for printing quality control and fault diagnosis of printing devices; halftone image segmentation is one of the significant steps in this procedure. Automatic segmentation of microscopic dots is realized with the Fuzzy C-Means (FCM) method, which takes account of the fuzziness of the halftone image and makes full use of its color information. Examples show the technique to be effective and simple, with better noise immunity than some common methods. In addition, the segmentation results obtained by the FCM in different color spaces are compared, which indicates that using the FCM in the f1f2f3 color space is superior to the rest.
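
    The Fuzzy C-Means step can be sketched as follows; this is generic FCM on color feature vectors with an assumed deterministic center initialization, not the paper's f1f2f3-space variant:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=30):
    """Minimal fuzzy C-means: X is (n, d) feature vectors (e.g. pixel
    colors), c clusters, fuzzifier m. Returns membership matrix U (n, c)
    and cluster centers (c, d)."""
    # Assumed initialization: centers seeded from evenly spaced samples
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2 / (m - 1))          # fuzzy memberships
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# Two well-separated "dot" and "paper" color clusters
X = np.vstack([np.full((20, 3), 0.1), np.full((20, 3), 0.9)])
U, centers = fcm(X)
labels = U.argmax(axis=1)
```

Defuzzifying the membership matrix with `argmax` yields the hard dot/paper segmentation.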

  3. Thermodynamics-inspired inverse halftoning via multiple halftone images

    SAIKA Yohei; AOKI Toshizumi


    Based on an analogy between thermodynamics and Bayesian inference, inverse halftoning was formulated using multiple halftone images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. Applying Monte Carlo simulation to a set of snapshots of the Q-Ising model, it was demonstrated that optimal performance is achieved around the Bayes-optimal condition within statistical uncertainty, and that the performance of the Bayes-optimal solution is superior to that of maximum a posteriori (MAP) estimation, which is a deterministic limit of the MPM estimate. These properties were qualitatively confirmed by mean-field theory using an infinite-range model established in statistical mechanics. Additionally, a practical and useful method was constructed using the statistical-mechanical iterative method via the Bethe approximation. Numerical simulations for a 256-grayscale standard image show that the Bethe approximation works as well as the MPM estimation if the parameters are set appropriately.

  4. Evaluation of Graininess for Digital Halftone Images

    Shigeru Kitakubo


    Some results of image recognition tests are given, in which a testee looks at an image and tells whether he/she can recognize a certain figure in it. When studying the digital halftoning process, it is important to discuss the resolution of the human eye, or eye and brain, from the viewpoint of image recognition.

  5. Neural net classification and LMS reconstruction to halftone images

    Chang, Pao-Chi; Yu, Che-Sheng


    The objective of this work is to reconstruct high-quality gray-level images from halftone images, i.e., the inverse halftoning process. We develop high-performance halftone reconstruction methods for several commonly used halftone techniques. For better reconstruction quality, image classification based on halftone technique is placed before the reconstruction process, so that halftone reconstruction can be fine-tuned for each technique. The classification is based on enhanced 1-D correlation of halftone images and processed with a three-layer back-propagation neural network. This classification method reached 100 percent accuracy in our experiments on a limited set of images processed by dispersed-dot ordered dithering, clustered-dot ordered dithering, constrained average, and error diffusion methods. For image reconstruction, we apply the least-mean-square (LMS) adaptive filtering algorithm, which seeks the optimal filter weights and mask shapes. As a result, it yields very good reconstructed image quality. Error diffusion yields the best reconstructed quality among the halftone methods. In addition, the LMS method generates optimal image masks which are significantly different for each halftone method. These optimal masks can also be applied to more sophisticated reconstruction methods as the default filter masks.
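
    The LMS reconstruction idea (adapting filter weights so a halftone neighbourhood predicts the original gray value) can be sketched on a toy case; the 3x3 mask, step size, and the checkerboard "halftone" of a flat 50% gray are assumed illustration values, not the paper's setup:

```python
import numpy as np

def lms_train(halftone, original, mask=3, mu=0.02, epochs=10):
    """Train an LMS reconstruction filter: predict each original gray
    pixel from the mask x mask halftone neighbourhood around it."""
    w = np.zeros((mask, mask))
    m = mask
    for _ in range(epochs):
        for i in range(original.shape[0] - m + 1):
            for j in range(original.shape[1] - m + 1):
                window = halftone[i:i + m, j:j + m]
                y = (w * window).sum()                  # current prediction
                err = original[i + m // 2, j + m // 2] - y
                w += mu * err * window                  # LMS weight update
    return w

ii, jj = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
halftone = ((ii + jj) % 2).astype(float)   # checkerboard rendering of flat 50% gray
original = np.full((16, 16), 0.5)
w = lms_train(halftone, original)
pred = (w * halftone[0:3, 0:3]).sum()      # reconstruct one pixel
```

After training, the filter maps both checkerboard phases back to the flat 50% gray value, which is the least-mean-square fixed point for this toy input.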

  6. Oriented modulation for watermarking in direct binary search halftone images.

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der


    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding while maintaining high image quality: it is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation of the image quality. To achieve high image quality, the parallel oriented high-efficiency direct binary search (DBS) halftoning is integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness required for printing applications.

  7. Evaluation of digital halftone images by vector error diffusion

    Kouzaki, Masahiro; Itoh, Tetsuya; Kawaguchi, Takayuki; Tsumura, Norimichi; Haneishi, Hideaki; Miyake, Yoichi


    The vector error diffusion (VED) method is applied to produce digital halftone images on a 600-dpi electrophotographic printer. The objective image quality of the obtained images is evaluated and analyzed. As a result, in the color reproduction of halftone images by the VED method, it was clear that there are large color differences between target and printed colors, typically in the mid-tone colors. We consider this to be due to printer properties, including dot gain. It was also clear that the color noise of the VED method is larger than that of the conventional scalar error diffusion method in some patches. It was notable that nonuniform patterns are generated by the VED method.

  8. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Zhigao Zeng


    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy, with the added benefit of robustness in tackling noise.
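
    The final stage, "an idea similar to the nearest centroid classifier", can be sketched generically; the toy 2-D features below stand in for the dimension-reduced halftone descriptors and are not from the paper:

```python
import numpy as np

def nearest_centroid(train_feats, train_labels, query):
    """Assign each query vector to the class whose mean (centroid)
    feature is closest in Euclidean distance."""
    classes = np.unique(train_labels)
    centroids = np.array([train_feats[train_labels == c].mean(axis=0)
                          for c in classes])
    d = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy 2-D features standing in for reduced halftone descriptors
feats = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
pred = nearest_centroid(feats, labels, np.array([[0.1, 0.0], [1.0, 0.9]]))
```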

  9. Halftone Coding with JBIG2

    Martins, Bo; Forchhammer, Søren


    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The coding method is inherently lossy, and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion. Besides descreening and construction of the dictionary, we address graceful degradation and artifact removal.

  10. Halftone Coding with JBIG2

    Martins, Bo; Forchhammer, Søren


    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy, and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.


    Fernando Pelcastre


    Full Text Available Halftoning is an indispensable technique for showing digital images on screen and printing them on paper using any kind of printer, such as inkjet and laser. Additionally, the halftoning technique has recently been employed in several applications in the computation and communication fields, such as compression and authentication of images, visual cryptography, etc. This article provides a detailed review of the main halftoning methods: ordered dither, error diffusion, error diffusion with edge emphasis, dot diffusion, green noise, and direct binary search. To analyze the advantages and disadvantages of each halftoning method, a quality comparison of the halftone images generated by these methods was performed using Mean Opinion Score (MOS) measurement. Likewise, the computational complexity of each halftoning method was taken into consideration.

  12. Filters involving derivatives with application to reconstruction from scanned halftone images

    Forchhammer, Søren; Jensen, Kim S.


    This paper presents a method for designing finite impulse response (FIR) filters for samples of a 2-D signal, e.g., an image, and its gradient. The filters, which are called blended filters, are decomposable into three filters, each separable in 1-D filters on subsets of the data set. Optimality in the minimum mean square error (MMSE) sense of blended filtering is shown for signals with a separable autocorrelation function. Relations between correlation functions for signals and their gradients are derived. Blended filters may be composed from FIR Wiener filters using these relations. Simple blended filters are developed and applied to the problem of gray-value image reconstruction from bi-level (scanned) clustered-dot halftone images, an application useful in the graphic arts. Reconstruction results are given, showing that reconstruction with higher resolution than the halftone grid…

  13. Half-Tone Video Images Of Drifting Sinusoidal Gratings

    Mulligan, Jeffrey B.; Stone, Leland S.


    Digital technique for generation of slowly moving video image of sinusoidal grating avoids difficulty of transferring full image data from disk storage to image memory at conventional frame rates. Depends partly on trigonometric identity by which moving sinusoidal grating decomposed into two stationary patterns spatially and temporally modulated in quadrature. Makes motion appear smooth, even at speeds much less than one-tenth picture element per frame period. Applicable to digital video system in which image memory consists of at least 2 bits per picture element, and final brightness of picture element determined by contents of "lookup-table" memory programmed anew each frame period and indexed by coordinates of each picture element.
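
    The trigonometric identity the technique depends on, sin(kx - wt) = sin(kx)cos(wt) - cos(kx)sin(wt), lets each frame be formed by reweighting two stationary patterns stored once in image memory; a minimal numeric sketch (the spatial and temporal frequencies are arbitrary toy values):

```python
import numpy as np

# Two stationary quadrature patterns, written to image memory once
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
k = 4.0                      # spatial frequency (cycles across the image)
p1 = np.sin(k * x)
p2 = np.cos(k * x)

def frame(t, omega=0.1):
    """Drifting grating sin(kx - omega*t), synthesized per frame by
    reweighting the two static patterns (the lookup-table analogue:
    only the two weights change each frame period)."""
    return np.cos(omega * t) * p1 - np.sin(omega * t) * p2
```

Because only two scalar weights change per frame, no bulk image transfer is needed, which is the point of the technique.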

  14. Halftone visual cryptography.

    Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni


    Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning, which hinders the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void-and-cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. Simulations show that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
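
    The share construction can be illustrated with the classic (2,2) scheme using 1x2 pixel expansion; this is the textbook construction, not the paper's void-and-cluster halftone method:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_shares(secret):
    """(2,2) visual cryptography: each secret pixel (1=black) becomes a
    1x2 block in two shares; stacking transparencies ORs the ink."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            a = rng.integers(0, 2)           # random [ink, blank] order
            s1[i, 2 * j:2 * j + 2] = [a, 1 - a]
            if secret[i, j]:                 # black: complementary blocks -> stack fully inked
                s2[i, 2 * j:2 * j + 2] = [1 - a, a]
            else:                            # white: identical blocks -> stack half inked
                s2[i, 2 * j:2 * j + 2] = [a, 1 - a]
    return s1, s2

secret = np.array([[1, 0], [0, 1]])
s1, s2 = make_shares(secret)
stacked = s1 | s2            # superimposing transparencies = OR of ink
```

Each share alone has exactly one inked pixel per block regardless of the secret, so a single transparency carries no information.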

  15. Lossless/Lossy Compression of Bi-level Images

    Martins, Bo; Forchhammer, Søren


    …e.g., halftoning and text, without any segmentation of the image. The decoding is analogous to the decoder of JBIG, which means that software implementations easily have a throughput of 1 Mpixel per second. In general, the flipping method can target the lossy image for a given not-too-large distortion or not-too-low rate. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.

  16. Multimedia Data Hiding and Authentication via Halftoning and Coordinate Projection

    Wu Chai Wah


    Full Text Available We present image data hiding and authentication schemes based on halftoning and coordinate projection. The proposed data hiding scheme can embed images of the same size and similar bit depth into the cover image and robustness against compression is demonstrated. The image authentication scheme is based on the data hiding scheme and can detect, localize, and repair the tampered area of the image. Furthermore, the self-repairing feature of the authentication scheme has a hologram-like quality; any portion of the image can be used to reconstruct the entire image, with a greater quality of reconstruction as the portion size increases.

  17. Demystifying the Halftoning Process: Conventional, Stochastic, and Hybrid Halftone Dot Structures

    Oliver, Garth R.; Waite, Jerry J.


    For more than 150 years, printers have been faithfully reproducing continuous tone originals using halftoning techniques. For about 120 years, printers could only use the AM halftoning technique invented by Henry Talbot. In recent years, the advent of powerful raster image processors and high-resolution output devices has increased the variety of…

  18. Lossless/Lossy Compression of Bi-level Images

    Martins, Bo; Forchhammer, Søren


    We present a general and robust method for lossless/lossy coding of bi-level images. The compression and decompression method is analogous to JBIG, the current international standard for bi-level image compression, and is based on arithmetic coding and a template to determine the coding state. Loss… The flipping method can target the lossy image for a given not-too-large distortion or not-too-low rate. The current flipping algorithm is intended for relatively fast encoding and moderate latency. By this method, many halftones can be compressed at perceptually lossless quality at a rate which is half of what can be achieved with (lossless) JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.

  19. Lossless Medical Image Compression

    Nagashree G


    Full Text Available Image compression has become an important process in today's world of information exchange. Image compression helps in effective utilization of high-speed network resources. Medical image compression is very important in the present world for efficient archiving and transmission of images. In this paper, two different approaches for lossless image compression are proposed. One uses the combination of 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the integer wavelet transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and a comparison of both approaches is shown. We observed increased compression ratios and higher PSNR values.
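
    The integer wavelet transform used in the second approach can be illustrated with the 1-D integer Haar (S) transform, whose integer lifting steps make it exactly invertible and hence usable for lossless coding; this is a generic sketch, not the paper's specific predictor:

```python
import numpy as np

def s_transform(x):
    """1-D integer Haar (S) transform: integer averages and differences,
    exactly invertible, so no information is lost."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = a - b                       # detail (difference) band
    s = b + (d >> 1)                # approximation band = floor mean
    return s, d

def inv_s_transform(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    b = s - (d >> 1)
    a = d + b
    x = np.empty(2 * len(s), dtype=int)
    x[0::2], x[1::2] = a, b
    return x

x = np.array([12, 10, 255, 0, 7, 7])
s, d = s_transform(x)
```

Coding the (typically small) detail band `d` instead of raw samples is what yields compression, while the round trip stays bit-exact.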

  20. Wavelet image compression

    Pearlman, William A


    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high-level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. It then covers the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 Standard.

  1. Polyomino-Based Digital Halftoning

    Vanderhaeghe, David


    In this work, we present a new method for generating a threshold structure. This kind of structure can be advantageously used in various halftoning algorithms such as clustered-dot or dispersed-dot dithering, error diffusion with threshold modulation, etc. The proposed method is based on rectifiable polyominoes -- a non-periodic hierarchical structure which tiles the Euclidean plane with no gaps. Each polyomino contains a fixed number of discrete threshold values. Thanks to its inherent non-periodic nature, combined with off-line optimization of threshold values, our polyomino-based threshold structure shows blue-noise spectral properties. The halftone images produced with this threshold structure have high visual quality. Although the proposed method is general and can be applied to any polyomino tiling, we consider one particular case: tiling with G-hexominoes. We compare our polyomino-based threshold structure with the best known state-of-the-art methods for generating threshold matrices, and conclude…
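
    How a threshold structure is used in dithering can be sketched with the classical periodic Bayer matrix; this only illustrates the thresholding mechanism, since the paper's polyomino structure is non-periodic:

```python
import numpy as np

def bayer(n):
    """Recursively build a 2^n x 2^n Bayer (dispersed-dot) threshold matrix."""
    M = np.array([[0, 2], [3, 1]])
    for _ in range(n - 1):
        M = np.block([[4 * M + 0, 4 * M + 2],
                      [4 * M + 3, 4 * M + 1]])
    return M

def ordered_dither(gray, M):
    """Threshold each pixel of gray (values in [0, 1]) against the
    tiled threshold matrix."""
    s = M.shape[0]
    thresh = (M + 0.5) / (s * s)               # midpoint threshold levels
    h, w = gray.shape
    tiled = np.tile(thresh, (h // s + 1, w // s + 1))[:h, :w]
    return (gray > tiled).astype(int)

gray = np.full((8, 8), 0.5)                    # flat 50% gray patch
ht = ordered_dither(gray, bayer(2))
```

A threshold structure with better spatial distribution of the same values (e.g. the polyomino structure above) changes only the matrix, not this thresholding step.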

  2. Image compression for dermatology

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.


    Color 35mm photographic slides are commonly used in dermatology for education and patient records. An electronic storage and retrieval system for digitized slide images may offer advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper contains an evaluation of one such compression method that uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.

  3. Image data compression investigation

    Myrie, Carlos


    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression using two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
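
    The DPCM idea, transmitting quantized prediction errors instead of raw samples, can be sketched row-wise; the left-neighbour predictor and the step size are assumed toy choices:

```python
import numpy as np

def dpcm_encode(img, qstep=8):
    """Row-wise DPCM: predict each pixel by its reconstructed left
    neighbour and quantize the prediction error. Returns the quantized
    residuals (what would be transmitted) and the decoder-matched
    reconstruction."""
    img = img.astype(int)
    residuals = np.zeros_like(img)
    recon = np.zeros_like(img)
    for i in range(img.shape[0]):
        prev = 0                                       # predictor state per row
        for j in range(img.shape[1]):
            e = img[i, j] - prev
            q = int(round(e / qstep))                  # quantized residual
            residuals[i, j] = q
            prev = int(np.clip(prev + q * qstep, 0, 255))  # track decoder state
            recon[i, j] = prev
    return residuals, recon

ramp = np.tile(np.arange(0, 256, 32), (4, 1))          # smooth 4x8 test image
res, rec = dpcm_encode(ramp, qstep=8)
```

For smooth imagery the residuals occupy a far smaller range than the raw 8-bit samples, which is where the coding gain of DPCM over plain PCM comes from.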

  4. Progressive halftone watermarking using multilayer table lookup strategy.

    Guo, Jing-Ming; Lai, Guo-Hung; Wong, Koksheik; Chang, Li-Chung


    In this paper, a halftoning-based multilayer watermarking of low computational complexity is proposed. An additional data-hiding technique is also employed to embed multiple watermarks into the watermark to be embedded, to improve security and embedding capacity. At the encoder, the efficient direct binary search method is employed to generate 256 reference tables to ensure the output is in halftone format. Subsequently, watermarks are embedded by a set of optimized compressed tables with various textural angles for table lookup. At the decoder, the least-mean-square metric is considered to increase the differences among the generated phenotypes of the embedding angles and reduce the required number of dimensions for each angle. Finally, the naïve Bayes classifier is employed to collect the possibilities of multilayer information for classifying the associated angles to extract the embedded watermarks. These decoded watermarks can be further overlapped to retrieve the additional hidden-layer watermarks. Experimental results show that the proposed method requires only 8.4 ms to embed a watermark into an image of size 512×512, under the 32-bit Windows 7 platform running on an Intel Core i7 (Sandy Bridge) with 4 GB RAM and the Visual Studio 2010 IDE. Finally, only 2 MB is required to store the proposed compressed reference table.

  5. Parallel halftoning technique using dot diffusion optimization

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara


    In this paper, a novel approach for halftone images is proposed and implemented for images obtained by the dot diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm, and consists of generating new versions of the class matrix that have no barons or near-barons, in order to minimize inconsistencies during the distribution of the error. Each proposed class matrix has different properties and is designed for one of two different applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good quality halftone images and inverse halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented for real-time processing.

  6. Steganography in clustered-dot halftones using orientation modulation and modification of direct binary search

    Chen, Yung-Yao; Hong, Sheng-Yi; Chen, Kai-Wen


    This paper proposes a novel message-embedded halftoning scheme that is based on orientation modulation (OM) encoding. To achieve high image quality, we employ a human visual system (HVS)-based error metric between the continuous-tone image and a data-embedded halftone, and integrate a modified direct binary search (DBS) framework into the proposed message-embedded halftoning method. The modified DBS framework ensures that the resulting data-embedded halftones have optimal image quality from the viewpoint of the HVS.

  7. Compressive Transient Imaging

    Sun, Qilin


    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detecting gate window has a precise phase delay at each cycle. After capturing enough points, we are able to make up a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm which is able to denoise at the same time for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only one single sensor, while an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  8. Image Compression Algorithms Using Dct

    Er. Abhishek Kaushik


    Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components; it is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented using MATLAB code, and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, and varying coefficients for compression were used to show the resulting image and the error image derived from the original images. Image compression is studied using the 2-D discrete cosine transform. The original image is transformed in 8-by-8 blocks and then inverse-transformed in 8-by-8 blocks to create the reconstructed image. The inverse DCT is performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed image) is displayed. The error value for every image is calculated over various values of DCT coefficients as selected by the user, and is displayed at the end to indicate the accuracy and compression of the resulting image; the resulting performance parameter is indicated in terms of MSE, i.e., mean square error.
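
    The 8-by-8 block DCT round trip described above can be sketched directly with an orthonormal DCT-II matrix; the coefficient-selection threshold below is an arbitrary illustration, not the article's user-selected subset:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that coeffs = C @ block @ C.T
    and the inverse is block = C.T @ coeffs @ C."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)                   # DC row normalization
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)    # one 8x8 image block
coeffs = C @ block @ C.T                            # forward 2-D DCT
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)  # crude coefficient subset
recon = C.T @ kept @ C                              # inverse 2-D DCT
```

Keeping only a subset of coefficients is the lossy step; with the full coefficient set the inverse DCT reproduces the block exactly, since C is orthonormal.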

  9. Compressive sensing in medical imaging.

    Graff, Christian G; Sidky, Emil Y


    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  10. Image quality (IQ) guided multispectral image compression

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik


    Image compression is necessary for data transmission, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing IQ measurement versus compression parameter for a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
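
    The RMSE and PSNR metrics mentioned above are straightforward to compute; a minimal sketch (the 8-bit peak value of 255 is an assumption about the image depth):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a decompressed image."""
    return float(np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the
    reference (infinite for identical images)."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

ref = np.zeros((4, 4))
noisy = ref + 5.0            # uniform error of 5 gray levels
```

A compression parameter sweep would evaluate `psnr` (or SSIM) on each decompressed result, giving the IQ-versus-parameter points that the regression models are fitted to.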

  11. Image compression in local helioseismology

    Löptien, Björn; Gizon, Laurent; Schou, Jesper


    Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...

  12. Image Compression using GSOM Algorithm



    Conventional techniques such as Huffman coding, the Shannon-Fano method, LZ methods, run-length encoding, and LZ-77 are established methods for data compression. A traditional approach to reducing the large amount of data would be to discard some data redundancy, introducing some noise after reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique that may be a reliable and efficient way to achieve vector quantization; a typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes that obtain a better compression rate than classical methods such as JPEG without reducing the image quality. The experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG, and TIFF files.
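A GSOM grows its map of neurons during training and preserves topology; as a minimal illustration of just the vector-quantization core that such networks implement, here is a plain k-means (LBG-style) codebook sketch. The block data and initial codebook are hypothetical, and this is not the paper's GSOM algorithm:

```python
def quantize(vecs, codebook):
    """Map each vector to the index of its nearest codeword."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: d2(v, codebook[k]))
            for v in vecs]

def train(vecs, codebook, iters=10):
    """Plain k-means (LBG) refinement: assign vectors, then re-center."""
    for _ in range(iters):
        idx = quantize(vecs, codebook)
        for k in range(len(codebook)):
            members = [v for v, i in zip(vecs, idx) if i == k]
            if members:
                codebook[k] = [sum(c) / len(members) for c in zip(*members)]
    return codebook

# 2x2 image blocks flattened to 4-vectors; two clusters: dark and bright.
blocks = [[10, 12, 11, 13], [9, 11, 10, 12], [200, 202, 201, 199],
          [198, 201, 200, 202]]
cb = train(blocks, [[0, 0, 0, 0], [255, 255, 255, 255]])
print(quantize(blocks, cb))  # [0, 0, 1, 1]
```

Compression then stores one codebook index per block instead of the block's pixels.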

  13. Statistical Modelling of Print half-tone mottle in PET-G and PVC Shrink Films

    Akshay V Joshi


    PVC and PET-G (glycol-modified polyethylene terephthalate) have the highest consumption in the shrink-sleeve market due to their high shrink abilities and cost effectiveness. The reproduction of fine tonal detail on these films is challenging, as graininess and image noise result in print defects such as print half-tone mottle. Print half-tone mottle is visually disturbing and leads to wastage of ink, substrate, and time. The purpose of this study is to investigate the effect of gravure process parameters, viz. ink viscosity, press speed, impression hardness, and line screen, and to develop a statistical model for print half-tone mottle in shrink films. The baseline for print half-tone mottle was determined by conducting production runs on press with a defined set of process parameters, and the target was set to minimize it from the baseline. The half-tone area was scanned and processed through an SFDA algorithm to calculate print half-tone mottle. A design of experiments (DOE) was generated for the above-mentioned process parameters and analysed by analysis of variance (ANOVA) to find the significant factors affecting print half-tone mottle. The analysis revealed line screen, viscosity, and hardness as significant factors in minimizing print half-tone mottle. The results showed a reduction of print half-tone mottle by 28% for both PVC and PET-G films. Furthermore, a regression model was developed and validated for print half-tone mottle, achieving correlation coefficients (R²) of 0.8696 and 0.879 for PET-G and PVC, respectively. The proposed model is helpful in determining the impact of gravure process parameters and in predicting print half-tone mottle in shrink films.

  14. Impact of Electrostatic Assist on Halftone Mottle in Shrink Films

    Akshay V. Joshi


    Gravure printing delivers intricate print quality and exhibits good feasibility for long-run packaging jobs. PVC and PET-G are widely used shrink films printed by the gravure process. Variation in ink transfer from the gravure cells onto the substrate results in print mottle. The variation is inevitable and requires close monitoring, with tight control of process parameters, to deliver good dot fidelity. Electrostatic assist (ESA) in gravure improves ink transfer efficiency but is greatly influenced by ESA parameters such as air gap (the distance between the charge bar and the impression roller) and voltage. Moreover, it is imperative to study the combined effect of ESA and gravure process parameters such as line screen, viscosity, and speed on the minimization of half-tone mottle in shrink films. A general full-factorial design was performed for the above-mentioned parameters to evaluate half-tone mottle. The significance of both main effects and interactions was studied by an ANOVA approach. The statistical analysis revealed the significance of all the process parameters, with viscosity, line screen, and voltage being the major contributors to minimizing half-tone mottle. The optimized setting showed a reduction in half-tone mottle of 33% and 32% for PVC and PET-G, respectively. The developed regression model was tested and showed more than 95% predictability. Furthermore, the uniformity of the dots was measured by the image to non-image area ratio distribution. The results showed a reduction in half-tone mottle with uniform dot distribution.

  15. Recent trends in digital halftoning

    Delabastita, Paul A.


    Screening is perhaps the oldest form of image processing. The word refers to the mechanical cross-line screens that were used at the beginning of this century for the purpose of photomechanical reproduction. Later on, these mechanical screens were replaced by photographic contact screens that enabled significantly improved process control. In the early eighties, the optical screening on graphic arts scanners was replaced by a combination of laser optics and electronic screening. The algorithms, however, were still digital implementations of the original optical methods. The printing needs in the fast-growing computer and software industry gave birth to a number of alternative printing technologies such as electrophotographic and inkjet printing. Originally these devices were designed only for printing text, but soon people started experimenting and using them for printing images. The relatively low spatial resolutions of these new devices, however, made a complete review of 'the screening issue' necessary to achieve an acceptable image quality. In this paper a number of recent developments in screening technology are summarized. Special attention is given to the interaction that exists between a halftone screen and the printing devices on which it is rendered, including the color-mixing behavior. Improved screening techniques are presented that take advantage of modeling the physical behavior of the rendering device.

  16. Compressive Sensing for Quantum Imaging

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. 
The technique gives a theoretical speedup of N²/log N for N-dimensional entanglement over the standard raster-scanning technique.

  17. Compressive passive millimeter wave imager

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C


    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
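A minimal sketch of Hadamard-mask acquisition, assuming the Sylvester construction and noise-free measurements. With the full set of rows the back-projection below is exact; with only a subset (as in the compressive imager) it is an approximation that practical systems refine with sparse-recovery solvers:

```python
def hadamard(n):
    """Sylvester construction: n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def measure(H, x, rows):
    """One bucket-detector reading per selected mask row (+/-1 pattern)."""
    return [sum(H[r][j] * x[j] for j in range(len(x))) for r in rows]

def reconstruct(H, y, rows, n):
    """Back-projection H^T y / n; exact when all n rows are sampled,
    approximate (to be refined by sparse solvers) otherwise."""
    x = [0.0] * n
    for k, r in enumerate(rows):
        for j in range(n):
            x[j] += H[r][j] * y[k] / n
    return x

n = 8
H = hadamard(n)
scene = [3, 1, 4, 1, 5, 9, 2, 6]
y = measure(H, scene, range(n))          # full Hadamard acquisition
print(reconstruct(H, y, range(n), n))    # recovers the scene exactly
```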

  18. Morphological Transform for Image Compression

    Luis Pastor Sanchez Fernandez


    A new method for image compression based on morphological associative memories (MAMs) is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using a transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is the processing speed, whereas the compression rate and the signal-to-noise ratio are competitive with conventional transforms.

  19. Color Error Diffusion Halftoning Method Based on Image Tone and Human Visual System

    易尧华; 于晓庆


    In the process of color error diffusion halftoning, the quality of the color halftone image is directly affected by the design of the error diffusion filters for the different color channels. This paper studies a method of error diffusion based on tone and the human visual system (HVS): the filter coefficients and the threshold are optimized by applying luminance and chrominance HVS models, yielding a color error diffusion halftoning method based on image tone and the HVS. The results show that this method can effectively reduce the artifacts in color halftone images and significantly improve the accuracy of color rendition.
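The paper's method optimizes per-channel filter coefficients and thresholds with an HVS model; as the baseline it builds on, here is standard single-channel Floyd-Steinberg error diffusion (a sketch of the classic filter, not the authors' optimized one):

```python
def floyd_steinberg(img, w, h, threshold=128):
    """Binarize a grayscale image (row-major list, values 0..255) by
    error diffusion with the classic Floyd-Steinberg kernel."""
    px = [float(v) for v in img]
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            old = px[i]
            new = 255 if old >= threshold else 0
            out[i] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbors
            # with the 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                px[i + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[i + w - 1] += err * 3 / 16
                px[i + w] += err * 5 / 16
                if x + 1 < w:
                    px[i + w + 1] += err * 1 / 16
    return out

# A flat mid-gray patch halftones to roughly half black, half white dots.
w = h = 16
halftone = floyd_steinberg([128] * (w * h), w, h)
print(sum(1 for v in halftone if v == 255) / (w * h))
```

Because the error is conserved, local average tone is preserved, which is exactly the property the filter design of each color channel must maintain.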

  20. Brain image Compression, a brief survey

    Saleha Masood


    Brain image compression is a subfield of image compression. It allows deep analysis and measurement of brain images in different modes. Brain images are compressed for effective analysis and diagnosis while reducing image storage space. This survey describes the different existing techniques for brain image compression; the techniques fall under different categories, which the study also discusses.

  1. Comparing image compression methods in biomedical applications

    Libor Hargas


    Compression methods suitable for image processing in biomedical applications are described in this article. Compression is often realized by the reduction of irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared on the basis of fidelity criteria.

  2. Application of Modified Digital Halftoning Techniques to Data Hiding in Personalized Stamps

    Hsi-Chun Wang; Chi-Ming Lian; Pei-Chi Hsiao


    The objective of this research is to embed information in personalized stamps by modified digital halftoning techniques. The displaced and deformed halftone dots are used to encode data in the personalized stamps. Hidden information can be retrieved by either an optical decoder or digital image processing techniques. The results show that personalized stamps with value-added features like data hiding or digital watermarking can be successfully implemented.


    Sunil Agrawal


    Visual cryptography encodes a secret binary image (SI) into shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning and hinder the objectives of visual cryptography. Halftone visual cryptography encodes a secret binary image into n halftone shares (images carrying significant visual information). When secrecy is a more important factor than the quality of the recovered image, the shares must be of better visual quality. Different filters, such as Floyd-Steinberg, Jarvis, Stucki, Burkes, Sierra, and Stevenson-Arce, are used and their impact on the visual quality of the shares is examined. The simulations show that the error filters used in error diffusion have a great impact on the visual quality of the shares.
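The classic (2,2) scheme underlying halftone visual cryptography can be sketched as follows. Each secret pixel expands to a pair of subpixels in each share; superimposing transparencies is a pixelwise OR. This is a toy illustration of the share construction, not the halftoning step:

```python
import random

def make_shares(secret, seed=0):
    """(2,2) visual cryptography: each secret pixel (0=white, 1=black)
    expands to a 2-subpixel pair in each of two shares."""
    rng = random.Random(seed)
    s1, s2 = [], []
    for bit in secret:
        pat = rng.choice([(1, 0), (0, 1)])   # random subpixel pattern
        s1.append(pat)
        # White pixel: identical patterns; black pixel: complementary.
        s2.append(pat if bit == 0 else (1 - pat[0], 1 - pat[1]))
    return s1, s2

def stack(s1, s2):
    """Superimposing transparencies is a pixelwise OR of the subpixels."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(s1, s2)]

secret = [1, 0, 1, 1, 0]                     # 1 = black pixel
sh1, sh2 = make_shares(secret)
print(stack(sh1, sh2))  # black pixels -> (1, 1); white -> one black subpixel
```

Each share alone is a uniformly random pattern, which is why no single transparency leaks the secret.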

  4. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Xiangwei Li


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.

  5. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.

  6. Image Quality Meter Using Compression

    Muhammad Ibrar-Ul-Haque


    This paper proposes a new technique to measure compressed-image blockiness and blurriness in the frequency domain through an edge detection method based on the Fourier transform. In image processing, boundaries are characterized by edges, so edge detection is a problem of fundamental importance: the edges must be identified and computed thoroughly in order to retrieve the complete illustration of the image. Our novel edge detection scheme for blockiness and blurriness shows improvements of 60 and 100 blocks for high-frequency components, respectively, over other detection techniques.

  7. Passive Copy-Move Forgery Detection Using Halftoning-based Block Truncation Coding Feature

    Harjito, Bambang; Prasetyo, Heri


    This paper presents a new method for passive copy-move forgery detection that exploits the effectiveness and usability of the Halftoning-based Block Truncation Coding (HBTC) image feature. Copy-move forgery detection precisely locates large or flat tampered regions of an image. In our method, the tampered input image is first divided into several overlapping image blocks to construct the image feature descriptors. Each image block is further divided into several non-overlapping blocks for HBTC processing. Two image feature descriptors, namely the Color Feature (CF) and the Bit Pattern Feature (BF), are computed from the HBTC compressed data stream of each image block. Lexicographic sorting rearranges the image feature descriptors of the whole image in ascending order. The similarity between tampered image regions is measured based on their CF and BF under a specific shift-frequency threshold. As documented in the experimental results, the proposed method yields promising results for detecting tampered or copy-move forgery regions. This shows that HBTC is not only suitable for image compression but can also be used in copy-move forgery detection.
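HBTC builds on classic Block Truncation Coding. A sketch of the plain BTC stage (without the halftoning variant) shows the kind of per-block compressed representation, a bit plane plus two reconstruction levels, from which color and bit-pattern features can be computed:

```python
import math

def btc_block(block):
    """Classic Block Truncation Coding of one block: keep the mean and
    standard deviation as two levels, plus a bit plane thresholded at
    the mean."""
    n = len(block)
    mean = sum(block) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in block) / n)
    bits = [1 if v >= mean else 0 for v in block]
    q = sum(bits)
    if q in (0, n):                      # flat block: one level suffices
        return bits, mean, mean
    low = mean - std * math.sqrt(q / (n - q))
    high = mean + std * math.sqrt((n - q) / q)
    return bits, low, high

def btc_decode(bits, low, high):
    return [high if b else low for b in bits]

block = [2, 9, 12, 3, 4, 11, 10, 5]
bits, low, high = btc_block(block)
rec = btc_decode(bits, low, high)
# BTC preserves the block mean (and variance) exactly.
print(sum(rec) / len(rec), sum(block) / len(block))
```

The bit plane is what a bit-pattern feature would be built from, and the (low, high) levels are the raw material for a color feature.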

  8. BPCS steganography using EZW lossy compressed images

    Spaulding, Jeremiah; Noda, Hideki; Shirazi, Mahdad N.; Kawaguchi, Eiji


    This paper presents a steganography method based on an embedded zerotree wavelet (EZW) compression scheme and bit-plane complexity segmentation (BPCS) steganography. The proposed steganography enables us to use lossy compressed images as dummy files in bit-plane-based steganographic algorithms. Large embedding rates of around 25% of the compressed image size were achieved with little noticeable degradation in image quality.


    Sunil Agrawal; Anshul Sharma


    Visual cryptography encodes a secret binary image (SI) into shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning and hinder the objectives of visual cryptography. Halftone visual cryptography encodes a s...

  10. Image Compression Using Harmony Search Algorithm

    Ryan Rey M. Daga


    Image compression techniques are important and useful in data storage and image transmission through the Internet. These techniques eliminate redundant information in an image, which minimizes its physical storage requirement. Numerous types of image compression algorithms have been developed, but the resulting images are still less than optimal. The harmony search algorithm (HSA), a meta-heuristic optimization algorithm inspired by the music improvisation process of musicians, was applied as the underlying algorithm for image compression. Experimental results show that it is feasible to use the harmony search algorithm for image compression: the HSA-based technique was able to compress colored and grayscale images with minimal loss of visual information.

  11. Image Compression Using Discrete Wavelet Transform

    Mohammad Mozammel Hoque Chowdhury


    Image compression is a key technology in the transmission and storage of digital images because of the vast data associated with them. This research suggests a new image compression scheme, with a pruning proposal, based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been justified on real images, and its performance has been compared with other common compression standards. The algorithm was implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared with other compression techniques.
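The lossy core of DWT-based compression, transform, prune (threshold) the detail coefficients, then invert, can be illustrated with a one-level Haar transform on a 1-D signal. This is a simplification for illustration, not the paper's multi-level 2-D scheme:

```python
def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 2 ** 0.5
    avg = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    s = 2 ** 0.5
    x = []
    for a, d in zip(avg, det):
        x += [(a + d) / s, (a - d) / s]
    return x

def compress(x, thresh):
    """Zero out small detail coefficients -- the lossy pruning step."""
    avg, det = haar_forward(x)
    det = [d if abs(d) > thresh else 0.0 for d in det]
    return avg, det

signal = [10, 12, 11, 13, 80, 82, 12, 10]
avg, det = compress(signal, thresh=2.0)
rec = haar_inverse(avg, det)
print([round(v, 1) for v in rec])  # [11.0, 11.0, 12.0, 12.0, 81.0, 81.0, 11.0, 11.0]
```

Only the nonzero coefficients need to be stored; sharp transitions (the 13-to-80 jump) survive because their detail coefficients exceed the threshold at coarser levels of a full multi-level transform.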

  12. Digital image compression in dermatology: format comparison.

    Guarneri, F; Vaccaro, M; Guarneri, C


    Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed to compare the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Experts Group (JPEG), and JPEG2000, on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors and using TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), and JPEG2000 has both a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and of encoding various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  13. Lossless Compression of Digital Images

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive......

  14. Simultaneous denoising and compression of multispectral images

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.


    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  15. Region-Based Image-Fusion Framework for Compressive Imaging

    Yang Chen


    A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous work on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the fusion. First, compressed sensing theory and normalized-cut theory are introduced. Then the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.

  16. Studies on image compression and image reconstruction

    Sayood, Khalid; Nori, Sekhar; Araj, A.


    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame-ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan-predictive vector quantizer. The scan-predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, whether by taking differences, context modeling, or building dictionaries. When encoding large images, this common property becomes a common flaw: when the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number requested. A paper describing these results is included.

  17. Study of proper motions in the penumbra of a central sunspot at different angles using local correlation tracking

    Monireh Askarikhah


    Sunspots are prominent large-scale manifestations of the solar magnetic field. This study examines horizontal (proper) motions in the penumbra of a sunspot near the center of the solar disk at three different angles. The evolution of horizontal flows in the sunspot was studied using time series of imaging observations in the blue continuum at 4504 angstroms of active region NOAA 10933, taken on 7 January 2007 (12:35 to 12:56 UT), 8 January (06:00 to 06:21 UT), and 9 January (05:00 to 05:21 UT), and analyzed with the local correlation tracking (LCT) technique. The penumbral flows were averaged in three ways (including averages over 10 and over 20 consecutive images) for each of the three angle classes, yielding nine flow maps in total, together with speed histograms for each angle. In some parts of the maps, rising (erupting) plasma is observed, and in other places falling (collapsing) plasma, at the penumbral level. The flow maps show inward motion of penumbral intensity patterns toward the umbra and outward motion toward the photosphere, strongly suggesting a dividing line where this change of direction occurs. The speed histograms for all three angles show that the speed of the inward-moving penumbral intensity patterns decreases toward the dividing line within the penumbra, while outward from the dividing line the speed increases toward the photosphere.

  18. Development of Wavelet Image Compression Technique to Particle Image Velocimetry



    In order to reduce the noise in the images and the physical storage, the wavelet-based image compression technique was applied to PIV processing in this paper. To study the effect of the wavelet bases, standard PIV images were compressed with several known wavelet families (the Daubechies, Coifman, and Beylkin families) at various compression ratios. It was found that higher-order wavelet bases provide good performance for compressing PIV images. Error analysis of the obtained velocity fields indicated that high compression ratios, even up to 64:1, can be realized without losing significant flow information in PIV processing. The wavelet compression technique was applied to experimental images of a jet flow and showed excellent performance; a reduced number of erroneous vectors can be realized by varying the compression ratio. It can be said that the wavelet image compression technique is very effective in PIV systems.

  19. Experimental Study of Fractal Image Compression Algorithm

    Chetan R. Dudhagara


    Image compression applications have been increasing in recent years. Fractal compression is a lossy compression method for digital images based on fractals. The method is best suited to textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. In this paper, a study of fractal-based image compression with fixed-size partitioning is made, analyzed for performance, and compared with a standard frequency-domain-based image compression standard, JPEG. Sample images are used to perform compression and decompression, and performance metrics such as compression ratio, compression time, and decompression time are measured and compared against JPEG. The phenomenon of resolution/scale independence is also studied and described with examples. Fractal algorithms convert parts of an image into mathematical data called "fractal codes", which are used to recreate the encoded image. Fractal encoding is a mathematical process used to encode bitmaps containing a real-world image as a set of mathematical data that describes the fractal properties of the image; it relies on the fact that all natural, and most artificial, objects contain redundant information in the form of similar, repeating patterns called fractals.

  20. Still image and video compression with MATLAB

    Thyagarajan, K


    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  1. Mathematical transforms and image compression: A review

    Satish K. Singh


    It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to this problem is to use an efficient digital image compression technique, where an image is viewed as a matrix and operations are performed on the matrix. All contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance of these mathematical transforms in terms of energy compaction and spatial-frequency isolation, achieved by exploiting inter-pixel redundancies present in the image data. In this paper, a comprehensive literature survey is carried out and the pros and cons of various transform-based image compression models are discussed.

  2. An efficient medical image compression scheme.

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen


    In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.
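The abstract gives no implementation details, but the two-stage idea (DPCM decorrelation followed by Huffman coding of the residuals) can be sketched in a few lines of Python. Everything below (first-order left-neighbor prediction, the toy pixel row) is an illustrative assumption, not the paper's actual scheme:

```python
import heapq
from collections import Counter

def dpcm_residuals(row):
    """First-order DPCM: predict each pixel by its left neighbor."""
    prev, out = 0, []
    for x in row:
        out.append(x - prev)
        prev = x
    return out

def huffman_lengths(symbols):
    """Return Huffman code lengths (bits) per distinct symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate: one symbol
        return {next(iter(freq)): 1}
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        # merging two subtrees deepens every leaf in them by one bit
        merged = {s: l + 1 for s, l in {**c1, **c2}.items()}
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1
    return heap[0][2]

row = [100, 102, 103, 103, 104, 110, 111, 111]
res = dpcm_residuals(row)
lengths = huffman_lengths(res)
bits = sum(lengths[r] for r in res)
print(res, bits)
```

On smooth rows the residuals cluster near zero, so the common residuals get short Huffman codes and the total bit count drops well below the 8 bits/pixel of the raw data.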

  3. Image Processing by Compression: An Overview


    International audience; This article aims to present the various applications of data compression in image processing. Since some time ago, several research groups have been developing methods based on different data compression techniques to classify, segment, filter and detect digital images fakery. It is necessary to analyze the relationship between different methods and put them into a framework to better understand and better exploit the possibilities that compression provides us respect...

  4. Combined Sparsifying Transforms for Compressive Image Fusion

    ZHAO, L.


    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  5. Semantic Source Coding for Flexible Lossy Image Compression

    Phoha, Shashi; Schmiedekamp, Mendel


    Semantic Source Coding for Lossy Video Compression investigates methods for Mission-oriented lossy image compression, by developing methods to use different compression levels for different portions...

  6. Image compression algorithm using wavelet transform

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory


    Within the multi-resolution analysis framework, the image compression algorithm using the Haar wavelet has been studied. We have studied the dependence of the image quality on the compression ratio, and the variation of the compression level of the studied images has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
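As a concrete illustration of the Haar analysis the abstract refers to, here is a minimal one-level Haar transform in Python. Thresholding the detail coefficients is the usual route to compression; the paper's exact pipeline is not specified, so this is only a sketch:

```python
def haar_1d(signal):
    """One level of the (unnormalized) Haar transform: pairwise averages and differences."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, diffs

def haar_1d_inverse(avgs, diffs):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(avgs, diffs):
        out += [a + d, a - d]
    return out

def compress(signal, threshold):
    """Crude compression: zero out detail coefficients below the threshold."""
    avgs, diffs = haar_1d(signal)
    kept = [d if abs(d) >= threshold else 0 for d in diffs]
    return avgs, kept
```

For a 2-D image the same step is applied along rows and then columns, and recursively on the average band to obtain the multi-resolution pyramid.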

  7. Ink-constrained halftoning with application to QR codes

    Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary


    This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability, by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for examples given in the paper. Numerous examples of QR codes with embedded images are included.

  8. Digital Image Compression Using Artificial Neural Networks

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.


    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). A comparison of the compression results obtained from digital astronomical images by the NNCTC and the method used in the compression of the digitized sky survey from the Space Telescope Science Institute based on the H-transform is performed in order to assess the reliability of the NNCTC.

  9. Review Article: An Overview of Image Compression Techniques

    M. Marimuthu


    Full Text Available To store an image, large quantities of digital data are required. Due to limited bandwidth, images must be compressed before transmission. However, image compression reduces image fidelity when an image is compressed at low bitrates, and the compressed images suffer from block artifacts. To address this, several compression schemes have been developed in image processing. This study presents an overview of compression techniques for image applications. It covers the lossy and lossless compression algorithms used for still images and other applications. The focus of this article is an overview of VLSI DCT architectures for image compression. Further, this new approach may provide better results.

  10. Review on Lossless Image Compression Techniques for Welding Radiographic Images

    B. Karthikeyan


    Full Text Available Recent developments in image processing allow us to apply it in different domains. The radiography image of a weld joint is one area where image processing techniques can be applied; they can be used to identify the quality of the weld joint. For this, the image has to be stored and processed later in the labs. In order to optimize the use of disk space, compression is required. The aim of this study is to find a suitable and efficient lossless compression technique for radiographic weld images. Image compression is a technique by which the amount of data required to represent information is reduced; hence image compression is effectively carried out by removing redundant data. This study compares different ways of compressing the radiography images using combinations of different lossless compression techniques like RLE and Huffman coding.
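Run-length encoding, one of the techniques the study compares, is simple enough to sketch directly. This toy (value, run) version is illustrative, not taken from the paper; it works well on radiographs precisely because of their long uniform background runs:

```python
def rle_encode(pixels):
    """Run-length encode a sequence as (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Exact inverse of rle_encode."""
    return [v for v, n in runs for _ in range(n)]
```

In practice RLE is combined with an entropy coder (e.g. Huffman over the run lengths), since the pairs themselves are still compressible.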

  11. Segmentation-based CT image compression

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya


    The existing image compression standards like JPEG and JPEG 2000 compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, they can give rise to a more efficient coding scheme. For example, CT reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts, which have distinctly different statistics; hence coding them separately will result in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the 1st-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding will have an overhead of one table per segment, but the overhead is minimal. Lossless compression of the image based on segmentation resulted in a reduction of bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If it is allowed to delete the diagnostically irrelevant portions, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.
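The entropy argument behind segment-wise coding can be reproduced with a toy example. The sketch below (synthetic "CT" data, a single threshold at 50) is purely illustrative, but it shows why the first-order entropy falls when background and anatomy are coded separately:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """First-order entropy in bits/symbol."""
    n = len(symbols)
    counts = Counter(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy "CT slice": uniform background (0) plus a noisier anatomical region.
background = [0] * 900
anatomy = [100, 101, 102, 100, 103, 101] * 20
image = background + anatomy

whole = entropy_bits(image)
# Segment on a threshold and code each segment separately (weighted average).
seg_a = [p for p in image if p < 50]
seg_b = [p for p in image if p >= 50]
n = len(image)
split = (len(seg_a) * entropy_bits(seg_a) + len(seg_b) * entropy_bits(seg_b)) / n
print(whole, split)
```

Because the split entropy is the entropy conditioned on segment membership, it can never exceed the whole-image entropy, and it is strictly smaller whenever the segments have different statistics, as here.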

  12. A Novel Fractal Wavelet Image Compression Approach

    SONG Chun-lin; FENG Rui; LIU Fu-qiang; CHEN Xi


    By investigating the limitations of existing wavelet-tree-based image compression methods, we propose a novel wavelet fractal image compression method in this paper. Briefly, the initial errors are assigned according to the importance of the different frequency sub-band wavelet coefficients; higher-frequency sub-bands are given larger initial errors. As a result, the sizes of sub-level blocks and super blocks are changed according to the initial errors, and the matching sizes between sub-level blocks and super blocks are changed according to the permitted errors and compression rates. Systematic analyses are performed and the experimental results demonstrate that the proposed method provides satisfactory performance with a clearly increasing rate of compression and speed of encoding without reducing SNR or the quality of decoded images. Simulation results show that our method is superior to traditional wavelet-tree-based methods of fractal image compression.

  13. Compressive Imaging via Approximate Message Passing with Image Denoising

    Tan, Jin; Ma, Yanting; Baron, Dror


    We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reco...

  14. Halftone biasing OPC technology: an approach for achieving fine bias control on raster-scan systems

    Nakagawa, Kent H.; Chen, J. Fung; Socha, Robert J.; Laidig, Thomas L.; Wampler, Kurt E.; Van Den Broeke, Douglas J.; Dusa, Mircea V.; Caldwell, Roger F.


    As the semiconductor roadmap continues to require imaging of smaller features on wafers, we continue to explore new approaches in OPC strategies to enhance existing technology. Advanced reticle design, intended for printing sub-wavelength features, requires the support of very fine-increment biases on semi-densely-pitched lines, where the CD correction requires only a fraction of the spot size of an e-beam system. Halftone biasing, a new OPC strategy, has been proposed to support these biases on a raster-scan e-beam system without the need for a reduced address unit and the consequent write time penalty. The manufacturability and inspectability of halftone-biased lines are explored, using an OPC characterization reticle. Pattern fidelity is examined using both optical and SEM tools. Printed DUV resist line edge profiles are compared for both halftone and non-halftone feature edges. Halftone biasing was applied to an SRAM-type simulation reticle, to examine its impact on data volume, write time reduction, and printing performance.

  15. Wavelet transform based watermark for digital images.

    Xia, X G; Boncelet, C; Arce, G


    In this paper, we introduce a new multiresolution watermarking method for digital images. The method is based on the discrete wavelet transform (DWT). Pseudo-random codes are added to the large coefficients in the high and middle frequency bands of the DWT of an image. It is shown that this method is more robust than previously proposed methods to some common image distortions, such as wavelet-transform-based image compression, image rescaling/stretching and image halftoning. Moreover, the method is hierarchical.

  16. Lossless Compression on MRI Images Using SWT.

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G


    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  17. Context-Aware Image Compression.

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  18. Cloud Optimized Image Format and Compression

    Becker, P.; Plesea, L.; Maurer, T.


    Cloud based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.
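LERC's core idea, bounding the per-pixel reconstruction error by a user-set maximum, can be sketched as a block quantizer. The real LERC format (block headers, bit packing, lossless fallback) is more involved; the function names and layout below are hypothetical:

```python
def quantize_block(values, max_error):
    """Quantize so that reconstruction error never exceeds max_error."""
    base = min(values)
    step = 2 * max_error          # each bin spans 2*max_error around its center
    indices = [round((v - base) / step) for v in values]
    return base, step, indices    # small ints, cheap to bit-pack

def dequantize_block(base, step, indices):
    """Reconstruct values; each is within max_error of the original."""
    return [base + i * step for i in indices]
```

Choosing `max_error = 0` would force a lossless path in a real coder; here the step width directly trades precision for smaller index values.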

  19. Lossless compression of VLSI layout image data.

    Dai, Vito; Zakhor, Avideh


    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
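C4's combinatorial coder is not specified in the abstract, but the classic enumerative-coding idea it builds on is easy to sketch: a binary block of n bits with k ones can be sent as k plus the block's rank among the C(n, k) possibilities, costing about log2 C(n, k) bits. This is a generic textbook sketch, not C4's actual coder:

```python
from math import comb

def enumerative_rank(bits):
    """Lexicographic rank of a bit vector among all vectors of the same weight."""
    n, k = len(bits), sum(bits)
    rank = 0
    for i, b in enumerate(bits):
        if b:
            # skip over all vectors that put a 0 at this position instead
            rank += comb(n - i - 1, k)
            k -= 1
    return rank

def enumerative_unrank(n, k, rank):
    """Inverse: rebuild the bit vector from (n, weight, rank)."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)
        if rank >= c:
            bits.append(1)
            rank -= c
            k -= 1
        else:
            bits.append(0)
    return bits
```

Like arithmetic coding, this achieves essentially the entropy of the block (log2 C(n, k) bits for the rank), but encoding and decoding are simple integer arithmetic, which is the speed/efficiency combination the abstract claims.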

  20. Lossless wavelet compression on medical image

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong


    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data, so efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include different methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless, non-linear, integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream, and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
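A minimal example of the kind of integer-to-integer lifting step the authors describe is the Haar lifting pair below. The abstract does not give the actual filters used, so this is only the standard two-tap case; note that the integer shift loses nothing because the inverse replays the same rounding:

```python
def lifting_haar_forward(x0, x1):
    """Integer Haar via lifting: predict step then update step, both invertible."""
    d = x1 - x0            # predict: detail coefficient
    s = x0 + (d >> 1)      # update: integer approximation of the pair's mean
    return s, d

def lifting_haar_inverse(s, d):
    """Undo the steps in reverse order to recover the exact integers."""
    x0 = s - (d >> 1)
    x1 = d + x0
    return x0, x1
```

Because every operation maps integers to integers and is undone exactly, cascading such steps yields the "truly lossless" wavelet transform the paper relies on.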

  1. Iris Recognition: The Consequences of Image Compression

    Bishop, Daniel A.


    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  2. Iris Recognition: The Consequences of Image Compression

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig


    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  3. Image quality, compression and segmentation in medicine.

    Morgan, Pam; Frankish, Clive


    This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.

  4. Compression Techniques for Image Processing Tasks


    International audience; This article aims to present an overview of the different applications of data compression techniques in the image processing filed. Since some time ago, several research groups in the world have been developing various methods based on different data compression techniques to classify, segment, filter and detect digital images fakery. In this sense, it is necessary to analyze and clarify the relationship between different methods and put them into a framework to bette...

  5. Image Compression using Space Adaptive Lifting Scheme

    Ramu Satyabama


    Full Text Available Problem statement: Digital images play an important role both in daily life applications as well as in areas of research and technology. Due to the increasing traffic caused by multimedia information and the digitized form of representation of images, image compression has become a necessity. Approach: The wavelet transform has demonstrated excellent image compression performance. New algorithms based on lifting-style implementation of wavelet transforms have been presented in this study. Adaptivity is introduced in lifting by choosing the prediction operator based on the local properties of the image. The prediction filters are chosen based on edge detection and the relative local variance. In regions where the image is locally smooth, we use higher order predictors, and near edges we reduce the order and thus the length of the predictor. Results: We have applied the adaptive prediction algorithms to test images. The original image is transformed using the adaptive lifting based wavelet transform and it is compressed using the Set Partitioning In Hierarchical Trees algorithm (SPIHT), and the performance is compared with the popular 9/7 wavelet transform. The performance metric Peak Signal to Noise Ratio (PSNR) for the reconstructed image is computed. Conclusion: The proposed adaptive algorithms give better performance than the 9/7 wavelet, one of the most popular wavelet transforms. Lifting allows us to incorporate adaptivity and nonlinear operators into the transform. The proposed methods efficiently represent the edges and appear promising for image compression. The proposed adaptive methods reduce edge artifacts and ringing and give improved PSNR for edge dominated images.

  6. Hyperspectral image data compression based on DSP

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin


    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress the hyperspectral image. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform and embedded zero-tree wavelet (EZW) is proposed in this paper. We adopt a high-powered Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. Through modifying the mixed algorithm and optimizing its algorithmic language, the processing efficiency of the program was significantly improved compared to the non-optimized one. Our experiments show that the mixed algorithm based on the DSP runs much faster than the algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  7. Information preserving image compression for archiving NMR images.

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y


    This paper presents a result on information preserving compression of NMR images for archiving purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, the Lynch-Davisson coding with a block size of 64, as applied to prediction error sequences in the Gray code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. The predictive coding with a third order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is one step further toward improving, albeit by a small margin, information preserving image compression for medical applications.
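A third-order linear predictor of the kind mentioned can be sketched as follows. The tap values (3, -3, 1) are a hypothetical polynomial predictor chosen for illustration (it cancels any quadratic trend exactly), not the coefficients used in the paper:

```python
def predict3(samples, coeffs=(3, -3, 1)):
    """Third-order linear prediction; returns the residual sequence.

    Missing history at the start is treated as zero, a common convention.
    """
    res = []
    for i, x in enumerate(samples):
        past = [samples[i - j] if i - j >= 0 else 0 for j in (1, 2, 3)]
        pred = sum(c * p for c, p in zip(coeffs, past))
        res.append(x - pred)
    return res
```

On smoothly varying scan lines the residuals are small and highly peaked around zero, which is exactly what makes the subsequent Huffman stage effective.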

  8. A New Approach for Fingerprint Image Compression

    Mazieres, Bertrand


    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
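WSQ follows its wavelet transform with a dead-zone scalar quantizer. The sketch below shows the general shape of such a quantizer (a widened zero bin plus uniform bins on either side); the WSQ specification's exact bin geometry and per-subband parameters differ, so treat this as illustrative:

```python
def deadzone_quantize(coeff, step, deadzone):
    """Map a wavelet coefficient to a signed bin index; small values go to 0."""
    if abs(coeff) < deadzone / 2:
        return 0
    sign = 1 if coeff > 0 else -1
    return sign * (int((abs(coeff) - deadzone / 2) / step) + 1)

def deadzone_dequantize(index, step, deadzone):
    """Reconstruct to the center of the bin the index names."""
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return sign * (deadzone / 2 + (abs(index) - 0.5) * step)
```

The dead zone is what zeroes out the many near-zero high-frequency coefficients, which the Huffman stage then codes very cheaply; outside the dead zone the reconstruction error stays within half a step.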

  9. Backpropagation Neural Network Implementation for Medical Image Compression

    Kamil Dimililer


    Full Text Available Medical images require compression before transmission or storage, due to constrained bandwidth and storage capacity. An ideal image compression system must yield a high-quality compressed image with a high compression ratio. In this paper, the Haar wavelet transform and the discrete cosine transform are considered, and a neural network is trained to relate the X-ray image contents to their ideal compression method and optimum compression ratio.

  10. An efficient adaptive arithmetic coding image compression technology

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei


    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image combined with the adaptive probability model and predictive coding. The use of adaptive models for each encoded image block dynamically estimates the probability of the relevant image block. The decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
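The gain from an adaptive probability model can be illustrated without implementing a full arithmetic coder: an arithmetic coder's output length is essentially the sum of -log2 p over the symbols, with p taken from the model as it evolves. The Laplace (+1) count estimator below is a standard choice, assumed here for illustration rather than taken from the paper:

```python
import math

def adaptive_code_length(symbols, alphabet_size):
    """Ideal code length (bits) of an adaptive arithmetic coder whose model
    is a per-symbol count table with Laplace (+1) smoothing."""
    counts = {}
    total = 0
    bits = 0.0
    for s in symbols:
        # probability under the model *before* seeing this symbol
        p = (counts.get(s, 0) + 1) / (total + alphabet_size)
        bits += -math.log2(p)
        counts[s] = counts.get(s, 0) + 1   # then update the model
        total += 1
    return bits
```

On skewed data the model quickly learns the bias, so the total drops well below the fixed-length cost, with no side information needed because the decoder updates the same counts.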

  11. Hybrid Prediction and Fractal Hyperspectral Image Compression

    Shiping Zhu


    Full Text Available The data size of hyperspectral images is too large for storage and transmission, and it has become a bottleneck restricting their applications, so it is necessary to study a high-efficiency compression method for hyperspectral images. Prediction encoding is easy to realize and has been studied widely in the hyperspectral image compression field. Fractal coding has the advantages of a high compression ratio, resolution independence, and fast decoding speed, but its application in the hyperspectral image compression field is not yet popular. In this paper, we propose a novel algorithm for hyperspectral image compression based on hybrid prediction and fractal coding. Intraband prediction is applied to the first band, and all the remaining bands are encoded by a modified fractal coding algorithm. The proposed algorithm can effectively exploit the spectral correlation in the hyperspectral image, since each range block is approximated by the domain block in the adjacent band, which is of the same size as the range block. Experimental results indicate that the proposed algorithm provides very promising performance at low bitrates. Compared to other algorithms, the encoding complexity is lower, the decoding quality is greatly enhanced, and the PSNR can be increased by about 5 dB to 10 dB.
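The core fractal step, approximating each range block by a scaled and shifted domain block from the adjacent band, amounts to a least-squares fit per block. The sketch below performs that fit on flattened blocks; it illustrates the principle only, not the authors' modified coder:

```python
def match_block(domain, rng):
    """Least-squares scale s and offset o minimizing ||s*domain + o - rng||^2."""
    n = len(domain)
    md = sum(domain) / n
    mr = sum(rng) / n
    var = sum((d - md) ** 2 for d in domain)
    if var == 0:                 # flat domain block: offset-only fit
        return 0.0, mr
    s = sum((d - md) * (r - mr) for d, r in zip(domain, rng)) / var
    o = mr - s * md
    return s, o
```

In a coder, only (s, o) and the domain-block position are stored per range block, which is why spectral correlation between adjacent bands translates directly into compression.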

  12. Issues in multiview autostereoscopic image compression

    Shah, Druti; Dodgson, Neil A.


    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
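
    The predictor comparison can be illustrated with a toy experiment (a synthetic ramp scene and a fixed disparity are assumed, not the paper's test data): an intra-view left-neighbour predictor yields smaller DPCM residuals than the corresponding pixel in the adjacent view whenever the disparity exceeds the local gradient step.

```python
import numpy as np

# Toy illustration: two views of a smooth scene, the second view shifted
# by a small constant disparity.
h, w, disparity = 32, 64, 3
view_a = np.tile(np.arange(w, dtype=float), (h, 1))            # smooth ramp
view_b = np.tile(np.arange(w, dtype=float) - disparity, (h, 1))

# DPCM residual using the left neighbour within the same view ...
intra = np.abs(view_b[:, 1:] - view_b[:, :-1]).mean()
# ... versus the corresponding pixel in the adjacent view.
inter = np.abs(view_b - view_a).mean()
assert intra < inter   # nearby pixels within a view predict better here
```

Disparity estimation, as the abstract notes, is exactly what closes this gap: compensating the inter-view predictor by the estimated disparity removes the constant offset.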

  13. Multiband and Lossless Compression of Hyperspectral Images

    Raffaele Pizzolante


    Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of computational complexity to allow implementations on limited hardware (e.g., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.

  14. Gated viewing laser imaging with compressive sensing.

    Li, Li; Wu, Lei; Wang, Xingbin; Dang, Ersheng


    We present a prototype of gated viewing laser imaging with compressive sensing (GVLICS). Using the new framework of compressive sensing, it is possible to perform laser imaging with a single-pixel detector while retaining transverse spatial resolution. Moreover, by combining compressive sensing with gated viewing, the three-dimensional (3D) scene can be reconstructed using the time-slicing technique. Simulations were carried out to evaluate the characteristics of the proposed GVLICS prototype. Qualitative analysis of Lissajous-type eye-pattern figures indicates that the range accuracy of the reconstructed 3D images is affected by the sampling rate, image noise, and the complexity of the scenes.

  15. Feature-based Image Sequence Compression Coding


    A novel compression method for video teleconference applications is presented. It realizes semantic-based coding driven by human image features, which are adopted as coding parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.

  16. Lossless image compression technique for infrared thermal images

    Allred, Lloyd G.; Kelly, Gary E.


    The authors have achieved a 6.5-to-one image compression technique for thermal images (640 x 480, 1024 colors deep). Using a combination of new and more traditional techniques, the combined algorithm is computationally simple, enabling 'on-the-fly' compression and storage of an image in less time than it takes to transcribe the original image to or from a magnetic medium. Similar compression has been achieved on visual images by virtue of the fact that all optical devices possess a modulation transfer function. As a consequence of this property, the difference in color between adjacent pixels is usually a small number, often between -1 and +1 gradations for a meaningful color scheme. By differentiating adjacent rows and columns, the original image can be expressed in terms of these small numbers. A simple compression algorithm for these small numbers achieves a four-to-one image compression. By piggy-backing this technique with LZW compression or a fixed Huffman coding, an additional 35% image compression is obtained, resulting in a 6.5-to-one lossless image compression. Because traditional noise-removal operators tend to minimize the color gradations between adjacent pixels, an additional 20% reduction can be obtained by preprocessing the image with a noise-removal operator. Although noise-removal operators are not lossless, their application may prove crucial in applications requiring high compression, such as the storage or transmission of a large number of images. The authors are working with the Air Force Photonics Technology Application Program Management office to apply this technique to the transmission of optical images from satellites.
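
    The row-differencing step can be sketched as follows (a synthetic smooth image stands in for real thermal data; the LZW/Huffman back end is omitted). The transform is exactly invertible, so the overall scheme stays lossless:

```python
import numpy as np

rng = np.random.default_rng(1)
# Smooth synthetic "thermal" image: neighbouring pixels differ by little.
img = np.cumsum(rng.integers(-1, 2, size=(480, 640)), axis=1).astype(np.int16)

# Differentiate along rows; prepending zeros keeps the first column
# intact inside the delta array, so the transform is exactly invertible.
delta = np.diff(img, axis=1, prepend=np.zeros((480, 1), dtype=np.int16))
recon = np.cumsum(delta, axis=1, dtype=np.int16)
assert (recon == img).all()

# Most deltas are tiny, so a short-code entropy coder compresses them well.
print(np.mean(np.abs(delta) <= 1))
```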

  17. Efficient predictive algorithms for image compression

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla


    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  18. MRC for compression of Blake Archive images

    Misic, Vladimir; Kraus, Kari; Eaves, Morris; Parker, Kevin J.; Buckley, Robert R.


    The William Blake Archive is part of an emerging class of electronic projects in the humanities that may be described as hypermedia archives. It provides structured access to high-quality electronic reproductions of rare and often unique primary source materials, in this case the work of poet and painter William Blake. Due to the extensive high-frequency content of Blake's paintings (namely, colored engravings), they are not suitable for very efficient compression that meets both rate and distortion criteria at the same time. To resolve this problem, the authors applied a modified Mixed Raster Content (MRC) compression scheme -- originally developed for compression of compound documents -- to the compression of colored engravings. In this paper, for the first time, we demonstrate the successful use of the MRC compression approach for colored, engraved images. Additional, though no less important, benefits of the MRC image representation for Blake scholars are presented: because the applied segmentation method can essentially lift the color overlay of an impression, it provides the student of Blake the unique opportunity to recreate the underlying copperplate image, model the artist's coloring process, and study them separately.

  19. Gradient-based compressive image fusion

    Yang Chen; Zheng Qin


    We present a novel image fusion scheme based on gradient and scrambled block Hadamard ensemble (SBHE) sampling for compressive sensing imaging. First, source images are compressed by compressive sensing, to facilitate the transmission of the sensor. In the fusion phase, the image gradient is calculated to reflect the abundance of its contour information. By compositing the gradient of each image, gradient-based weights are obtained, with which compressive sensing coefficients are achieved. Finally, inverse transformation is applied to the coefficients derived from fusion, and the fused image is obtained. Information entropy (IE), Xydeas's and Piella's metrics are applied as non-reference objective metrics to evaluate the fusion quality in line with different fusion schemes. In addition, different image fusion application scenarios are applied to explore the scenario adaptability of the proposed scheme. Simulation results demonstrate that the gradient-based scheme has the best performance, in terms of both subjective judgment and objective metrics. Furthermore, the gradient-based fusion scheme proposed in this paper can be applied in different fusion scenarios.

  20. Compressive Sensing Image Sensors-Hardware Implementation

    Shahram Shirani


    The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, CS coding for video capture is discussed.

  1. The New CCSDS Image Compression Recommendation

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph


    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
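
    The CCSDS algorithm specifies particular wavelet filters (a 9/7 transform, with an integer variant for lossless operation); as the simplest stand-in with the same decomposition structure, a single level of the 2D Haar transform shows the subbands that a bit-plane coder would then process:

```python
import numpy as np

def haar2d(a):
    """One level of a 2D Haar transform (pairwise averages and differences).
    Illustrative only: the CCSDS recommendation uses 9/7 filters."""
    a = a.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2   # row averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2   # row differences
    ll = (lo[::2] + lo[1::2]) / 2       # then the same along columns
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

# A constant image concentrates all energy in the LL band;
# the detail bands are exactly zero and cost almost nothing to code.
img = np.full((8, 8), 42.0)
ll, lh, hl, hh = haar2d(img)
```

Progressive bit-plane coding of such subband coefficients is what gives the user direct control over the compressed volume or fidelity.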

  2. Quantum Discrete Cosine Transform for Image Compression

    Pang, C Y; Guo, G C; Pang, Chao Yang; Zhou, Zheng Wei; Guo, Guang Can


    The Discrete Cosine Transform (DCT) is very important in image compression. The classical 1-D and 2-D DCTs have time complexities O(N log N) and O(N² log N), respectively. This paper presents a quantum DCT iteration and uses it to construct quantum 1-D and 2-D DCT algorithms for image compression. The presented 1-D and 2-D DCTs have time complexities O(sqrt(N)) and O(N), respectively. In addition, the method presented in this paper generalizes the famous Grover's algorithm to solve complex unstructured search problems.
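
    For reference, the classical separable 2D DCT that the quantum algorithm accelerates can be written with an orthonormal DCT-II basis matrix (an 8x8 block size is an illustrative choice):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)   # DC row gets the 1/sqrt(n) normalisation
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = C @ block @ C.T   # separable 2D DCT
recon = C.T @ coeffs @ C   # exact inverse, since C is orthonormal
assert np.allclose(recon, block)
```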

  3. Virtually Lossless Compression of Astrophysical Images

    Alparone Luciano


    We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results on lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Finally, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomical community.
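
    The near-lossless idea of locally bounding the maximum absolute error can be sketched with a uniform residual quantizer (a minimal stand-in for the paper's noise-adjusted quantization stage): a step of 2*delta + 1 guarantees every reconstructed value is within delta of the original, and delta = 0 reduces to lossless coding.

```python
import numpy as np

def near_lossless_quantize(residual, delta):
    """Uniform quantizer with step 2*delta + 1: the reconstruction error
    of every sample is bounded by delta (delta = 0 is lossless)."""
    step = 2 * delta + 1
    q = np.round(residual / step).astype(np.int64)   # transmitted index
    return q, q * step                               # index, reconstruction

x = np.arange(-500, 500)          # integer DPCM residuals, for illustration
for d in (0, 1, 3):
    _, xhat = near_lossless_quantize(x, d)
    assert np.abs(x - xhat).max() <= d
```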

  4. Compressive Deconvolution in Medical Ultrasound Imaging.

    Chen, Zhouye; Basarab, Adrian; Kouamé, Denis


    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.

  5. JPEG2000 Image Compression on Solar EUV Images

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke


    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
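
    PSNR, one of the two quality metrics used in the study, is straightforward to compute (the MSSIM index involves local statistics and is omitted here):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((64, 64), dtype=np.uint8)
noisy = ref + 1                      # every pixel off by one grey level
print(round(psnr(ref, noisy), 2))    # 48.13 dB
```

PSNR depends only on the mean squared error, which is one reason structural metrics such as MSSIM are compared against it for solar EUV data.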



    Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission; it has high channel-error resistance and good reconstructed image quality. The main shortcoming of the original BTC algorithm is its high bit rate (normally 2 bits/pixel). In order to reduce the bit rate, an efficient BTC image compression algorithm is presented in this paper. In the proposed algorithm, a simple look-up table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane, at the cost of some extra distortion. The test results prove the effectiveness of the proposed algorithm.
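
    A minimal sketch of the baseline BTC coder (the absolute-moment variant, which transmits a 1-bit plane plus two per-block means; the paper's look-up-table and bit-plane prediction refinements are not shown):

```python
import numpy as np

def btc_encode(block):
    """Absolute-moment BTC: one bit per pixel plus two per-block means."""
    m = block.mean()
    plane = block >= m                              # 1 bit/pixel bit plane
    hi = block[plane].mean() if plane.any() else m  # higher mean
    lo = block[~plane].mean() if (~plane).any() else m  # lower mean
    return plane, float(hi), float(lo)

def btc_decode(plane, hi, lo):
    return np.where(plane, hi, lo)

block = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 12, 198, 200],
                  [10, 10, 200, 200]], dtype=float)
plane, hi, lo = btc_encode(block)
recon = btc_decode(plane, hi, lo)
```

Coding the two means with a look-up table and predicting the bit plane, as the abstract describes, is what pushes the rate below the baseline 2 bits/pixel.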

  7. Lossless Compression of Digital Images

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose code...

  8. Listless zerotree image compression algorithm

    Lian, Jing; Wang, Ke


    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image qualities. Moreover, the lists in SPIHT are replaced by flag maps, and lifting scheme is adopted to realize wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  9. Performance visualization for image compression in telepathology

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace


    The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio and computational load. Evaluation of performance is, however, a much more complex and many-sided issue, and it is necessary to consider the requirements of the applications more deeply. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm, and measuring this characteristic is subjective because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user-based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space, taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second-opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.

  10. Compressive imaging using fast transform coding

    Thompson, Andrew; Calderbank, Robert


    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  11. Compressive sensing for high resolution radar imaging

    Anitori, L.; Otten, M.P.G.; Hoogeboom, P.


    In this paper we present some preliminary results on the application of Compressive Sensing (CS) to high resolution radar imaging. CS is a recently developed theory which allows reconstruction of sparse signals with a number of measurements much lower than what is required by the Shannon sampling theorem.

  12. Lossless Image Compression Using New Biorthogonal Wavelets

    M. Santhosh


    Even though a large number of wavelets exist, one needs new wavelets for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction; hence the need for wavelets that are both. The solution takes the form of biorthogonal wavelets, which preserve the perfect reconstruction condition. Although a number of biorthogonal wavelets have been proposed in the literature, this paper proposes four new biorthogonal wavelets that give better compression performance. The new wavelets are compared with traditional wavelets using the design metrics Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm is used for the compression of images.

  13. Compressive Imaging via Approximate Message Passing


    Compressive imaging via approximate message passing (AMP) reconstructs images by embedding an image denoiser within the AMP iterations, for example an adaptive Wiener filter for 2D denoising, or a more sophisticated 2D denoiser such as BM3D. (Jin Tan, Yanting Ma, and Dror Baron, IEEE Journal of Selected Topics in Signal Processing, 2015.)

  14. Lossless compression for three-dimensional images

    Tang, Xiaoli; Pearlman, William A.


    We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study -- 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, the performances obtained with different sizes of coding units are compared, showing that increasing the size of the coding unit improves the performance somewhat. Second, the performances obtained with different integer wavelet transforms are compared for AT-3DSPIHT, 3D-SPECK and 3D-CB-EZW. None of the considered filters always performs best for all data sets and algorithms. Finally, we compare the different lossless compression algorithms by applying integer wavelet transforms to the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs best almost all the time, achieving an average 12% decrease in file size compared with JPEG2000 multi-component, the second-best performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs best, yielding average decreases in file size of 5.8% and 8.9% compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than the 2D algorithms.

  15. Combining image-processing and image compression schemes

    Greenspan, H.; Lee, M.-C.


    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  16. Structured-light Image Compression Based on Fractal Theory


    The method of fractal image compression is introduced and applied to compress line structured-light images. Exploiting the self-similarity of the structured-light image, we attain a satisfactory compression ratio and a higher peak signal-to-noise ratio (PSNR). The experimental results indicate that this method can achieve high performance.

  17. Compressed imaging by sparse random convolution.

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien


    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as-is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
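
    The modified measurement model can be sketched by drawing a measurement matrix with sparse non-negative entries and applying it to a sparse scene (the sizes, sparsity level and binary entries below are arbitrary illustrative choices, not the paper's optical parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8          # signal length, measurements, ones per row

# Measurement matrix with sparse non-negative entries: k ones per row.
Phi = np.zeros((m, n))
for i in range(m):
    Phi[i, rng.choice(n, size=k, replace=False)] = 1.0

x = np.zeros(n)
x[[17, 99, 200]] = [3.0, 1.5, 2.0]   # sparse scene
y = Phi @ x                           # m noiseless measurements
```

A sparse-recovery solver would then estimate x from (y, Phi); non-negative entries matter physically because incoherent optical intensities cannot be negative.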


    V. K. Sudha


    This paper analyses the performance of multiwavelets -- a variant of the wavelet transform -- on compression of medical images. Two stages are involved: transformation for decorrelation, and encoding. In the transformation stage, medical images are subjected to the multiwavelet transform using multiwavelets such as Geronimo-Hardin-Massopust, Chui-Lian, Cardinal 2 Balanced (Cardbal2), and the orthogonal symmetric/antisymmetric multiwavelet (SA4). The Set Partitioned Embedded Block Coder is used as a common platform for encoding the transformed coefficients. Peak signal-to-noise ratio, bit rate and the Structural Similarity Index are used as metrics for performance analysis. For the experiments we used various medical images, such as magnetic resonance, computed tomography and X-ray images.

  19. Efficient lossless compression scheme for multispectral images

    Benazza-Benyahia, Amel; Hamdi, Mohamed; Pesquet, Jean-Christophe


    Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge which requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems, however, more appropriate to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition along the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent works, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and the spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose an extension to the vector case of Shapiro's embedded zerotrees of wavelet coefficients (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images indicate that the resulting global compression package outperforms existing methods.

  20. Compressive Hyperspectral Imaging and Anomaly Detection


    Compressive hyperspectral imaging exploits sparse representations over bases such as the discrete cosine basis and various wavelet-based bases, which have been thoroughly studied and widely considered in applications. Hyperspectral image reconstruction and denoising are then performed by fitting a jointly sparse model to the compressive measurements, using iterative algorithms for compressive sensing and sparse denoising.

  1. Lossless Astronomical Image Compression and the Effects of Random Noise

    Pence, William


    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
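
    The quantization of floating-point pixels as scaled integers can be sketched as follows (the choice of roughly four quantization levels per noise sigma is an illustrative assumption here, not necessarily fpack's default):

```python
import numpy as np

def quantize_float_image(data, sigma, q=4.0):
    """Quantize floating-point pixels as scaled integers, keeping roughly
    q quantization levels per noise sigma; the discarded low-order bits are
    mostly noise, so a lossless coder can then compress the integers well."""
    scale = sigma / q
    return np.round(data / scale).astype(np.int32), scale

rng = np.random.default_rng(2)
sigma = 0.5
img = 100.0 + rng.normal(0.0, sigma, size=(128, 128))   # noisy flat field
ints, scale = quantize_float_image(img, sigma)
recon = ints * scale
# Quantization error is bounded by half a step, well below the noise level.
assert np.abs(recon - img).max() <= scale / 2 + 1e-9
```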

  2. Image Segmentation, Registration, Compression, and Matching

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina


    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.

  3. Lossless Digital Image Compression Method for Bitmap Images

    Meyyappan, T.; Nachiaban, N. M. Jeya (doi:10.5121/ijma.2011.3407)


    In this research paper, the authors propose a new approach to digital image compression using crack coding. The method starts with the original image and develops crack codes recursively, marking pixels already visited and expanding in four directions. The proposed method is tested on sample bitmap images and the results are tabulated. The method is implemented on a uniprocessor machine in C.

  4. Fast Lossless Compression of Multispectral-Image Data

    Klimesh, Matthew


    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  5. Unsupervised regions of interest extraction for color image compression

    Xiaoguang Shao; Kun Gao; Lili Lü; Guoqiang Ni


    A novel unsupervised approach for regions of interest (ROI) extraction that combines a modified visual attention model with clustering analysis is proposed. A non-uniform color image compression algorithm then compresses the ROI and the other regions with different compression ratios through the JPEG image compression algorithm. The reconstruction algorithm for the compressed image is similar to that of JPEG. Experimental results show that the proposed method has better performance in terms of compression ratio and fidelity compared with traditional approaches.

  6. Compressed sensing imaging techniques for radio interferometry

    Wiaux, Y; Puy, G; Scaife, A M M; Vandergheynst, P


    Radio interferometry probes astrophysical signals through incomplete and noisy Fourier measurements. The theory of compressed sensing demonstrates that such measurements may actually suffice for accurate reconstruction of sparse or compressible signals. We propose new generic imaging techniques based on convex optimization for global minimization problems defined in this context. The versatility of the framework notably allows introduction of specific prior information on the signals, which offers the possibility of significant improvements in reconstruction relative to the standard local matching pursuit algorithm CLEAN used in radio astronomy. We illustrate the potential of the approach by studying reconstruction performance on simulations of two different kinds of signals observed with very generic interferometric configurations. The first kind is an intensity field of compact astrophysical objects. The second kind is the imprint of cosmic strings in the temperature field of the cosmic microwave backgroun...

  7. Fpack and Funpack User's Guide: FITS Image Compression Utilities

    Pence, William; White, Rick


    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format. The associated funpack program restores the compressed image file back to its original state (if a lossless compression algorithm was used). An experimental method for compressing FITS binary tables is also available (see section 7). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression options.




    Full Text Available In image compression, the researcher’s aim is to reduce the number of bits required to represent an image by removing spatial and spectral redundancies. Recently, wavelet packets have emerged as a popular technique for image compression. This paper proposes a wavelet-based compression scheme that is able to operate in lossy as well as lossless mode. First, we describe the integer wavelet transform (IWT) and the integer wavelet packet transform (IWPT) as applications of the lifting scheme (LS). After analyzing and implementing results for the IWT and IWPT, another method combining DPCM and the IWPT is implemented using Huffman coding for greyscale images. We then implement the same for color images using the Shannon source coding technique. We measure the level of compression by the compression ratio (CR) and compression factor (CF). Compared with the IWT and IWPT, the DPCM-IWPT shows better performance in image compression.

  9. Compressive Imaging with Iterative Forward Models

    Liu, Hsiou-Yuan; Liu, Dehong; Mansour, Hassan; Boufounos, Petros T


    We propose a new compressive imaging method for reconstructing 2D or 3D objects from their scattered wave-field measurements. Our method relies on a novel, nonlinear measurement model that can account for the multiple scattering phenomenon, which makes the method preferable in applications where linear measurement models are inaccurate. We construct the measurement model by expanding the scattered wave-field with an accelerated-gradient method, which is guaranteed to converge and is suitable for large-scale problems. We provide explicit formulas for computing the gradient of our measurement model with respect to the unknown image, which enables image formation with a sparsity-driven numerical optimization algorithm. We validate the method both analytically and with numerical simulations.

  10. Discrete directional wavelet bases for image compression

    Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar


    The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps the design and computation simple, but cannot properly capture all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed, with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. The transform can be applied in many areas such as denoising, non-linear approximation, and compression. Results on non-linear approximation and denoising show very interesting gains compared with standard two-dimensional analysis.

  11. Image and video processing in the compressed domain

    Mukhopadhyay, Jayanta


    As more images and videos are becoming available in compressed formats, researchers have begun designing algorithms for different image operations directly in their domains of representation, leading to faster computation and lower buffer requirements. Image and Video Processing in the Compressed Domain presents the fundamentals, properties, and applications of a variety of image transforms used in image and video compression. It illustrates the development of algorithms for processing images and videos in the compressed domain. Developing concepts from first principles, the book introduces po

  12. A Fast Fractal Image Compression Coding Method


    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. Search for the best-matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme, implemented on a personal computer and based on matching in the range block's neighbours, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time, and high in compression ratio, while PSNR is almost the same as with Barnsley's fractal block coding.




    Image compression is applied in many fields such as television broadcasting, remote sensing, and image storage. Digitized images are compressed by techniques that exploit the redundancy of the images, so that the number of bits required to represent an image can be reduced with acceptable degradation of the decoded image. The permissible degradation of image quality depends on the application. There are various applications where accuracy is of major concern. To achieve the objective of p...

  14. Research on compressive fusion for remote sensing images

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin


    A compressive fusion method for remote sensing images is presented based on block compressed sensing (BCS) and the non-subsampled contourlet transform (NSCT). Since BCS requires little memory and enables fast computation, images with large amounts of data can first be compressively sampled into block images with a structured random matrix. The compressive measurements are then decomposed with the NSCT and their coefficients are fused by a linear weighting rule. Finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, with consideration of blocking artifacts. A field test on remote sensing image fusion shows the validity of the proposed method.

  15. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq


    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the watermark bit reduction and image watermarking with effective tamper detection and lossless recovery.
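    As an illustration of the watermark compression step described above, here is a minimal pure-Python sketch of classic LZW coding (a textbook variant assumed for illustration, not the authors' implementation), which replaces repeated byte patterns in the watermark with dictionary indices:

    ```python
    def lzw_compress(data: bytes) -> list[int]:
        """Classic LZW: emit dictionary indices for the longest known prefixes."""
        # Initialize the dictionary with all single-byte strings (codes 0-255).
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        out = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc
            else:
                out.append(dictionary[w])
                dictionary[wc] = next_code  # learn the new phrase
                next_code += 1
                w = bytes([byte])
        if w:
            out.append(dictionary[w])
        return out

    def lzw_decompress(codes: list[int]) -> bytes:
        """Inverse mapping; rebuilds the same dictionary on the fly."""
        dictionary = {i: bytes([i]) for i in range(256)}
        next_code = 256
        w = dictionary[codes[0]]
        out = [w]
        for code in codes[1:]:
            if code in dictionary:
                entry = dictionary[code]
            else:  # special case: code refers to the phrase being defined
                entry = w + w[:1]
            out.append(entry)
            dictionary[next_code] = w + entry[:1]
            next_code += 1
            w = entry
        return b"".join(out)
    ```

    A repetitive watermark (e.g. ROI bits plus a key) compresses well because recurring phrases are emitted as single codes; a production codec would additionally pack the integer codes into variable-width bit fields.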

  16. On-board image compression for the RAE lunar mission

    Miller, W. H.; Lynch, T. J.


    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
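    The two stages named above, scan-line skipping followed by run-length coding, can be sketched as follows. This is an illustrative reconstruction only; the line-skip factor and run encoding are chosen for clarity and are not the flight implementation:

    ```python
    def compress_frame(frame, keep_every=4):
        """Scan-line skipping: keep every k-th line, then run-length code each kept line."""
        kept = frame[::keep_every]
        encoded = []
        for line in kept:
            runs = []
            value, length = line[0], 1
            for pixel in line[1:]:
                if pixel == value:
                    length += 1
                else:
                    runs.append((value, length))
                    value, length = pixel, 1
            runs.append((value, length))
            encoded.append(runs)
        return encoded

    def decompress_frame(encoded, keep_every=4):
        """Expand the runs; repeat each decoded line to stand in for the skipped ones."""
        lines = []
        for runs in encoded:
            line = []
            for value, length in runs:
                line.extend([value] * length)
            lines.extend([line] * keep_every)  # crude fill-in for skipped lines
        return lines
    ```

    Skipping lines contributes a fixed factor (here 4) and run-length coding contributes more on sparse imagery such as thin antenna booms against empty sky, which is how a combined ratio on the order of 32:1 becomes plausible.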




    Full Text Available Image compression is applied in many fields such as television broadcasting, remote sensing, and image storage. Digitized images are compressed by techniques that exploit the redundancy of the images, so that the number of bits required to represent an image can be reduced with acceptable degradation of the decoded image. The permissible degradation of image quality depends on the application. There are various applications where accuracy is of major concern. To achieve the objective of performance improvement with respect to decoded picture quality and compression ratio, compared with existing image compression techniques, an image compression technique using hybrid neural networks is proposed, combining two different learning networks: the autoassociative multi-layer perceptron and the self-organizing feature map.

  18. ROI-based DICOM image compression for telemedicine

    Vinayak K Bairagi; Ashok M Sapkal


    Many classes of images contain spatial regions which are more important than other regions. Compression methods capable of delivering higher reconstruction quality for important parts are attractive in this situation. For medical images, only a small portion of the image might be diagnostically useful, but the cost of a wrong interpretation is high. Hence, the Region Based Coding (RBC) technique is significant for medical image compression and transmission. Lossless compression schemes with secure transmission play a key role in telemedicine applications that help in accurate diagnosis and research. In this paper, we propose lossless scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images based on the Integer Wavelet Transform (IWT), with a distortion-limiting compression technique for the other regions of the image. The main objective of this work is to reject the noisy background and reconstruct the image portions losslessly. The compressed image can be accessed and sent over a telemedicine network using a personal digital assistant (PDA) or mobile device.

  19. The impact of lossless image compression to radiographs

    Lehmann, Thomas M.; Abel, Jürgen; Weiss, Claudia


    The increasing number of digital imaging modalities results in data volumes of several terabytes per year that must be transferred and archived in a common-sized hospital. Hence, data compression is an important issue for picture archiving and communication systems (PACS). The effect of lossy image compression is frequently analyzed with respect to images from a certain modality supporting a certain diagnosis. However, novel compression schemes have been developed recently that allow efficient but lossless compression. In this study, we compare the lossless compression schemes embedded in the tagged image file format (TIFF), the graphics interchange format (GIF), and JPEG 2000 (Part II) with the Burrows-Wheeler compression algorithm (BWCA) with respect to image content and origin. Repeated-measures ANOVA was based on 1,200 images in total. Statistically significant effects were found for image content and origin. The highest compression factors were obtained for radiographs of the head, while the lowest factor of 1.05 (7.587 bpp) resulted from the TIFF PackBits algorithm applied to digitally captured pelvis images. Overall, the BWCA is slightly but significantly more effective than JPEG 2000. Both compression schemes reduce the required bits per pixel (bpp) below 3. Also, secondarily digitized images are more compressible than directly digital ones. Interestingly, JPEG outperforms BWCA for directly digital images regardless of image content, while BWCA performs better than JPEG on secondarily digitized radiographs. In conclusion, efficient lossless image compression schemes are available for PACS.

  20. Multiple snapshot colored compressive spectral imager

    Correa, Claudia V.; Hinojosa, Carlos A.; Arce, Gonzalo R.; Arguello, Henry


    The snapshot colored compressive spectral imager (SCCSI) is a recent compressive spectral imaging (CSI) architecture that senses the spatial and spectral information of a scene in a single snapshot by means of a colored mosaic FPA detector and a dispersive element. Commonly, CSI architectures allow multiple snapshot acquisition, yielding improved reconstructions of spatially detailed and spectrally rich scenes. Each snapshot is captured employing a different coding pattern. In principle, SCCSI does not admit multiple snapshots since the pixelated tiling of optical filters is directly attached to the detector. This paper extends the concept of SCCSI to a system admitting multiple snapshot acquisition by rotating the dispersive element, so the dispersed spatio-spectral source is coded and integrated at different detector pixels in each rotation. Thus, a different set of coded projections is captured using the same optical components of the original architecture. The mathematical model of the multishot SCCSI system is presented along with several simulations. Results show that a gain up to 7 dB of peak signal-to-noise ratio is achieved when four SCCSI snapshots are compared to a single snapshot reconstruction. Furthermore, a gain up to 5 dB is obtained with respect to state-of-the-art architecture, the multishot CASSI.

  1. Wavelet-based Image Compression using Subband Threshold

    Muzaffar, Tanzeem; Choi, Tae-Sun


    Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight at every level. This minimum-weight subband at each level, which contributes least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method yields a further increase in compression ratio with negligible loss in image quality.

  2. DSP Implementation of Image Compression by Multiresolutional Analysis

    K. Vlcek


    Full Text Available Wavelet algorithms allow considerably higher compression rates than Fourier-transform-based methods. The most important property of wavelet transforms for these applications is that the image is captured in a few wavelet coefficients. They have been applied successfully to the compression of single images and of image series, in both the space and the time dimensions. Compression algorithms exploit the multi-scale nature of the wavelet transform.

  4. Image and video compression fundamentals, techniques, and applications

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P


    Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss, or with a small loss, of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  5. Medical Image Compression using Wavelet Decomposition for Prediction Method

    Ramesh, S M


    This paper offers a simple and lossless compression method for medical images. The method is based on wavelet decomposition of the medical images followed by correlation analysis of the coefficients. The correlation analyses are the basis of a prediction equation for each subband. Predictor variable selection is performed through a coefficient graphic method to avoid the multicollinearity problem and to achieve high prediction accuracy and compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.

  6. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    R. Gomathi


    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.


    V. Sutha Jebakumari; P. Arockia Jansi Rani


    Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms such as block truncation coding, EZW, and SPIHT are studied and analyzed; their algorithmic ideas and steps are given. The parameters of all these algorithms are analyzed, and the best parameter for each compression algorithm is found.


  9. Oncologic image compression using both wavelet and masking techniques.

    Yin, F F; Gao, Q


    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region of interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are ignored, so the remaining coefficients can be efficiently coded to minimize image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in the subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of the reconstructed image. The effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using the wavelet transform with and without masking at the same PSNR are compared for all image types. The addition of masking increases the compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratios greater than 50 are shown.
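    The masking idea, keeping wavelet coefficients whose spatial support falls inside the ROI and discarding the rest, can be sketched with a single-level 2-D Haar transform (a simplification for illustration; the paper's compactly supported wavelet and coder are more elaborate):

    ```python
    import numpy as np

    def haar2d(x):
        """One level of the 2-D Haar transform: average/detail along columns, then rows."""
        lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
        hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
        rows = np.hstack([lo, hi])
        lo2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
        hi2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
        return np.vstack([lo2, hi2])

    def ihaar2d(c):
        """Exact inverse of haar2d."""
        n, m = c.shape[0] // 2, c.shape[1] // 2
        rows = np.empty_like(c)
        rows[0::2, :] = c[:n, :] + c[n:, :]
        rows[1::2, :] = c[:n, :] - c[n:, :]
        x = np.empty_like(c)
        x[:, 0::2] = rows[:, :m] + rows[:, m:]
        x[:, 1::2] = rows[:, :m] - rows[:, m:]
        return x

    def mask_coefficients(coeffs, roi):
        """Zero coefficients whose 2x2 spatial support lies entirely outside the ROI."""
        h, w = roi.shape
        small = roi.reshape(h // 2, 2, w // 2, 2).any(axis=(1, 3))
        keep = np.tile(small, (2, 2))  # same block mask in all four subbands
        return coeffs * keep
    ```

    Because one Haar level acts on disjoint 2x2 blocks, zeroing the coefficients outside the mask leaves the ROI reconstruction exact, while the long runs of zeros outside cost almost nothing to entropy-code.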

  10. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    Gillespy, T; Rowberg, A H


    This is part 2 of our article on image storage and compression, the third article in our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
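    Why the DPCM transformation helps can be sketched in a few lines: a differential image concentrates its values around zero, lowering the entropy that Huffman or LZW coding must pay for. This is an illustrative left-neighbour predictor; the article's DPCM variant may differ:

    ```python
    from collections import Counter
    import math

    def dpcm_encode(image):
        """Replace each pixel by its difference from the left neighbour (row-wise DPCM)."""
        diff = []
        for row in image:
            prev = 0
            drow = []
            for p in row:
                drow.append(p - prev)
                prev = p
            diff.append(drow)
        return diff

    def dpcm_decode(diff):
        """Cumulative sums invert the differencing exactly (DPCM is lossless)."""
        image = []
        for drow in diff:
            prev = 0
            row = []
            for d in drow:
                prev += d
                row.append(prev)
            image.append(row)
        return image

    def entropy(values):
        """Shannon entropy in bits/symbol: lower entropy -> shorter Huffman/LZW codes."""
        counts = Counter(values)
        n = len(values)
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    ```

    On a smooth gradient image the differences are mostly small constants, so their entropy is far below that of the raw pixel values, which is exactly the effect the article reports for LZW after DPCM.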

  11. Comparison of two SVD-based color image compression schemes.

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli


    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become very common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green, and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR, and CR. The experimental results show that, with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
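    A minimal numpy sketch of the real scheme's core idea, stacking the three colour planes into one real matrix C and keeping only the top-k singular triplets, is shown below. The row-stacking layout is an assumption for illustration; the paper's construction of C may differ:

    ```python
    import numpy as np

    def svd_compress(rgb, k):
        """Stack R, G, B planes into one real matrix and keep the top-k singular triplets."""
        h, w, _ = rgb.shape
        C = rgb.transpose(2, 0, 1).reshape(3 * h, w)  # one possible real arrangement
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        return U[:, :k], s[:k], Vt[:k, :], (h, w)

    def svd_decompress(U, s, Vt, shape):
        """Rebuild the rank-k approximation and unstack it into an h x w x 3 image."""
        h, w = shape
        C = (U * s) @ Vt
        return C.reshape(3, h, w).transpose(1, 2, 0)
    ```

    Storage drops from 3hw values to k(3h + w + 1), so small k gives a high CR; the reconstruction error is governed by the discarded singular values.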

  12. Reflectance and transmittance model for recto-verso halftone prints

    Hebert, M.; Hersch, R. D.


    We propose a spectral prediction model for predicting the reflectance and transmittance of recto-verso halftone prints. A recto-verso halftone print is modeled as a diffusing substrate surrounded by two inked interfaces in contact with air (or with another medium). The interaction of light with the print comprises three components: (a) the attenuation of the incident light penetrating the print across the inked interface, (b) the internal reflectance and internal transmittance that accounts f...

  13. Lossless compression of medical images using Hilbert scan

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang


    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels of a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using the Hilbert scan; finally, we apply five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that DPCM followed by the Hilbert scan and then the arithmetic coding scheme gives the best compression result, and also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
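    The Hilbert scan itself can be sketched with the classic distance-to-coordinate mapping for square power-of-two grids (a standard textbook formulation, assumed here for illustration):

    ```python
    def hilbert_d2xy(n, d):
        """Map distance d along a Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:  # rotate the quadrant so sub-curves join end to end
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def hilbert_scan(image):
        """Rearrange a square power-of-two image into a 1-D sequence along the Hilbert curve."""
        n = len(image)
        return [image[y][x] for x, y in (hilbert_d2xy(n, d) for d in range(n * n))]
    ```

    Consecutive samples along the curve are always spatial neighbours, so the local correlation of the DPCM error image survives the linearization, which is the locality gain the abstract reports.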

  14. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Arpita C. Raut,


    Full Text Available Existing prediction-based lossless image compression schemes predict image data from their spatial neighborhoods, which cannot predict high-frequency image structure components, such as edges, patterns, and textures, very well, limiting compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed. The super-spatial prediction approach is adapted to compress high-frequency structure components of greyscale images. The motivation behind the proposed prediction approach comes from motion prediction in video coding, which attempts to find an optimal prediction of structure components within the previously encoded image regions. The approach is efficient for image regions with significant structure components with respect to compression ratio and bit rate, as compared with CALIC (context-based adaptive lossless image coding).

  15. Compressive optical image watermarking using joint Fresnel transform correlator architecture

    Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie


    A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image by means of the joint Fresnel transform correlator architecture. The watermarked image is then compressed to a much smaller data volume using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed well via compressed sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the technique is effective and suitable for optical image security transmission, owing to its completely optical implementation and greatly decreased hologram data volume.

  16. Texture-based medical image retrieval in compressed domain using compressive sensing.

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A


    Content-based image retrieval has gained considerable attention as a useful tool in many applications, and texture is one of its key features. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields most affected, as image databases have grown huge and retrieving the relevant image has become a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples of unknowns, does not require a close match between the sampling pattern and characteristic image structure, increases acquisition speed, and enhances image quality.

  17. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    Poupat, Jean-Luc; Vitulli, Raffaele


    The space market is more and more demanding in terms of image compression performance. The resolution, agility and swath of Earth observation satellite instruments are continuously increasing, multiplying by 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, leader on the market of combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, has a SpaceWire interface for configuring and controlling the device, and is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image and very high speed image compression ASIC, potentially relevant for compressing any 2D image with bi-dimensional data correlation, such as Earth observation or scientific data. The paper presents some of the main aspects of the CWICOM development: the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.

  18. Structure Assisted Compressed Sensing Reconstruction of Undersampled AFM Images

    Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben


    The use of compressed sensing in atomic force microscopy (AFM) can potentially speed-up image acquisition, lower probe-specimen interaction, or enable super resolution imaging. The idea in compressed sensing for AFM is to spatially undersample the specimen, i.e. only acquire a small fraction...

  19. Phase Imaging: A Compressive Sensing Approach

    Schneider, Sebastian; Stevens, Andrew; Browning, Nigel D.; Pohl, Darius; Nielsch, Kornelius; Rellinghaus, Bernd


    Since Wolfgang Pauli posed the question in 1933 whether the probability densities |Ψ(r)|² (real-space image) and |Ψ(q)|² (reciprocal-space image) uniquely determine the wave function Ψ(r) [1], the so-called Pauli Problem has sparked numerous methods in all fields of microscopy [2, 3]. Reconstructing the complete wave function Ψ(r) = a(r)e^(−iφ(r)), with amplitude a(r) and phase φ(r), from the recorded intensity makes it possible to directly study the electric and magnetic properties of the sample through the phase. In transmission electron microscopy (TEM), electron holography is by far the most established method for phase reconstruction [4]. Requiring a high stability of the microscope, as well as the installation of a biprism in the TEM, holography cannot be applied to any microscope straightforwardly. Recently, a phase retrieval approach was proposed using conventional TEM electron diffractive imaging (EDI): using the SAD aperture as a reciprocal-space constraint, a localized sample structure can be reconstructed from its diffraction pattern and a real-space image with the hybrid input-output algorithm [5]. We present an alternative approach using compressive phase retrieval [6]. Our approach does not require a real-space image. Instead, random complementary pairs of checkerboard masks are cut into a 200 nm Pt foil covering a conventional TEM aperture (cf. Figure 1). Used as SAD apertures, diffraction patterns are subsequently recorded from the same sample area, whereby each mask blocks different parts of gold particles on a carbon support (cf. Figure 2). The compressive sensing problem has the following formulation. First, we note that the complex-valued reciprocal-space wave function is the Fourier transform of the (also complex-valued) real-space wave function, Ψ(q) = F[Ψ(r)], so the diffraction pattern image is given by |Ψ(q)|² = |F[Ψ(r)]|². We want to find Ψ(r) given a few differently coded diffraction pattern measurements yn

  20. Multi-view compression using LDI (Layered Depth Images) representation

    Jantet, Vincent


    This thesis presents an advanced framework for multi-view plus depth video processing and compression based on the concept of layered depth image (LDI). Several contributions are proposed for both depth-image based rendering and LDI construction and compression. The first contribution is a novel virtual view synthesis technique called Joint Projection Filling (JPF). This technique takes as input any image plus depth content and provides a virtual view in general position and performs image wa...

  1. On-line structure-lossless digital mammogram image compression

    Wang, Jun; Huang, H. K.


    This paper proposes a novel on-line structure lossless compression method for digital mammograms during the film digitization process. The structure-lossless compression segments the breast and the background, compresses the former with a predictive lossless coding method and discards the latter. This compression scheme is carried out during the film digitization process and no additional time is required for the compression. Digital mammograms are compressed on-the-fly while they are created. During digitization, lines of scanned data are first acquired into a small temporary buffer in the scanner, then they are transferred to a large image buffer in an acquisition computer which is connected to the scanner. The compression process, running concurrently with the digitization process in the acquisition computer, constantly checks the image buffer and compresses any newly arrived data. Since compression is faster than digitization, data compression is completed as soon as digitization is finished. On-line compression during digitization does not increase overall digitizing time. Additionally, it reduces the mammogram image size by a factor of 3 to 9 with no loss of information. This algorithm has been implemented in a film digitizer. Statistics were obtained based on digitizing 46 mammograms at four sampling distances from 50 to 200 microns.

  2. Wavelet/scalar quantization compression standard for fingerprint images

    Brislawn, C.M.


    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
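
    The core of the wavelet/scalar quantization step can be sketched as a uniform scalar quantizer applied per subband. This is illustrative only: the real WSQ standard uses per-subband bin widths and a widened zero bin, whereas the values here are arbitrary.

```python
# Illustrative uniform (midtread) scalar quantizer of the kind applied to each
# wavelet subband in WSQ-style coders. The bin width q is a per-subband
# parameter in the real standard; the value here is arbitrary.

def quantize(coeffs, q):
    """Map each coefficient to a signed integer bin index."""
    return [int(round(c / q)) for c in coeffs]

def dequantize(indices, q):
    """Reconstruct coefficients at bin centers; error is bounded by q/2."""
    return [i * q for i in indices]

coeffs = [0.3, -4.2, 10.1, 0.0, 7.6]
idx = quantize(coeffs, q=2.0)           # -> [0, -2, 5, 0, 4]
rec = dequantize(idx, q=2.0)
assert all(abs(c - r) <= 1.0 for c, r in zip(coeffs, rec))
```

    The many zero indices produced by quantizing high-frequency subbands are what the subsequent entropy coder exploits.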

  3. [Lossless compression of hyperspectral image for space-borne application].

    Li, Jin; Jin, Long-xu; Li, Guo-ning


    In order to resolve the difficulty in hardware implementation, lower compression ratio and time consuming for the whole hyperspectral image lossless compression algorithm based on the prediction, transform, vector quantization and their combination, a hyperspectral image lossless compression algorithm for space-borne application was proposed in the present paper. Firstly, intra-band prediction is used only for the first image along the spectral line using a median predictor. And inter- band prediction is applied to other band images. A two-step and bidirectional prediction algorithm is proposed for the inter-band prediction. In the first step prediction, a bidirectional and second order predictor proposed is used to obtain a prediction reference value. And a improved LUT prediction algorithm proposed is used to obtain four values of LUT prediction. Then the final prediction is obtained through comparison between them and the prediction reference. Finally, the verification experiments for the compression algorithm proposed using compression system test equipment of XX-X space hyperspectral camera were carried out. The experiment results showed that compression system can be fast and stable work. The average compression ratio reached 3.05 bpp. Compared with traditional approaches, the proposed method could improve the average compression ratio by 0.14-2.94 bpp. They effectively improve the lossless compression ratio and solve the difficulty of hardware implementation of the whole wavelet-based compression scheme.
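
    The intra-band median predictor can be sketched as below, assuming the standard JPEG-LS-style median edge detector (MED); the paper's exact predictor may differ in detail.

```python
# Median predictor (MED, as in JPEG-LS), used here to illustrate the intra-band
# prediction of the first spectral band: each pixel is predicted from its
# left (a), upper (b) and upper-left (c) neighbours.

def med_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)   # horizontal or vertical edge above/left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # planar prediction in smooth regions

# residual image = pixel - prediction; small residuals compress well
img = [[10, 12, 13],
       [11, 14, 40]]
residuals = []
for y in range(1, len(img)):
    for x in range(1, len(img[0])):
        a, b, c = img[y][x - 1], img[y - 1][x], img[y - 1][x - 1]
        residuals.append(img[y][x] - med_predict(a, b, c))
```

    The predictor needs no side information, which is one reason it suits hardware implementation.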

  4. Compressing industrial computed tomography images by means of contour coding

    Jiang, Haina; Zeng, Li


    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piece-wise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which is detrimental to compression efficiency. We therefore merge the Freeman encoding idea into an improved two-dimensional contour extraction method (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction, collapsing the two steps of the traditional contour-based method into one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.

  5. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes

    Wilkinson, M.H.F.


    A run length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as are used in the tag image file format. These schemes are Lempel-Ziv and Welch (LZW), Macintosh Packbits, and CCITT Group 3 Facsimile 1-dimensional modified Huffman run length code.
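
    A run-length code of comparable simplicity can be sketched in a few lines; this is one simple variant (alternating run lengths, starting with 0-runs), not necessarily the paper's exact format.

```python
# A very simple run-length code for binary image rows: store the lengths of
# alternating runs, by convention starting with a run of 0s (possibly empty).

def rle_encode(bits):
    runs, current, count = [], 0, 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def rle_decode(runs):
    bits, current = [], 0
    for r in runs:
        bits.extend([current] * r)
        current ^= 1
    return bits

row = [0] * 6 + [1] * 3 + [0] * 7
assert rle_encode(row) == [6, 3, 7]
assert rle_decode([6, 3, 7]) == row
```

    Images of small objects separated by large empty spaces give long runs, hence the high compression ratios reported; noisy images with short runs are the bad worst case.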

  6. Data delivery system for MAPPER using image compression

    Yang, Jeehong; Savari, Serap A.


    The data delivery throughput of electron beam lithography systems can be improved by applying lossless image compression to the layout image and using an electron beam writer that can decode the compressed image on-the-fly. In earlier research we introduced the lossless layout image compression algorithm Corner2, which assumes a somewhat idealized writing strategy, namely row-by-row with a raster order. The MAPPER system has electron beam writers positioned in a lattice formation and each electron beam writer writes a designated block in a zig-zag order. We introduce Corner2-MEB, which redesigns Corner2 for MAPPER systems.

  7. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You


    Because compressed sensing (CS) performs encryption and compression in one simple step, it can be used to encrypt and compress an image. However, differences in sparsity level among blocks of the sparsely transformed image degrade compression performance. Motivated by this difference, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels; a simple approximate evaluation method is introduced to test the sparsity uniformity. In the second stage, owing to its low computational complexity and storage, KCS is adopted to encrypt and compress the scrambled, sparsely transformed image, with a small measurement matrix constructed from the piece-wise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance and flexibility.

  8. [Hyperspectral image compression technology research based on EZW].

    Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun


    With the development of hyperspectral remote sensing, hyperspectral imaging has been applied in aviation and spaceflight. Unlike multispectral imaging, it images the target continuously with nanoscale spectral bandwidth, so the spectral resolution is very high. However, as the number of bands increases, the volume of spectral data grows rapidly, and its storage and transmission become problems that must be faced. With the development of wavelet compression technology, many researchers have adopted and improved EZW for image compression. The present paper applies the method to compression along the spatial dimensions of hyperspectral images, but does not address compression along the spectral dimension. The hyperspectral image reconstruction results are good, both in terms of peak signal-to-noise ratio (PSNR) and spectral curves and in subjective comparison of source and reconstructed images. If the image were first compressed along the spectral dimension and then along the spatial dimensions, the authors believe the results would be better.

  9. Compressing subbanded image data with Lempel-Ziv-based coders

    Glover, Daniel; Kwatra, S. C.


    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
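
    The "simple transform" step can be illustrated with a 1-D fast Walsh-Hadamard transform; the subbanding in the paper applies such transforms along image rows and columns, and this sketch shows only the 1-D building block.

```python
# Fast Walsh-Hadamard transform (unnormalized butterfly form). Applied along
# rows and columns of an image, it produces the subbands that are then
# quantized and run-length coded.

def wht(x):
    """Return the Walsh-Hadamard transform; len(x) must be a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

data = [5, 7, 6, 4]
coeffs = wht(data)                      # -> [22, 0, 2, -4]
# the inverse is the same transform scaled by 1/len:
assert [c / 4 for c in wht(coeffs)] == data
```

    The transform needs only additions and subtractions, which is why it is attractive as a cheap pre-processing step before run-length and Lempel-Ziv coding.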

  10. Correlation and image compression for limited-bandwidth CCD.

    Thompson, Douglas G.


    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  11. PCNN-Based Image Fusion in Compressed Domain

    Yang Chen


    This paper addresses a novel image fusion method for different application scenarios, employing compressive sensing (CS) as the image sparse representation method and a pulse-coupled neural network (PCNN) as the fusion rule. First, source images are compressed through the scrambled block Hadamard ensemble (SBHE) for its compression capability and computational simplicity on the sensor side. Local standard deviation is used to stimulate the PCNN, and coefficients with large firing times are selected as the fusion coefficients in the compressed domain. The fusion coefficients are smoothed with a sliding window to avoid blocking effects. Experimental results demonstrate that the proposed fusion method outperforms other fusion methods in the compressed domain and is effective and adaptive in different image fusion applications.

  12. Designing robust sensing matrix for image compression.

    Li, Gang; Li, Xiao; Li, Sheng; Bai, Huang; Jiang, Qianru; He, Xiongxiong


    This paper deals with designing the sensing matrix for compressive sensing systems. Traditionally, the optimal sensing matrix is designed so that the Gram of the equivalent dictionary is as close as possible to a target Gram with small mutual coherence. A novel design strategy is proposed in which, unlike the traditional approaches, the measure considers the mutual coherence behavior of the equivalent dictionary as well as the sparse representation errors of the signals. The optimal sensing matrix is defined as the one that minimizes this measure and hence is expected to be more robust against sparse representation errors. A closed-form solution is derived for the optimal sensing matrix with a given target Gram. An alternating minimization-based algorithm is also proposed for addressing the same problem with the target Gram searched within a set of relaxed equiangular tight frame Grams. Experiments are carried out, and the results show that the sensing matrix obtained using the proposed approach outperforms existing ones that use a fixed dictionary, in terms of signal reconstruction accuracy for synthetic data and peak signal-to-noise ratio for real images.
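
    The quantity at the heart of such designs, the mutual coherence of the equivalent dictionary, is easy to compute; a small sketch with toy random matrices (the sizes and matrices are arbitrary, not from the paper):

```python
# Mutual coherence of the equivalent dictionary D = Phi @ Psi: the largest
# off-diagonal entry (in magnitude) of the column-normalized Gram matrix.
import numpy as np

def mutual_coherence(D):
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(Dn.T @ Dn)                              # normalized Gram
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
Phi = rng.standard_normal((6, 12))    # sensing matrix (toy size)
Psi = rng.standard_normal((12, 16))   # sparsifying dictionary (toy size)
mu = mutual_coherence(Phi @ Psi)
assert 0.0 < mu < 1.0
```

    Sensing-matrix design searches over Phi to push this value (and, in the paper's measure, the representation error term) down toward the target Gram's coherence.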

  13. Image compression and transmission based on LAN

    Huang, Sujuan; Li, Yufeng; Zhang, Zhijiang


    In this work an embedded system is designed which implements MPEG-2 LAN transmission of a CVBS or S-video signal. The hardware consists of three parts. The first is digitization of the analog CVBS or S-video (Y/C) input from a TV or VTR source. The second is MPEG-2 compression coding, performed primarily by a single-chip MPEG-2 audio/video encoder whose output is an MPEG-2 system PS/TS. The third part covers data stream packing, LAN access and system control, based on an ARM microcontroller: it packs the encoded stream into Ethernet frames for the LAN, and accepts Ethernet packets bearing control information from the network, decoding the corresponding commands to control digitization, coding, and other operations. In order to raise the network transmission rate to match the MPEG-2 data stream, an efficient TCP/IP network protocol stack is built directly on the network hardware of the embedded system, instead of using an ordinary embedded operating system. To obtain a high LAN transmission rate on a low-end ARM, the protocol stack opens a dedicated transmission channel for the MPEG-2 stream. The designed system has been tested on an experimental LAN. The experiment shows a maximum LAN transmission rate of up to 12.7 Mbps with good sound and image quality, and satisfactory system reliability.

  14. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong


    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encrypted data expansion when adopting nonlinear transformations directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by the hyper-chaotic system. The cyclic shift operation changes the pixel values efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
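
    The two-direction measurement plus shift re-encryption can be sketched numerically. This is a toy illustration: the matrix sizes are arbitrary, and the shift amounts are fixed stand-ins for the hyper-chaotic sequence used in the paper.

```python
# Sketch of the two-direction measurement: the image X is compressed as
# Y = A @ X @ B.T with measurement matrices A and B, then re-encrypted by a
# cyclic shift of each row. Matrices and shifts here are toy values; in the
# paper the shifts come from a hyper-chaotic system (the secret key).
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy "image"
A = rng.standard_normal((4, 8))                       # row-direction measurements
B = rng.standard_normal((4, 8))                       # column-direction measurements

Y = A @ X @ B.T                                       # 8x8 -> 4x4: compression + encryption
shifts = [1, 3, 0, 2]                                 # stand-in for chaotic shift sequence
C = np.array([np.roll(row, s) for row, s in zip(Y, shifts)])

# the shift stage is trivially invertible given the key:
Y_back = np.array([np.roll(row, -s) for row, s in zip(C, shifts)])
assert np.allclose(Y_back, Y)
```

    Measuring along both directions is what keeps the measurement matrices small compared with a single flattened 1-D measurement of the whole image.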


  15. Image compression using rough fuzzy logic with Huffman coding (RFHA)

    Rohit Kumar Gangwar


    With increasing demand, multimedia production is growing fast, straining network bandwidth and memory storage. Image compression is therefore significant for reducing data redundancy, saving memory and transmission bandwidth. An efficient compression technique is proposed which combines fuzzy logic with Huffman coding. While normalizing image pixels, each pixel value belonging to the image foreground is characterized and interpreted; the image is subdivided into pixels characterized by a pair of approximation sets. Encoding uses Huffman codes, which are statistically independent and produce an efficient code for compression, while decoding uses rough fuzzy logic to rebuild the image pixels. The method used here is the rough fuzzy logic with Huffman coding algorithm (RFHA). Different compression techniques are compared with Huffman coding, and fuzzy logic is applied to the Huffman-reconstructed image. Results show that high compression rates are achieved, with visually negligible difference between the compressed and original images.
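
    The Huffman stage can be sketched on its own (the rough-fuzzy preprocessing is omitted; this only shows the statistically independent prefix-code construction that stage feeds):

```python
# Build a Huffman prefix code from symbol frequencies: repeatedly merge the
# two least frequent nodes, then read codewords off the resulting tree.
import heapq
from collections import Counter

def huffman_code(data):
    """Return {symbol: bitstring} for the symbols in `data`."""
    heap = [(freq, i, sym) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    n = len(heap)
    if n == 1:                          # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, n, (left, right)))  # internal node
        n += 1
    def walk(node, prefix, table):
        if isinstance(node, tuple):     # internal node: recurse
            walk(node[0], prefix + "0", table)
            walk(node[1], prefix + "1", table)
        else:
            table[node] = prefix
        return table
    return walk(heap[0][2], "", {})

pixels = [7, 7, 7, 7, 2, 2, 9]
code = huffman_code(pixels)
# the most frequent pixel value gets the shortest codeword
assert len(code[7]) < len(code[2]) and len(code[7]) < len(code[9])
```

    The skewed pixel statistics produced by the preprocessing are exactly what make such a variable-length code pay off.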

  16. Discrete-cosine-transform-based image compression applied to dermatology

    Cookson, John P.; Sneiderman, Charles; Rivera, Christopher


    The research reported in this paper concerns an evaluation of the impact of compression on the quality of digitized color dermatologic images. 35 mm slides of four morphologic types of skin lesions were captured at 1000 pixels per inch (ppi) in 24-bit RGB color, to give an approximately 1K x 1K image. The discrete cosine transform (DCT) algorithm was applied to the resulting image files to achieve compression ratios of about 7:1, 28:1, and 70:1. The original scans and the decompressed files were written to a 35 mm film recorder. Together with the original photo slides, the slides resulting from digital images were evaluated in a study of morphology recognition and image quality assessment. A panel of dermatologists was asked to identify the morphology depicted and to rate the image quality of each slide. The images were shown in a progression from the highest level of compression to the original photo slides. We conclude that DCT file compression yields acceptable performance for skin lesion images, since differences in morphology recognition performance do not correlate significantly with the use of original photos versus compressed versions. Additionally, image quality evaluation does not correlate significantly with the level of compression.
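
    The DCT compression step being evaluated can be sketched on a single block. This is a generic illustration, with one uniform quantization step standing in for a full JPEG-style quantization table:

```python
# Sketch of DCT-based compression on an 8x8 block: transform, quantize, and
# most coefficients become zero. A single uniform quantizer (step 16) stands
# in for a full JPEG-style table.
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block (direct O(n^4) form)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# a smooth 8x8 block, as in a flat skin region
block = [[100 + x + y for y in range(8)] for x in range(8)]
coeffs = dct2(block)
quantized = [[round(c / 16) for c in r] for r in coeffs]
nonzero = sum(q != 0 for r in quantized for q in r)
# the energy compacts into a few low-frequency coefficients
assert 1 <= nonzero <= 4
```

    That energy compaction is what allows the 7:1 to 70:1 ratios studied here with little visible degradation in smooth regions.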

  17. Lossless compression of hyperspectral images using hybrid context prediction.

    Liang, Yuan; Li, Jianping; Guo, Ke


    In this letter, a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband prediction. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction, which is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of the hybrid context prediction is coded with arithmetic coding. We compare the proposed lossless compression algorithm with existing algorithms for hyperspectral images such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), and JPEG-LS. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.

  18. CMOS low data rate imaging method based on compressed sensing

    Xiao, Long-long; Liu, Kun; Han, Da-peng


    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low data rate imaging approach implementing compressed sensing (CS). On the basis of the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical currents output by the pixels in a column are combined, with weights specified by voltages, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM): each element of the VMM takes the total value of a column as input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm; this percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.

  19. Survey for Image Representation Using Block Compressive Sensing For Compression Applications

    Ankita Hundet


    Compressive sensing theory has proved favourable for developing data compression techniques, though it was put forward with the objective of dimension-reduced sampling to save data sampling cost. In this paper two sampling methods are explored for block CS (BCS) with discrete cosine transform (DCT) based image representation for compression applications: (a) coefficient random permutation (CRP) and (b) adaptive sampling (AS). The CRP method can balance the sparsity of the sampled vectors in the DCT domain of the image, thereby improving CS sampling efficiency. To attain AS, we design an adaptive measurement matrix for CS based on the energy distribution characteristics of the image in the DCT domain, which markedly improves CS performance. Our experimental results reveal that the proposed methods are efficacious in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality. The proposed BCS-based image representation scheme could be an efficient alternative for applications of encrypted image compression and/or robust image compression.

  20. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Milin Zhang


    Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve the overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of a separate image-capturing unit and an image-compression processing unit are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, in which image compression is performed during the image-capture phase prior to storage, referred to as compressive acquisition. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using different design paradigms are presented at the end.

  1. An Efficient Image Compression Technique Based on Arithmetic Coding

    Prof. Rajendra Kumar Patel


    The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition video, has increased the need for effective and standardized image compression techniques. Digital images play a very important role in conveying detailed information, and the key obstacle for many applications is the vast amount of data required to represent a digital image directly. Digitizing images at the quality needed for clear and accurate information demands large storage space and better storage and access mechanisms in hardware or software. In this paper we concentrate on this problem, reducing storage requirements while preserving image quality. State-of-the-art techniques can compress typical images to between 1/10 and 1/50 of their uncompressed size without visibly affecting image quality, yet there remains a need for compression techniques offering a better trade-off between storage and quality. Since arithmetic coding is an effective way of reducing the encoded data, we propose an image compression technique based on arithmetic coding with a Walsh transformation, which provides an efficient means of reduction.

  2. Wavelet based hierarchical coding scheme for radar image compression

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng


    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized into a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks of different frequency bands by 2-D wavelet transformation, and each block is quantized and coded with the Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  3. Implementation of Novel Medical Image Compression Using Artificial Intelligence

    Mohammad Al-Rababah


    Medical image processing is one of the most important areas of research in medical applications of digitized medical information, and medical images are large. Since the advent of digital medical information, an important challenge has been handling the transmission and storage requirements of huge data volumes, including medical images, and compression is one of the essential techniques for addressing this problem; a large proportion of medical images must be compressed losslessly. This paper proposes a new medical image compression algorithm based on the lifting wavelet transform CDF 9/7 combined with the SPIHT coding algorithm; the lifting scheme is applied to secure the benefits of the wavelet transform. To evaluate the proposed algorithm, its results are compared with those of another compression algorithm, the JPEG codec. Experimental results show that the proposed algorithm is superior for all tested medical images in both lossy and lossless compression, and the wavelet-SPIHT algorithm provides very good PSNR values for MRI images.

  4. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes.

    Wilkinson, M H


    A run length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as are used in the tag image file format. These schemes are Lempel-Ziv and Welch (LZW), Macintosh Packbits, and CCITT Group 3 Facsimile 1-dimensional modified Huffman run length code. In a set of 25 images consisting of full microscopic fields of view of bacterial slides, the method gave a 10.3-fold compression: 1.074 times better than LZW. In a second set of images of single areas of interest within each field of view, compression ratios of over 600 were obtained, 12.8 times that of LZW. The drawback of the system is its bad worst case performance. The method could be used in any application requiring storage of binary images of relatively small objects with fairly large spaces in between.
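
    The paper's exact run-length code is not specified in the abstract, but a generic run-length sketch (function names mine) shows why binary images of small objects separated by large empty spaces compress so well:

    ```python
    def rle_encode(bits):
        """Run-length encode a binary sequence as (value, run_length) pairs."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([b, 1])       # start a new run
        return [(v, n) for v, n in runs]

    def rle_decode(runs):
        """Expand (value, run_length) pairs back to the original sequence."""
        out = []
        for v, n in runs:
            out.extend([v] * n)
        return out
    ```

    A mostly-empty scan line collapses to a handful of pairs, while a worst-case alternating line doubles in size, mirroring the bad worst-case behaviour the abstract mentions.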

  5. Lossy Compression Color Medical Image Using CDF Wavelet Lifting Scheme

    M. Beladghem


    Full Text Available As the coming era is that of digitized medical information, an important challenge is meeting the storage and transmission requirements of enormous volumes of data, including color medical images. Compression is one of the indispensable techniques for solving this problem. In this work, we propose an algorithm for color medical image compression based on a biorthogonal wavelet transform CDF 9/7 coupled with the SPIHT coding algorithm, to which we applied the lifting structure to mitigate the drawbacks of the wavelet transform. To assess the compression achieved by our algorithm, we compared the results obtained with wavelet-based filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested color images, providing very good PSNR and MSSIM values for color medical images.

  6. Dynamic CT perfusion image data compression for efficient parallel processing.

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A


    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
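
    The paper's actual compression scheme is more elaborate than the abstract reveals. One simple way to exploit temporal redundancy while keeping O(1) random read access directly from the compressed arrays (the key property the authors emphasize) is to difference each frame against a reference frame and store residuals at a reduced bit width. A hedged sketch of that idea, with names and the int16 residual assumption mine:

    ```python
    import numpy as np

    def compress_ctp(frames):
        """Difference every frame against the first frame and store the
        residuals as int16 instead of int32, roughly halving the size.
        Assumes temporal changes fit in 16 bits (CTP intensities vary slowly)."""
        frames = np.asarray(frames, dtype=np.int32)
        base = frames[0]
        resid = (frames - base).astype(np.int16)
        return base, resid

    def read_voxel(base, resid, t, idx):
        """O(1) random read of one element directly from the compressed data,
        with no need to decompress the whole volume."""
        return int(base[idx]) + int(resid[t][idx])
    ```

    Because each residual sits at a fixed offset, a GPU kernel can fetch any (frame, voxel) pair independently, which is what makes the representation GPU-friendly.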

  7. New Methods for Lossless Image Compression Using Arithmetic Coding.

    Howard, Paul G.; Vitter, Jeffrey Scott


    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…
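
    Components (1) and (2) above, pixel sequencing and prediction, can be illustrated with the simplest possible predictor. This is a generic sketch (a left-neighbor predictor of my choosing, not the authors' method): the residuals it produces cluster around zero with a Laplace-like distribution, which is exactly what the error-modeling and arithmetic-coding stages exploit.

    ```python
    import numpy as np

    def predict_residuals(row):
        """Left-neighbor prediction along a raster row; the residuals are
        what an entropy coder would actually encode."""
        row = np.asarray(row, dtype=int)
        pred = np.concatenate(([0], row[:-1]))  # predict each pixel from its left neighbor
        return row - pred

    def reconstruct(resid):
        """Invert the predictor: a cumulative sum recovers the row exactly,
        so the scheme is lossless."""
        return np.cumsum(resid)
    ```

    On smooth image rows most residuals are 0 or ±1, so they carry far less entropy than the raw pixel values.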

  8. Architecture for hardware compression/decompression of large images

    Akil, Mohamed; Perroton, Laurent; Gailhard, Stephane; Denoulet, Julien; Bartier, Frederic


    In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture. The algorithm is lossless and is applied to bi-level images of large size; it ensures a minimum compression rate for the images we are considering. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes.

  9. Effect of Embedding Watermark on Compression of the Digital Images

    Aggarwal, Deepak


    Image compression plays a very important role in image processing, especially when images are sent over the Internet. Threats to information on the Internet are increasing, and images are no exception. Generally an image is sent over the Internet in compressed form to use the network bandwidth optimally, but at any intermediate node the image can be changed, intentionally or unintentionally. To ensure that the correct image is delivered at the receiving end, a watermark is embedded in the image; the watermarked image is then compressed and sent over the network. When the image is decompressed at the other end, the watermark can be extracted to verify that the image is the one that was sent. Although watermarking increases the size of the uncompressed image, this cost must be paid to achieve a high degree of robustness, i.e., the extent to which an image withstands attacks on it. The present paper is an attempt to make transmission of the images secure from...

  10. Image Denoising of Wavelet based Compressed Images Corrupted by Additive White Gaussian Noise

    Shyam Lal


    Full Text Available In this study an efficient algorithm is proposed for removing additive white Gaussian noise from compressed natural images in the wavelet domain. First, the natural image is compressed by the discrete wavelet transform, and then the proposed hybrid filter is applied to denoise compressed images corrupted by additive white Gaussian noise (AWGN). The proposed hybrid filter (HMCD) combines a nonlinear fourth-order partial differential equation with a bivariate shrinkage function. It provides better noise suppression with minimal edge blurring compared with other existing denoising techniques for wavelet-compressed images. Simulation and experimental results on benchmark test images demonstrate that the proposed hybrid filter achieves competitive denoising performance compared with other state-of-the-art image denoising algorithms, and it is particularly effective for highly corrupted images in the wavelet-compressed domain.

  11. Imaging industry expectations for compressed sensing in MRI

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob


    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 

  12. An Improved Interpolative Vector Quantization Scheme for Image Compression

    Ms. Darshana Chaware


    Full Text Available The aim of this paper is to develop a new image compression scheme by introducing visual patterns into interpolative vector quantization (IVQ). In this scheme, input images are first down-sampled by an ideal filter; the down-sampled images are then compressed lossily by JPEG and transmitted to the decoder. On the decoder side, the decoded images are first up-sampled to the original resolution. The codebook is designed using the LBG algorithm, with visual patterns introduced into the codebook design. Experimental results show that our scheme achieves much better performance than JPEG in terms of visual quality and PSNR.
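
    The visual-pattern extension is the paper's contribution and is not reproduced here, but the underlying LBG codebook design it builds on is essentially k-means on image blocks. A minimal sketch, with a simplified initialization (plain first-k seeding rather than LBG's codeword splitting):

    ```python
    import numpy as np

    def lbg_codebook(vectors, k, iters=20):
        """Generalized Lloyd / LBG codebook design: alternate nearest-codeword
        assignment and centroid update until the codebook stabilizes."""
        data = np.asarray(vectors, dtype=float)
        book = data[:k].copy()                 # simplified seeding; LBG proper splits codewords
        labels = np.zeros(len(data), dtype=int)
        for _ in range(iters):
            # assign each training vector to its nearest codeword
            d = ((data[:, None, :] - book[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # move each codeword to the centroid of its cell
            for j in range(k):
                if (labels == j).any():
                    book[j] = data[labels == j].mean(axis=0)
        return book, labels
    ```

    Each image block is then transmitted as the index of its nearest codeword, which is where the rate saving comes from.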

  13. Applications of chaos theory to lossy image compression

    Perrone, A. L.


    The aim of this paper is to show that the theoretical issues presented elsewhere (Perrone, Lecture Notes in Computer Science 880 (1995) 9-52) and relative to a new technique of stabilization of chaotic dynamics can be partially implemented to develop a new efficient prototype for lossy image compression. The results of the comparison between the performances of this prototype and the usual algorithms for image compression will also be discussed. The tests were performed on standard test images of the European Space Agency (E.S.A.). These images were obtained from a Synthetic Aperture Radar (S.A.R.) device mounted on an ERS-1 satellite.

  14. Grayscale Image Compression Based on Min Max Block Truncating Coding

    Hilal Almarabeh


    Full Text Available This paper presents an image compression technique based on block truncation coding. A min-max block truncation coding (MM_BTC) scheme is presented for grayscale image compression that divides the image into non-overlapping blocks. MM_BTC differs from other block truncation coding schemes, such as standard BTC, in how it selects the quantization levels in order to remove redundancy. Objective measures such as bit rate (BR), mean square error (MSE), peak signal-to-noise ratio (PSNR), and redundancy (R) were used to present a detailed evaluation of MM_BTC image quality.
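
    The abstract does not spell out MM_BTC's level-selection rule. One plausible reading of "min max" (my interpretation, not the authors' published algorithm) is to use the block minimum and maximum as the two reconstruction levels and threshold at their midpoint, replacing standard BTC's mean/variance-preserving levels:

    ```python
    import numpy as np

    def mm_btc_encode(block):
        """Encode one block as a bitmap plus two levels (the block min/max).
        Standard BTC would instead derive the levels from mean and variance."""
        b = np.asarray(block, dtype=float)
        lo, hi = b.min(), b.max()
        bitmap = b >= (lo + hi) / 2.0          # 1 bit per pixel
        return bitmap, lo, hi

    def mm_btc_decode(bitmap, lo, hi):
        """Reconstruct the block: bright pixels get hi, dark pixels get lo."""
        return np.where(bitmap, hi, lo)
    ```

    The rate is fixed at 1 bit per pixel plus two levels per block, regardless of block content, which is what makes BTC-family coders so cheap.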

  15. Lossless Compression of Medical Images Using 3D Predictors.

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio


    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), which is one of the state-of-the-art lossless compression technologies, presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on MRP algorithm.

  16. DCT and DST Based Image Compression for 3D Reconstruction

    Siddeq, Mohammed M.; Rodrigues, Marcos A.


    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) A one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image. (2) The output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts where the low-frequency components are compressed by arithmetic coding and the high frequency ones by an efficient minimization encoding algorithm. At decompression stage, a binary search algorithm is used to recover the original high frequency components. The technique is demonstrated by compressing 2D images up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
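
    The two-transform front end described above (row-wise DCT, then column-wise DST) can be sketched with explicit orthonormal transform matrices; the quantization, minimization encoding, and binary-search recovery stages of the paper are omitted, and the matrix construction below is the standard type-II definition rather than anything taken from the paper:

    ```python
    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II matrix (rows are basis vectors)."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
        c[0] /= np.sqrt(2.0)                  # DC row scaled for orthonormality
        return c

    def dst_matrix(n):
        """Orthonormal DST-II matrix (rows are basis vectors)."""
        k = np.arange(n)[:, None]
        m = np.arange(n)[None, :]
        s = np.sqrt(2.0 / n) * np.sin(np.pi * (2 * m + 1) * (k + 1) / (2 * n))
        s[-1] /= np.sqrt(2.0)                 # last row scaled for orthonormality
        return s

    def dct_rows_dst_cols(img):
        """Step 1: 1-D DCT along each row; step 2: 1-D DST along each column."""
        a = np.asarray(img, dtype=float)
        h, w = a.shape
        return dst_matrix(h) @ (a @ dct_matrix(w).T)
    ```

    Because both matrices are orthonormal, the combined transform is exactly invertible before quantization: `dst_matrix(h).T @ coef @ dct_matrix(w)` recovers the image.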

  17. Secure and Faster Clustering Environment for Advanced Image Compression



    Full Text Available Cloud computing provides ample opportunity in many areas, such as fast image transmission and secure, efficient imaging as a service. In general, users need fast and secure service, but image compression algorithms usually do not run fast; despite several ongoing research efforts, conventional compression algorithms may not be able to run faster. We therefore perform a comparative study of three image compression algorithms, examining their features and other factors to choose the best among them for cluster processing. The chosen algorithm is then applied in a cluster computing environment to run parallel image compression for faster processing. This paper presents a real-time implementation of distributed image compression on a cluster of nodes. In cluster computing, security is also an important factor, so we propose a distributed intrusion detection system that monitors all the nodes in the cluster; if an intrusion occurs during node processing, a prevention step is taken based on the Robust Intrusion Control (RIC) method. We demonstrate the effectiveness and feasibility of our method on a set of satellite images for defense forces. The efficiency ratio of this computation process is 91.20.

  18. K-cluster-valued compressive sensing for imaging

    Xu Mai


    Full Text Available Abstract The success of compressive sensing (CS) implies that an image can be compressed directly at acquisition, with the number of measurements over the whole image less than the number of pixels. In this paper, we extend existing CS by including the prior knowledge that K cluster values are available for the pixels or wavelet coefficients of an image. To model such prior knowledge, we propose a K-cluster-valued CS approach for imaging that incorporates the K-means algorithm into the CoSaMP recovery algorithm. A significant advantage of the proposed approach over conventional CS is its ability to reduce the number of measurements required for accurate image reconstruction. Finally, the performance of conventional CS and K-cluster-valued CS is evaluated using natural images and background-subtraction images.

  19. A novel psychovisual threshold on large DCT for image compression.

    Abu, Nur Azman; Ernawan, Ferda


    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system tolerance to reduce the amount of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order will be the primitive of the psychovisual threshold in image compression. This research study proposes a psychovisual threshold on the large discrete cosine transform (DCT) image block which will be used to automatically generate the much needed quantization tables. The proposed psychovisual threshold will be used to prescribe the quantization values at each frequency order. The psychovisual threshold on the large image block provides significant improvement in the quality of output images. The experimental results show that the large quantization tables derived from the psychovisual threshold produce visual output images largely free of artifacts. Besides, the experimental results show that the psychovisual threshold concept produces better image quality at higher compression rates than JPEG image compression.


    Ferda Ernawan


    Full Text Available An extension of the standard JPEG image compression known as JPEG-3 allows rescaling of the quantization matrix to achieve a certain image output quality. Recently, the Tchebichef Moment Transform (TMT) has been introduced in the field of image compression and has been shown to perform better than standard JPEG image compression. This study presents an adaptive TMT image compression. This is achieved by generating custom quantization tables for low, medium and high image output quality levels based on a psychovisual model. A psychovisual model is developed to approximate the visual threshold on Tchebichef moments from the image reconstruction error. The contribution of each moment is investigated and analyzed in a quantitative experiment; the sensitivity of TMT basis functions can be measured by evaluating their contributions to image reconstruction at each moment order. The psychovisual threshold model allows a developer to design several custom TMT quantization tables for a user to choose from according to his or her target output preference. Consequently, these quantization tables produce a lower average Huffman code bit length while still retaining higher image quality than the extended JPEG scaling scheme.


  2. Optimization of wavelet decomposition for image compression and feature preservation.

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T


    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  3. Perceptually tuned JPEG coder for echocardiac image compression.

    Al-Fahoum, Amjed S; Reza, Ali M


    In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, owing to applicable Digital Imaging and Communications in Medicine (DICOM) requirements, is limited to the standard discrete cosine transform-based Joint Photographic Experts Group (JPEG) scheme. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting the gain factor. This goal is achieved by jointly considering the region of interest, optimum bit allocation, human visual system constraints, and optimum coding technique, and optimizing these parameters to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance: objectively, a 2.16-dB improvement in peak signal-to-noise ratio, and subjectively, a 25% improvement over the most widely used compression techniques.

  4. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq


    The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image in the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding in the LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared in terms of bit reduction and compression ratio. LZW was found to perform best and was used in the development of the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes.
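
    The LSB-embedding step described above is straightforward to illustrate. This is a generic sketch (the hash computation, ROI/RONI partitioning, and watermark compression are omitted, and the function names are mine): each watermark bit replaces the least significant bit of one RONI pixel, changing its value by at most 1.

    ```python
    import numpy as np

    def embed_lsb(pixels, bits):
        """Embed a bit sequence into the least significant bits of RONI pixels.
        Each pixel value changes by at most 1, preserving perceptual quality."""
        out = np.asarray(pixels, dtype=np.uint8).copy()
        for i, b in enumerate(bits):
            out[i] = (out[i] & 0xFE) | b   # clear the LSB, then set it to b
        return out

    def extract_lsb(pixels, n):
        """Recover the first n embedded bits from the watermarked pixels."""
        return [int(p) & 1 for p in np.asarray(pixels).ravel()[:n]]
    ```

    Because the ROI itself is untouched, the diagnostic region remains bit-exact, which is why schemes of this kind embed only into the RONI.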

  5. Efficient Shot Boundary Detection & Key Frame Extraction using Image Compression

    Shilpa R. Jadhav; Anup V. Kalaskar; Shruti Bhargava


    Full Text Available This paper presents a novel algorithm for efficient shot boundary detection and key frame extraction using image compression. The algorithm differs from conventional methods mainly in its use of image segmentation and an attention model. The matching difference between two consecutive frames is computed with different weights, shot boundaries are detected with an automatic threshold, and key frames are extracted using a reference-frame-based approach. Experimental results show improved shot boundary detection performance with the proposed algorithm, key frames that represent shot content well, and satisfactory compression of the resulting frames.

  6. Three-Dimensional Image Compression With Integer Wavelet Transforms

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.


    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
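
    The integer wavelet transforms that let 3-D CB-EZW serve lossy and lossless decoding from one bit stream rely on integer-to-integer lifting. As a minimal illustration (the simplest member of that family, the integer Haar or S-transform, not the specific transforms used in the paper):

    ```python
    def s_transform(x):
        """Integer Haar (S) transform via lifting: maps integers to integers
        with no rounding loss, so it is exactly invertible."""
        lo, hi = [], []
        for a, b in zip(x[0::2], x[1::2]):
            h = a - b              # detail coefficient
            l = b + (h >> 1)       # low-pass = floor((a + b) / 2)
            lo.append(l)
            hi.append(h)
        return lo, hi

    def s_inverse(lo, hi):
        """Undo the lifting steps in reverse order to recover the input exactly."""
        x = []
        for l, h in zip(lo, hi):
            b = l - (h >> 1)
            a = b + h
            x += [a, b]
        return x
    ```

    Truncating the bit stream gives a lossy reconstruction, while decoding it fully inverts the transform exactly; that is the property that enables lossy and lossless decompression from the same stream.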

  7. Fast Adaptive Wavelet for Remote Sensing Image Compression

    Bo Li; Run-Hai Jiao; Yuan-Cheng Li


    Remote sensing images are hard to compress at high ratios because of their rich texture. By analyzing the influence of wavelet properties on image compression, this paper proposes wavelet construction rules and builds a new parameterized biorthogonal wavelet construction model. The model parameters are optimized using a genetic algorithm with energy compaction as the objective function. In addition, to resolve the computational complexity of online construction, wavelets are built for different classes of images according to the image classification rule proposed in this paper, and a fast adaptive wavelet selection algorithm (FAWS) is implemented. Experimental results show that the wavelet bases of FAWS achieve better compression performance than Daubechies 9/7.

  8. The FBI compression standard for digitized fingerprint images

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)


    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
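
    The wavelet/scalar quantization method described above quantizes subband coefficients with a uniform quantizer that has a widened zero bin (a "deadzone"). A generic sketch of such a deadzone quantizer (the exact bin widths and reconstruction offsets of the FBI specification differ; the 0.5 midpoint offset here is an illustrative choice):

    ```python
    import math

    def deadzone_quantize(x, step):
        """Deadzone uniform scalar quantizer: the zero bin is 2*step wide,
        so more small coefficients map to zero than with a midtread quantizer."""
        s = 1 if x >= 0 else -1
        return s * int(math.floor(abs(x) / step))

    def deadzone_dequantize(q, step):
        """Reconstruct at the midpoint of the quantization bin."""
        if q == 0:
            return 0.0
        s = 1 if q > 0 else -1
        return s * (abs(q) + 0.5) * step
    ```

    Zeroing many near-zero wavelet coefficients is what lets the subsequent entropy coder reach the roughly 15:1 archival-quality ratios quoted above.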

  9. Compression of grayscale scientific and medical image data

    F Murtagh


    Full Text Available A review of issues in image compression is presented, with a strong focus on the wavelet transform and other closely related multiresolution transforms. The roles of information content, resolution scale, and image capture noise, are discussed. Experimental and practical results are reviewed.

  10. Optimal context quantization in lossless compression of image data sequences

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl


    In image compression context-based entropy coding is commonly used. A critical issue to the performance of context-based image coding is how to resolve the conflict of a desire for large templates to model high-order statistic dependency of the pixels and the problem of context dilution due to in...

  11. Image Compression and Watermarking scheme using Scalar Quantization

    Swamy, Kilari Veera; Reddy, Y V Bhaskar; Kumar, S Srinivas; 10.5121/ijngn.2010.2104


    This paper presents a new compression technique and image watermarking algorithm based on Contourlet Transform (CT). For image compression, an energy based quantization is used. Scalar quantization is explored for image watermarking. Double filter bank structure is used in CT. The Laplacian Pyramid (LP) is used to capture the point discontinuities, and then followed by a Directional Filter Bank (DFB) to link point discontinuities. The coefficients of down sampled low pass version of LP decomposed image are re-ordered in a pre-determined manner and prediction algorithm is used to reduce entropy (bits/pixel). In addition, the coefficients of CT are quantized based on the energy in the particular band. The superiority of proposed algorithm to JPEG is observed in terms of reduced blocking artifacts. The results are also compared with wavelet transform (WT). Superiority of CT to WT is observed when the image contains more contours. The watermark image is embedded in the low pass image of contourlet decomposition. ...

  12. Compressive SAR imaging with joint sparsity and local similarity exploitation.

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi


    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on the scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme by iterative SAR imaging and updating of the weighted autoregressive model to solve this problem. Eventually, experimental results demonstrated the validity and generality of the proposed approach.

  13. Prior image constrained compressed sensing: a quantitative performance evaluation

    Thériault Lauzier, Pascal; Tang, Jie; Chen, Guang-Hong


    The appeal of compressed sensing (CS) in the context of medical imaging is undeniable. In MRI, it could enable shorter acquisition times while in CT, it has the potential to reduce the ionizing radiation dose imparted to patients. However, images reconstructed using a CS-based approach often show an unusual texture and a potential loss in spatial resolution. The prior image constrained compressed sensing (PICCS) algorithm has been shown to enable accurate image reconstruction at lower levels of sampling. This study systematically evaluates an implementation of PICCS applied to myocardial perfusion imaging with respect to two parameters of its objective function. The prior image parameter α was shown here to yield an optimal image quality in the range 0.4 to 0.5. A quantitative evaluation in terms of temporal resolution, spatial resolution, noise level, noise texture, and reconstruction accuracy was performed.

  14. Compression of 3D integral images using wavelet decomposition

    Mazri, Meriem; Aggoun, Amar


    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
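    The viewpoint-extraction step described above can be sketched for the unidirectional case, where each cylindrical microlens spans a fixed number of columns. This is a minimal illustration, not the paper's implementation; the function name and the list-of-lists image layout are assumptions:

```python
def viewpoint_images(integral, lens_width):
    """Split a unidirectional integral image into lens_width viewpoint images.

    Viewpoint k collects pixel k of every microlens, i.e. every
    lens_width-th column of the integral image starting at column k.
    """
    return [[row[k::lens_width] for row in integral] for k in range(lens_width)]
```

    Re-interleaving the viewpoint columns restores the original image, which mirrors the final reconstruction step after decoding described in the abstract.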

  15. Onboard low-complexity compression of solar stereo images.

    Wang, Shuang; Cui, Lijuan; Cheng, Samuel; Stanković, Lina; Stanković, Vladimir


    We propose an adaptive distributed compression solution using particle filtering that tracks correlation, as well as performing disparity estimation, at the decoder side. The proposed algorithm is tested on the stereo solar images captured by the twin satellites system of NASA's Solar TErrestrial RElations Observatory (STEREO) project. Our experimental results show improved compression performance with respect to a benchmark compression scheme, accurate correlation estimation by our proposed particle-based belief propagation algorithm, and significant peak signal-to-noise ratio improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.

  16. Greylevel Difference Classification Algorithm in Fractal Image Compression

    陈毅松; 卢坚; 孙正兴; 张福炎


    This paper proposes the notion of a greylevel difference classification algorithm in fractal image compression. An example of the greylevel difference classification algorithm is then given as an improvement of the quadrant greylevel and variance classification in the quadtree-based encoding algorithm. The algorithm incorporates the frequency feature into spatial analysis using the notion of average quadrant greylevel difference, leading to improvements in encoding time, PSNR value, and compression ratio.

  17. Fast-adaptive near-lossless image compression

    He, Kejing


    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method: it removes bits from each codeword, predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions, eliminating the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Furthermore, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computational power is limited.
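    The predict-then-Golomb-Rice pipeline can be illustrated with a short sketch. The paper's exact predictor is not given here, so the median edge detector of LOCO-I/JPEG-LS is used as a stand-in edge-detecting predictor; only bitwise operations, additions, and comparisons are needed, in the spirit of the abstract:

```python
def med_predict(left, above, above_left):
    """Median edge detector: picks min/max of the neighbors at an edge,
    otherwise a planar estimate (as in LOCO-I/JPEG-LS)."""
    if above_left >= max(left, above):
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    return left + above - above_left

def zigzag(e):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Golomb-Rice code: unary quotient (value >> k), a 0 terminator,
    then the k low-order remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, "b").zfill(k)
    return bits
```

    For example, a residual of -3 maps to 5, which with k = 2 encodes as "1001" (quotient 1 in unary, remainder 01).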

  18. Feature preserving compression of high resolution SAR images

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing


    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in the original images, especially in smooth areas and at edges with low contrast. This is known as the "smoothing effect", and it makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that reduces the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the block. A feature-preserving technique is integrated into the quantization and entropy coding of the wavelet coefficients. Experiments show that image quality at compression ratios up to 16:1 is improved significantly, and more weak information is preserved.

  19. Medical image compression with embedded-wavelet transform

    Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz


    The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.

  20. 3D passive integral imaging using compressive sensing.

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram


    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.

  1. Improved vector quantization scheme for grayscale image compression

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.


    This paper proposes an improved image coding scheme based on vector quantization (VQ). It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the proposed scheme takes the mean value of an image block as an alternative block-encoding rule to improve image quality. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping bit rates low.
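    The alternative block-encoding rule can be sketched as follows. The paper's exact decision criterion is not stated in the abstract, so the MSE-threshold test and its value here are hypothetical placeholders:

```python
def encode_block(block, codebook, threshold=64.0):
    """Choose between a VQ index and block-mean encoding (hypothetical rule).

    If even the best codeword's mean-squared error exceeds `threshold`,
    fall back to transmitting the block mean instead of a VQ index.
    """
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    best_i = min(range(len(codebook)), key=lambda i: mse(block, codebook[i]))
    if mse(block, codebook[best_i]) <= threshold:
        return ("vq", best_i)           # small codebook suffices here
    mean = round(sum(block) / len(block))
    return ("mean", mean)               # flat fallback preserves quality
```

    The returned tags ("vq"/"mean") would then be entropy coded; in the paper this second stage combines linear prediction with Huffman coding.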

  2. Effect of Image Linearization on Normalized Compression Distance

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
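    NCD itself is straightforward to compute with any standard compressor. A minimal sketch using zlib as the compressor C, with row-major scanning shown as one example linearization (the function names are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance with zlib as the compressor C:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def row_major(image):
    """One linearization: concatenate pixel rows left to right, top to bottom."""
    return bytes(p for row in image for p in row)
```

    A self-comparison yields a small distance (bounded below only by compressor overhead), while dissimilar strings score near 1; comparing NCD across different linearizations of the same spatially transformed image is precisely the experiment the paper performs.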

  3. A compression tolerant scheme for image authentication

    刘宝锋; 张文军; 余松煜


    Image authentication techniques are used to protect recipients against malicious forgery. In this paper, we propose a new image authentication technique based on digital signature. Authentication is verified by comparing the features of each block in the tested image with the corresponding features of the block recorded in the digital signature. The proposed authentication scheme is capable of distinguishing visible but non-malicious changes due to common processing operations from malicious changes. Finally, our experimental results show that the proposed scheme not only protects image integrity effectively but also has low computational cost, making it feasible for practical applications.

  4. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.


    Compressing ultrasonic images, which are always corrupted by noise, causes over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, this work presents a compression method based on the Discrete Wavelet Transform (DWT) that also suppresses noise without losing much flaw-relevant information. Exploiting the multi-resolution and interscale-correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as noise-dominated, signal-dominated, or mixed. Better denoising is then achieved by selectively thresholding the DWCs. In 'local quantization', different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows suppressed noise and preserved flaw characteristics.

  5. Adaptive interference hyperspectral image compression with spectrum distortion control

    Jing Ma; Yunsong Li; Chengke Wu; Dong Chen


    As one of the next-generation imaging spectrometers, the interferential spectrometer has received much attention. With traditional spectrum compression methods, the hyperspectral images generated by an interferential spectrometer can only be protected for visual quality in the spatial domain, while their optical applications in the Fourier domain are often ignored. The relation between distortion in the Fourier domain and compression in the spatial domain is therefore analyzed in this letter. Based on this analysis, a novel coding scheme is proposed, which compresses data in the spatial domain while reducing distortion in the Fourier domain. The bitstream of set partitioning in hierarchical trees (SPIHT) is truncated by adaptively lifting the rate-distortion slopes of zerotrees according to the priorities of the optical path difference (OPD), based on rate-distortion optimization theory. Experimental results show that the proposed scheme achieves better performance in the Fourier domain while maintaining image quality in the spatial domain.

  6. A specific measurement matrix in compressive imaging system

    Wang, Fen; Wei, Ping; Ke, Jun


    Compressed sensing or compressive sampling (CS) is a framework for simultaneous data sampling and compression proposed by Candes, Donoho, and Tao several years ago. Ever since the advent of the single-pixel camera, one CS application, compressive imaging (CI, also referred to as feature-specific imaging), has attracted the interest of numerous researchers. However, choosing a simple and efficient measurement matrix in such a hardware system remains a challenging problem, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of the order-N Hadamard matrix, and we discuss the validity of the matrix theoretically. The advantages of the matrix are its universality and its easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Because of the memory limitations of the hardware system and of the personal computer used to simulate the process, it is impractical to create a matrix large enough to process large-scale images directly. To solve this problem, a block-wise approach is introduced for large-scale images, and the experimental results confirm its validity.
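    The proposed measurement matrix is easy to construct. A sketch using the Sylvester recursion, assuming N is a power of two (as that construction requires) and that "odd rows" is counted 1-indexed:

```python
def hadamard(n):
    """Sylvester construction of an n-by-n Hadamard matrix (n a power of two):
    H_{2m} = [[H_m, H_m], [H_m, -H_m]], starting from H_1 = [[1]]."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def odd_row_measurement_matrix(n):
    """Keep rows 1, 3, 5, ... (1-indexed), giving an (n/2)-by-n sensing matrix
    with only +1/-1 entries, convenient for optical implementation."""
    return hadamard(n)[0::2]
```

    The +1/-1 entries are what make the matrix easy to realize optically (e.g., with a DMD), and the retained rows remain mutually orthogonal.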

  7. Spatial exemplars and metrics for characterizing image compression transform error

    Schmalz, Mark S.; Caimi, Frank M.


    The efficient transmission and storage of digital imagery increasingly requires compression to maintain effective channel bandwidth and device capacity. Unfortunately, in applications where high compression ratios are required, lossy compression transforms tend to produce a wide variety of artifacts in decompressed images. Image quality measures (IQMs) have been published that detect global changes in image configuration resulting from the compression or decompression process. Examples include statistical and correlation-based procedures related to mean-squared error, diffusion of energy from features of interest, and spectral analysis. Additional but sparsely-reported research involves local IQMs that quantify feature distortion in terms of objective or subjective models. In this paper, a suite of spatial exemplars and evaluation procedures is introduced that can elicit and measure a wide range of spatial, statistical, or spectral distortions from an image compression transform T. By applying the test suite to the input of T, performance deficits can be highlighted in the transform's design phase, versus discovery under adverse conditions in field practice. In this study, performance analysis is concerned primarily with the effect of compression artifacts on automated target recognition (ATR) algorithm performance. For example, featural distortion can be measured using linear, curvilinear, polygonal, or elliptical features interspersed with various textures or noise-perturbed backgrounds or objects. These simulated target blobs may themselves be perturbed with various types or levels of noise, thereby facilitating measurement of statistical target-background interactions. By varying target-background contrast, resolution, noise level, and target shape, compression transforms can be stressed to isolate performance deficits. Similar techniques can be employed to test spectral, phase and boundary distortions due to decompression. Applicative examples are taken from

  8. Yule-Nielsen based multi-angle reflectance prediction of metallic halftones

    Babaei, Vahid; Hersch, Roger D.


    Spectral prediction models are widely used for characterizing classical, almost transparent ink halftones printed on a diffuse substrate. Metallic-ink prints however reflect a significant portion of light in the specular direction. Due to their opaque nature, multi-color metallic halftones require juxtaposed halftoning methods where halftone dots of different colors are laid out side-by-side. In this work, we study the application of the Yule-Nielsen spectral Neugebauer (YNSN) model on metallic halftones in order to predict their reflectances. The model is calibrated separately at each considered illumination and observation angle. For each measuring geometry, there is a different Yule-Nielsen n-value. For traditional prints on paper, the n-value expresses the amount of optical dot gain. In the case of the metallic prints, the optical dot gain is much smaller than in paper prints. With the fitted n-values, we try to better understand the interaction of light and metallic halftones.
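    The YNSN prediction underlying this study is a closed-form formula: R(lambda) = (sum_i a_i * R_i(lambda)^(1/n))^n, with one fitted n per measurement geometry. A minimal numeric sketch, with illustrative function and argument names:

```python
def ynsn_reflectance(areas, primaries, n):
    """Yule-Nielsen spectral Neugebauer prediction for one geometry.

    areas     -- fractional coverages a_i of the Neugebauer primaries (sum to 1)
    primaries -- measured reflectance spectra R_i, one list per primary
    n         -- the fitted Yule-Nielsen n-value for this geometry
    """
    bands = len(primaries[0])
    return [sum(a * primaries[i][b] ** (1.0 / n) for i, a in enumerate(areas)) ** n
            for b in range(bands)]
```

    With n = 1 the model reduces to linear (Neugebauer) mixing; n > 1 models optical dot gain, which the paper finds to be much smaller for metallic prints than for paper prints.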

  9. An Image Coder for Lossless and Near Lossless Compression

    MEN Chaoguang; LI Xiukun; ZHAO Debin; YANG Xiaozong


    In this paper, we propose a new image coder (DACLIC) for lossless and near-lossless image compression. Redundancy removal in DACLIC (Direction and context-based lossless/near-lossless image coder) is achieved by block direction prediction and context-based error modeling. A quadtree coder and a postprocessing technique in DACLIC are also described. Experiments show that DACLIC has higher compression efficiency than the ISO standard LOCO-I (Low complexity lossless compression for images). For example, DACLIC is superior to LOCO-I by 0.12 bpp, 0.13 bpp, and 0.21 bpp when the maximum absolute tolerant error n = 0, 5, and 10 for the 512 × 512 image "Lena". In terms of computational complexity, DACLIC has marginally higher encoding complexity than LOCO-I but comparable decoding complexity.

  10. View compensated compression of volume rendered images for remote visualization.

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S


    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  11. Compressive microscopic imaging with "positive-negative" light modulation

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing


    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation of a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by more than one order of magnitude compared with that obtained by the conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against instability of the light source and may be applied under very weak light conditions. Its nature and its noise sources are analyzed in depth. The realization of this technique represents a big step toward practical applications of compressive microscopic imaging in biology and materials science.

  12. Pulse-compression ghost imaging lidar via coherent detection

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng


    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range, and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar, without decreasing the range resolution, can easily obtain high single-pulse energy through the use of a long pulse, and the mechanism of coherent detection can eliminate the influence of stray light, which can dramatically improve the detection sensitivity and detection range.
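    The pulse-compression idea, correlating the long coded pulse against the echo so that the energy piles up at the true delay, can be shown with a toy matched filter. The 5-chip Barker phase code and the noise-free echo below are illustrative, not the paper's waveform:

```python
def matched_filter(received, reference):
    """Cross-correlate the received signal with the transmitted long pulse;
    the correlation peak compresses the pulse and marks the round-trip delay."""
    n, m = len(received), len(reference)
    return [sum(received[k + j] * reference[j] for j in range(m))
            for k in range(n - m + 1)]

code = [1, 1, 1, -1, 1]           # 5-chip Barker phase code (toy example)
rx = [0] * 5 + code + [0] * 5     # echo delayed by 5 samples, noise-free
out = matched_filter(rx, code)
delay = max(range(len(out)), key=out.__getitem__)  # peak position = delay 5
```

    The long pulse carries high energy, yet the compressed peak is one sample wide with sidelobes no larger than 1, which is why range resolution is preserved.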


  14. An innovative lossless compression method for discrete-color images.

    Alzahir, Saif; Borici, Arber


    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, and GIS images, as well as binary images. The method comprises two main components. The first is a fixed-size codebook encompassing 8×8-bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities were obtained from a very large set of discrete-color images and are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories by 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average.

  15. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu


    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for CS reconstruction, the desired image have a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise always exists. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.


    Yang Guoan; Zheng Nanning; Guo Shugang


    A new approach for designing the Biorthogonal Wavelet Filter Bank (BWFB) for image compression is presented in this letter. The approach consists of two steps. First, an optimal filter bank is designed in the theoretical sense, based on Vaidyanathan's coding gain criterion in a SubBand Coding (SBC) system. Then the filter bank is optimized based on the Peak Signal-to-Noise Ratio (PSNR) criterion in the JPEG2000 image compression system, resulting in a BWFB in the practical application sense. With this approach, a series of BWFBs for a specific class of applications related to image compression, such as remote sensing images, can be designed quickly. Here, new 5/3 and 9/7 BWFBs are presented based on the above approach for remote sensing image compression. Experiments show that the two filter banks perform comparably to the CDF 9/7 and LT 5/3 filters in the JPEG2000 standard; at the same time, the coefficients and the lifting parameters of the lifting scheme are all rational, which brings computational advantages and ease of VLSI implementation.

  17. Integer wavelet transform for embedded lossy to lossless image compression.

    Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M


    The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One possible implementation of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by using the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise, which is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input, and it accurately predicts the results obtained for images compressed by the well-known EZW algorithm. Experiments are also performed to measure the differences in bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
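    The reversible IWT analyzed here can be illustrated with one level of the LeGall 5/3 lifting step used in JPEG2000's reversible path; the floor divisions in each lifting step are exactly the rounding operations the paper models as additive noise. This sketch assumes an even-length signal and clamps indices at the boundary instead of using symmetric extension:

```python
def fwd_53(x):
    """One level of the integer 5/3 lifting transform: predict then update.
    The // divisions are the rounding operations modeled as additive noise."""
    N = len(x)  # assumed even
    d = [x[2*i+1] - (x[2*i] + x[min(2*i+2, N-2)]) // 2 for i in range(N // 2)]
    s = [x[2*i] + (d[max(i-1, 0)] + d[i] + 2) // 4 for i in range(N // 2)]
    return s, d  # approximation (lowpass) and detail (highpass) bands

def inv_53(s, d):
    """Undo the lifting steps in reverse order; because each step is
    subtracted back exactly, reconstruction is lossless despite rounding."""
    N = 2 * len(s)
    x = [0] * N
    for i in range(len(s)):
        x[2*i] = s[i] - (d[max(i-1, 0)] + d[i] + 2) // 4
    for i in range(len(d)):
        x[2*i+1] = d[i] + (x[2*i] + x[min(2*i+2, N-2)]) // 2
    return x
```

    Perfect reconstruction holds structurally: the inverse recomputes each lifting term from the same inputs and subtracts it, so the rounding "noise" cancels exactly in lossless mode and only matters when the coefficients are quantized for lossy compression.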

  18. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Barni Mauro


    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits significant potential in many application fields, the results obtained so far on real signals fall short of the theoretical bounds and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression and apply DSC techniques to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes and compared with other state-of-the-art 2D and 3D coders. Both schemes achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that remain to be solved to further improve the performance of DSC-based remote sensing systems.
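    The coset-code idea can be made concrete with the smallest possible example: transmit only the Hamming(7,4) syndrome of a block x from one band and let the decoder correct the side-information block y from a correlated band, assuming x and y differ in at most one bit. This is a toy stand-in for the far stronger codes used in the paper:

```python
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]  # Hamming(7,4) parity-check matrix

def syndrome(bits):
    """3-bit syndrome identifying the coset of the 7-bit word."""
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def dsc_decode(synd_x, y):
    """Recover x from its 3-bit syndrome and side information y,
    assuming x and y differ in at most one bit position."""
    s = tuple(a ^ b for a, b in zip(synd_x, syndrome(y)))
    pos = s[0] + 2 * s[1] + 4 * s[2]   # error position (0 means no error)
    x = list(y)
    if pos:
        x[pos - 1] ^= 1                # flip the single differing bit
    return x
```

    The encoder thus sends 3 bits instead of 7, shifting the correlation-exploiting work entirely to the decoder, which is exactly the complexity asymmetry the onboard scenario needs.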

  19. A Review On Segmentation Based Image Compression Techniques



    The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme, which reduces the required storage and transmission bandwidth, is essential for imagery. Compression techniques must not only perform well but also converge quickly in order to be applied to real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and their use is highlighted as a basis for developing novel techniques that address the challenge of image storage and transmission in multimedia applications.

  20. Improved zerotree coding algorithm for wavelet image compression

    Chen, Jun; Li, Yunsong; Wu, Chengke


    A listless minimum zerotree coding (LMZC) algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art wavelet-based image compression techniques, such as EZW and SPIHT, exploit the dependency between the subbands of a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients that exploits the dependency not only between the coarser and finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and that LMZC outperforms SPIHT in terms of hardware implementation.

  1. Fast lossless color image compression method using perceptron

    贾克斌; 张延华; 庄新月


    Lossless image compression plays an important role in high-quality image transmission and storage. In a real-time multimedia system, both the compression ratio and the processing speed must be considered. A novel lossless compression algorithm is investigated here. A low-complexity predictive model is proposed that uses the correlation between pixels and between color components, while a perceptron, as used in neural networks, adaptively rectifies the prediction values. This makes the prediction residuals smaller and confines them to a small dynamic range. A color space transform is also used, and good decorrelation is obtained. Comparative experimental results show that our algorithm performs noticeably better than traditional algorithms. Compared with the new standard JPEG-LS, this predictive model reduces computational complexity, and it is faster than JPEG-LS with negligible performance sacrifice.

  2. Digital image sequence processing, compression, and analysis

    Reed, Todd R



  3. Robust SPIHT-based Image Compression

    CHENHailin; YANGYuhang


    As a famous wavelet-based image coding technique, Set Partitioning in Hierarchical Trees (SPIHT) provides excellent rate-distortion performance and progressive display properties when images are transmitted over lossless networks. However, because it is highly state-dependent, it performs poorly over lossy networks. In this paper, we propose an algorithm that reorganizes the wavelet transform coefficients according to the wavelet-tree concept and codes each wavelet tree independently. Each coded bit-plane of each wavelet tree is then packetized and transmitted over the network independently, with little header information. Experimental results show that the proposed algorithm greatly improves the robustness of the bit stream while preserving its progressive display properties.

  4. Hybrid coding for split gray values in radiological image compression

    Lo, Shih-Chung B.; Krasner, Brian; Mun, Seong K.; Horii, Steven C.


    Digital techniques are used more often than ever in a variety of fields. Medical information management is one of the largest digital technology applications. It is desirable to have both a large data storage resource and extremely fast data transmission channels for communication. On the other hand, it is also essential to compress these data into an efficient form for storage and transmission. A variety of data compression techniques have been developed to tackle a diversity of situations. A digital value decomposition method using a splitting and remapping method has recently been proposed for image data compression. This method attempts to employ error-free compression for the part of the digital value containing the highly significant bits and uses another method for the second part of the digital value. We have reported that the effect of this method is substantial for vector quantization and other spatial encoding techniques. In conjunction with DCT-type coding, however, the splitting method showed only a limited improvement compared to the nonsplitting method. With the latter approach, we used a nonoptimized method for the images possessing only the top three-most-significant-bit value (3MSBV) and produced a compression ratio of approximately 10:1. Since the 3MSB images are highly correlated and the same values tend to aggregate together, the use of area or contour coding was investigated. In our experiment, we obtained average error-free compression ratios of 30:1 and 12:1 for 3MSB and 4MSB images, respectively, with the alternate value contour coding. With this technique, we clearly verified that the splitting method is superior to the nonsplitting method for finely digitized radiographs.

  5. Accelerated MR imaging using compressive sensing with no free parameters.

    Khare, Kedar; Hardy, Christopher J; King, Kevin F; Turski, Patrick A; Marinelli, Luca


    We describe and evaluate a robust method for compressive sensing MRI reconstruction using an iterative soft thresholding framework that is data-driven, so that no tuning of free parameters is required. The approach described here combines a Nesterov type optimal gradient scheme for iterative update along with standard wavelet-based adaptive denoising methods, resulting in a leaner implementation compared with the nonlinear conjugate gradient method. Tests with T₂ weighted brain data and vascular 3D phase contrast data show that the image quality of reconstructions is comparable with those from an empirically tuned nonlinear conjugate gradient approach. Statistical analysis of image quality scores for multiple datasets indicates that the iterative soft thresholding approach as presented here may improve the robustness of the reconstruction and the image quality, when compared with nonlinear conjugate gradient that requires manual tuning for each dataset. A data-driven approach as illustrated in this article should improve future clinical applicability of compressive sensing image reconstruction.
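    The core of such a reconstruction is the iterative soft thresholding (ISTA) update. The paper adds Nesterov acceleration and data-driven wavelet-domain thresholds; the minimal sketch below fixes the sparsity weight `lam` by hand and works directly in the signal domain, which is an assumption for illustration only:

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding (shrinkage) operator."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, steps=50):
        """Plain ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            grad = A.T @ (A @ x - y)             # gradient of the data-fit term
            x = soft(x - grad / L, lam / L)      # gradient step, then shrinkage
        return x
    ```

    With `A` the identity, the iteration converges in one step to the shrinkage of the data, which makes the role of the threshold easy to see.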

  6. A Parallel Approach to Fractal Image Compression

    Lubomir Dedera


    Full Text Available The paper deals with a parallel approach to the coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the achieved coding and decoding times and the effectiveness of the parallelization.

  7. Image Compression Via a Fast DCT Approximation

    Bayer, F. M.; Cintra, R. J.


    Discrete transforms play an important role in digital signal processing. In particular, due to its transform domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based

  8. Wavelet-based pavement image compression and noise reduction

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen


    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.
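    The simultaneous compress-and-denoise effect comes from thresholding small wavelet coefficients. The modified EZW's one-pass symbol coding is not reproduced here; the sketch below shows only the thresholding step, with a one-level orthonormal Haar DWT assumed as a stand-in for the paper's wavelet:

    ```python
    import numpy as np

    def haar2d_level1(img):
        """One-level 2-D orthonormal Haar DWT (rows, then columns)."""
        def rows(a):
            lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
            hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
            return np.hstack([lo, hi])
        return rows(rows(img).T).T

    def compress_denoise(img, thresh):
        """Drop coefficients below `thresh`: small (mostly noise) detail
        coefficients vanish, compressing and denoising at once."""
        c = haar2d_level1(img.astype(float))
        kept = np.abs(c) >= thresh
        return np.where(kept, c, 0.0), kept.mean()   # coefficients, kept fraction
    ```

    For a constant 8x8 image only the 4x4 low-pass block survives, i.e. a quarter of the coefficients.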

  9. A new modified fast fractal image compression algorithm

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein


    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...
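    In fractal coding, each range block is approximated as s*D + o for some domain block D; replacing the search over the contrast factor s with a small predefined set is the speed-up the abstract describes. The candidate set below is an assumption for illustration, not the paper's actual values:

    ```python
    import numpy as np

    def affine_match(domain, rng_blk):
        """Least-squares contrast s and brightness o mapping a (downsampled)
        domain block onto a range block, with s snapped to predefined values."""
        d = domain.ravel().astype(float)
        r = rng_blk.ravel().astype(float)
        dc, rc = d - d.mean(), r - r.mean()
        s_ls = (dc @ rc) / ((dc @ dc) + 1e-12)       # unconstrained LS contrast
        s = min([-1.0, -0.75, -0.5, 0.5, 0.75, 1.0], key=lambda v: abs(v - s_ls))
        o = r.mean() - s * d.mean()                  # brightness offset
        err = float(np.mean((s * d + o - r) ** 2))   # collage error for this pairing
        return s, o, err
    ```

    Quantizing s also removes the need to transmit it at full precision.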

  10. Feasibility Study of Compressive Sensing Underwater Imaging Lidar


    patterns generated using this scheme can significantly reduce the cost and complexity of the antenna design in such imaging systems. (The remainder of this record is report-form residue; recoverable metadata: final report dated 03/28/2014, Grant N00014-12-1-0921.)

  11. Fast algorithm for exploring and compressing of large hyperspectral images

    Kucheryavskiy, Sergey


    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....

  12. Clinical evaluation of irreversible image compression: analysis of chest imaging with computed radiography.

    Ishigaki, T; Sakuma, S; Ikeda, M; Itoh, Y; Suzuki, M; Iwai, S


    To implement a picture archiving and communication system, clinical evaluation of irreversible image compression with a newly developed modified two-dimensional discrete cosine transform (DCT) and bit-allocation technique was performed for chest images with computed radiography (CR). CR images were observed on a cathode-ray-tube monitor in a 1,024 X 1,536 matrix. One original and five reconstructed versions of the same images with compression ratios of 3:1, 6:1, 13:1, 19:1, and 31:1 were ranked according to quality. Test images with higher spatial frequency were ranked better than those with lower spatial frequency and the acceptable upper limit of the compression ratio was 19:1. In studies of receiver operating characteristics for scoring the presence or absence of nodules and linear shadows, the images with a compression ratio of 25:1 showed a statistical difference as compared with the other images with a compression ratio of 20:1 or less. Both studies show that plain CR chest images with a compression ratio of 10:1 are acceptable and, with use of an improved DCT technique, the upper limit of the compression ratio is 20:1.

  13. Data Mining Un-Compressed Images from cloud with Clustering Compression technique using Lempel-Ziv-Welch

    C. Parthasarathy


    Full Text Available Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies and organizations want to explore the possibilities and benefits of incorporating such cloud computing services into their business, as well as the possibility of offering their own cloud services. We mine uncompressed images from the cloud, group them using k-means clustering, and compress them with the Lempel-Ziv-Welch (LZW) coding technique, so that the uncompressed images undergo error-free (lossless) compression with their spatial redundancies removed.
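    LZW itself is the lossless stage of the pipeline: it replaces recurring byte strings with dictionary indices, growing the dictionary on the fly. A minimal encoder sketch (decoder and bit-packing omitted):

    ```python
    def lzw_encode(data: bytes):
        """Minimal LZW encoder: emit dictionary indices for the longest
        already-seen string, then extend the dictionary by one entry."""
        table = {bytes([i]): i for i in range(256)}  # single-byte seeds
        w, out = b"", []
        for ch in data:
            wc = w + bytes([ch])
            if wc in table:
                w = wc                               # keep extending the match
            else:
                out.append(table[w])                 # emit code for longest match
                table[wc] = len(table)               # learn the new string
                w = bytes([ch])
        if w:
            out.append(table[w])
        return out
    ```

    On repetitive inputs the output code stream is shorter than the input, which is exactly the redundancy removal the record refers to.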

  14. Space, time, error, and power optimization of image compression transforms

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.


    The implementation of an image compression transform on one or more small, embedded processors typically involves stringent constraints on power consumption and form factor. Traditional methods of optimizing compression algorithm performance typically emphasize joint minimization of space and time complexity, often without significant consideration of arithmetic accuracy or power consumption. However, small autonomous imaging platforms typically require joint optimization of space, time, error (or accuracy), and power parameters, which the authors call STEP optimization. In response to implementational constraints on space and power consumption, the authors have developed systems and techniques for STEP optimization that are based on recent research in VLSI circuit design, as well as extensive previous work in system optimization. Building on the authors' previous research in embedded processors as well as adaptive or reconfigurable computing, it is possible to produce system-independent STEP optimization that can be customized for a given set of system-specific constraints. This approach is particularly useful when algorithms for image and signal processing (ISP), computer vision (CV), or automated target recognition (ATR), expressed in a machine-independent notation, are mapped to one or more heterogeneous processors (e.g., digital signal processors or DSPs, SIMD mesh processors, or reconfigurable logic). Following a theoretical summary, this paper illustrates various STEP optimization techniques via case studies, for example, real-time compression of underwater imagery on board an autonomous vehicle. Optimization algorithms are taken from the literature, and error profiling/analysis methodologies developed in the authors' previous research are employed. This yields a more rigorous basis for the simulation and evaluation of compression algorithms on a wide variety of hardware models. In this study, image algebra is employed as the notation of choice.

  15. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    Aldossari, M; Alfalou, A; Brosseau, C


    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise into the reconstruction of the images (encryption). Our results show that controlling the spectral plane not only increases the number of spectra that can be merged, but also allows a compromise between the compression rate and the quality of the reconstructed images to be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level related to the different areas of the spectral plane; then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is performed in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  16. Lossless compression of multispectral images using spectral information

    Ma, Long; Shi, Zelin; Tang, Xusheng


    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so the transmission and storage of these volumes of data require huge time and memory resources. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band using a number of bands along the image spectrum: each pixel is predicted using information provided by pixels in the previous bands at the same spatial position. As in JPEG-LS, the proposed coder represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
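    The residual-coding stage can be sketched in two steps: map signed residuals to non-negative integers (the JPEG-LS-style zigzag mapping), then Golomb-Rice-code them. The per-context estimation of the parameter k is omitted here:

    ```python
    def map_residual(e: int) -> int:
        """Map a signed prediction residual to a non-negative integer."""
        return 2 * e if e >= 0 else -2 * e - 1

    def rice_encode(n: int, k: int) -> str:
        """Golomb-Rice code of mapped residual n with parameter k:
        unary quotient, a '0' terminator, then k binary remainder bits."""
        q, r = n >> k, n & ((1 << k) - 1)
        code = "1" * q + "0"
        if k:
            code += format(r, f"0{k}b")
        return code
    ```

    Small residuals produce short codewords, which is why shrinking the residuals through good prediction directly shrinks the bitstream.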

  17. Spectrally Adaptable Compressive Sensing Imaging System


    The resulting spectral data cubes are shown as they would be viewed by a Stingray F-033C CCD Color Camera; reconstructed images are compared against the original desired bands, and the mean PSNR is estimated over repeated reconstructions. (Only figure-caption fragments of this record survive.)

  18. Wavelet-based image compression using fixed residual value

    Muzaffar, Tanzeem; Choi, Tae-Sun


    Wavelet-based compression is becoming popular due to its promising compaction properties at low bitrates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. In contrast to the four symbols of the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for given data. The subordinate pass of EZW is eliminated and replaced with the transmission of a fixed residual value for easy implementation. This modification simplifies the coding technique, speeds up the process, and retains the property of embeddedness.

  19. 2D image compression using concurrent wavelet transform

    Talukder, Kamrul Hasan; Harada, Koichi


    In recent years the wavelet transform (WT) has been widely used for image compression. As the WT is a sequential process, transforming the data takes considerable time. Here a new approach is presented in which the transformation is executed concurrently, so the procedure runs faster. Multiple threads are used for the row and column transformations, and the communication among threads is managed effectively. Thus, the transformation time is reduced significantly. The proposed system provides a better compression ratio and PSNR value with lower time complexity.
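    The row/column parallelism is easy to sketch: split the rows across threads for the row pass, then do the same on the transpose for the column pass. An unnormalized Haar filter stands in for the paper's wavelet, and the chunking scheme below is an assumption:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def haar_rows(block):
        """Unnormalized one-level Haar filter applied along rows."""
        lo = (block[:, 0::2] + block[:, 1::2]) / 2.0
        hi = (block[:, 0::2] - block[:, 1::2]) / 2.0
        return np.hstack([lo, hi])

    def concurrent_dwt(img, workers=4):
        """Row pass and column pass, each split across a thread pool."""
        with ThreadPoolExecutor(workers) as ex:
            rowpass = np.vstack(list(ex.map(haar_rows, np.array_split(img, workers))))
        with ThreadPoolExecutor(workers) as ex:
            out = np.vstack(list(ex.map(haar_rows, np.array_split(rowpass.T, workers)))).T
        return out
    ```

    Because each row chunk is independent, the result is identical to the sequential transform; only the wall-clock time changes.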

  20. JPIC-Rad-Hard JPEG2000 Image Compression ASIC

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov


    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources in optical, panchromatic, and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces, and the JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  1. Mechanical compression for contrasting OCT images of biotissues

    Kirillin, Mikhail Y.; Agrba, Pavel D.; Kamensky, Vladislav A.


    The contrasting of biotissue layers in OCT images after application of mechanical compression is discussed. The study is performed on ex vivo samples of human rectum and in vivo on the skin of human volunteers. We show that mechanical compression contrasts biotissue layer boundaries due to the different mechanical properties of the layers. Altering the pressure from 0 up to 0.45 N/mm² increases the contrast from 1 to 10 dB in OCT imaging of human rectum ex vivo. The results of the ex vivo studies are in good agreement with Monte Carlo simulations. Application of a pressure of 0.45 N/mm² increases the contrast of the epidermis-dermis junction in OCT images of human skin in vivo by about 10 dB.

  2. Compressive Fluorescence Microscopy for Biological and Hyperspectral Imaging

    Studer, Vincent; Chahid, Makhlad; Moussavi, Hamed; Candes, Emmanuel; Dahan, Maxime


    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices---especially in optics---have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redund...

  3. Implementation of aeronautic image compression technology on DSP

    Wang, Yujing; Gao, Xueqiang; Wang, Mei


    According to the design characteristics and demands of an aeronautic image compression system, the lifting-scheme wavelet and the SPIHT (Set Partitioning in Hierarchical Trees) algorithm were selected as the key parts of the software implementation, which is introduced in detail. In order to improve execution efficiency, border processing was reasonably simplified and the SPIHT algorithm was partly modified. The results showed that the selected scheme has a 0.4 dB improvement in PSNR (peak signal-to-noise ratio) compared with Shapiro's classical scheme. To improve the operating speed, the hardware system was then designed based on a DSP and many optimization measures were applied successfully. Practical tests showed that the system meets the real-time demand with good reconstructed-image quality, and it has been used in a practical aeronautic image compression system.

  4. A Progressive Image Compression Method Based on EZW Algorithm

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression, which is achieved via adaptive arithmetic coding, and (5) DWT coefficients' degeneration from high-scale subbands to low-scale subbands. In this paper, we have improved the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  5. An improved fast fractal image compression using spatial texture correlation

    Wang Xing-Yuan; Wang Yuan-Xing; Yun Jiao-Jiao


    This paper utilizes spatial texture correlation and an intelligent classification algorithm (ICA) search strategy to speed up the encoding process and improve the bit rate of fractal image compression. Texture is one of the most important properties for the representation of an image. Entropy and the maximum entry of co-occurrence matrices are used to represent texture features in an image. For a range block, the concerned domain blocks of neighbouring range blocks with similar texture features can be searched. In addition, domain blocks with similar texture features are searched in the ICA search process. Experiments show that, in comparison with some typical methods, the proposed algorithm significantly speeds up the encoding process and achieves a higher compression ratio, with a slight diminution in the quality of the reconstructed image; in comparison with a spatial correlation scheme, the proposed scheme spends much less encoding time while the compression ratio and the quality of the reconstructed image are almost the same.
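    The co-occurrence-matrix entropy used to group blocks by texture can be sketched as follows; the quantization to 8 gray levels and the horizontal-neighbour offset are assumptions for illustration:

    ```python
    import numpy as np

    def glcm_entropy(block, levels=8):
        """Entropy of the horizontal gray-level co-occurrence matrix,
        a texture feature for matching range and domain blocks."""
        q = np.clip((block * levels).astype(int), 0, levels - 1)  # quantize [0,1)
        glcm = np.zeros((levels, levels))
        for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):     # horizontal pairs
            glcm[a, b] += 1
        p = glcm / glcm.sum()
        nz = p[p > 0]
        return float(-(nz * np.log2(nz)).sum())
    ```

    A flat block has entropy 0, while a two-valued checkerboard yields entropy 1 bit, so the feature separates smooth from textured blocks cheaply.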

  6. Spatial compression algorithm for the analysis of very large multivariate images

    Keenan, Michael R.


    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  7. De l'image vers la compression

    Wagner, Charles


    The range of applications involving video images keeps expanding, driven by progress in signal processing and machine architecture as well as by technological advances in component integration. In high-definition television this evolution is particularly noticeable, and the application, the algorithms employed, the transmission media used, and standardization aspects are closely linked....

  8. Split Bregman's optimization method for image construction in compressive sensing

    Skinner, D.; Foo, S.; Meyer-Bäse, A.


    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg, and Tao, and by D. Donoho in 2006. Using the a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.

  9. A novel image fusion approach based on compressive sensing

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia


    Image fusion can integrate complementary and relevant information of source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. Firstly, directly fusing sensing measurements may bring greater uncertainty with high reconstruction error. Secondly, using a single fusion rule may result in blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve those problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. The dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate low-pass subbands, while an edge-retention-based fusion rule is proposed to fuse high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed contents and preserves the saliency structure. These also indicate that our proposed method achieves better visual quality than the current state-of-the-art methods.


    S. Manimurugan


    Full Text Available The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general-purpose and XML-specific compressors.
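    The Burrows-Wheeler transform (BWT) underlying such compressors permutes the input so that similar contexts cluster, making the result highly compressible. The paper's modification is not public, so the sketch below is the textbook O(n² log n) form with a sentinel byte:

    ```python
    def bwt(s: bytes) -> bytes:
        """Naive Burrows-Wheeler transform: sort all rotations of the
        sentinel-terminated input and take the last column."""
        s = s + b"\x00"                                   # unique end marker
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return bytes(r[-1] for r in rotations)            # last column
    ```

    The transform is reversible, and the clustered runs it produces (e.g. the `nn` and `aa` below) are what a move-to-front plus entropy-coding back end exploits.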

  11. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    Song, Ju Seop; Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology and Institute of Oral Bio Science, School of Dentistry, Chonbuk National University, Chonju (Korea, Republic of)


    The aims of the present study are to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression and evaluate the transmission speed of original or each compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal surfaces and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from Digora system were exported in TIFF and the images from conventional intraoral film were scanned and digitalized in TIFF by Nikon SF-200 scanner(Nikon, Japan). And six compression factors were chosen and applied on the basis of the results from a pilot study. The total number of images to be assessed were 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth with 5-rank scale. Finally diagnosed as either sound or carious lesion by one expert oral pathologist. And sensitivity and specificity and kappa value for diagnostic agreement was calculated. Also the area (Az) values under the ROC curve were calculated and paired t-test and oneway ANOVA test was performed. Thereafter, transmission time of the image files of the each compression level were compared with that of the original image files. No significant difference was found between original and the corresponding images up to 7% (1:14) compression ratio for both the occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files are transmitted fast more than 10 times, maintained diagnostic information in image, compared with original image files. 1:14 compressed image file may be used instead of the original image and reduce storage needs and transmission time.

  12. Compression and Processing of Space Image Sequences of Northern Lights and Sprites

    Forchhammer, Søren Otto; Martins, Bo; Jensen, Ole Riis


Compression of image sequences of auroral activity, such as northern lights and thunderstorms with sprites, is investigated....

  13. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua


The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt; the pixels of two adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map, and the random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
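The key-controlled construction can be sketched roughly as follows: a logistic-map sequence seeded by a small key generates the first row of a circulant matrix, of which m rows are kept as the measurement matrix. This is an illustrative sketch, not the authors' exact construction; the parameters x0 and mu stand in for the key.

```python
import numpy as np

def logistic_sequence(x0, mu, n, burn=100):
    """Iterate the logistic map x <- mu*x*(1-x); (x0, mu) acts as the key."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = mu * x * (1.0 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        seq[i] = x
    return seq

def circulant_measurement_matrix(m, n, x0=0.23, mu=3.99):
    """m x n partial circulant matrix whose generating row is keyed by the logistic map."""
    row = np.where(logistic_sequence(x0, mu, n) > 0.5, 1.0, -1.0)  # +/-1 entries
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)         # each row is a cyclic shift of the key row
    return C[:m] / np.sqrt(m)          # keep m rows, normalize

Phi = circulant_measurement_matrix(8, 32)
```

Only (x0, mu) need to be shared: the receiver regenerates an identical Phi from the same key.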

  14. Image compression using address-vector quantization

    Nasrabadi, Nasser M.; Feng, Yushu


A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver; otherwise, the address of each block is transmitted individually. The SNR of images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  15. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Rung-Ching Chen


This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, exploiting the fact that the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, since the gray levels of some pixels in an image are more common than those of others. Finally, arithmetic encoding is applied to reduce the coding redundancy of the image. To achieve high performance in the arithmetic encoding, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method uses storage space more efficiently than lossless JPEG2000 does.
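The median edge detector (MED) predictor mentioned above is the one used in JPEG-LS/LOCO-I; a minimal sketch of the predictor and the resulting residual image (the zero border handling here is a simplifying assumption):

```python
def med_predict(a, b, c):
    """Median edge detector: a = left, b = above, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)   # likely horizontal/vertical edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

def residuals(img):
    """Prediction residuals of a 2D list of gray levels (missing neighbors taken as 0)."""
    h, w = len(img), len(img[0])
    res = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a = img[i][j - 1] if j > 0 else 0
            b = img[i - 1][j] if i > 0 else 0
            c = img[i - 1][j - 1] if i > 0 and j > 0 else 0
            res[i][j] = img[i][j] - med_predict(a, b, c)
    return res
```

On smooth image regions the residuals concentrate near zero, which is what lowers the entropy rate before arithmetic coding.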

  16. Digital image compression for a 2f multiplexing optical setup

    Vargas, J.; Amaya, D.; Rueda, E.


    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  17. Hybrid tenso-vectorial compressive sensing for hyperspectral imaging

    Li, Qun; Bernal, Edgar A.


    Hyperspectral imaging has a wide range of applications relying on remote material identification, including astronomy, mineralogy, and agriculture; however, due to the large volume of data involved, the complexity and cost of hyperspectral imagers can be prohibitive. The exploitation of redundancies along the spatial and spectral dimensions of a hyperspectral image of a scene has created new paradigms that overcome the limitations of traditional imaging systems. While compressive sensing (CS) approaches have been proposed and simulated with success on already acquired hyperspectral imagery, most of the existing work relies on the capability to simultaneously measure the spatial and spectral dimensions of the hyperspectral cube. Most real-life devices, however, are limited to sampling one or two dimensions at a time, which renders a significant portion of the existing work unfeasible. We propose a new variant of the recently proposed serial hybrid vectorial and tensorial compressive sensing (HCS-S) algorithm that, like its predecessor, is compatible with real-life devices both in terms of the acquisition and reconstruction requirements. The newly introduced approach is parallelizable, and we abbreviate it as HCS-P. Together, HCS-S and HCS-P comprise a generalized framework for hybrid tenso-vectorial compressive sensing, or HCS for short. We perform a detailed analysis that demonstrates the uniqueness of the signal reconstructed by both the original HCS-S and the proposed HCS-P algorithms. Last, we analyze the behavior of the HCS reconstruction algorithms in the presence of measurement noise, both theoretically and experimentally.

  18. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    J. Schindler


This paper deals with compression of image data in applications in astronomy. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images, and their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRB (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC) based on adaptive median regression.

  19. Compressive SAR Imaging with Joint Sparsity and Local Similarity Exploitation

    Fangfang Shen


Compressive sensing-based synthetic aperture radar (SAR) imaging has shown superior capability in high-resolution image formation. However, most of that work focuses on scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack the adaptivity to characterize varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further improve its capability, and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme that iterates between SAR imaging and updating of the weighted autoregressive model. Experimental results demonstrate the validity and generality of the proposed approach.

  20. An Improved Fast SPIHT Image Compression Algorithm for Aerial Applications

    Ning Zhang


In this paper, an improved fast SPIHT algorithm is presented. SPIHT and NLS (No-List SPIHT) are efficient compression algorithms, but their application in aviation is limited by poor error resistance and slow compression speed. In this paper, both the error resilience and the compression speed are improved. The remote sensing images are decomposed by the Le Gall 5/3 wavelet, and the wavelet coefficients are indexed, scanned and allocated by means of family blocks. The bit-plane significance is predicted by a bitwise OR, so N bit-planes can be encoded at the same time. Compared with the SPIHT algorithm, the improved algorithm is easily implemented in hardware, and the compression speed is improved. The PSNR of reconstructed images encoded by fast SPIHT is 0.3 to 0.9 dB higher than that of SPIHT and CCSDS, and the encoding is 4-6 times faster than the SPIHT encoding process. The algorithm meets the high-speed and reliability requirements of aerial applications.
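The bitwise-OR bit-plane prediction can be illustrated simply: OR-ing the magnitudes of all coefficients in a block yields, in a single pass, a mask of every bit-plane that contains a significant bit. This is a sketch of the idea, not the paper's exact hardware formulation:

```python
def significant_bitplanes(coeffs):
    """OR the magnitudes of all coefficients; bit k of the result is set
    iff bit-plane k holds at least one significant coefficient."""
    mask = 0
    for c in coeffs:
        mask |= abs(int(c))
    return mask

def top_bitplane(coeffs):
    """Highest bit-plane that must be encoded (-1 if all coefficients are zero)."""
    return significant_bitplanes(coeffs).bit_length() - 1
```

Because the mask is available up front, all N significant bit-planes can be scheduled for encoding together instead of being discovered plane by plane.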

  1. Image Compression based on DCT and BPSO for MRI and Standard Images

    D.J. Ashpin Pabi


Nowadays, digital image compression has become a crucial part of modern telecommunication systems. Image compression is the process of reducing the total number of bits required to represent an image by reducing redundancies while preserving the image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging and medical imaging, use image compression in order to store and transmit images in an efficient manner. Selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm for finding an optimal solution from a set of possible values. Its advantages over other optimization techniques are a higher convergence rate, better searching ability and better overall performance. The proposed technique divides the input image into 8×8 blocks. The Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients, threshold values are obtained from BPSO, and the coefficients are modified based on these thresholds. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
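The per-block DCT-and-threshold step can be sketched as follows, with a fixed threshold standing in for the BPSO-optimized one:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def compress_block(block, thresh):
    """DCT an 8x8 block, zero coefficients below the threshold, inverse DCT."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T
    coef[np.abs(coef) < thresh] = 0.0   # hard thresholding (stand-in for BPSO)
    return C.T @ coef @ C               # reconstruction from surviving coefficients
```

Raising the threshold zeroes more coefficients (higher compression) at the cost of reconstruction error; BPSO's role in the paper is to search for the threshold balancing the two.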

  2. Performance Analysis of Multi Spectral Band Image Compression using Discrete Wavelet Transform

    S. S. Ramakrishnan


Problem statement: Efficient and effective utilization of transmission bandwidth and storage capacity has been a core area of research for remote sensing images; hence image compression is required for multi-band satellite imagery. In addition, image quality is also an important factor after compression and reconstruction. Approach: In this investigation, the discrete wavelet transform is used to compress the Landsat5 agriculture and forestry image using various wavelets, and the spectral signature graph is drawn. Results: The compressed image performance is analyzed using the Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR). The image compressed with the dmey wavelet is selected based on its Digital Number Minimum (DNmin) and Digital Number Maximum (DNmax). It is then classified using maximum likelihood classification, and the accuracy is determined using the error matrix, kappa statistics and overall accuracy. Conclusion: The proposed compression technique is well suited to compress the agriculture and forestry multi-band image.
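The two performance metrics used above are standard; a minimal implementation:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit imagery (peak = 255)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bits, compressed_bits):
    """CR = uncompressed size over compressed size."""
    return original_bits / compressed_bits
```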

  3. Empirical data decomposition and its applications in image compression

    Deng Jiaxian; Wu Xiaoqin


A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and is able to implement multi-resolution analysis, as the wavelet transform does. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. Through a discussion of the applications of EDD in image compression, the paper presents a 2-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.

  4. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Bradley, J.N.; Brislawn, C.M.


    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  5. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.


    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  6. Remotely sensed image compression based on wavelet transform

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.


In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the PSNR (peak signal-to-noise ratio) and classification capability.

  7. Bi-level image compression with tree coding

    Martins, Bo; Forchhammer, Søren


    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... version that without sacrificing speed brings it close to the multi-pass coders in compression performance...

  8. Processing and image compression based on the platform Arduino

    Lazar, Jan; Kostolanyova, Katerina; Bradac, Vladimir


This paper focuses on the use of a minicomputer built on the Arduino platform for the purposes of image compression and decompression. Arduino is used as a control element that integrates the proposed algorithms. This solution is unique in that no commonly available low-computational-performance solution for demanding graphical operations offers the possibility of subsequent extension; Arduino, as an open-source platform, enables further extensions and adjustments.

  9. Evaluation of color-embedded wavelet image compression techniques

    Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III


    Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.


    Narwaria, Manish; Perreira Da Silva, Matthieu; Le Callet, Patrick; Pépion, Romuald


Tone mapping or range reduction is often used in High Dynamic Range (HDR) visual signal compression to take advantage of the existing image/video coding architectures. Thus, it is important to study the impact of tone mapping on the visual quality of decompressed HDR visual signals. To our knowledge, most of the existing studies focus only on the quality loss in the resultant low dynamic range (LDR) signal (obtained via tone mapping) and typically employ LDR displays f...

  11. A geometric approach to multi-view compressive imaging

    Park, Jae Young; Wakin, Michael B.


    In this paper, we consider multi-view imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing encoding schemes wherein each sensor directly and independently compresses its image using randomized measurements. After these measurements and also perhaps the camera positions are transmitted to a central node, the key to an accurate reconstruction is to fully exploit the joint correlation among the signal ensemble. To capture such correlations, we propose a geometric modeling framework in which the image ensemble is treated as a sampling of points from a low-dimensional manifold in the ambient signal space. Building on results that guarantee stable embeddings of manifolds under random measurements, we propose a "manifold lifting" algorithm for recovering the ensemble that can operate even without knowledge of the camera positions. We divide our discussion into two scenarios, the near-field and far-field cases, and describe how the manifold lifting algorithm could be applied to these scenarios. At the end of this paper, we present an in-depth case study of a far-field imaging scenario, where the aim is to reconstruct an ensemble of satellite images taken from different positions with limited but overlapping fields of view. In this case study, we demonstrate the impressive power of random measurements to capture single- and multi-image structure without explicitly searching for it, as the randomized measurement encoding in conjunction with the proposed manifold lifting algorithm can even outperform image-by-image transform coding.

  12. Filtered gradient reconstruction algorithm for compressive spectral imaging

    Mejia, Yuri; Arguello, Henry


Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
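The idea of inserting a filtering step into each gradient iteration can be sketched as below; the smoothing kernel and step size are illustrative assumptions, not the paper's operators:

```python
import numpy as np

def filtered_gradient(Phi, y, iters=200, step=None, kernel=(0.25, 0.5, 0.25)):
    """Gradient descent on ||Phi x - y||^2 with a smoothing filter applied
    to each iterate (a stand-in for the paper's filtering step)."""
    m, n = Phi.shape
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L for the quadratic term
    x = np.zeros(n)
    k = np.asarray(kernel)
    for _ in range(iters):
        x = x - step * Phi.T @ (Phi @ x - y)       # gradient step on the data term
        x = np.convolve(x, k, mode="same")         # filtering step on the iterate
    return x
```

The filter biases the iterates toward smooth solutions, which is the mechanism the paper exploits for improved image quality.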

  13. High-resolution three-dimensional imaging with compressive sensing

    Wang, Jingyi; Ke, Jun


LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires an extremely fast data acquisition speed, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition and relax the requirements on the detection device. To use the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source, and a photodetector receives the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM by half a pixel each time, so that the position illuminated by the modulated light changes accordingly. We repeat this, moving the SLM in four different directions, and obtain four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene with a linear minimum-mean-square-error algorithm. By combining compressive sensing and multiframe image restoration technology, we reduce the burden of data analysis and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.

  14. Tree Coding of Bilevel Images

    Martins, Bo; Forchhammer, Søren


    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional...... probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels, the statistics are stored in a tree. By organizing the code length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can...... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult...


    Jiang Lai; Huang Cailing; Liao Huilian; Ji Zhen


In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is then encoded by a 2D DCT coding scheme. This reduces the dimension of the vectors used as input to a generalized VQ scheme, so the introduction of the DCT process reduces the encoding time of the generalized VQ. The experimental results demonstrate the efficiency of the proposed method.
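The LBG codebook design behind such VQ schemes is essentially Lloyd's algorithm on training vectors; a minimal sketch (seeded with random codewords rather than the classical splitting procedure):

```python
import numpy as np

def lbg_codebook(vectors, size, iters=20, seed=0):
    """Train a VQ codebook with plain Lloyd iterations (LBG-style)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, float)
    code = X[rng.choice(len(X), size, replace=False)].copy()
    for _ in range(iters):
        # nearest-codeword assignment for every training vector
        d = ((X[:, None, :] - code[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # centroid update (keep the old codeword for empty cells)
        for k in range(size):
            if np.any(idx == k):
                code[k] = X[idx == k].mean(0)
    return code, idx
```

In the letter's scheme, the training vectors would be (truncated) DCT coefficient blocks rather than raw pixel blocks, which shrinks the vector dimension and hence the search cost.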

  16. Block-based adaptive lifting schemes for multiband image compression

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe


In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture the spatial and spectral redundancies simultaneously. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method w.r.t. the non-adaptive one.

  17. Compressive imaging system design using task-specific information.

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A


    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.


    P. Arockia Jansi Rani; V. Sadasivam


    Image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. In this paper, a still image compression scheme driven by Self-Organizing Map with polynomial regression modeling and entropy coding, employed within the wavelet framework is presented. The image compressibility and interpretability are improved by incorporating noise reduction into the compression scheme. The implementation begins with the classical wavelet decomposition, q...

  19. Image Compression using Haar and Modified Haar Wavelet Transform

    Mohannad Abid Shehab Ahmed


Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing, such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on the Haar and modified Haar transforms are presented. The standard Haar wavelet transformation with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd base numbers, i.e. N=3 and N=5, to generate many solution sets; these sets are tested using the energy function or a numerical method to find the optimum one. The Haar transform is simple, efficient in memory usage due to its high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and modified Haar techniques in attaining an efficient compression ratio (CR) and a higher peak signal-to-noise ratio (PSNR); the resulting images are much smoother than with standard JPEG, especially at high CR. A comparison between the standard JPEG, Haar and modified Haar techniques is given finally, which confirms the superior capability of modified Haar.
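The four composed 2-D Haar filters described above amount to row/column averages and differences; a sketch of one decomposition level and its exact inverse:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform via the four composed filters:
    returns LL (approximation) and LH, HL, HH (details)."""
    x = np.asarray(img, float)
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical low-pass
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x
```

On smooth image regions the three detail subbands are near zero, which is the "high spread of zero values" the abstract credits for the Haar transform's memory efficiency.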

  20. An efficient BTC image compression algorithm with visual patterns


Discusses block truncation coding (BTC), a simple and fast image compression technique suitable for real-time image transmission, with high channel-error resistance and good reconstructed image quality, whose main drawback is a high bit rate of 2 bits/pixel for a 256-gray-level image. To reduce the bit rate, a simple look-up-table method for coding the higher mean and the lower mean of a block is introduced, together with a set of 24 visual patterns used to encode the 4×4 bit plane of a high-detail block. The proposed algorithm needs only 19 bits to encode a 4×4 high-detail block and 12 bits to encode a 4×4 low-detail block.
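Classic BTC as described retains, per block, a bit plane plus the means of the "high" and "low" pixel groups; a minimal sketch without the look-up-table or visual-pattern refinements:

```python
import numpy as np

def btc_block(block):
    """Classic BTC: threshold at the block mean, keep the bit plane and
    the means of the pixels above/below the threshold."""
    b = np.asarray(block, float)
    plane = b >= b.mean()
    hi = b[plane].mean() if plane.any() else b.mean()
    lo = b[~plane].mean() if (~plane).any() else b.mean()
    return plane, hi, lo

def btc_decode(plane, hi, lo):
    """Reconstruct the two-level block from the bit plane and the two means."""
    return np.where(plane, hi, lo)
```

For a 4×4 block with 8-bit means this gives 16 + 8 + 8 = 32 bits, i.e. the 2 bits/pixel the abstract cites; the proposed tables and visual patterns cut this to 19 or 12 bits per block.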

  1. Compressed sensing sparse reconstruction for coherent field imaging

    Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen


    Return signal processing and reconstruction play a pivotal role in coherent field imaging, with a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectral domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the traditional samples, finally acquiring the target image precisely. Numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).

  2. Compressed Sensing Inspired Image Reconstruction from Overlapped Projections

    Lin Yang


    Full Text Available The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, conventional reconstruction approaches (e.g., filtered backprojection (FBP) algorithms) cannot be used directly because of two problems. First, overlapped projections represent the imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurements carry less information than traditional line integrals. To meet these challenges, we propose a compressive sensing (CS) based iterative algorithm for reconstruction from overlapped data. The algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). We then demonstrate the feasibility of this algorithm in numerical tests.

  3. Fast Second Degree Total Variation Method for Image Compressive Sensing.

    Liu, Pengfei; Xiao, Liang; Zhang, Jun


    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a convenient equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using this equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we provide a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.

  4. SAR Imaging of Moving Targets via Compressive Sensing

    Wang, Jun; Zhang, Hao; Wang, Xiqin


    An algorithm based on compressive sensing (CS) is proposed for synthetic aperture radar (SAR) imaging of moving targets. The received SAR echo is decomposed into a sum of basis sub-signals, which are generated by discretizing the target spatial domain and velocity domain and synthesizing the SAR received data for every discretized spatial position and velocity candidate. In this way, the SAR imaging problem is converted into a sub-signal selection problem. In the case that moving targets are sparsely distributed in the observed scene, their reflectivities, positions and velocities can be obtained by using the CS technique. It is shown that, compared with traditional algorithms, the target image obtained by the proposed algorithm has higher resolution and lower side-lobes, while the required number of measurements can be an order of magnitude less than that required by sampling at the Nyquist rate. Moreover, multiple targets with different speeds can be imaged simultaneously, so the proposed algorithm has higher eff...
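The "sub-signal selection" step can be pictured as a greedy sparse solver choosing the dictionary atoms (basis sub-signals) most correlated with the echo. A toy matching-pursuit loop, a generic CS-style selector rather than the authors' SAR-specific solver:

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedily pick the dictionary atom most correlated with the residual.
    `atoms` are assumed unit-norm; returns {atom index: coefficient}."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        # Correlate the residual with every atom.
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        coeffs[k] = coeffs.get(k, 0.0) + scores[k]
        # Subtract the selected component from the residual.
        residual = [r - scores[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

# Two orthonormal atoms; the signal is 3*atom0 + 1*atom1.
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
signal = [3.0, 1.0, 0.0, 0.0]
assert matching_pursuit(signal, atoms) == {0: 3.0, 1: 1.0}
```

In the paper's setting each atom would be the synthesized echo for one (position, velocity) candidate, so the recovered indices identify the targets' positions and velocities and the coefficients their reflectivities.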

  5. Edge-Based Image Compression with Homogeneous Diffusion

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
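The decoding step, recovering the values between edges as the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored edge values as Dirichlet boundary data. A toy 1-D Jacobi iteration (illustrative, not the paper's solver; it assumes the first and last samples are known):

```python
def laplace_inpaint_1d(values, known, n_iter=500):
    """Fill unknown samples by repeatedly averaging their neighbours
    (Jacobi iteration); known samples act as fixed boundary conditions."""
    x = list(values)
    for _ in range(n_iter):
        x = [x[i] if known[i] else (x[i - 1] + x[i + 1]) / 2
             for i in range(len(x))]
    return x

# Grey values are stored only at the two 'edges'; the interior is inpainted.
vals  = [10.0, 0.0, 0.0, 0.0, 50.0]
known = [True, False, False, False, True]
result = laplace_inpaint_1d(vals, known)
# the steady state between two fixed values is linear interpolation
```

In 2-D the same iteration averages the four neighbours, and the encoded edge pixels play the role of the fixed samples, which is why cartoon-like images with few edges decode so well.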

  6. Compressive sensing for direct millimeter-wave holographic imaging.

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang


    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from the problem of high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following the scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue the applicability for near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variance, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear array and two-dimensional array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate.

  7. Progressive image data compression with adaptive scale-space quantization

    Przelaskowski, Artur


    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization and coding in progressive fashion. Efficient coders with embedded code and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of the best filter bank selection; the most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was finally observed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, although sometimes the most efficient solution must be found iteratively. The final results are competitive with the most efficient wavelet coders.

  8. A novel image compression-encryption hybrid algorithm based on the analysis sparse representation

    Zhang, Ye; Xu, Biao; Zhou, Nanrun


    Recent advances in compressive sensing theory have been invoked for image compression-encryption based on the synthesis sparse model. In this paper we concentrate on an alternative sparse representation model, the analysis sparse model, to propose a novel image compression-encryption hybrid algorithm. The analysis sparse representation of the original image is obtained with an overcomplete fixed dictionary whose atom order is scrambled, so the sparse representation can be considered an encrypted version of the image. Moreover, the sparse representation is compressed to reduce its dimension and simultaneously re-encrypted by the compressive sensing. To enhance the security of the algorithm, a pixel-scrambling method is employed to re-encrypt the measurements of the compressive sensing. Various simulation results verify that the proposed image compression-encryption hybrid algorithm provides considerable compression performance with good security.
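The pixel-scrambling re-encryption can be pictured as a key-seeded permutation of the measurement vector. A toy sketch; the integer seed stands in for the scheme's actual key material, which is an assumption here:

```python
import random

def scramble(measurements, key):
    """Permute the measurements with a permutation drawn from a
    key-seeded PRNG (stand-in for the scheme's key material)."""
    idx = list(range(len(measurements)))
    random.Random(key).shuffle(idx)
    return [measurements[i] for i in idx], idx

def unscramble(scrambled, idx):
    """Invert the permutation using the same index list."""
    out = [0] * len(scrambled)
    for pos, i in enumerate(idx):
        out[i] = scrambled[pos]
    return out

m = [0.7, -1.2, 3.4, 0.0, 2.1]
s, idx = scramble(m, 42)
assert unscramble(s, idx) == m
```

In a real scheme only the key is shared and the receiver regenerates the permutation from it; a PRNG-based shuffle like this is only a structural illustration, not a cryptographically secure construction.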

  9. Interactive decoding for the CCSDS recommendation for image data compression

    García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier


    In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers in the encoded stream. Second we compare the compression performance of our technique to JPEG2000.

  10. Entropy coders for image compression based on binary forward classification

    Yoo, Hoon; Jeong, Jechang


    Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
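A Golomb-Rice coder of the kind used after the BFC step maps a nonnegative integer to a unary-coded quotient followed by a k-bit remainder. A minimal sketch (generic; in practice k would be chosen per class):

```python
def rice_encode(n, k):
    """Rice code for n >= 0: quotient n >> k in unary (q ones and a zero),
    then the low k bits of n. Assumes k >= 1."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + bin(r)[2:].zfill(k)

def rice_decode(bits, k):
    """Inverse: count leading ones, skip the zero, read k remainder bits."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for n in range(20):
    assert rice_decode(rice_encode(n, 2), 2) == n
```

Small values get short codewords (e.g. `rice_encode(5, 2)` gives `"1001"`), which is why the classification step matters: routing samples to a class with a well-matched k keeps the unary part short.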

  11. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz.

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang


    The existing techniques for simultaneous encryption and compression of images rely on lossy compression. Their reconstruction performance does not meet the accuracy required for medical images, because most of these techniques are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS, with measurement matrices encrypted by discrete 3D Lorenz. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristics of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and reliability of this scheme.

  12. Low Memory Low Complexity Image Compression Using HSSPIHT Encoder



    Full Text Available Due to its large memory requirement and high computational complexity, JPEG2000 cannot be used in many conditions, especially in memory-constrained equipment. The line-based wavelet transform was proposed and accepted because it requires less memory without affecting the result of the wavelet transform. In this paper, an improved lifting scheme is introduced to perform the wavelet transform, replacing the Mallat method used in the original line-based wavelet transform; a three-adder unit is adopted to realize the lifting scheme, which performs the wavelet transform with less computation and less memory than the Mallat algorithm. The corresponding HS_SPIHT coding is designed so that the proposed algorithm is more suitable for such equipment. We propose a highly scalable image compression scheme based on the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Our algorithm, called Highly Scalable SPIHT (HS_SPIHT), supports high compression efficiency and spatial and SNR scalability, and provides a bit stream that can be easily adapted to given bandwidth and resolution requirements by a simple transcoder (parser). HS_SPIHT adds the spatial scalability feature through the introduction of multiple resolution-dependent lists and a resolution-dependent sorting pass, without sacrificing the SNR embeddedness property found in the original SPIHT bit stream, and keeps the important features of the original SPIHT algorithm such as compression efficiency, full SNR scalability and low complexity.

  13. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging





    In this paper, a technique of quasi-lossless compression based on image restoration is presented. The compression technique described in the paper includes three steps, namely bit compression, correlation removal, and image restoration based on the theory of the modulation transfer function (MTF). The quasi-lossless compression achieves high speed, and the quality of the reconstructed image after restoration reaches the quasi-lossless level at a higher compression ratio. Experiments on TM and SPOT images show that the technique is reasonable and applicable.

  15. FPGA Implementation of 5/3 Integer DWT for Image Compression

    M Puttaraju


    Full Text Available The wavelet transform has emerged as a cutting-edge technology in the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. In this paper an approach is proposed for the compression of an image using the 5/3 (lossless) integer discrete wavelet transform (DWT). The proposed architecture is based on a new and fast lifting-scheme approach for the (5,3) filter in the DWT. An attempt is made here to establish a standard for a data compression algorithm applied to two-dimensional digital spatial image data from payload instruments.
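The (5,3) integer lifting transform is a single predict pass and a single update pass using only adds and shifts, which is what makes it attractive for FPGA implementation. A sketch of one 1-D level of the generic LeGall 5/3 lifting scheme (not the exact proposed architecture), using mirror extension at the borders:

```python
def lift53_forward(x):
    """LeGall 5/3 integer lifting (one level): predict odd samples from even
    neighbours, then update the even samples. Even-length input; mirror
    extension at the borders. Exactly invertible in integer arithmetic."""
    n = len(x) // 2
    d = []
    for i in range(n):  # predict step: high-pass (detail) coefficients
        right = x[2*i + 2] if 2*i + 2 < len(x) else x[2*i]
        d.append(x[2*i + 1] - ((x[2*i] + right) >> 1))
    s = []
    for i in range(n):  # update step: low-pass (approximation) coefficients
        left = d[i - 1] if i > 0 else d[0]
        s.append(x[2*i] + ((left + d[i] + 2) >> 2))
    return s, d

def lift53_inverse(s, d):
    n = len(s)
    x = [0] * (2 * n)
    for i in range(n):  # undo the update step
        left = d[i - 1] if i > 0 else d[0]
        x[2*i] = s[i] - ((left + d[i] + 2) >> 2)
    for i in range(n):  # undo the predict step
        right = x[2*i + 2] if 2*i + 2 < 2*n else x[2*i]
        x[2*i + 1] = d[i] + ((x[2*i] + right) >> 1)
    return x

sig = [87, 90, 85, 80, 60, 20, 15, 17]
s, d = lift53_forward(sig)
assert lift53_inverse(s, d) == sig  # lossless round trip
```

Because each lifting step is undone by the exact same integer expression with the sign flipped, the round trip is bit-exact, which is the property that makes 5/3 suitable for lossless coding.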

  16. Application of strong zerotrees to compression of correlated MRI image sets

    Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.


    It is known that gainful interframe compression of magnetic resonance (MR) image sets is a quite difficult problem. Only a few authors have reported a performance gain for such compressors compared to separate compression of every MR image in the set (intraframe compression). Known reasons for this situation are the significant noise in MR images and the presence of only low-frequency correlations between images of the set. Recently we suggested a new method of correlated image set compression based on the Karhunen-Loeve (KL) transform and a special EZW compression scheme with strong zerotrees (KLSEZW). The KLSEZW algorithm showed good results in compression of video sequences with low and middle motion, even without motion compensation. The paper presents a successful application of the basic method and its modification to the interframe MR image compression problem.

  17. Research of Image Compression Based on Quantum BP Network

    Hao-yu Zhou


    Full Text Available Quantum Neural Network (QNN), which integrates the characteristics of Artificial Neural Network (ANN) with quantum theory, is a new field of study. It takes advantage of ANN and quantum computing and has high theoretical value and potential applications. Based on a quantum neuron model with quantum input and output, together with artificial neural network theory, the QBP algorithm is proposed on the basis of the complex BP algorithm, and a 3-layer quantum BP network that implements image compression and reconstruction is built. The simulation results show that QBP can obtain reconstructed images of better quality than BP despite fewer learning iterations.

  18. Efficient image compression scheme based on differential coding

    Zhu, Li; Wang, Guoyou; Liu, Ying


    Embedded zerotree (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J.M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, followed by several improvements to the SPIHT algorithm based on experiments: 1) to reduce redundancy among coefficients in the wavelet domain, we propose a differential coding method; 2) based on the distribution of coefficients in each subband, we adjust the sorting pass and optimize the differential coding to reduce redundant coding per subband; 3) coding results at a given threshold show that with differential coding the compression rate is higher and the quality of the reconstructed image is raised greatly; at bpp (bits per pixel) = 0.5, the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.
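The differential step in 1) is ordinary delta coding of neighbouring coefficients, which shrinks the magnitudes to be entropy-coded when neighbours are correlated. A minimal sketch (illustrative; the paper applies this inside the SPIHT passes):

```python
def delta_encode(coeffs):
    """Replace each coefficient with its difference from the previous one."""
    return [coeffs[0]] + [coeffs[i] - coeffs[i - 1]
                          for i in range(1, len(coeffs))]

def delta_decode(deltas):
    """A running sum restores the original sequence exactly."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [31, 30, 28, 29, 27, 27]   # correlated subband coefficients
deltas = delta_encode(row)       # [31, -1, -2, 1, -2, 0]: smaller magnitudes
assert delta_decode(deltas) == row
```

Smaller magnitudes mean fewer significant bits per coefficient at a given threshold, which is where the reported 0.2-0.4 dB gain comes from.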

  19. Single image non-uniformity correction using compressive sensing

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu


    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system is proposed. The algorithm, based on compressive sensing (CS) of a single image, overcomes the disadvantages of "ghost artifacts" and bulky computation in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image from only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  20. Degradative encryption: An efficient way to protect SPIHT compressed images

    Xiang, Tao; Qu, Jinyu; Yu, Chenyun; Fu, Xinwen


    Degradative encryption, a new selective image encryption paradigm, is proposed to encrypt only a small part of image data to make the detail blurred but keep the skeleton discernible. The efficiency is further optimized by combining compression and encryption. A format-compliant degradative encryption algorithm based on set partitioning in hierarchical trees (SPIHT) is then proposed, and the scheme is designed to work in progressive mode for gaining a tradeoff between efficiency and security. Extensive experiments are conducted to evaluate the strength and efficiency of the scheme, and it is found that less than 10% data need to be encrypted for a secure degradation. In security analysis, the scheme is verified to be immune to cryptographic attacks as well as those adversaries utilizing image processing techniques. The scheme can find its wide applications in online try-and-buy service on mobile devices, searchable multimedia encryption in cloud computing, etc.

  1. Edge-Oriented Compression Coding on Image Sequence


    An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments; a rectangular belt is then constructed for each segment, and finally the gray-value distribution in the region is fitted by normal-form polynomials. Model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region, run-length scanning and linear approximation are used. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bits per pixel.

  2. Coherent temporal imaging with analog time-bandwidth compression

    Asghari, Mohammad H


    We introduce the concept of coherent temporal imaging and its combination with the anamorphic stretch transform. The new system can measure both the temporal profile of fast waveforms and their spectrum, in real time and at high throughput. We show that the combination of coherent detection and warped time-frequency mapping also performs time-bandwidth compression. By reducing the temporal width without sacrificing spectral resolution, it addresses the Big Data problem in real-time instruments. The proposed method is the first application of the recently demonstrated anamorphic stretch transform to temporal imaging. Using this method, narrow spectral features beyond the spectrometer resolution can be captured, while the output bandwidth, and hence the record length, is minimized. Coherent detection allows the temporal imaging and dispersive Fourier transform systems to operate in the traditional far-field as well as in near-field regimes.

  3. An RGB Image Encryption Supported by Wavelet-based Lossless Compression

    Ch. Samson


    Full Text Available In this paper we propose a method for RGB image encryption supported by lifting-scheme based lossless compression. First we compress the input color image using a 2-D integer wavelet transform, then apply lossless predictive coding to achieve additional compression. The compressed image is encrypted using the Secure Advanced Hill Cipher (SAHC), involving a pair of involutory matrices, a function called Mix(), and an XOR operation. Decryption followed by reconstruction shows that there is no difference between the output image and the input image. The proposed method can be used for efficient and secure transmission of image data.

  4. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Jianping Hua


    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.

  5. A hyperspectral image compression algorithm based on wavelet transformation and fractal composition (AWFC)

    HU; Xingtang; ZHANG; Bing; ZHANG; Xia; ZHENG; Lanfen; TONG; Qingxi


    Starting from a fractal image-compression algorithm based on wavelet transformation for hyperspectral images: with the help of hyperspectral remote sensing, ever more spectral bands are obtained. Because large amounts of data and limited bandwidth complicate the storage and transmission of data measured at the TB level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS; conventional lossless compression algorithms cannot reach adequate compression ratios, while lossy compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on wavelet transformation is superior to traditional compression methods, because it has high compression ratios and good image fidelity, and requires less computing time. To keep the spectral dimension invariant, the authors compared the results of two compression algorithms based on the storage-file structures BSQ and BIP, and improved the HV and Quadtree partitioning and domain-range matching algorithms in order to accelerate their encode/decode efficiency. The authors' Hyperspectral Image Process and Analysis System (HIPAS) software used the VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.


    Benjamin Joseph


    Full Text Available The main contribution of this article is an intelligent classifier to distinguish between benign and malignant areas of micro-calcification in a companded mammogram image, which has not been proved or addressed elsewhere. The method does not require any manual processing technique for classification, so it can identify benign and malignant areas in an intelligent way, and it gives good classification responses for compressed mammogram images. The goal of the proposed method is twofold: first, to preserve the details in the Region of Interest (ROI) at a low bit rate without affecting the diagnostically relevant information, and second, to classify and segment the micro-calcification area in the reconstructed mammogram image with high accuracy. The prime contribution of this work is that the details of the ROI and non-ROI regions, extracted using the multi-wavelet transform, are coded at variable bit rates using the proposed Region Based Set Partitioning in Hierarchical Trees (RBSPIHT) before storing or transmitting the image. The image reconstructed during retrieval or at the receiving end is preprocessed to remove channel noise and to enhance the diagnostic contrast information. The preprocessed image is then classified as normal or abnormal (benign or malignant) using a probabilistic neural network. Segmentation of the cancerous region is done using the Fuzzy C-means Clustering (FCC) algorithm and the cancerous area is computed. The experimental results show that the proposed model performs well, achieving a high sensitivity of 97.27% and a specificity of 94.38% at an average compression rate of 0.5 bpp and a peak signal-to-noise ratio (PSNR) of 58 dB.

  7. Real-time Image Generation for Compressive Light Field Displays

    Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R.


    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  8. Image compression with a hybrid wavelet-fractal coder.

    Li, J; Kuo, C J


    A hybrid wavelet-fractal coder (WFC) for image compression is proposed. The WFC uses the fractal contractive mapping to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encode the prediction residue with a bitplane wavelet coder. The fractal prediction is adaptively applied only to regions where the rate saving offered by fractal prediction justifies its overhead. A rate-distortion criterion is derived to evaluate the fractal rate saving and used to select the optimal fractal parameter set for WFC. The superior performance of the WFC is demonstrated with extensive experimental results.

  9. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea


    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.

  10. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos


    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we use compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, achieving high-quality image compression that is robust to noise and corruption.

  11. Novel approaches to the design of halftone masks for analog lithography.

    Teschke, Marcel; Sinzinger, Stefan


    We report novel approaches to the design of halftone masks for analog lithography. The approaches are derived from interferometric phase contrast. In a first step we show that the interferometric phase-contrast method with detour holograms can be reduced to a single binary mask. In a second step we introduce the interferometric phase-contrast method by interference of the object wavefront with the conjugate object wavefront; this method also allows the design of a halftone mask. To use kinoform holograms as halftone phase masks, we show in a third step the combination of the zeroth-order phase-contrast technique with the interferometric phase-contrast method.

  12. Resolution enhancement for ISAR imaging via improved statistical compressive sensing

    Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun


    Developing compressed sensing (CS) theory reveals that optimal reconstruction of an unknown signal can be achieved from very limited observations by utilizing signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally formed by a limited number of strong scattering centers, representing strong spatial sparsity. Such prior sparsity intrinsically paves a way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution, with small scale on strong signal components and large scale on noise, to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates that the proposal can reconstruct a high-resolution image from only limited pulses even at low SNR, which shows advantages over current super-resolution imaging methods.
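
    The paper's Bayesian estimator is more elaborate, but the core link it relies on, an i.i.d. Laplace prior turning MAP reconstruction into l1-regularized least squares, can be sketched with plain iterative soft-thresholding (ISTA); the matrix sizes and regularization weight here are illustrative only:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Minimize ||Ax - y||^2 / 2 + lam * ||x||_1 (MAP under a Laplace prior)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128)) / 8.0       # limited observations of 128 cells
x0 = np.zeros(128)
x0[[5, 40, 90]] = [2.0, -1.5, 1.0]             # few strong scattering centers
x_hat = ista(A, A @ x0)
```

    The soft-threshold step is exactly where the Laplace prior enters; the paper replaces the single scale lam with per-component scales to separate scatterers from noise.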

  13. Dynamic Fractal Transform with Applications to Image Data Compression

    王舟; 余英林


    A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic gray-scale image compression algorithm, which is referred to as a typical static fractal transform based algorithm in this paper. Using this algorithm, an image can be condensely described as a fractal transform operator which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is obtained. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration, and thus differs from static transform operators. The new transform improves coding efficiency and shows better convergence at the decoder.
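
    The fixed-point behaviour both transforms rely on can be seen in a toy numpy sketch (this is not Jacquin's scheme; the 2:1 domain-to-range shrink and the 0.5 scaling are arbitrary choices that simply keep the operator contractive):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 64
s = 0.5                                   # contraction factor (|s| < 1)
o = rng.standard_normal(n)                # per-sample offsets of the affine maps

def T(x):
    """One application of a toy contractive fractal transform on a 1-D image."""
    domain = np.repeat(x.reshape(-1, 2).mean(axis=1), 2)   # 2:1 domain shrink
    return s * domain + o

a = np.zeros(n)
b = 100.0 * rng.standard_normal(n)        # a completely different start image
for _ in range(40):
    a, b = T(a), T(b)
# both sequences converge to the same unique attractor
```

    This is why the decoder may start from any initial image: the operator, not the start point, determines the reconstruction.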

  14. An investigation of image compression on NIIRS rating degradation through automated image analysis

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe


    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract the line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. Steps involved include straight edge detection, edge stripes determination, and edge intensity determination, among others. Next, we show how to employ GIQEs to estimate NIIRS degradation without knowing the ground truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.

  15. ZPEG: a hybrid DPCM-DCT based approach for compression of Z-stack images.

    Khire, Sourabh; Cooper, Lee; Park, Yuna; Carter, Alexis; Jayant, Nikil; Saltz, Joel


    Modern imaging technology permits obtaining images at varying depths along the thickness, or the Z-axis, of the sample being imaged. A stack of multiple such images is called a Z-stack image. The focus capability offered by Z-stack images is critical for many digital pathology applications. A single Z-stack image may result in several hundred gigabytes of data, and needs to be compressed for archival and distribution purposes. Currently, the existing methods for compression of Z-stack images such as JPEG and JPEG 2000 compress each focal plane independently, and do not take advantage of the Z-signal redundancy. It is possible to achieve additional compression efficiency over the existing methods, by exploiting the high Z-signal correlation during image compression. In this paper, we propose a novel algorithm for compression of Z-stack images, which we term as ZPEG. ZPEG extends the popular discrete-cosine transform (DCT) based image encoder to compress Z-stack images. This is achieved by decorrelating the neighboring layers of the Z-stack image using differential pulse-code modulation (DPCM). PSNR measurements, as well as subjective evaluations by experts indicate that ZPEG can encode Z-stack images at a higher quality as compared to JPEG, JPEG 2000 and JP3D at compression ratios below 50:1.
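
    The DPCM decorrelation step ZPEG adds before the DCT stage can be illustrated with a toy Z-stack (the sizes and the correlation model below are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Z-stack: 8 focal planes of 32x32; each plane is its neighbour plus a
# small perturbation, mimicking the high Z-correlation of real stacks.
steps = 0.05 * rng.standard_normal((7, 32, 32))
stack = np.cumsum(np.concatenate([rng.standard_normal((1, 32, 32)), steps]),
                  axis=0)

# DPCM along Z: predict each plane from the previous one, keep only residuals.
residuals = np.diff(stack, axis=0)

# the residual planes carry far less energy, so the per-plane DCT coder that
# follows spends far fewer bits on them
print(stack[1:].var(), residuals.var())
```

    The first plane is coded as-is; every later plane is coded as its residual against the previous plane, which is where the gain over plane-independent JPEG comes from.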

  16. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    Martins, Bo; Forchhammer, Søren Otto


    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general ... to the specialized soft pattern matching techniques which work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless ..., and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.

  17. High-performance JPEG image compression chip set for multimedia applications

    Razavi, Abbas; Shenberg, Isaac; Seltz, Danny; Fronczak, Dave


    By its very nature, multimedia includes images, text and audio stored in digital format. Image compression is an enabling technology essential to overcoming two bottlenecks: cost of storage and bus speed limitation. Storing 10 seconds of high resolution RGB (640 X 480) motion video (30 frames/sec) requires 277 MBytes and a bus speed of 28 MBytes/sec (which cannot be handled by a standard bus). With high quality JPEG baseline compression the storage and bus requirements are reduced to 12 MBytes of storage and a bus speed of 1.2 MBytes/sec. Moreover, since consumer video and photography products (e.g., digital still video cameras, camcorders, TV) will increasingly use digital (and therefore compressed) images because of quality, accessibility, and the ease of adding features, compressed images may become the bridge between the multimedia computer and consumer products. The image compression challenge can be met by implementing the discrete cosine transform (DCT)-based image compression algorithm defined by the JPEG baseline standard. Using the JPEG baseline algorithm, an image can be compressed by a factor of about 24:1 without noticeable degradation in image quality. Because motion video is compressed frame by frame (or field by field), system cost is minimized (no frame or field memories and interframe operations are required) and each frame can be edited independently. Since JPEG is an international standard, the compressed files generated by this solution can be readily interchanged with other users and processed by standard software packages. This paper describes a multimedia image compression board utilizing Zoran's 040 JPEG Image Compression chip set. The board includes digitization, video decoding and compression. While the original video is sent to the display (`video in a window'), it is also compressed and transferred to the computer bus for storage. During playback, the system receives the compressed sequence from the bus and displays it on the screen.
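
    The storage and bus figures quoted in the abstract follow directly from the frame geometry; a quick decimal-megabyte check:

```python
frame_bytes = 640 * 480 * 3          # one RGB frame, 8 bits per channel
raw_rate = frame_bytes * 30          # bytes per second at 30 frames/s
ten_seconds = raw_rate * 10

print(round(raw_rate / 1e6, 1))          # 27.6 MB/s bus load   -> "28 MBytes/sec"
print(round(ten_seconds / 1e6, 1))       # 276.5 MB of storage  -> "277 MBytes"
print(round(ten_seconds / 24 / 1e6, 1))  # 11.5 MB at 24:1      -> "12 MBytes"
```

    Dividing by the quoted 24:1 JPEG ratio reproduces the 12 MBytes / 1.2 MBytes/sec figures in the abstract.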

  18. A linear mixture analysis-based compression for hyperspectral image analysis

    C. I. Chang; I. W. Ginsberg


    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.

  19. Improving multispectral satellite image compression using onboard subpixel registration

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin


    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.

  20. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    Huichen Yan


    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence-optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high-coherence dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
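
    The CCOOMP variant above is specialized to highly coherent dictionaries, but its backbone is plain orthogonal matching pursuit, which can be sketched as follows (the dictionary size and sparsity level are illustrative, not from the paper):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of A to explain y."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))        # most coherent atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef                           # re-fit, new residual
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 80))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary atoms
x0 = np.zeros(80)
x0[[3, 17, 42]] = [3.0, -2.0, 2.5]      # sparse "target" scene
x_hat = omp(A, A @ x0, 3)
```

    The coherence-excluding step of CCOOMP modifies the atom-selection line so that near-duplicate atoms cannot be picked twice; the surrounding loop is unchanged.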

  1. Compressive dynamic range imaging via Bayesian shrinkage dictionary learning

    Yuan, Xin


    We apply Bayesian shrinkage dictionary learning to compressive dynamic-range imaging. By attenuating the luminous intensity impinging upon the detector at the pixel level, we demonstrate a conceptual design of an 8-bit camera to sample high-dynamic-range scenes with a single snapshot. Coding strategies for both monochrome and color cameras are proposed. A Bayesian reconstruction algorithm is developed to learn a dictionary in situ on the sampled image, for joint reconstruction and demosaicking. We use global-local shrinkage priors to learn the dictionary and the dictionary coefficients representing the data. Simulation results demonstrate the feasibility of the proposed camera and the superior performance of the Bayesian shrinkage dictionary learning algorithm.

  2. Pairwise KLT-Based Compression for Multispectral Images

    Nian, Yongjian; Liu, Yu; Ye, Zhen


    This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed on the global multispectral image. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, where the KLT is performed on only two bands at a time. First, the KLT is performed on the first two adjacent bands and two principal components are obtained. Secondly, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new couple. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on the global KLT. Moreover, the pairwise KLT structure can significantly reduce the complexity compared with a global KLT.
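
    A minimal numpy sketch of the pairwise chaining (band sizes are invented; the paper's rate-allocation and coding stages are omitted):

```python
import numpy as np

def klt_pair(a, b):
    """2x2 KLT of two bands; returns PCs, larger-eigenvalue component first."""
    X = np.stack([a.ravel(), b.ravel()])
    w, V = np.linalg.eigh(np.cov(X))        # eigenvalues in ascending order
    pcs = V.T[::-1] @ (X - X.mean(axis=1, keepdims=True))
    return pcs[0].reshape(a.shape), pcs[1].reshape(a.shape)

rng = np.random.default_rng(5)
base = rng.standard_normal((16, 16))
bands = [base + 0.1 * rng.standard_normal((16, 16)) for _ in range(4)]

carry, outputs = bands[0], []
for band in bands[1:]:                      # KLT on two bands at a time
    carry, weak = klt_pair(carry, band)     # carry the stronger PC forward
    outputs.append(weak)
outputs.append(carry)
```

    Each step only diagonalizes a 2x2 covariance, which is why the chain is so much cheaper than one KLT over all bands.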

  3. A CMOS Imager with Focal Plane Compression using Predictive Coding

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.


    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
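
    A Golomb-Rice coder of the kind mentioned is simple enough to sketch in a few lines (this is the generic code definition, not the chip's circuit; the zigzag map for signed residuals is a common convention assumed here):

```python
def zigzag(v):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(n, k):
    """Golomb-Rice code: unary quotient, '0' terminator, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

# small residuals (the common case after good prediction) get short codes
bits = "".join(rice_encode(zigzag(v), 2) for v in [0, -1, 3, -2])
```

    Because the code length grows with the quotient, Rice coding rewards exactly the small residuals a good predictor produces, which is what makes it a natural partner for the DPCM front end.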

  4. Area and power efficient DCT architecture for image compression

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan


    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves image compression performance comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to the existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.

  5. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Yongjian Nian


    A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which lets the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.

  6. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma


    To address the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; the amount of transmitted data is greatly reduced because of the combination of compressive sensing and the FFT; and the security level of ghost imaging is improved against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.

  7. Diagnostic imaging of compression neuropathy; Bildgebende Diagnostik von Nervenkompressionssyndromen

    Weishaupt, D.; Andreisek, G. [Universitaetsspital, Institut fuer Diagnostische Radiologie, Zuerich (Switzerland)


    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remains the cornerstone of the diagnostic work-up, in certain cases imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [German] Compression-related damage to peripheral nerves can be the cause of persistent pain in the region of the ankle and foot. An early diagnosis is decisive in order to direct the patient to the correct therapy and to avoid or reduce potential damage. Although the clinical examination and electrophysiological work-up are the most important elements in the diagnosis of peripheral nerve compression syndromes, imaging can be decisive when it comes to determining the level of the nerve lesion or narrowing the differential diagnosis. In certain cases the cause of the nerve compression can even be found through imaging. In other cases imaging is important in treatment planning, particularly when the lesion is approached surgically. Magnetic resonance imaging (MRI) and sonography enable a

  8. Probability of correct reconstruction in compressive spectral imaging

    Samuel Eduardo Pinilla


    Coded Aperture Snapshot Spectral Imaging (CASSI) systems capture the 3-dimensional (3D) spatio-spectral information of a scene using a set of 2-dimensional (2D) random coded Focal Plane Array (FPA) measurements. A compressed sensing reconstruction algorithm is then used to recover the underlying spatio-spectral 3D data cube. The quality of the reconstructed spectral images depends exclusively on the CASSI sensing matrix, which is determined by the statistical structure of the coded apertures. The Restricted Isometry Property (RIP) of the CASSI sensing matrix is used to determine the probability of correct image reconstruction and provides guidelines for the minimum number of FPA measurement shots needed for image reconstruction. Further, the RIP can be used to determine the optimal structure of the coded projections in CASSI. This article describes the CASSI optical architecture and develops the RIP for the sensing matrix in this system. Simulations show the higher quality of spectral image reconstructions when the RIP property is satisfied. Simulations also illustrate the higher performance of the optimal structured projections in CASSI.

  9. Oriented wavelet transform for image compression and denoising.

    Chappelier, Vivien; Guillemot, Christine


    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
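
    The predict/update structure and the reversibility that guarantees perfect reconstruction can be shown with the simplest lifting factorization, the Haar wavelet (the paper's oriented, quincunx-sampled version differs, of course):

```python
import numpy as np

def haar_lift(x):
    """One lifting level: predict odd samples from even, then update even."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step -> detail (high-pass) band
    s = even + d / 2          # update step  -> smooth (low-pass) band
    return s, d

def haar_unlift(s, d):
    """Invert by running the same steps backwards -- lifting is reversible."""
    even = s - d / 2
    odd = d + even
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = even, odd
    return x

s, d = haar_lift(np.arange(8.0))
```

    The oriented transform of the paper keeps this structure but applies the predict step along a locally chosen direction, which is why reversibility, and hence perfect reconstruction, is preserved.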

  10. Application of region selective embedded zerotree wavelet coder in CT image compression.

    Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping


    Compression is necessary in medical image preservation because of the huge quantity of data. Medical images differ from common images because of their own characteristics; for example, part of the information in a CT image is useless, and saving that part is a waste of resources. A region-selective EZW coder is proposed with which only the useful part of the image is selected and compressed, and the test image gives good results.

  11. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Nuha A. S. Alwan


    Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results, which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM, or perceived quality, for both types of compression provided that the retained energy after compression is the same.

  12. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    Matsuoka, R.


    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of the target center by the intensity-weighted centroid method. Six images of a white sheet with 30 by 20 black filled circles were utilized in the experiment. The images were acquired by a Canon EOS 20D digital camera. The image data were compressed using two compression parameter sets (a downsampling ratio, a quantization table, and a Huffman code table) utilized in the EOS 20D. The experiment results clearly indicate that lossy JPEG compression of an image with chromatic aberrations produces a significant effect on the measurement accuracy of the target center by the intensity-weighted centroid method. The maximum displacements of the red, green, and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since displacements caused by lossy JPEG compression appear impossible to correct, the author recommends that lossy JPEG compression not be executed before recording an image in a digital camera when highly precise image measurement is performed using color images acquired by a non-metric digital camera.
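
    The intensity-weighted centroid used for target measurement is a one-liner worth making concrete (a synthetic dark-on-white target is used here, since the experiment's images are not available):

```python
import numpy as np

def weighted_centroid(img):
    """Centroid of a dark target on a light background, intensity-weighted."""
    w = img.max() - img.astype(float)      # darker pixels get larger weight
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()

img = np.full((21, 25), 255)
img[8:13, 5:10] = 0                        # black filled square target
cx, cy = weighted_centroid(img)
```

    Because every pixel's intensity contributes to the weighted sums, any channel-dependent intensity shift introduced by chroma subsampling moves the estimated center, which is the displacement mechanism the paper measures.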

  13. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing.

    Li, Li; Xiao, Wei; Jian, Weijian


    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places a heavy computational burden on subsequent processing devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments are conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm has better performance in terms of efficiency compared to an existing algorithm.

  14. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Roman Slaby


    The paper focuses on a statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods for low-memory systems requires the fewest bytes of memory. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the OCR methods used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. Saving bytes in the data stream is very important for such memory-limited systems, as is the time required to decode the compressed data. The success rates of the selected compression algorithms are evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.
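
    One standard candidate in such a comparison is run-length encoding, whose byte cost on a binary image is easy to count (a generic sketch, not necessarily one of the paper's selected methods):

```python
def rle(bits):
    """Run-length encode a binary sequence as (value, run-length) pairs."""
    runs, prev, n = [], bits[0], 0
    for b in bits:
        if b == prev:
            n += 1
        else:
            runs.append((prev, n))
            prev, n = b, 1
    runs.append((prev, n))
    return runs

# a mostly-blank scan line collapses from 16 symbols to 3 runs
line = [0] * 6 + [1] * 4 + [0] * 6
```

    Counting the stored pairs per test image gives exactly the kind of byte-count sample the paper's exploratory analysis compares across methods.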

  15. Acquisition of STEM Images by Adaptive Compressive Sensing

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash; Stevens, Andrew; Browning, Nigel D.


    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise (PSNR) using these three different active learning methods at different percentages of sampled pixels. At 20% level, all the three active learning methods underperform the original CS without active learning. However
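
    The variance criterion (metric 1) can be made concrete with a toy ensemble (the ensemble below is fabricated; in the paper's setting the candidate reconstructions would come from the BPFA model):

```python
import numpy as np

# Toy ensemble of 10 candidate reconstructions of a 16x16 image in which
# one 4x4 block is genuinely uncertain (the candidates disagree there).
recons = np.zeros((10, 16, 16))
recons[:, 4:8, 4:8] = np.arange(10, dtype=float)[:, None, None]

measured = np.zeros((16, 16), dtype=bool)
measured[:2, :] = True                     # pixels already sampled

var = recons.var(axis=0)                   # disagreement between candidates
var[measured] = -np.inf                    # never re-measure a pixel
k = 3                                      # next ~1% of 256 pixels
rows, cols = np.unravel_index(np.argsort(var.ravel())[-k:], var.shape)
```

    The selected pixels land where the candidate reconstructions disagree most, which is exactly the feedback loop the adaptive mask exploits.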

  16. On optimisation of wavelet algorithms for non-perfect wavelet compression of digital medical images

    Ricke, J


    Aim: Optimisation of medical image compression. Evaluation of wavelet filters for wavelet compression. Results: Applying filters of different complexity results in significant variations in the quality of image reconstruction after compression, specifically in the low-frequency information. Filters of high complexity proved advantageous despite heterogeneous results in the visual analysis. For high-frequency details, filter complexity did not have a significant impact on the image after reconstruction.


    Nishat kanvel


    Full Text Available This paper presents an adaptive lifting scheme with a Particle Swarm Optimization (PSO) technique for image compression. The PSO technique is used to improve the accuracy of the prediction function used in the lifting scheme. The scheme is applied to image compression, and parameters such as PSNR, compression ratio, and the visual quality of the image are calculated. The proposed scheme is compared with existing methods.
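    The evaluation metrics mentioned here (PSNR and compression ratio) are standard and can be computed as follows; these helpers are a generic sketch, not code from the paper.

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB between two images."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        if mse == 0:
            return float("inf")   # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    def compression_ratio(raw_bytes, compressed_bytes):
        """Ratio of uncompressed size to compressed size."""
        return raw_bytes / compressed_bytes
    ```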

  18. Effective palette indexing for image compression using self-organization of Kohonen feature map.

    Pei, Soo-Chang; Chuang, Yu-Ting; Chuang, Wei-Hong


    The process of limited-color image compression usually involves color quantization followed by palette re-indexing. Palette re-indexing can improve the compression of color-indexed images, but it is complicated and consumes extra time. Making use of the topology-preserving property of the self-organizing Kohonen feature map, we can generate a fairly good color index table that achieves both high image quality and high compression, without re-indexing. Promising experimental results are presented.
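    The topology-preserving ordering can be imitated with a plain 1-D Kohonen map: neighbouring SOM nodes converge to similar colors, so assigning palette entries to node positions yields a smooth index table without a separate re-indexing pass. The sketch below is a minimal illustration under assumed parameters (learning-rate and radius schedules), not the authors' implementation.

    ```python
    import numpy as np

    def som_palette_order(colors, n_iter=2000, seed=0):
        """Order palette colors with a 1-D Kohonen self-organizing map.

        colors : (N, 3) float array of RGB palette entries in [0, 255].
        Returns a permutation of range(N) in which neighbouring indices
        tend to map to similar colors.
        """
        rng = np.random.default_rng(seed)
        n = len(colors)
        nodes = rng.uniform(0, 255, size=(n, 3))   # one node per palette slot
        for t in range(n_iter):
            lr = 0.5 * (1 - t / n_iter)            # decaying learning rate
            radius = max(1.0, n / 4 * (1 - t / n_iter))
            c = colors[rng.integers(n)]            # random training color
            best = np.argmin(((nodes - c) ** 2).sum(axis=1))
            # Gaussian neighbourhood update along the 1-D index line.
            d = np.arange(n) - best
            h = np.exp(-(d ** 2) / (2 * radius ** 2))
            nodes += lr * h[:, None] * (c - nodes)
        # Sort colors by the index of their closest node (stable for ties).
        winners = [np.argmin(((nodes - c) ** 2).sum(axis=1)) for c in colors]
        return np.argsort(winners, kind="stable")
    ```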

  19. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Ran Li


    Full Text Available In the framework of block Compressed Sensing (CS), the reconstruction algorithm based on the Smoothed Projected Landweber (SPL) iteration can achieve good rate-distortion performance with low computational complexity, especially when using Principal Components Analysis (PCA) to perform the adaptive hard-thresholding shrinkage. However, neglecting the stationary local structural characteristics of the image when learning the PCA matrix degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of the patches. Then, we perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in the patches. The patches within a granule share stationary local structural characteristics, so our method can effectively improve the performance of hard-thresholding shrinkage. Experimental results indicate that images reconstructed by the proposed algorithm have better objective quality than those of several traditional algorithms. Edge and texture details in the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low computational complexity of reconstruction.
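    The PCA hard-thresholding shrinkage step applied within each granule can be sketched as follows. This is an illustrative stand-in: the grouping of patches into granules by GrC is assumed to have happened already, and the threshold `tau` is a free parameter.

    ```python
    import numpy as np

    def pca_hard_threshold(patches, tau):
        """Shrink a group of similar patches by hard thresholding in a
        PCA basis learned from the group itself.

        patches : (M, d) array, one flattened patch per row.
        tau     : threshold applied to the PCA transform coefficients.
        """
        mean = patches.mean(axis=0)
        centered = patches - mean
        # PCA basis = eigenvectors of the sample covariance matrix.
        cov = centered.T @ centered / len(patches)
        _, basis = np.linalg.eigh(cov)
        coeffs = centered @ basis
        coeffs[np.abs(coeffs) < tau] = 0.0     # hard-thresholding shrinkage
        return coeffs @ basis.T + mean
    ```

    With `tau = 0` the transform is perfectly inverted; large `tau` collapses every patch toward the group mean.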

  20. Iliac vein compression syndrome: Clinical, imaging and pathologic findings

    Katelyn; N; Brinegar; Rahul; A; Sheth; Ali; Khademhosseini; Jemianne; Bautista; Rahmi; Oklu


    May-Thurner syndrome(MTS) is the pathologic compression of the left common iliac vein by the right common iliac artery, resulting in left lower extremity pain, swelling, and deep venous thrombosis. Though this syndrome was first described in 1851, there are currently no standardized criteria to establish the diagnosis of MTS. Since MTS is treated by a wide array of specialties, including interventional radiology, vascular surgery, cardiology, and vascular medicine, the need for an established diagnostic criterion is imperative in order to reduce misdiagnosis and inappropriate treatment. Although MTS has historically been diagnosed by the presence of pathologic features, the use of dynamic imaging techniques has led to a more radiologic based diagnosis. Thus, imaging plays an integral part in screening patients for MTS, and the utility of a wide array of imaging modalities has been evaluated. Here, we summarize the historical aspects of the clinical features of this syndrome. We then provide a comprehensive assessment of the literature on the efficacy of imaging tools available to diagnose MTS. Lastly, we provide clinical pearls and recommendations to aid physicians in diagnosing the syndrome through the use of provocative measures.

  1. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel


    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841

  4. Adaptive wavelet transform algorithm for lossy image compression

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio


    A new algorithm of locally adaptive wavelet transform based on the modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which permits one to easily obtain different wavelet filter coefficients in the case of (Ñ, N) lifting. By changing the wavelet filter order and different control parameters, one can obtain the desired filter frequency response. It is proposed to perform hard switching between different wavelet lifting filter outputs according to a local data activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images. The obtained simulation results show that the visual and quantitative quality of the restored images is high. The distortions are smaller in the vicinity of high-spatial-activity details compared to the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.
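    The hard-switching idea can be illustrated with a simplified 1-D prediction step: even samples predict the odd ones, and a local activity estimate chooses between a short and a long predictor. The filters and the activity test below are illustrative assumptions, not the paper's exact coefficients.

    ```python
    import numpy as np

    def adaptive_lifting_predict(signal, activity_thresh=10.0):
        """One lifting prediction step with hard switching between a
        2-tap and a 4-tap predictor, chosen by local data activity.
        Returns the even samples and the detail (prediction residual).
        """
        x = np.asarray(signal, dtype=float)
        even, odd = x[0::2], x[1::2]
        n = len(odd)
        detail = np.empty(n)
        for i in range(n):
            # Local activity: max gradient of neighbouring even samples.
            lo, hi = max(i - 1, 0), min(i + 2, len(even) - 1)
            activity = np.abs(np.diff(even[lo:hi + 1])).max() if hi > lo else 0.0
            if activity > activity_thresh and i >= 1 and i + 2 < len(even):
                # Longer interpolating predictor near edges.
                p = (-even[i - 1] + 9 * even[i] + 9 * even[i + 1] - even[i + 2]) / 16
            else:
                # Short averaging predictor in smooth areas.
                p = (even[i] + even[min(i + 1, len(even) - 1)]) / 2
            detail[i] = odd[i] - p
        return even, detail
    ```

    On a linear ramp the short predictor is exact, so the detail signal vanishes away from the boundary, which is the energy-compaction property the abstract refers to.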

  5. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R


    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.

  6. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul


    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of storage space. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution of this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and a combination of Run-Length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared to traditional methods.

  7. Compressive imaging for difference image formation and wide-field-of-view target tracking



    Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking

  8. Joint image encryption and compression scheme based on IWT and SPIHT

    Zhang, Miao; Tong, Xiaojun


    A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), obtained by adding encryption to the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
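    An integer-to-integer wavelet transform of the kind IWT-based lossless codecs rely on can be built with two lifting steps. The reversible integer Haar pair below is a standard textbook construction, not the paper's specific filter.

    ```python
    import numpy as np

    def haar_iwt_forward(x):
        """One level of the reversible integer Haar wavelet transform,
        built with the lifting scheme (integer in, integer out, so a
        lossless coder such as SPIHT can be driven from it)."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        d = odd - even                 # predict step: detail signal
        s = even + (d >> 1)            # update step: floor-mean approximation
        return s, d

    def haar_iwt_inverse(s, d):
        """Exactly invert haar_iwt_forward."""
        even = s - (d >> 1)
        odd = d + even
        x = np.empty(len(s) + len(d), dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x
    ```

    Because both steps use only integer additions and shifts, the round trip is bit-exact, which is what makes the overall encryption + compression pipeline lossless.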

  9. Visually Improved Image Compression by using Embedded Zero-tree Wavelet Coding

    Janaki R


    Full Text Available Image compression is very important for efficient transmission and storage of images. The Embedded Zero-tree Wavelet (EZW) algorithm is a simple yet powerful algorithm with the property that the bits in the stream are generated in order of their importance. Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. For image compression it is desirable that the selected transform reduce the size of the resultant data set compared to the source data set. EZW is computationally very fast and among the best image compression algorithms known today. This paper proposes a technique for image compression that uses wavelet-based image coding. A large number of experimental results show that this method saves a lot of bits in transmission and further enhances the compression performance. This paper aims to determine the best threshold for compressing a still image at a particular decomposition level using the Embedded Zero-tree Wavelet encoder. Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR) are determined for threshold values ranging from 6 to 60 at decomposition level 8.
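    The embedded, importance-ordered nature of EZW comes from successive significance passes with a halving threshold. The counting sketch below illustrates only that ordering; the actual coder additionally emits zero-tree symbols and refinement bits.

    ```python
    import numpy as np

    def ezw_initial_threshold(coeffs):
        """Initial EZW threshold: the largest power of two not exceeding
        the largest coefficient magnitude."""
        cmax = np.abs(coeffs).max()
        return 2 ** int(np.floor(np.log2(cmax)))

    def significance_passes(coeffs, n_passes):
        """Number of newly significant coefficients found in each EZW
        dominant pass, halving the threshold between passes."""
        t = ezw_initial_threshold(coeffs)
        mags = np.abs(np.asarray(coeffs, dtype=float)).ravel()
        found = np.zeros(mags.shape, dtype=bool)
        counts = []
        for _ in range(n_passes):
            newly = (mags >= t) & ~found   # significant at this threshold
            counts.append(int(newly.sum()))
            found |= newly
            t /= 2
        return counts
    ```

    Truncating the stream after any pass yields the best approximation available at that bit budget, which is why the bitstream is "embedded".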

  10. Compressed Sensing on the Image of Bilinear Maps

    Walk, Philipp


    For several communication models, the dispersive part of a communication channel is described by a bilinear operation $T$ between the possible sets of input signals and channel parameters. The received channel output has then to be identified from the image $T(X,Y)$ of the input signal difference sets $X$ and the channel state sets $Y$. The main goal in this contribution is to characterize the compressibility of $T(X,Y)$ with respect to an ambient dimension $N$. In this paper we show that a restricted norm multiplicativity of $T$ on all canonical subspaces $X$ and $Y$ with dimension $S$ resp. $F$ is sufficient for the reconstruction of output signals with an overwhelming probability from $\mathcal{O}((S+F)\log N)$ random sub-Gaussian measurements.

  11. Image compression with QM-AYA adaptive binary arithmetic coder

    Cheng, Joe-Ming; Langdon, Glen G., Jr.


    The Q-coder has been reported in the literature, and is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to more rapidly estimate the statistics in the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder, with a different state table, that offers balanced improvements to the QM probability estimation for the less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index change tables for Q-coder type adaptation is discussed.
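    The adaptive probability estimation that the Q-coder and QM-coder state tables approximate can be illustrated with a plain count-based estimator. This is a generic Laplace-smoothed counter, not the actual QM or QM-AYA state machine.

    ```python
    def adaptive_p1_estimate(bits, inc=1):
        """Running estimate of P(bit = 1) before each symbol is coded,
        in the spirit of the count-based adaptation that renorm-driven
        coders approximate with a finite state table."""
        c0 = c1 = inc            # Laplace smoothing: start at P(1) = 0.5
        estimates = []
        for b in bits:
            estimates.append(c1 / (c0 + c1))
            if b:
                c1 += 1
            else:
                c0 += 1
        return estimates
    ```

    Near-equiprobable streams keep the estimate close to 0.5, which is exactly the regime where the QM-AYA table is said to improve on QM.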

  12. An effective fractal image compression algorithm based on plane fitting

    Wang Xing-Yuan; Guo Xing; Zhang Dan-Dan


    A new method using plane fitting to decide whether a domain block is similar enough to a given range block is proposed in this paper. First, three coefficients are computed to describe each range and domain block. Then, the best-matched domain block for every range block is obtained by analysing the relation between their coefficients. Experimental results show that the proposed method can shorten encoding time markedly, while the retrieved image quality is still acceptable. In the decoding step, a simple line fitting on block boundaries is used to reduce blocking effects. At the same time, the proposed method can also achieve a high compression ratio.
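    The three coefficients per block can be obtained by ordinary least squares. The sketch below assumes the descriptor is the fitted plane z = a·x + b·y + c, which is one natural reading of the abstract, not a confirmed detail of the paper.

    ```python
    import numpy as np

    def plane_fit_coeffs(block):
        """Least-squares fit of z = a*x + b*y + c to a pixel block.
        The triple (a, b, c) is a cheap descriptor for comparing range
        and domain blocks without a full MSE search."""
        h, w = block.shape
        ys, xs = np.mgrid[0:h, 0:w]
        A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
        return coeffs   # (a, b, c)
    ```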

  13. Novel Efficient De-blocking Method for Highly Compressed Images

    SHI Min; YI Qing-ming; YANG Liang


    Due to coarse quantization, block-based discrete cosine transform (BDCT) compression methods usually suffer from visible blocking artifacts at the block boundaries. A novel efficient de-blocking method in the DCT domain is proposed. A specific criterion for edge detection is given; one-dimensional DCT is applied on each row of the adjacent blocks and the shifted block in the smooth region, and the transform coefficients of the shifted block are modified by weighting the average of three coefficients of the block. The mean square difference of slope criterion is used to judge the efficiency of the proposed algorithm. Simulation results show that the new method not only obtains satisfactory image quality, but also maintains high frequency information.

  14. Intelligent fuzzy approach for fast fractal image compression

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila


    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on their edge property. In the second, the imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit performance better than that of genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.

  15. Image compression with directional lifting on separated sections

    Zhu, Jieying; Wang, Nengchao


    A novel image compression scheme is presented in which directional sections are separated and transformed differently from the rest of the image. The discrete directions of anisotropic pixels are calculated and then grouped into compact directional sections. One-dimensional (1-D) adaptive directional lifting is applied continuously along the orientations of the directional sections, rather than applying the 1-D wavelet transform alternately in two dimensions over the whole image. For the remaining sections, 2-D adaptive lifting filters are applied according to pixel positions. Our single embedded coding stream can be truncated exactly at any bit rate. Experiments have shown that large coefficients along directional sections can be significantly reduced by our transform, which makes energy more compact than the traditional wavelet transform. Though rate-distortion (R-D) optimization is not exploited, the PSNR is still comparable to that of JPEG-2000 with 9/7 filters at high bit rates. At low bit rates, the visual quality is better than that of JPEG-2000, since along directional sections both blurring and ringing artifacts are avoided and edges are well preserved.

  16. Adaptive wavelet transform algorithm for image compression applications

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo


    A new algorithm of locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme. It adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which permits one to easily obtain different wavelet coefficients in the case of (Ñ, N) lifting. It is proposed to perform hard switching between (2, 4) and (4, 4) lifting filter outputs according to an estimate of the local data activity. When the data activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed. Otherwise, in the plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are rather simple, which permits the implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and possesses good energy compaction. The designed algorithm was tested on different images. The proposed adaptive transform algorithm can be used for image/signal lossless compression.

  17. Stable and Robust Sampling Strategies for Compressive Imaging.

    Krahmer, Felix; Ward, Rachel


    In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence, the so-called local coherence, which measures for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from the sensing basis to the sparsity basis, as opposed to bounded maximal coherence, as long as the sampling strategy is adapted accordingly.
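    A variable-density pattern drawn from an inverse square power law can be sketched in 1-D as follows. The density form follows the abstract; the grid size, sample count, and normalization are arbitrary choices for illustration.

    ```python
    import numpy as np

    def variable_density_mask(n, m, seed=0):
        """Draw m of n 1-D frequencies with probability proportional to
        1 / (1 + |k|)^2, concentrating samples at low frequencies as the
        local-coherence analysis suggests."""
        rng = np.random.default_rng(seed)
        k = np.arange(-(n // 2), n - n // 2)        # centred frequency grid
        p = 1.0 / (1.0 + np.abs(k)) ** 2            # inverse square power law
        p /= p.sum()
        chosen = rng.choice(n, size=m, replace=False, p=p)
        mask = np.zeros(n, dtype=bool)
        mask[chosen] = True
        return mask, k
    ```

    The selected frequencies cluster near k = 0, so the average |k| of the sampled set is well below that of the full grid.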

  18. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    Pence, William D.; White, R. L.; Seaman, R.


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
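    The quantize-with-dithering idea can be sketched with subtractive dithering: a known uniform offset is added before rounding and subtracted on restoration, which decorrelates the quantization error from the signal. This is a generic illustration, not the fpack implementation, which applies its own dithering scheme inside tiled FITS images.

    ```python
    import numpy as np

    def dither_quantize(pixels, q, seed=0):
        """Quantize floating-point pixels into integer levels of size q
        using subtractive dithering. Returns the integer levels (which a
        lossless coder like Rice would then compress) and the restored
        pixel values."""
        rng = np.random.default_rng(seed)
        dither = rng.uniform(-0.5, 0.5, size=np.shape(pixels))
        levels = np.round(pixels / q + dither).astype(np.int64)
        restored = (levels - dither) * q
        return levels, restored
    ```

    The restoration error is bounded by q/2 per pixel, and because the same seeded dither is subtracted, the error behaves like uniform noise rather than contouring, preserving photometric precision on average.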

  19. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Xie Xiang


    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also given. Its hardware design has been implemented in a 0.18 μm CMOS process.

  20. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    ZhiHua Wang


    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also given. Its hardware design has been implemented in a 0.18 μm CMOS process.

  1. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Wei Jin


    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping more details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel non-redundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1-norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only obviously improves the quality of the reconstructed image but also retains more details of the original image.
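    The L1 reconstruction step can be illustrated with the classic one-step iterative shrinkage-thresholding algorithm (ISTA), a simpler cousin of the two-step solver referenced above; `A`, `y`, and `lam` are generic placeholders, not quantities from the paper.

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=200):
        """Iterative shrinkage-thresholding for the l1 problem
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - y)               # gradient of the data term
            z = x - g / L                       # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x
    ```

    With `A` equal to the identity, the iteration's fixed point is exactly the soft-thresholded data, a handy sanity check.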

  2. Bit-plane-channelized hotelling observer for predicting task performance using lossy-compressed images

    Schmanske, Brian M.; Loew, Murray H.


    A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
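    The channelized Hotelling observer detectability can be computed from sample channel responses as follows. The channel templates and data shapes here are generic assumptions; the paper applies the model per subband bit-plane rather than to whole images.

    ```python
    import numpy as np

    def cho_detectability(signal_imgs, noise_imgs, channels):
        """Channelized Hotelling observer SNR (detectability index).

        signal_imgs, noise_imgs : (N, d) flattened image samples with and
                                  without the signal present.
        channels                : (d, c) matrix of channel templates.
        """
        vs = signal_imgs @ channels                 # channel responses
        vn = noise_imgs @ channels
        ds = vs.mean(axis=0) - vn.mean(axis=0)      # mean response difference
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        w = np.linalg.solve(S, ds)                  # Hotelling template
        return float(np.sqrt(ds @ w))               # SNR = sqrt(ds' S^-1 ds)
    ```

    Plotting this SNR against compression ratio gives the detectability-versus-compression curve the abstract describes.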

  3. Edge-based compression of cartoon-like images with homogeneous diffusion

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim;


    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compression...

  4. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Hanxiao Wu


    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A Gaussian mixture distribution is applied in the compressive image space to model the background image and detect the foreground. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results. However, using random Gaussian and Toeplitz phase masks can achieve higher resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
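    The background-modelling idea, maintaining per-coefficient statistics directly on the compressive measurements, can be sketched with a single Gaussian per coefficient. The paper uses a mixture of Gaussians; this simplification, the measurement matrix `phi`, and all thresholds are assumptions for illustration.

    ```python
    import numpy as np

    class CompressiveBackground:
        """Single-Gaussian-per-coefficient background model maintained
        directly on compressive measurements y = phi @ frame."""

        def __init__(self, phi, alpha=0.05, k=3.0):
            self.phi, self.alpha, self.k = phi, alpha, k
            self.mu = None
            self.var = None

        def update(self, frame):
            """Ingest a frame; return True if frame-level motion is detected."""
            y = self.phi @ frame.ravel()
            if self.mu is None:                     # first frame initialises model
                self.mu, self.var = y.copy(), np.ones_like(y)
                return False
            fg = np.abs(y - self.mu) > self.k * np.sqrt(self.var)
            # Adapt background statistics only where no foreground is seen.
            self.mu[~fg] = (1 - self.alpha) * self.mu[~fg] + self.alpha * y[~fg]
            self.var[~fg] = ((1 - self.alpha) * self.var[~fg]
                             + self.alpha * (y[~fg] - self.mu[~fg]) ** 2)
            return bool(fg.mean() > 0.1)            # frame-level motion flag
    ```

    Because the statistics live in the low-dimensional measurement space, the model never needs a reconstructed image, which is the point the abstract makes about detection in the compressive domain.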

  5. Research on application for integer wavelet transform for lossless compression of medical image

    Zhou, Zude; Li, Quan; Long, Quan


    This paper proposes an approach based on the lifting scheme to construct an integer wavelet transform for lossless compression of images. Its application to medical images, a software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that this method improves the compression ratio and resolution.
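
    The reversible integer-to-integer transform built from lifting steps can be illustrated with the LeGall 5/3 wavelet, the classic lifting example used for lossless coding in JPEG 2000; the boundary handling and the sample signal below are illustrative choices, not the authors' implementation:

```python
def fwd53(x):
    """Forward LeGall 5/3 integer wavelet via lifting (x of even length).
    Predict step makes detail coefficients d; update step makes smooth s."""
    n = len(x)
    d = []
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[n - 2]  # symmetric edge
        d.append(x[2 * i + 1] - (left + right) // 2)         # predict
    s = []
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[0]
        s.append(x[2 * i] + (dl + d[i] + 2) // 4)            # update
    return s, d

def inv53(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        dl = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[n - 2]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x

row = [100, 102, 101, 99, 98, 97, 103, 105]   # one image row (illustrative)
s, d = fwd53(row)
```

Because every lifting step is an integer operation subtracted back exactly on the inverse, reconstruction is lossless by construction.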

  6. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    Shi, Yun Q


    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  7. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas


    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at a high degree of reproducibility of their research.

  8. The wavelet/scalar quantization compression standard for digital fingerprint images

    Bradley, J.N.; Brislawn, C.M.


    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
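
    The dead-zone uniform scalar quantizer at the heart of WSQ-style coders can be sketched as follows; the step size and dead-zone width here are illustrative, not the values from the standard's quantization tables:

```python
def quantize(coeffs, step, dead_zone=1.2):
    """Dead-zone uniform scalar quantizer: coefficients inside the
    widened zero bin map to 0; others to signed integer bin indices."""
    half = dead_zone * step / 2.0
    indices = []
    for c in coeffs:
        if abs(c) <= half:
            indices.append(0)
        else:
            sign = 1 if c > 0 else -1
            indices.append(sign * int((abs(c) - half) / step + 1))
    return indices

def dequantize(indices, step, dead_zone=1.2):
    """Reconstruct each coefficient at the centre of its bin."""
    half = dead_zone * step / 2.0
    out = []
    for q in indices:
        if q == 0:
            out.append(0.0)
        else:
            sign = 1 if q > 0 else -1
            out.append(sign * (half + (abs(q) - 0.5) * step))
    return out

coeffs = [0.3, -4.7, 12.2, 0.9, -0.2]   # toy wavelet subband coefficients
idx = quantize(coeffs, step=2.0)
recon = dequantize(idx, step=2.0)
```

The many small coefficients collapse to zero indices, which is what makes the subsequent entropy coding effective.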

  9. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    Musatenko, Yurij S.; Kurashov, Vitalij N.


    The paper presents an improved version of our method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). The Karhunen-Loeve (KL) transform is known to be the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and that this contribution decreases faster than for any other basis. Accordingly, every KL basis function is lossily compressed with Embedded Zerotree Wavelet (EZW) coding, with substantially different loss depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm for constructing the KL basis for compression of correlated image ensembles, which enables our OICKL system to run on common hardware. We also present a procedure for determining the optimal compression losses of the KL basis functions; it uses a modified EZW coder that produces the whole PSNR (bitrate) curve in a single compression pass.
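
    As a minimal illustration of the KL idea (not the paper's fast low-memory construction), the leading KL basis function is the principal eigenvector of the ensemble covariance, which can be found by power iteration; the toy correlated 2-D samples below are an assumption for demonstration:

```python
import random

def covariance(samples):
    """Sample covariance matrix of a list of equal-length tuples."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for s in samples:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (s[i] - mean[i]) * (s[j] - mean[j]) / n
    return cov

def power_iteration(mat, iters=200):
    """Dominant eigenvector of a symmetric matrix by repeated multiply."""
    d = len(mat)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

random.seed(0)
# strongly correlated 2-D "pixel pair" samples along the (1, 1) direction
samples = [(t + random.gauss(0, 0.05), t + random.gauss(0, 0.05))
           for t in (random.uniform(-1, 1) for _ in range(500))]
basis = power_iteration(covariance(samples))
```

For these samples the leading KL basis vector is close to (1, 1)/√2, the axis carrying almost all of the ensemble's variance.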

  10. MR Image Compression Based on Selection of Mother Wavelet and Lifting Based Wavelet

    Sheikh Md. Rabiul Islam


    Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of compression schemes. In this paper we extend commonly used image compression algorithms and compare their performance. For the compression stage, we link different wavelet techniques, using traditional mother wavelets and the lifting-based Cohen-Daubechies-Feauveau wavelet with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess image compression quality. The index is used in place of the traditional Universal Image Quality Index (UIQI) "in one go"; it offers extra information about the distortion between the original and the compressed image compared with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion, and shape distortion. This index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate, using the proposed image quality indexes, that the choice of mother wavelet is very important for achieving superior wavelet compression performance. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open source "BrainWeb: Simulated Brain Database (SBD)".
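
    For reference, the classical UIQI that the proposed index extends combines correlation, luminance, and contrast terms in a single product; a minimal sketch on flattened pixel blocks (the block data is illustrative):

```python
def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik):
    Q = 4*cov*mx*my / ((vx + vy) * (mx^2 + my^2)); 1.0 iff x == y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

block = [10, 12, 9, 14, 11, 13]                 # one image block, flattened
q_same = uiqi(block, block)                     # identical images -> 1.0
q_bright = uiqi(block, [p + 5 for p in block])  # pure luminance shift -> < 1.0
```

In practice the index is computed over a sliding window and averaged; the proposed index in the paper adds a fourth, histogram-shape term on top of this structure.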

  11. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory


    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at a high rate in a pushbroom mode (also called scan-based mode). This process delivers fixed-length data to the mass memory, and the data downlink is likewise performed at a fixed rate. Because of on-board memory limitations and high-rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncating the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment of higher overall energy. In order to correct these areas locally, CNES has studied an "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region-of-Interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.

  12. Optimal Compression of Floating-point Astronomical Images Without Significant Loss of Information

    Pence, W D; Seaman, R


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6-10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values were not dithered. We perform a series of experiments on both synthetic and real...
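
    The subtractive-dither quantization step can be sketched as follows; the step size and seed are illustrative assumptions, and a real FITS tile compressor would follow with Rice coding of the integers:

```python
import random

def quantize_dithered(pixels, step, seed=42):
    """Subtractive dithering: add a known uniform offset before rounding,
    so the quantization error is uniform and signal-independent, which
    protects statistics like the mode and median."""
    rng = random.Random(seed)
    dither = [rng.uniform(-0.5, 0.5) for _ in pixels]
    ints = [round(p / step + d) for p, d in zip(pixels, dither)]
    return ints, dither

def reconstruct(ints, dither, step):
    """Subtract the same (reproducible) dither on decompression."""
    return [(q - d) * step for q, d in zip(ints, dither)]

pixels = [1.37, -0.02, 5.55, 3.14, 2.71]   # toy floating-point pixel values
ints, dither = quantize_dithered(pixels, step=0.5)
recon = reconstruct(ints, dither, step=0.5)
```

In the actual format the dither stream is regenerated from a seed stored in the file header, so only the integers need to be transmitted. The absolute error per pixel is bounded by half the quantization step.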

  13. Design of vector quantizer for image compression using self-organizing feature map and surface fitting.

    Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh


    We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme produces reconstructed images of good quality while achieving compression at low bit rates. Index terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
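
    A minimal stand-in for codebook training (using the simpler generalized Lloyd/LBG iteration rather than the paper's self-organizing feature map) looks like this; the toy 2-D block vectors are assumptions:

```python
def train_codebook(vectors, k, iters=20):
    """Generalized Lloyd (LBG) codebook training: alternate between
    assigning vectors to their nearest codevector and moving each
    codevector to the centroid of its assigned set."""
    codebook = vectors[:k]   # initialize from the first k training vectors
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for v in vectors:
            best = min(range(k),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(v, codebook[i])))
            buckets[best].append(v)
        codebook = [
            tuple(sum(col) / len(b) for col in zip(*b)) if b else codebook[i]
            for i, b in enumerate(buckets)
        ]
    return codebook

# two well-separated clusters of 2-D "mean-removed block" vectors (toy data)
vectors = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1),
           (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
codebook = train_codebook(vectors, k=2)
```

Encoding then transmits only the index of the nearest codevector per block, which is what the Huffman stage in the paper compresses further.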

  14. Application of Fisher Score and mRMR Techniques for Feature Selection in Compressed Medical Images

    Vamsidhar Enireddy


    With the large increase in digital medical images and the variety of medical imaging equipment available for diagnosis, medical professionals increasingly rely on computer-aided techniques both for indexing these images and for retrieving similar images from large repositories. Developing systems that are computationally light without compromising accuracy in a high-dimensional feature space is always challenging. In this paper, the retrieval of compressed medical images is investigated. Images are compressed using a visually lossless compression technique. Shape and texture features are extracted, and the best features are selected using the Fisher score and mRMR techniques. Using these selected features, an RNN trained with BPTT is utilized for classification of the compressed images.
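
    One common two-class form of the Fisher score ranks a feature by between-class separation over within-class spread; a minimal sketch with made-up feature values:

```python
def fisher_score(feature, labels):
    """Two-class Fisher score for one feature:
    (mean difference)^2 / (sum of within-class variances).
    Higher scores indicate more discriminative features."""
    a = [x for x, y in zip(feature, labels) if y == 0]
    b = [x for x, y in zip(feature, labels) if y == 1]
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return (mean(a) - mean(b)) ** 2 / (var(a) + var(b))

labels  = [0, 0, 0, 1, 1, 1]
f_good  = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]   # separates the classes well
f_noise = [1.0, 5.0, 3.0, 1.1, 4.9, 3.1]   # carries little class information
```

Feature selection keeps the top-scoring features, shrinking the input to the classifier without sacrificing discriminative power.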

  15. Improving the Performance of Backpropagation Neural Network Algorithm for Image Compression/Decompression System

    Omaima N. A.


    Problem statement: A problem inherent to any digital image is the large bandwidth required for transmission or storage. This has driven the research area of image compression to develop algorithms that compress images to lower data rates with better quality. Artificial neural networks are becoming attractive in image processing, where high computational performance and parallel architectures are required. Approach: In this research, a three-layered Backpropagation Neural Network (BPNN) was designed for building an image compression/decompression system. The backpropagation algorithm (BP) was used for training the designed BPNN, and several techniques were used to speed up and improve the algorithm, using different BPNN architectures and different values of the learning rate and momentum variables. Results: Experiments were conducted, and the results obtained, such as Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR), were compared across the different BPNN architectures and learning parameters. The efficiency of the designed BPNN comes from reducing the chance of error occurring during transmission of the compressed image through an analog or digital channel. Conclusion: The performance of the designed BPNN image compression system can be increased by modifying the network itself, the learning parameters, and the weights. In practice, the BPNN can compress untrained images, though not with the same performance as for trained images.

  16. Rapid MR spectroscopic imaging of lactate using compressed sensing

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.


    Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. An MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable-density sampling mask for pseudo-random under-sampling of k-space "on the fly". The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to 1X (fully sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in MATLAB, and the fidelity of the CS reconstructions was assessed in terms of peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in scan time of up to 90% in vitro and up to 80% in vivo, with negligible loss of information compared with the fully sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
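
    A variable-density undersampling mask of the kind described (fully sampled k-space centre, randomly thinned periphery) might be generated like this; the sizes, centre fraction, and seed are illustrative assumptions, not the paper's acquisition parameters:

```python
import random

def vd_mask(n, center_frac=0.125, accel=4, seed=7):
    """Variable-density 1-D undersampling mask: the k-space centre is
    fully sampled; outer lines are kept at random with a probability
    chosen so the mean number of kept lines hits the target acceleration."""
    rng = random.Random(seed)
    mask = [False] * n
    c0 = int(n * (0.5 - center_frac / 2))
    c1 = int(n * (0.5 + center_frac / 2))
    for i in range(c0, c1):          # dense low-frequency core
        mask[i] = True
    target = n / accel               # total lines we want on average
    p_outer = max(0.0, (target - (c1 - c0)) / (n - (c1 - c0)))
    for i in range(n):
        if not mask[i] and rng.random() < p_outer:
            mask[i] = True
    return mask

mask = vd_mask(256, accel=4)         # ~4X acceleration
kept = sum(mask)
```

Keeping the centre dense preserves the bulk of the signal energy, while the random periphery produces the incoherent aliasing that CS reconstruction can remove.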

  17. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Christian Schou Oxvig


    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at a high degree of reproducibility of their research.

  18. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Kan Ren


    We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior on image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. A set of multiscale dictionaries is then learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  19. Method for low-light-level image compression based on wavelet transform

    Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa


    Low-light-level (LLL) image communication has received more and more attention in the night vision field as image communication has grown in importance. LLL image compression is the key to LLL image wireless transmission. The LLL image differs from the common visible-light image and has its own special characteristics. For still-image compression, we propose in this paper a wavelet-based compression algorithm suitable for LLL images. Because all the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm. We encode the lowest-frequency subband data using DPCM (Differential Pulse Code Modulation), so all the information in the lowest-frequency subband is kept. Considering the characteristics of the HVS (Human Visual System) and of LLL images, we first detect the edge contours in the high-frequency subband images using a template and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are set to avoid redundant scanning and duplicate encoding of significant wavelet coefficients in the above coding. The experimental results show that the decoded image quality is good and that the encoding time is shorter than for the original EZW algorithm.
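
    The DPCM coding of the lowest-frequency subband can be sketched as follows (the subband samples are illustrative); differences cluster near zero, which is what makes them cheap to entropy-code:

```python
def dpcm_encode(samples):
    """DPCM: transmit the first sample, then the difference of each
    sample from its predecessor; exactly invertible for integer data."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def dpcm_decode(codes):
    """Rebuild the samples by accumulating the differences."""
    out = [codes[0]]
    for d in codes[1:]:
        out.append(out[-1] + d)
    return out

band = [120, 121, 123, 122, 125, 130, 129]   # lowest-frequency subband row
codes = dpcm_encode(band)
```

The reconstructed subband is bit-exact, satisfying the requirement that all lowest-frequency information be kept.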

  20. Spectral compression algorithms for the analysis of very large multivariate images

    Keenan, Michael R.


    A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.

  1. Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.

    Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen


    Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which severely degrades visual quality. Traditional denoising methods, which usually assume noise that is independent and identically distributed, are not suitable for content-dependent compression noise. In this paper, we propose a unified framework for content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate compression noise reduction as a low-rank decomposition problem: compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to the compression noise level and the singular values. We analyze the relationship of image statistics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from statistics in both domains jointly with the quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only noticeably improves the quality of compressed images as a post-processing step, but is also helpful for computer vision tasks as a pre-processing method.
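
    The singular-value soft-thresholding at the core of the method can be sketched on a list of singular values assumed to come from the SVD of one patch group; the values and the fixed threshold are illustrative, whereas the paper adapts the threshold per group:

```python
def soft_threshold(values, tau):
    """Soft-threshold operator on singular values: shrink each value
    toward zero by tau and zero out anything below the threshold,
    keeping only the dominant (low-rank) structure."""
    return [max(v - tau, 0.0) for v in values]

# singular values of one group of similar patches (assumed precomputed):
# a few large values carry structure, the small tail is mostly noise
sv = [19.4, 7.2, 1.1, 0.8, 0.3]
denoised = soft_threshold(sv, tau=1.0)
```

Reassembling the patch matrix from the shrunken singular values yields the low-rank, noise-suppressed estimate of the group.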

  2. Adjustable lossless image compression based on a natural splitting of an image into drawing, shading, and fine-grained components

    Novik, Dmitry A.; Tilton, James C.


    The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.

  3. Lossless compression of medical images using Burrows-Wheeler Transformation with Inversion Coder.

    Preston, Collin; Arnavut, Ziya; Koc, Basar


    Medical imaging is a quickly growing field in which highly efficient lossless compression algorithms are necessary to reduce the storage space and transmission rates for large, high-resolution medical images. Because medical imaging cannot tolerate the loss of vital information, lossy compression cannot be used and lossless compression is imperative. While several authors have investigated lossless compression of medical images, the Burrows-Wheeler Transformation with an Inversion Coder (BWIC) has not been examined. Our investigation shows that BWIC runs in linear time and yields better compression rates than well-known image coders such as JPEG-LS and JPEG 2000.
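
    A minimal forward/inverse Burrows-Wheeler transform (the O(n²) textbook construction, not the linear-time suffix-based one a practical coder would use; a NUL sentinel is assumed absent from the input):

```python
def bwt(s):
    """Burrows-Wheeler transform: last column of the sorted rotations
    of s plus an end sentinel. Groups similar contexts together, which
    a follow-up stage (e.g. an inversion coder) exploits."""
    s = s + "\0"                      # sentinel, smaller than any byte used
    table = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in table)

def ibwt(r):
    """Invert the BWT by repeatedly prepending the transform column
    and re-sorting until the full rotation table is rebuilt."""
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(c + row for c, row in zip(r, table))
    row = next(row for row in table if row.endswith("\0"))
    return row[:-1]

data = "banana"
transformed = bwt(data)
```

The transform itself stores no extra information beyond the sentinel, yet its output is far more compressible because equal symbols cluster into runs.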

  4. Lossless compression of JPEG2000 whole slide images is not required for diagnostic virtual microscopy.

    Kalinski, Thomas; Zwönitzer, Ralf; Grabellus, Florian; Sheu, Sien-Yi; Sel, Saadettin; Hofmann, Harald; Roessner, Albert


    The use of lossy compression in medical imaging is controversial, although it is inevitable to reduce large data amounts. In contrast with lossy compression, lossless compression does not impair image quality. In addition to our previous studies, we evaluated virtual 3-dimensional microscopy using JPEG2000 whole slide images of gastric biopsy specimens with or without Helicobacter pylori gastritis using lossless compression (1:1) or lossy compression with different compression levels: 5:1, 10:1, and 20:1. The virtual slides were diagnosed in a blinded manner by 3 pathologists using the updated Sydney classification. The results showed no significant differences in the diagnosis of H pylori between the different levels of compression in virtual microscopy. We assume that lossless compression is not required for diagnostic virtual microscopy. The limits of lossy compression in virtual microscopy without a loss of diagnostic quality still need to be determined. Analogous to the processes in radiology, recommendations for the use of lossy compression in diagnostic virtual microscopy have to be worked out by pathology societies.

  5. Improved cuckoo search with particle swarm optimization for classification of compressed images

    Vamsidhar Enireddy; Reddi Kiran Kumar


    The need for a general-purpose Content-Based Image Retrieval (CBIR) system for huge image databases has attracted information-technology researchers and institutions to the development of CBIR techniques. These techniques include image feature extraction, segmentation, feature mapping, representation, semantics, indexing and storage, and image similarity-distance measurement, making CBIR system development a challenge. Since medical images are large, running to megabits of data, they are compressed to reduce their size for storage and transmission. This paper investigates the medical image retrieval problem for compressed images, and an improved image classification algorithm for CBIR is proposed. In the proposed method, raw images are compressed using the Haar wavelet. Features are extracted using a Gabor filter and a Sobel edge detector, and the extracted features are classified using a Partial Recurrent Neural Network (PRNN). Since training the parameters of a neural network is NP-hard, a hybrid Particle Swarm Optimization (PSO) - Cuckoo Search (CS) algorithm is proposed to optimize the learning rate of the neural network.

  6. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Alwan, Nuha A. S.; Zahir M. Hussain


    Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this w...

  7. Reconstruction-Free Action Inference from Compressive Imagers.

    Kulkarni, Kuldeep; Turaga, Pavan


    Persistent surveillance from camera networks, such as at parking lots, UAVs, etc., often results in large amounts of video data, creating significant challenges for inference in terms of storage, communication, and computation. Compressive cameras have emerged as a potential solution to the data deluge in such applications. However, inference tasks such as action recognition require high-quality features, which implies reconstructing the original video data. Much work in compressive sensing (CS) theory is geared towards solving this reconstruction problem, where state-of-the-art methods are computationally intensive and provide low-quality results at high compression rates. Thus, reconstruction-free methods for inference are much desired. In this paper, we propose reconstruction-free methods for action recognition from compressive cameras at high compression ratios of 100 and above. Recognizing actions directly from CS measurements requires features that are mostly nonlinear and thus not directly computable from the measurements. This leads us to search for properties that are preserved in compressive measurements. To this end, we propose the use of spatio-temporal smashed filters, which are compressive-domain versions of pixel-domain matched filters. We conduct experiments on publicly available databases and show that one can obtain recognition rates comparable to the oracle method in the uncompressed setup, even at high compression ratios.
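
    A toy sketch of the idea behind smashed filters: random compressive measurements approximately preserve inner products, so a matched filter's response can be evaluated in the measurement domain without reconstruction. The signal sizes, seeds, and the ±1 measurement matrix are assumptions, and the 2:1 compression here is far milder than the paper's 100:1 regime:

```python
import random

def measurement_matrix(rows, cols, seed=3):
    """Random +/-1 compressive measurement matrix, scaled so that
    inner products are preserved in expectation."""
    rng = random.Random(seed)
    scale = rows ** 0.5
    return [[rng.choice((-1.0, 1.0)) / scale for _ in range(cols)]
            for _ in range(rows)]

def project(mat, vec):
    return [sum(a * b for a, b in zip(row, vec)) for row in mat]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n, m = 512, 256                                      # ambient / measurement dims
rng = random.Random(1)
template = [rng.gauss(0, 1) for _ in range(n)]       # pixel-domain matched filter
frame = [t + rng.gauss(0, 0.3) for t in template]    # noisy frame containing it
phi = measurement_matrix(m, n)
corr_pixel = dot(template, frame)                    # pixel-domain response
corr_smashed = dot(project(phi, template), project(phi, frame))  # smashed response
```

The smashed response tracks the pixel-domain response up to a concentration error that shrinks as the number of measurements grows.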

  8. A novel color image compression algorithm using the human visual contrast sensitivity characteristics

    Yao, Juncai; Liu, Guizhong


    In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency-spectrum coefficients of the images by incorporating the contrast sensitivity characteristics of the HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) at approximately the same compression ratio are increased by 2.78% and 5.48%, respectively, compared with JPEG (Joint Photographic Experts Group) compression. The results indicate that the proposed compression algorithm is feasible and effective in achieving a higher compression ratio while ensuring encoding and image quality, fully meeting the needs of storage and transmission of color images in daily life.
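
    The Huffman entropy-coding stage can be sketched as follows on a toy stream of quantized coefficients (the symbol stream is illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) from counts
    by repeatedly merging the two least frequent subtrees."""
    counts = Counter(data)
    if len(counts) == 1:               # degenerate single-symbol case
        return {next(iter(counts)): "0"}
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(sorted(counts.items()))]
    heapq.heapify(heap)
    tick = len(heap)                   # tiebreaker keeps tuples comparable
    while len(heap) > 1:
        n0, _, c0 = heapq.heappop(heap)
        n1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c0.items()}
        merged.update({s: "1" + code for s, code in c1.items()})
        heapq.heappush(heap, (n0 + n1, tick, merged))
        tick += 1
    return heap[0][2]

def encode(data, codes):
    return "".join(codes[s] for s in data)

def decode(bits, codes):
    """Prefix-free scan: grow the current codeword until it matches."""
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

quantized = [0, 0, 0, 0, 1, 1, 2, 0, 1, 3]   # toy quantized DCT symbols
codes = huffman_codes(quantized)
bits = encode(quantized, codes)
```

Frequent symbols (here the zeros produced by coarse quantization) get the shortest codewords, which is where the rate saving comes from.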


  10. The Implementation of Mirror-Image Effect in MPEG-2 Compressed Video

    NI Qiang; ZHOU Lei; ZHANG Wen-jun


    Straightforward techniques for spatial-domain digital video editing (DVE) of compressed video via decompression and recompression are computationally expensive. In this paper, a novel algorithm is proposed for mirror-image special-effect editing in compressed video without full-frame decompression and motion estimation. The results show that, with reduced computational complexity, the quality of video edited in the compressed domain remains close to the quality of video edited in the uncompressed domain at the same bit rate.

  11. Multiwavelet and Estimation by Interpolation AnalysisBased Hybrid Color Image Compression

    Ali Hussien Miry


    Nowadays, still images are used everywhere in the digital world, and shortages of storage capacity and transmission bandwidth make efficient compression solutions essential. A revolutionary mathematical tool, the wavelet transform, has already shown its power in image processing. The major topic of this paper is to improve the compression of still images with multiwavelets by estimating the high-frequency multiwavelet subband coefficients by interpolation, instead of sending all multiwavelet coefficients. Good results are obtained when comparing the proposed approach with other compression methods.

  12. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud


    In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
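
    The lossless-by-residual construction can be sketched with any lossy stage; here a crude uniform quantizer stands in for the wavelet-fractal coder (an assumption purely for illustration):

```python
def lossy_layer(pixels, step=8):
    """Crude lossy stage standing in for the wavelet-fractal coder:
    coarse uniform quantization of integer pixel values."""
    return [step * round(p / step) for p in pixels]

def hybrid_encode(pixels, step=8):
    """Lossy base layer plus the exact residual; the residual is small
    in magnitude, so it entropy-codes cheaply (Huffman in the paper)."""
    base = lossy_layer(pixels, step)
    residual = [p - b for p, b in zip(pixels, base)]
    return base, residual

def hybrid_decode(base, residual):
    """Adding the residual back recovers the original exactly,
    which is why the scheme reports infinite PSNR."""
    return [b + r for b, r in zip(base, residual)]

pixels = [17, 203, 96, 45, 250, 128]   # toy radiographic pixel row
base, residual = hybrid_encode(pixels)
restored = hybrid_decode(base, residual)
```

The better the lossy stage, the smaller the residual entropy, so the overall lossless rate improves even though the final output is bit-exact.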

  13. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    Martins, Bo; Forchhammer, Søren Otto


    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general...... to the specialized soft pattern matching techniques which work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. We demonstrate that both single pass refinement coding and multiple pass refinement coding yielding progressive build-up may be carried out very efficiently....... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG...

  14. Lossless compression of hyperspectral images based on the prediction error block

    Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao


    A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which is used to compress spaceborne hyperspectral data effectively. In order to make full use of the intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar-coset-based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of a whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, which makes it suitable for on-board compression of hyperspectral images.

  15. Improvement in Image Compression Ratio Using an Artificial Neural Network Technique

    Shabbir Ahmad


    Compression of data in any form is a large and active field, as well as a big business. This paper presents a neural-network-based technique that may be applied to data compression. The technique breaks down large images into smaller windows and eliminates redundant information. Finally, it uses a neural network trained by direct solution methods. Conventional techniques such as Huffman coding, the Shannon-Fano method, the LZ and LZ-77 methods, and run-length coding are discussed, as well as more recent methods for the compression of data and images. Intelligent methods for data compression are reviewed, including the use of backpropagation and Kohonen neural networks. The proposed technique has been implemented in C on the SP2 and tested on digital mammograms and other images. The results obtained are presented in this paper.

  16. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro


    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data.

  17. Image data compression using a new floating-point digital signal processor.

    Siegel, E L; Templeton, A W; Hensley, K L; McFadden, M A; Baxter, K G; Murphey, M D; Cronin, P E; Gesell, R G; Dwyer, S J


    A new dual-ported, floating-point, digital signal processor has been evaluated for compressing 512 and 1,024 digital radiographic images using a full-frame, two-dimensional, discrete cosine transform (2D-DCT). The floating point digital signal processor operates at 49.5 million floating point instructions per second (MFLOPS). The level of compression can be changed by varying four parameters in the lossy compression algorithm. Throughput times were measured for both 2D-DCT compression and decompression. For a 1,024 x 1,024 x 10-bit image with a compression ratio of 316:1, the throughput was 75.73 seconds (compression plus decompression throughput). For a digital fluorography 1,024 x 1,024 x 8-bit image and a compression ratio of 26:1, the total throughput time was 63.23 seconds. For a computed tomography image of 512 x 512 x 12 bits and a compression ratio of 10:1 the throughput time was 19.65 seconds.
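
    The full-frame 2D-DCT evaluated above can be sketched with a separable orthonormal DCT matrix. The image size, the synthetic test image, and the keep-the-largest-10%-of-coefficients rule are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n): rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

n = 64
D = dct_matrix(n)
rng = np.random.default_rng(1)
# Smooth synthetic "radiograph": a low-frequency ramp plus mild noise.
y, x = np.mgrid[0:n, 0:n]
image = 0.5 * (x + y) + rng.normal(0, 1, (n, n))

coeffs = D @ image @ D.T                 # full-frame 2D-DCT (separable)
# Lossy step: keep only the largest 10% of coefficients by magnitude.
thresh = np.quantile(np.abs(coeffs), 0.90)
kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
restored = D.T @ kept @ D                # inverse transform

rmse = np.sqrt(np.mean((restored - image) ** 2))
print(f"kept {np.count_nonzero(kept)} of {n*n} coefficients, RMSE = {rmse:.2f}")
```

    Because D is orthonormal, the inverse is just the transpose pair, which is why the full-frame 2D-DCT maps cleanly onto a matrix-multiply-oriented DSP.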

  18. Primary hypertension and neurovascular compression: a meta-analysis of magnetic resonance imaging studies.

    Boogaarts, H.D.; Menovsky, T.; Vries, J. de; Verbeek, A.L.M.; Lenders, J.W.M.; Grotenhuis, J.A.


    OBJECT: Several studies have suggested that neurovascular compression (NVC) of the brainstem might be a cause of hypertension. Because this compression syndrome might be demonstrated by MR imaging studies, several authors have tried to assess its prevalence in small series of patients with hypertension.

  19. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing


    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
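
    The paper's central quantity — the variance of 8×8 blocks before and after DCT-coefficient quantization — can be checked numerically on a toy image. This is only a qualitative illustration: a single uniform quantization step stands in for the JPEG quantization table, and the smooth test image is an assumption.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def blockwise(img, f):
    """Apply f to each non-overlapping 8x8 block."""
    out = img.copy()
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            out[i:i+8, j:j+8] = f(img[i:i+8, j:j+8])
    return out

D = dct_matrix(8)
q = 8.0                                       # stand-in uniform quantization step

def jpeg_like(block):
    c = D @ block @ D.T
    c = np.round(c / q) * q                   # quantize/dequantize DCT coefficients
    return D.T @ c @ D

def mean_block_variance(img):
    vs = [img[i:i+8, j:j+8].var()
          for i in range(0, img.shape[0], 8)
          for j in range(0, img.shape[1], 8)]
    return float(np.mean(vs))

rng = np.random.default_rng(2)
y, x = np.mgrid[0:64, 0:64]
image = 0.5 * (x + y) + rng.normal(0, 1, (64, 64))

v_before = mean_block_variance(image)
v_after = mean_block_variance(blockwise(image, jpeg_like))
print(f"mean 8x8 block variance: {v_before:.2f} -> {v_after:.2f}")
```

    On this smooth image most AC coefficients fall below half the quantization step and are zeroed, so the mean block variance drops sharply — the behavior the paper derives analytically from Laplacian coefficient models.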

  20. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Gaudeau, Y


    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions, as well as the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from the compression ratios, which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. So, we propose a new lossy coding scheme based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (computer-aided decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first bit allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  1. A method of image compression based on lifting wavelet transform and modified SPIHT

    Lv, Shiliang; Wang, Xiaoqian; Liu, Jinguo


    In order to improve the efficiency of remote sensing image data storage and transmission, we present an image compression method based on the lifting scheme and a modified SPIHT (set partitioning in hierarchical trees) algorithm, realized as an FPGA design that improves SPIHT and enhances wavelet-transform image compression. The lifting discrete wavelet transform (DWT) architecture has been selected to exploit the correlation among the image pixels. In addition, we provide a study of which storage elements are required for the wavelet coefficients. We demonstrate the method on the Lena image using the 5/3 lifting scheme.
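
    One level of integer lifting (here the standard LeGall 5/3 kernel, shown in 1-D) illustrates why the lifting DWT suits hardware implementation: it needs only adds and shifts, halves the working data in place, and reconstructs exactly. The code is a sketch of the generic 5/3 lifting steps, not the paper's FPGA design.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the integer LeGall 5/3 lifting DWT (1-D, even length)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2),
    # with the last even sample repeated at the right edge.
    right = np.append(even[1:], even[-1])
    d = odd - ((even + right) >> 1)
    # Update step: approx = even + floor((d_left + d_right + 2) / 4),
    # with the first detail repeated at the left edge.
    left_d = np.insert(d[:-1], 0, d[0])
    s = even + ((left_d + d + 2) >> 2)
    return s, d

def lifting_53_inverse(s, d):
    """Exactly undo the update, then the predict step."""
    left_d = np.insert(d[:-1], 0, d[0])
    even = s - ((left_d + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

rng = np.random.default_rng(3)
signal = rng.integers(0, 256, 64)
s, d = lifting_53_forward(signal)
assert np.array_equal(lifting_53_inverse(s, d), signal)   # perfect reconstruction
print("approx energy:", int((s**2).sum()), " detail energy:", int((d**2).sum()))
```

    Because every lifting step is inverted by the identical integer operation with the sign flipped, reconstruction is exact regardless of rounding — the property that makes lifting attractive for lossless on-board compression.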

  2. Improved successive refinement for wavelet-based embedded image compression

    Creusere, Charles D.


    In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an `up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a `down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--`up', `down', or `exact'. The new `exact' symbol tells the decoder that its current approximation of a wavelet coefficient is exact to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems have inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
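
    The three-symbol refinement can be sketched directly from the description above. The interval bounds, the target precision, and the midpoint reconstruction rule are illustrative assumptions; the abstract does not give them.

```python
def refinement_symbols(coeff, lo, hi, precision):
    """Three-symbol refinement: halve [lo, hi) until its width reaches `precision`,
    stopping early with 'exact' when the midpoint already equals the coefficient."""
    syms = []
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if coeff == mid:
            syms.append("exact")        # decoder stops refining this coefficient
            return syms, mid
        if coeff > mid:
            syms.append("up")
            lo = mid
        else:
            syms.append("down")
            hi = mid
    return syms, (lo + hi) / 2.0

def conventional_symbols(coeff, lo, hi, precision):
    """Conventional two-symbol (up/down) refinement, for comparison."""
    syms = []
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if coeff >= mid:
            syms.append("up")
            lo = mid
        else:
            syms.append("down")
            hi = mid
    return syms, (lo + hi) / 2.0

# A coefficient found significant in the [8, 16) band whose value is exactly 12:
syms3, approx3 = refinement_symbols(12.0, 8.0, 16.0, 0.5)
syms2, approx2 = conventional_symbols(12.0, 8.0, 16.0, 0.5)
print(syms3, "->", approx3)
print(syms2, "->", approx2)
```

    For this coefficient the three-symbol scheme emits a single `exact` symbol where the two-symbol scheme needs four refinement passes — the source of the execution-time savings claimed in the abstract.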

  3. Lossy compression of floating point high-dynamic range images using JPEG2000

    Springer, Dominic; Kaup, Andre


    In recent years, a new technique called High Dynamic Range (HDR) has gained attention in the image processing field. By representing pixel values with floating point numbers, recorded images can hold significantly more luminance information than ordinary integer images. This paper focuses on the realization of a lossy compression scheme for HDR images. The JPEG2000 standard is used as a basic component and is efficiently integrated into the compression chain. Based on a detailed analysis of the floating point format and the human visual system, a concept for lossy compression is worked out and thoroughly optimized. Our scheme outperforms all other existing lossy HDR compression schemes and shows superior performance both at low and high bitrates.

  4. Visually Improved Image Compression by Combining EZW Encoding with Texture Modeling using Huffman Encoder

    Vinay U. Kale


    This paper proposes a technique for image compression which uses the Wavelet-based Image/Texture Coding Hybrid (WITCH) scheme [1] in combination with a Huffman encoder. It implements a hybrid coding approach, while nevertheless preserving the features of progressive and lossless coding. The hybrid scheme was designed to encode the structural image information with the Embedded Zerotree Wavelet (EZW) encoding algorithm [2] and the stochastic texture in a model-based manner, and this encoded data is then compressed using a Huffman encoder. The scheme proposed here achieves superior subjective quality while increasing the compression ratio by more than a factor of three or even four. With this technique, it is possible to achieve compression ratios as high as 10 to 12, but with some minor distortions in the encoded image.

  5. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Duong, Tuan A. (Inventor)


    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  6. Datapath system for multiple electron beam lithography systems using image compression

    Yang, Jeehong; Savari, Serap A.; Harris, H. Rusty


    The datapath throughput of electron beam lithography systems can be improved by applying lossless image compression to the layout images and using an electron beam writer that contains a decoding circuit packed in single silicon to decode the compressed image on-the-fly. In our past research, we had introduced Corner2, a lossless layout image compression algorithm that achieved significantly better performance in compression ratio, encoding/decoding speed, and decoder memory requirement than Block C4. However, it assumed a somewhat different writing strategy from those currently suggested by multiple electron beam (MEB) system designers. The Corner2 algorithm is modified so that it can support the writing strategy of an MEB system.

  7. Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) for Medical Images

    Zain, Jasni Mohamad


    This paper proposes strict authentication watermarking for medical images. In this scheme, we define the region of interest (ROI) by taking the smallest rectangle around the image. The watermark is generated by hashing the area of interest. The embedding region is chosen outside the region of interest so as to preserve that area from the distortion resulting from watermarking. The strict authentication watermarking is robust to some degree of JPEG compression (SAW-JPEG), which is reviewed. To embed a watermark in the spatial domain, we have to make sure that the embedded watermark will survive the JPEG quantization process. The watermarking scheme, including the data embedding, extracting and verifying procedures, is presented. Experimental results showed that such a scheme can embed and extract the watermark at a high compression rate. The watermark is robust to a high compression rate of up to 90.6%. The JPEG image-quality threshold is 60 for least-significant-bit embedding. The image quality ...

  8. SVD application in image and data compression - Some case studies in oceanography (Developed in MATLAB)

    Murty, T.V.R.; Rao, M.M.M.; SuryaPrakash, S.; Chandramouli, P.; Murthy, K.S.R.

    An integrated, user-friendly, interactive Ocean Application Package has been developed utilizing the well-known statistical technique called Singular Value Decomposition (SVD) to achieve image and data compression in the MATLAB environment...
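
    The compression mechanism referred to here, rank-k SVD truncation, takes only a few lines in any matrix environment; the sketch below uses NumPy rather than MATLAB, and the synthetic "field" is an illustrative stand-in for oceanographic data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic smooth "oceanographic field": low-rank structure plus weak noise.
t = np.linspace(0, 1, 100)
field = (np.outer(np.sin(2 * np.pi * t), np.cos(2 * np.pi * t))
         + 0.1 * np.outer(t, t)
         + 0.01 * rng.normal(size=(100, 100)))

U, s, Vt = np.linalg.svd(field, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation in the least-squares sense (Eckart-Young)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

errs = {}
for k in (2, 5, 10):
    errs[k] = np.linalg.norm(field - rank_k(k)) / np.linalg.norm(field)
    stored = k * (U.shape[0] + Vt.shape[1] + 1)    # k singular triplets
    print(f"rank {k}: rel. error {errs[k]:.3f}, "
          f"{stored} stored values vs {field.size} ({field.size / stored:.1f}:1)")
```

    Storing only the leading k singular triplets replaces the full matrix, and the relative error at each k is exactly the energy in the discarded singular values — which is why smooth geophysical fields compress so well this way.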

  9. Analysis of discrete-to-discrete imaging models for iterative tomographic image reconstruction and compressive sensing

    Jørgensen, Jakob H; Pan, Xiaochuan


    Discrete-to-discrete imaging models for computed tomography (CT) are becoming increasingly ubiquitous as the interest in iterative image reconstruction algorithms has heightened. Despite this trend, all the intuition for algorithm and system design derives from analysis of continuous-to-continuous models such as the X-ray and Radon transform. While the similarity between these models justifies some crossover, questions such as what are sufficient sampling conditions can be quite different for the two models. This sampling issue is addressed extensively in the first half of the article using singular value decomposition analysis for determining sufficient number of views and detector bins. The question of full sampling for CT is particularly relevant to current attempts to adapt compressive sensing (CS) motivated methods to application in CT image reconstruction. The second half goes in depth on this subject and discusses the link between object sparsity and sufficient sampling for accurate reconstruction. Par...

  10. A VLSI Processor Design of Real-Time Data Compression for High-Resolution Imaging Radar

    Fang, W.


    For the high-resolution imaging radar systems, real-time data compression of raw imaging data is required to accomplish the science requirements and satisfy the given communication and storage constraints. The Block Adaptive Quantizer (BAQ) algorithm and its associated VLSI processor design have been developed to provide a real-time data compressor for high-resolution imaging radar systems.
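
    A block adaptive quantizer in the spirit of the BAQ can be sketched as follows. The per-block gain estimate from the mean magnitude assumes Gaussian raw samples (as SAR raw data approximately is), and the block size, bit depth, and ±3σ range are illustrative choices, not the flight algorithm's parameters.

```python
import numpy as np

def baq(data, block=64, bits=4):
    """Toy block adaptive quantizer: estimate a per-block sigma from the mean
    magnitude (for Gaussian samples E|x| = sigma*sqrt(2/pi)), then uniformly
    quantize the normalized samples to `bits` bits over roughly +/-3 sigma."""
    levels = 2 ** bits
    step = 6.0 / levels                      # cover about +/-3 sigma
    out = np.empty(data.size, dtype=np.float64)
    for start in range(0, data.size, block):
        seg = data[start:start + block]
        sigma = np.mean(np.abs(seg)) * np.sqrt(np.pi / 2)
        q = np.clip(np.round(seg / (sigma * step)),
                    -(levels // 2), levels // 2 - 1)
        out[start:start + block] = q * step * sigma   # dequantize with block gain
    return out

rng = np.random.default_rng(5)
nblocks, block = 16, 64
gains = np.repeat(np.logspace(0, 2, nblocks), block)   # 40 dB dynamic range
raw = rng.normal(0, 1, nblocks * block) * gains

quantized = baq(raw, block=block, bits=4)
snr_db = 10 * np.log10(np.var(raw) / np.mean((raw - quantized) ** 2))
print(f"4-bit BAQ SNR over a 40 dB dynamic range: {snr_db:.1f} dB")
```

    Because the gain is re-estimated per block, the quantizer tracks the signal's dynamic range, and only the small integer codes plus one gain per block need to be transmitted.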

  11. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu


    In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 outperforms the other methods by having the best CR. In the lossy case, JPEG2000 and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in terms of mean square error (MSE) as the CR increases, but also in compression time. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved and the morphological and biochemical parameters can still be within the range of reported values.

  12. An Approach to Integer Wavelet Transform for Medical Image Compression in PACS


    We study an approach to integer wavelet transform for lossless compression of medical image in medical picture archiving and communication system (PACS). By lifting scheme a reversible integer wavelet transform is generated, which has the similar features with the corresponding biorthogonal wavelet transform. Experimental results of the method based on integer wavelet transform are given to show better performance and great applicable potentiality in medical image compression.

  13. Image Compression Using Wavelet Transform Based on the Lifting Scheme and its Implementation

    A Alice Blessie


    This paper presents image compression using the 9/7 wavelet transform based on the lifting scheme, simulated using the ISE simulator and implemented on an FPGA. The 9/7 wavelet transform performs well for the low-frequency components. An FPGA implementation is chosen because of its partial reconfigurability. The project mainly aims at retrieving smooth images without any loss. This design may be used for both lossy and lossless compression.

  14. A New Index Compression Algorithm for Efficient VQ Encoding of Images

    J.FENG; M.-Y.LEE


    In this paper, a new index compression algorithm is proposed for efficient VQ coding of images. The proposed algorithm tries to exploit the high correlation between neighboring image blocks in the VQ index domain in order to achieve further compression. Simulation results show that the proposed scheme can achieve a 40% bit-rate reduction without introducing any extra coding error when compared with a standard VQ system.
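
    The idea of exploiting inter-block index correlation can be illustrated with a deliberately simple stand-in scheme (one flag bit per index, full index only on a mismatch with the left neighbor); the paper's actual coding method is not specified in the abstract, and the index statistics below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic VQ index map for a 256-entry codebook with strong left-neighbor
# correlation, mimicking smooth image regions mapping to repeated codewords.
h, w = 64, 64
idx = np.zeros((h, w), dtype=np.int64)
for y in range(h):
    for x in range(w):
        if x and rng.random() < 0.7:
            idx[y, x] = idx[y, x - 1]       # repeat the left neighbor's index
        else:
            idx[y, x] = rng.integers(0, 256)

# Plain VQ: 8 bits per index. Illustrative index compression: 1 flag bit per
# index, plus the 8-bit index only when it differs from the left neighbor.
plain_bits = idx.size * 8
match = (idx[:, 1:] == idx[:, :-1])
coded_bits = idx.size * 1 + (idx.size - int(match.sum())) * 8
print(f"bit-rate reduction: {100 * (1 - coded_bits / plain_bits):.1f}%")
```

    The scheme is lossless in the index domain — the decoder reproduces every index exactly — which matches the paper's claim of bit-rate reduction "without introducing any extra coding error."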

  15. Vector Quantization Techniques For Partial Encryption of Wavelet-based Compressed Digital Images

    H. A. Younis


    The use of image communication has increased in recent years. In this paper, new partial encryption schemes are used to encrypt only part of the compressed data. Only 6.25-25% of the original data is encrypted for four different images, resulting in a significant reduction in encryption and decryption time. In the compression step, an advanced clustering analysis technique (Fuzzy C-means (FCM)) is used. In the encryption step, the permutation cipher is used. The effect of the number of different clusters is studied. The proposed partial encryption schemes are fast and secure, and do not reduce the compression performance of the underlying selected compression methods, as shown in the experimental results and conclusion.

  16. Two-band hybrid FIR-IIR filters for image compression.

    Lin, Jianyu; Smith, Mark J T


    Two-band analysis-synthesis filters or wavelet filters are used pervasively for compressing natural images. Both FIR and IIR filters have been studied in this context, the former being the most popular. In this paper, we examine the compression performance of these two-band filters in a dyadic wavelet decomposition and attempt to isolate features that contribute most directly to the performance gain. Then, employing the general exact reconstruction condition, hybrid FIR-IIR analysis-synthesis filters are designed to maximize compression performance for natural images. Experimental results are presented that compare performance with the popular biorthogonal filters in terms of peak SNR, subjective quality, and computational complexity.

  17. Remote sensing image compression for deep space based on region of interest

    王振华; 吴伟仁; 田玉龙; 田金文; 柳健


    A major limitation for deep space communication is the limited bandwidth available. The downlink rate using X-band from an L2 halo orbit is estimated to be only 5.35 GB/d. However, the Next Generation Space Telescope (NGST) will produce about 600 GB/d. Clearly the volume of data to downlink must be reduced by at least a factor of 100. One resolution is to encode the data using very-low-bit-rate image compression techniques. A very-low-bit-rate image compression method based on region of interest (ROI) has been proposed for deep space images. Conventional image compression algorithms, which encode the original data without any data analysis, can maintain very good detail but do not achieve high compression rates, while modern image compression with semantic organization can reach compression rates in the hundreds but cannot maintain much detail. Algorithms based on region of interest, inheriting from the two previous approaches, have good semantic features and high fidelity, and are therefore suitable for applications at a low bit rate. The proposed method extracts the region of interest by texture analysis after a wavelet transform and attains optimal local quality with bit-rate control. The results show that our method can maintain more detail in the ROI than a general image compression algorithm (SPIHT), at the cost of sacrificing the quality of other, uninterested areas.

  18. Comparison of Open Source Compression Algorithms on VHR Remote Sensing Images for Efficient Storage Hierarchy

    Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.


    The high resolution of modern satellite imagery brings with it a fundamental problem: the large amount of telemetry data to be stored after the downlink operation. Moreover, after the post-processing and image-enhancement steps, file sizes increase even more, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing the raw data and the various levels of processed data is a necessity for archiving stations to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open-source programs supporting the related compression algorithms were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites, with 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms over the sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.

  19. New algorithms for processing images in the transform-compressed domain

    Chang, Shih-Fu


    Future multimedia applications involving images and video will require technologies enabling users to manipulate image and video data as flexibly as traditional text and numerical data. However, vast amounts of image and video data mandate the use of image compression, which makes direct manipulation and editing of image data difficult. To explore the maximum synergistic relationships between image manipulation and compression, we extend our prior study of transform-domain image manipulation techniques to more complicated image operations such as rotation, shearing, and line-wise special effects. We propose to extract the individual image rows (columns) first and then apply the previously proposed transform-domain filtering and scaling techniques. The transform-domain rotation and line-wise operations can be accomplished by calculating the summation of products of nonzero transform coefficients and some precalculated special matrices. The overall computational complexity depends on the compression rate of the input images. For highly-compressed images, the transform-domain technique provides great potential for improving the computation speed.

  20. Observer detection of image degradation caused by irreversible data compression processes

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David


    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.


    A.R. Nadira Banu Kamal


    The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for color image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  2. Electromagnetic Scattered Field Evaluation and Data Compression Using Imaging Techniques

    Gupta, I. J.; Burnside, W. D.


    This is the final report on Project #727625 between The Ohio State University and NASA Lewis Research Center, Cleveland, Ohio. Under this project, a data compression technique for the scattered field data of electrically large targets was developed. The technique was applied to the scattered fields of two targets of interest. The backscattered fields of scale models of these targets were measured in a compact range. For one of the targets, the backscattered fields were also calculated using the XPATCH computer code. Using the technique, all scattered field data sets were compressed successfully. A compression ratio of the order of 40 was achieved. In this report, the technique is described briefly and some sample results are included.

  3. High capacity image steganography method based on framelet and compressive sensing

    Xiao, Moyan; He, Zhibiao


    To improve the capacity and imperceptibility of image steganography, a novel high-capacity, imperceptible image steganography method based on a combination of the framelet transform and compressive sensing (CS) is put forward. First, an SVD (Singular Value Decomposition) transform is applied to the measurement values obtained by applying the compressive sensing technique to the secret data. Then the singular values are in turn embedded into the low-frequency coarse subbands of the framelet transform of the non-overlapping blocks into which the cover image is divided. Finally, inverse framelet transforms are applied and combined to obtain the stego image. The experimental results show that the proposed steganography method has good performance in hiding capacity, security and imperceptibility.

  4. Lossless Image Compression Using A Simplified MED Algorithm with Integer Wavelet Transform

    Mohamed M. Fouad


    In this paper, we propose a lossless (LS) image compression technique combining a prediction step with the integer wavelet transform. The prediction step proposed in this technique is a simplified version of the median edge detector algorithm used in JPEG-LS. First, the image is transformed using the prediction step and a difference image is obtained. The difference image then goes through an integer wavelet transform, and the transform coefficients are used in the lossless codeword assignment. The algorithm is simple, and test results show that it yields higher compression ratios than competing techniques, while the computational cost is kept close to that of competing techniques.
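    The median edge detector (MED) that JPEG-LS uses, and that the paper simplifies in a way the abstract does not detail, predicts each pixel from its left (a), upper (b) and upper-left (c) causal neighbors. A sketch of the standard MED rule:

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = above-left.

    Picks min(a, b) or max(a, b) when c suggests a vertical or horizontal
    edge, and the planar prediction a + b - c otherwise.
    """
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```

    The difference image the paper then wavelet-transforms is simply each pixel minus the MED prediction from its causal neighbors.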


    P. Arockia Jansi Rani


    Image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. In this paper, a still image compression scheme driven by a Self-Organizing Map with polynomial regression modeling and entropy coding, employed within the wavelet framework, is presented. The image compressibility and interpretability are improved by incorporating noise reduction into the compression scheme. The implementation begins with the classical wavelet decomposition and quantization, followed by a Huffman encoder. The codebook for the quantization process is designed using an unsupervised learning algorithm and further modified using polynomial regression to control the amount of noise reduction. Simulation results show that the proposed method reduces the bit rate significantly and provides better perceptual quality than earlier methods.

  6. An improved image compression algorithm using binary space partition scheme and geometric wavelets.

    Chopra, Garima; Pal, A K


    Geometric wavelets are a recent development in the field of multivariate nonlinear piecewise polynomial approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope-intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT) and embedded block coding with optimized truncation (EBCOT), and with other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, Bandelet and GW algorithms. The presented algorithm reports a gain of 0.22 dB over the GW method at a compression ratio of 64 for the Cameraman test image.

  7. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    Zhang, Miao; Tong, Xiaojun


    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plaintext. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. Compression and encryption are performed jointly in a single process. The security test results indicate that the proposed methods have high security and a good compression effect.

  8. Hyperspectral images lossless compression using the 3D binary EZW algorithm

    Cheng, Kai-jen; Dill, Jeffrey


    This paper presents a transform-based lossless compression method for hyperspectral images which is inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform which includes an integer Karhunen-Loève transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to remove the correlations among the bands of the hyperspectral image. The integer 2D DWT is applied to remove the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by the proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also achieves a higher compression ratio than other algorithms.
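    The integer DWT that makes such a pipeline lossless can be illustrated with its simplest instance, the Haar S-transform, whose integer rounding is exactly invertible (a generic sketch; the paper does not state which integer wavelet it uses):

```python
def s_transform_fwd(x):
    """One level of the integer Haar (S) transform on an even-length list."""
    s = [(x[2*i] + x[2*i + 1]) // 2 for i in range(len(x) // 2)]  # rounded averages
    d = [x[2*i] - x[2*i + 1] for i in range(len(x) // 2)]         # details
    return s, d

def s_transform_inv(s, d):
    """Exact inverse: recovers the original integers despite the floor division."""
    x = []
    for si, di in zip(s, d):
        a = si + (di + 1) // 2
        x += [a, a - di]
    return x
```

    The key point is that the floored average loses no information once the exact difference d is kept alongside it, which is what makes integer wavelets suitable for lossless coding.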

  9. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng


    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in a space mission. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way, to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.

  10. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    Hakim, P. R.; Permala, R.


    The LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper aims to analyze the Differential Pulse Code Modulation (DPCM) method and the Huffman coding used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research shows that at least two neighboring pixels should be used in the DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.

  11. Hierarchical prediction and context adaptive coding for lossless color image compression.

    Kim, Seyun; Cho, Nam Ik


    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and then the Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
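    One reversible color transform of the kind used in the first step is the integer RCT of JPEG2000, which decorrelates RGB using only shifts and adds and is exactly invertible. This is shown as a plausible sketch; the paper's exact transform may differ:

```python
def rct_forward(r, g, b):
    """JPEG2000 reversible color transform (integer, exactly invertible)."""
    y = (r + 2 * g + b) >> 2   # luma-like component (floored average)
    cb = b - g                 # chrominance differences
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    g = y - ((cb + cr) >> 2)   # the floors cancel exactly
    return cr + g, g, cb + g   # r, g, b
```

    The rounding in the forward transform cancels exactly in the inverse because (r + 2g + b) and (r + b - 2g) differ by precisely 4g, so lossless grayscale coders can be applied to each component independently.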

  12. Multivariate compressive sensing for image reconstruction in the wavelet domain: using scale mixture models.

    Wu, Jiao; Liu, Fang; Jiao, L C; Wang, Xiaodong; Hou, Biao


    Most wavelet-based reconstruction methods of compressive sensing (CS) are developed under the independence assumption on the wavelet coefficients. However, the wavelet coefficients of images have significant statistical dependencies. Many multivariate prior models for the wavelet coefficients of images have been proposed and successfully applied to image estimation problems. In this paper, the statistical structure of the wavelet coefficients is considered for CS reconstruction of images that are sparse or compressible in the wavelet domain. A multivariate pursuit algorithm (MPA) based on the multivariate models is developed. Several multivariate scale mixture models are used as the prior distributions of MPA. Our method reconstructs the images by modeling the statistical dependencies of the wavelet coefficients in a neighborhood. The proposed algorithm based on these scale mixture models provides superior performance compared with many state-of-the-art compressive sensing reconstruction algorithms.

  13. Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging.

    Ke, Jun; Lam, Edmund Y


    Compressive measurements benefit low-light-level imaging (L3-imaging) due to the significantly improved measurement signal-to-noise ratio (SNR). However, as with other compressive imaging (CI) systems, compressive L3-imaging is slow. To accelerate the data acquisition, we develop an algorithm to compute the optimal binary sensing matrix that can minimize the image reconstruction error. First, we make use of the measurement SNR and the reconstruction mean square error (MSE) to define the optimal gray-value sensing matrix. Then, we construct an equality-constrained optimization problem to solve for a binary sensing matrix. From several experimental results, we show that the latter delivers a similar reconstruction performance as the former, while having a smaller dynamic range requirement to system sensors.


    Shambezadeh, Jamshid; Forouzbakhsh, Farshid


    compression. We can employ the proposed scheme in conjunction with any traditional vector quantization technique to obtain an improved performance. At least an improvement of 28 percent has been observed in the result of simulations reported in this paper. In addition, the performance of the new coder...

  15. Spinal cord compression in thalassemia major: value of MR imaging

    Ziegler, L. [Dept. of Radiology, Div. of Magnetic Resonance Imaging, Univ. of Munich (Germany); Lange, M. [Dept. of Neurosurgery, Univ. of Munich (Germany); Feiden, W. [Dept. of Neuropathology, Univ. of Munich (Germany); Vogl, T. [Dept. of Radiology, Div. of Magnetic Resonance Imaging, Univ. of Munich (Germany)


    A 17-year-old Iranian girl presented with thalassemia major, complicated by acute compression of the cauda equina caused by extramedullary haemopoiesis. The advantages of MRI in confirming the spinal space-occupying lesion and the involvement of the liver and pancreas are discussed in the context of treatment decision analysis and follow-up. (orig.)

  16. Informational Analysis for Compressive Sampling in Radar Imaging

    Jingxiong Zhang


    Compressive sampling, or compressed sensing (CS), works on the assumption that the underlying signal is sparse or compressible. It relies on the trans-informational capability of the measurement matrix employed and of the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. CS is thus able to compress data while acquiring them, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements, and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsities and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.

  17. A Complete Image Compression Scheme Based on Overlapped Block Transform with Post-Processing

    Kwan, C.; Li, B.; Xu, R.; Li, X.; Tran, T.; Nguyen, T.


    A complete system was built for high-performance image compression based on the overlapped block transform. Extensive simulations and comparative studies were carried out for still image compression, including benchmark images (Lena and Barbara), synthetic aperture radar (SAR) images, and color images. We have achieved consistently better results than three commercial products in the market (a Summus wavelet codec, a baseline JPEG codec, and a JPEG-2000 codec) for most images used in this study. Included in the system are two post-processing techniques, based on morphological and median filters, for enhancing the perceptual quality of the reconstructed images. The proposed system also supports the enhancement of a small region of interest within an image, which is of interest in various applications such as target recognition and medical diagnosis.

  18. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza


    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application-specific integrated circuit) designs and can be integrated as an intellectual property (IP) core, e.g., in a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx

  19. Region of interest extraction for lossless compression of bone X-ray images.

    Kazeminia, S; Karimi, N; Soroushmehr, S M R; Samavi, S; Derksen, H; Najarian, K


    For a few decades, digital X-ray imaging has been one of the most important tools for medical diagnosis. With the advent of telemedicine and the use of big data in this respect, efficient storage and online transmission of these images are becoming essential, with limited storage space and limited transmission bandwidth as the main challenges. Efficient image compression methods are lossy, whereas the information in medical images should be preserved without change; hence, lossless compression methods are necessary for this purpose. In this paper, a novel method is proposed to eliminate the non-ROI data from bone X-ray images. Background pixels do not contain any valuable medical information. The proposed method is based on the histogram dispersion method. The ROI is separated from the background and compressed with a lossless compression method to preserve the medical information of the image. Compression ratios of the implemented results show that the proposed algorithm is capable of effectively reducing the statistical and spatial redundancies.
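    The background/ROI separation can be sketched with a simple iterative (isodata-style) histogram threshold; the paper's histogram-dispersion criterion is more elaborate, so treat this as an illustrative stand-in:

```python
def isodata_threshold(pixels):
    """Iteratively split pixels into background/ROI classes and move the
    threshold to the midpoint of the two class means until it stabilizes."""
    t = sum(pixels) / len(pixels)
    while True:
        lo = [p for p in pixels if p <= t]
        hi = [p for p in pixels if p > t]
        if not lo or not hi:          # degenerate histogram: one class
            return t
        t_new = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
        if abs(t_new - t) < 0.5:
            return t_new
        t = t_new
```

    Pixels below the threshold are treated as background and discarded, so the lossless coder only sees the diagnostically relevant region.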

  20. A Combined Approach for Lossless Image Compression Technique using Curvelet Transform

    Ezhilarasi .P


    Image compression is an unavoidable research area which addresses the problem of reducing the amount of data required to represent a digital image, minimizing memory requirements and system complexity. In recent years, most of the effort in image compression research has focused on the development of lossy techniques. The key idea of our proposed scheme is lossless compression using the curvelet transform combined with an error-correcting BCH code and a modified arithmetic encoding technique. Most wavelet-based approaches, while well suited to point singularities, have limited orientation selectivity and do not represent two-dimensional singularities (e.g. smooth curves) effectively. Our proposed curvelet-based approach exhibits good approximation properties for smooth 2D images. The BCH encoder converts a message of k bits into a codeword of length n by adding three parity bits. The image is divided into blocks of 7 bits each, which are fed to the BCH decoder; the decoder eliminates the parity bits, so each 7-bit block is reduced to a 4-bit block. The output is twofold: the first file contains the compressed image and the second contains the keys. The simulation results show that our proposed compression scheme gives more than 50% memory savings at a peak signal-to-noise ratio (PSNR) of 45 dB with 0.5 bits per pixel (BPP).
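    The (7,4) code the abstract describes (three parity bits per four data bits) is the Hamming code, the simplest binary BCH code. A sketch of its encoder and single-error-correcting decoder (bit layout with parity at codeword positions 1, 2 and 4 is the textbook convention, not necessarily the paper's):

```python
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = 1-based error position
    if pos:
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

    As the abstract describes, the scheme runs the code "backwards" for compression: the decoder strips each 7-bit block down to 4 bits, and the keys file records what is needed to re-expand the blocks losslessly.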

  1. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    Xiao, Di; Cai, Hong-Kun; Zheng, Hong-Ying


    In this paper, a joint image encryption and watermarking algorithm based on compressive sensing (CS) and a chaotic map is proposed. The transform-domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is attached to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction does not interfere with decryption. Due to the characteristics of CS, the algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. Project supported by the Open Research Fund of Chongqing Key Laboratory of Emergency Communications, China (Grant No. CQKLEC, 20140504), the National Natural Science Foundation of China (Grant Nos. 61173178, 61302161, and 61472464), and the Fundamental Research Funds for the Central Universities, China (Grant Nos. 106112013CDJZR180005 and 106112014CDJZR185501).
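    The Arnold (cat) map scrambling of the first step permutes the coordinates of an N×N array via (x, y) → (x + y mod N, x + 2y mod N) and is exactly invertible. A minimal sketch (applied here to pixel values for clarity; the paper scrambles transform coefficients):

```python
def arnold_scramble(grid, iterations=1):
    """grid: N x N list of lists. One Arnold cat-map step per iteration."""
    n = len(grid)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = grid[x][y]
        grid = out
    return grid

def arnold_unscramble(grid, iterations=1):
    """Exact inverse: read each value back from its scrambled position."""
    n = len(grid)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[x][y] = grid[(x + y) % n][(x + 2 * y) % n]
        grid = out
    return grid
```

    The iteration count acts as part of the key: without it, the permutation cannot be undone directly, yet the transform itself loses no information.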

  2. Comparison Study of Different Lossy Compression Techniques Applied on Digital Mammogram Images

    Ayman AbuBaker


    The huge growth in internet usage increases the need to transfer and store multimedia files. Mammogram images are among these files, having large size and high resolution. Compression of these images is used to reduce file size without degrading quality, especially in the suspicious regions of the mammogram. Reducing the size of these images makes it possible to store more images and minimizes transmission cost when exchanging information between radiologists. Many techniques exist in the literature to address the loss of information in images. In this paper, two compression transforms are used: Singular Value Decomposition (SVD), which transforms the image into a series of eigenvectors that depend on the dimensions of the image, and the Discrete Cosine Transform (DCT), which converts the image from the spatial domain into the frequency domain. A Computer Aided Diagnosis (CAD) system is implemented to evaluate the microcalcification appearance in mammogram images after the two compression transforms. The performance of the SVD and DCT transforms is subjectively compared by a radiologist. As a result, the DCT algorithm can effectively reduce the size of mammogram images by 65% while maintaining high quality in the microcalcification regions.
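    The SVD side of the comparison amounts to a rank-k approximation: keep the k largest singular values and the corresponding singular vectors. A generic NumPy sketch (the paper's block sizes and choice of k are not given):

```python
import numpy as np

def svd_compress(img, k):
    """Best rank-k approximation of a 2-D array in the least-squares sense."""
    u, s, vt = np.linalg.svd(np.asarray(img, dtype=float), full_matrices=False)
    # Scale the first k columns of u by the k largest singular values.
    return u[:, :k] * s[:k] @ vt[:k, :]
```

    Storing the truncated factors takes k(m + n + 1) numbers instead of mn for an m×n image, which is where the compression comes from.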

  3. A Global-Scale Image Lossless Compression Method Based on QTM Pixels

    SUN Wen-bin; ZHAO Xue-sheng


    In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels, and an algorithm for ascertaining the available QTM neighbors is also proposed. Then, a method for reducing space complexity in the procedure of predicting QTM pixel values is presented, followed by the structure for storing compressed QTM pixels. Finally, an experiment comparing the compression ratio of this method with that of other methods is carried out using three wave bands of 1 km resolution NOAA images of China. The results indicate that: 1) the compression method performs better than others such as Run-Length Coding, Arithmetic Coding and Huffman Coding; 2) the average size of the compressed three-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirement, and 67.5% of that of Arithmetic Coding without a predictive model.
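    Run-Length Coding, the simplest baseline in the comparison, can be stated in a few lines (a generic sketch, not the paper's implementation):

```python
def rle_encode(seq):
    """Collapse runs of equal values into [value, count] pairs."""
    pairs = []
    for v in seq:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1
        else:
            pairs.append([v, 1])
    return pairs

def rle_decode(pairs):
    """Expand [value, count] pairs back into the original sequence."""
    return [v for v, n in pairs for _ in range(n)]
```

    Predictive models such as the QTM-neighbor one beat RLE on imagery because natural data rarely contain long exact runs, whereas prediction residuals cluster tightly around zero and so feed an entropy coder far better.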

  4. The Cyborg Astrobiologist: matching of prior textures by image compression for geological mapping and novelty detection

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.


    We describe an image-comparison technique of Heidemann and Ritter (2008a, b), which uses image compression and is capable of (i) detecting novel textures in a series of images and (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system at a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together, grouping all images of a coal bed together, and giving 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. The current system is not directly intended for mapping and novelty detection of a second field site based on image-compression analysis of an image database from a first field site, although our current system could be further developed towards this end. Furthermore, the image-comparison technique is an unsupervised technique that is not capable of directly classifying an image as containing a particular geological feature; labelling of such geological features is done post facto by human geologists associated with this study, for the purpose of analysing the system's performance.
By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy
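    The core idea of texture matching by image compression can be captured by the normalized compression distance: two data sets compress better together than apart when they share structure. A toy sketch using zlib (Heidemann and Ritter's actual method differs in detail):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for near-duplicates,
    near 1 for unrelated data. C(.) is approximated by zlib length."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

    Grouping images of the same lichen or coal bed then reduces to thresholding pairwise NCD values; novelty detection flags an image whose NCD to every stored texture exceeds the threshold.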

  5. Vector-lifting schemes based on sorting techniques for lossless compression of multispectral images

    Benazza-Benyahia, Amel; Pesquet, Jean-Christophe


    In this paper, we introduce vector-lifting schemes which allow the generation of very compact multiresolution representations, suitable for lossless and progressive coding of multispectral images. These new decomposition schemes simultaneously exploit the spatial and spectral redundancies contained in multispectral images. When the spectral bands have different dynamic ranges, we improve the performance of the proposed schemes dramatically by a reversible histogram modification based on sorting permutations. Simulation tests carried out on real images allow evaluation of the performance of this new compression method. They indicate that the achieved compression ratios are higher than those obtained with currently used lossless coders.

  6. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Hsi-Chin Hsin


    Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image directly in the compressed domain, so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low rates of bits per pixel (bpp).

  7. Complete Focal Plane Compression Based on CMOS Image Sensor Using Predictive Coding

    Yao Suying; Yu Xiao; Gao Jing; Xu Jiangtao


    In this paper, a CMOS image sensor (CIS) is proposed which can accomplish both the decorrelation and the entropy coding of image compression directly on the focal plane. The design is based on predictive coding for image decorrelation. The predictions are performed in the analog domain by 2×2 pixel units. Both the prediction residuals and the original pixel values are quantized and encoded in parallel. Since the residuals have a peaked distribution around zero, the output codewords can be replaced by the valid part of the residuals' binary representation. The compressed bit stream is accessible directly at the output of the CIS without extra processing. Simulation results show that the proposed approach achieves a compression rate of 2.2 and a PSNR of 51 dB on different test images.
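    The 2×2-unit prediction can be sketched digitally: keep one pixel per unit and code the other three as differences, which peak around zero on natural images. The chip does this in the analog domain, and the choice of the top-left pixel as the base is an illustrative assumption:

```python
def block_residuals(img):
    """Per 2x2 unit of an even-sized image: keep the top-left pixel and
    code the other three pixels as differences from it."""
    units = []
    for r in range(0, len(img), 2):
        for c in range(0, len(img[0]), 2):
            base = img[r][c]
            units.append((base,
                          img[r][c + 1] - base,
                          img[r + 1][c] - base,
                          img[r + 1][c + 1] - base))
    return units
```

    Because the residuals are small on smooth image regions, most of their high-order bits are constant and can be dropped, which is where the on-chip compression gain comes from.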


    S. Ebenezer Juliet


    This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements such as text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating out text information, which needs higher spatial resolution than pictures and background. The algorithm segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.

  9. Non-convex prior image constrained compressed sensing (NC-PICCS)

    Ramírez Giraldo, Juan Carlos; Trzasko, Joshua D.; Leng, Shuai; McCollough, Cynthia H.; Manduca, Armando


    The purpose of this paper is to present a new image reconstruction algorithm for dynamic data, termed non-convex prior image constrained compressed sensing (NC-PICCS). It generalizes the prior image constrained compressed sensing (PICCS) algorithm through the use of non-convex priors. Here, we concentrate on perfusion studies, using computed tomography examples in simulated phantoms (with and without added noise) and in vivo data, to show how the NC-PICCS method holds potential for dramatic reductions in radiation dose for time-resolved CT imaging. We show that NC-PICCS can provide additional undersampling compared with conventional convex compressed sensing and PICCS, as well as faster convergence under a quasi-Newton numerical solver.

  10. Using Triangular Function To Improve Size Of Population In Quantum Evolution Algorithm For Fractal Image Compression

    Amin Qorbani


    Fractal image compression is a well-known problem in the class of NP-hard problems. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm which uses a probabilistic representation for solutions and is highly suitable for combinatorial problems like the knapsack problem. Genetic algorithms are widely used for fractal image compression problems, but QEA has not yet been applied to this kind of problem. This paper improves QEA by changing the population size and applies it to fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) encoding an image is found through the QEA method. Experimental results show that our method performs better than GA-based and conventional fractal image compression algorithms.

  11. An SAO-DS9-Based Widget Interface for Compressed Images

    Gastaud, René D.; Popoff, Fabien S.; Starck, Jean-Luc

    Astronomical images can be efficiently compressed by the multi-resolution software package MR/1. We describe here a user interface for images compressed with MR/1, provided as a plug-in for the popular astronomical image viewer SAO-DS9. This interface allows the user to load a compressed file and to choose not only the scale, but also the size and the portion of the image to be displayed, resulting in reduced memory and processing requirements. Astrometry and all SAO-DS9 functionalities remain available. The Tcl/Tk source code of the interface, and the binary code for decompression (for Unix and Windows), will be made available to the astronomical community.

  12. Optimization of Channel Coding for Transmitted Image Using Quincunx Wavelets Transforms Compression

    Mustapha Khelifi


    Many images seen on the Internet today have undergone compression for various reasons. Image compression benefits users by making pictures load faster and webpages take up less space on a web host. Image compression does not reduce the physical size of an image but instead compresses the data that makes up the image into a smaller size. In the case of image transmission, channel noise degrades the quality of the received image, which obliges us to use channel coding techniques to protect the data. The Reed-Solomon (RS) code is one of the most popular channel coding techniques used to correct errors in many systems (wireless and mobile communications, satellite communications, digital television/DVB, high-speed modems such as ADSL and xDSL, etc.). Since there are many possible choices for the input parameters of an RS code, we are concerned with finding the optimum inputs that protect the data with the minimum number of redundant bits. In this paper we use a genetic algorithm to optimize the selection of the RS code's input parameters according to the channel conditions, which reduces the number of bits needed to protect the data while maintaining high quality in the received image.
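    The parameter space the genetic algorithm searches can be made concrete with a small sketch: for an RS(n, k) code over GF(2^8), the number of correctable symbol errors and the parity overhead follow directly from n and k. The function below is illustrative only; the paper's actual GA fitness function is not given here.

    ```python
    def rs_overhead(n, k):
        """For an RS(n, k) code over GF(2^8):
        t    = number of correctable symbol errors = (n - k) // 2
        frac = fraction of the codeword spent on parity = (n - k) / n
        """
        assert 0 < k < n <= 255, "symbol counts must fit GF(2^8)"
        t = (n - k) // 2
        return t, (n - k) / n

    # The classic RS(255, 223) corrects 16 symbol errors at ~12.5% overhead;
    # a GA can search (n, k) pairs to minimize parity for a target t.
    ```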

  13. A methodology for visually lossless JPEG2000 compression of monochrome stereo images.

    Feng, Hsin-Chang; Marcellin, Michael W; Bilgin, Ali


    A methodology for visually lossless compression of monochrome stereoscopic 3D images is proposed. Visibility thresholds are measured for quantization distortion in JPEG2000. These thresholds are found to be functions of not only spatial frequency, but also of wavelet coefficient variance, as well as the gray level in both the left and right images. To avoid a daunting number of measurements during subjective experiments, a model for visibility thresholds is developed. The left image and right image of a stereo pair are then compressed jointly using the visibility thresholds obtained from the proposed model to ensure that quantization errors in each image are imperceptible to both eyes. This methodology is then demonstrated via a particular 3D stereoscopic display system with an associated viewing condition. The resulting images are visually lossless when displayed individually as 2D images, and also when displayed in stereoscopic 3D mode.

  14. All-optical image processing and compression based on Haar wavelet transform.

    Parca, Giorgia; Teixeira, Pedro; Teixeira, Antonio


    Fast data processing and compression methods based on wavelet transform are fundamental tools in the area of real-time 2D data/image analysis, enabling high definition applications and redundant data reduction. The need for information processing at high data rates motivates the efforts on exploiting the speed and the parallelism of the light for data analysis and compression. Among several schemes for optical wavelet transform implementation, the Haar transform offers simple design and fast computation, plus it can be easily implemented by optical planar interferometry. We present an all optical scheme based on an asymmetric couplers network for achieving fast image processing and compression in the optical domain. The implementation of Haar wavelet transform through a 3D passive structure is supported by theoretical formulation and simulations results. Asymmetrical coupler 3D network design and optimization are reported and Haar wavelet transform, including compression, was achieved, thus demonstrating the feasibility of our approach.
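    The computational simplicity of the Haar transform mentioned above can be illustrated in software: a minimal single-level 2D Haar transform (a digital analogue of the optical scheme, not the authors' implementation) needs only pairwise sums and differences.

    ```python
    import numpy as np

    def haar2d(img):
        """One level of the 2D Haar wavelet transform.

        Returns the approximation (LL) and detail (LH, HL, HH) subbands;
        keeping only LL yields a 4:1 lossy compression of the input.
        """
        a = img.astype(float)
        # Row transform: pairwise averages and differences.
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
        # Column transform applied to both row outputs.
        ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
        lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
        hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
        hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
        return ll, lh, hl, hh

    def inverse_haar2d(ll, lh, hl, hh):
        """Exact inverse of haar2d."""
        h, w = ll.shape
        lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
        lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
        hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
        out = np.empty((2 * h, 2 * w))
        out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
        return out
    ```

    The same averaging/differencing structure maps naturally onto the asymmetric coupler network described in the abstract, where sums and differences are formed by interference.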

  15. A hyperspectral images compression algorithm based on 3D bit plane transform

    Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue


    Based on analyses of hyperspectral images, a new compression algorithm using a 3-D bit plane transform is proposed. In such images, the spectral correlation is higher than the spatial correlation. The algorithm is designed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce correlation when neighboring pixels have similar values. The algorithm applies the horizontal, vertical, and spectral bit plane transforms sequentially. Like the spatial transforms, the spectral bit plane transform can easily be realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized with a parallel computing model, which improves computational efficiency and greatly reduces processing time. Experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyperspectral image compression system while efficiently reducing the cost of computation and memory usage.
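    The basic operation underlying any bit plane scheme is the decomposition of an image into binary planes; a minimal sketch follows (the paper's 3-D transform and per-bit matrix encoding are not reproduced here).

    ```python
    import numpy as np

    def bit_planes(img):
        """Decompose an 8-bit image into its 8 binary bit planes (MSB first)."""
        img = np.asarray(img, dtype=np.uint8)
        return [(img >> b) & 1 for b in range(7, -1, -1)]

    def from_bit_planes(planes):
        """Reassemble the image from MSB-first bit planes."""
        img = np.zeros_like(planes[0], dtype=np.uint8)
        for p in planes:
            img = (img << 1) | p.astype(np.uint8)
        return img
    ```

    Each binary plane can then be transformed and entropy-coded independently, which is what makes the per-bit processing parallelizable.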

  16. Performance analysis of reversible image compression techniques for high-resolution digital teleradiology.

    Kuduvalli, G R; Rangayyan, R M


    The performances of a number of block-based, reversible, compression algorithms suitable for compression of very-large-format images (4096x4096 pixels or more) are compared to that of a novel two-dimensional linear predictive coder developed by extending the multichannel version of the Burg algorithm to two dimensions. The compression schemes implemented are: Huffman coding, Lempel-Ziv coding, arithmetic coding, two-dimensional linear predictive coding (in addition to the aforementioned one), transform coding using discrete Fourier-, discrete cosine-, and discrete Walsh transforms, linear interpolative coding, and combinations thereof. The performances of these coding techniques for a few mammograms and chest radiographs digitized to sizes up to 4096x4096 10 b pixels are discussed. Compression from 10 b to 2.5-3.0 b/pixel on these images has been achieved without any loss of information. The modified multichannel linear predictor outperforms the other methods while offering certain advantages in implementation.

  17. The Cyborg Astrobiologist: Image Compression for Geological Mapping and Novelty Detection

    McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.


    We describe an image-comparison technique of Heidemann and Ritter [4,5] that uses image compression and is capable of: (i) detecting novel textures in a series of images, and (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving a 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (a 64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy to robotic planetary rovers, and in assisting human astronauts in their geological exploration.

  18. New image compression algorithm based on improved reversible biorthogonal integer wavelet transform

    Zhang, Libao; Yu, Xianchuan


    Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) achieves low computational complexity via the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the performance and peak signal-to-noise ratio (PSNR) of lossy image coding. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In the new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. The scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to the SPIHT algorithm using RB-IWT filters, and better than the SPECK algorithm. Additionally, the new algorithm efficiently supports both lossy and lossless compression using a single bitstream. The presented algorithm is valuable for future remote sensing image compression.

  19. Novel region-based image compression method based on spiking cortical model

    Rongchang Zhao; Yide Ma


    To achieve a high compression ratio as well as high-quality reconstructed images, an effective image compression scheme named irregular segmentation region coding based on spiking cortical model (ISRCS) is presented. This scheme is region-based and mainly focuses on two issues. First, an appropriate segmentation algorithm is developed to partition an image into irregular regions and tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated; the irregular regions and contours are then coded using different methods. The second issue is the coding of contours, for which an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed images and the compression ratio. Experiments are conducted and the results show its higher performance compared with other compression technologies, in terms of higher quality of reconstructed images, higher compression ratio and less time consumed.


    T. Celine Therese Jenny


    The Embedded Zerotree Wavelet (EZW) is a lossy compression method that allows progressive transmission of a compressed image. By exploiting the natural zerotrees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector quantization (VQ) can be performed as a post-processing step to reduce the coded file size: it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or fewer.

  1. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching


    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
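    The correlation-gated switch between intraframe and interframe coding described above can be sketched as follows; the threshold value is hypothetical, and the actual JPEG-LS and motion-vector stages are omitted.

    ```python
    import numpy as np

    CORR_THRESHOLD = 0.9  # hypothetical gate; the paper's exact criterion may differ

    def frame_correlation(prev, curr):
        """Normalized cross-correlation between two frames."""
        a = prev.astype(float).ravel()
        b = curr.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    def choose_mode(prev, curr, threshold=CORR_THRESHOLD):
        """Use interframe coding only when adjacent frames are similar enough;
        otherwise fall back to intraframe (plain JPEG-LS) coding."""
        return "inter" if frame_correlation(prev, curr) >= threshold else "intra"
    ```

    Gating on frame similarity reflects the paper's observation that adjacent medical images correlate less strongly than frames of ordinary video, so interframe prediction only pays off selectively.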

  2. Fast Fractal Compression of Satellite and Medical Images Based on Domain-Range Entropy

    Ramesh Babu Inampudi


    Fractal image compression is a lossy compression technique developed in the early 1990s. It makes use of the local self-similarity property existing in an image and finds a contractive affine mapping (fractal transform) T such that the fixed point of T is close to the given image in a suitable metric. It has generated much interest due to its promise of high compression ratios with good decompression quality. Another advantage is its multi-resolution property: an image can be decoded at higher or lower resolutions than the original without much degradation in quality. However, the encoding is computationally intensive. In this paper, a fast fractal image compression method based on domain-range entropy is proposed to reduce the encoding time while maintaining the fidelity and compression ratio of the decoded image. The method is a two-step process. First, domains that are similar, i.e., domains having nearly equal variances, are eliminated from the domain pool. Second, during the encoding phase, only domains and ranges having equal entropies (within an adaptive error threshold λ_depth for each quadtree depth) are compared for a match within the rms error tolerance. As a result, many unqualified domains are removed from comparison and a significant reduction in encoding time is obtained. The method is applied to the compression of satellite and medical images (512x512, 8-bit grayscale). Experimental results show that the proposed method yields superior performance over Fisher's classified search and other methods.
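    The entropy-matching step can be sketched as below; the histogram binning and the fixed tolerance standing in for the adaptive λ_depth are simplifying assumptions, and the variance-based first step is omitted.

    ```python
    import numpy as np

    def block_entropy(block, bins=16):
        """Shannon entropy (bits) of a block's intensity histogram."""
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        p = hist[hist > 0] / hist.sum()
        return float(-(p * np.log2(p)).sum())

    def candidate_domains(range_block, domains, lam=0.2):
        """Keep only domains whose entropy is within lam of the range block's,
        mirroring the entropy-based pruning of the domain pool.
        `lam` stands in for the adaptive per-depth threshold lambda_depth."""
        target = block_entropy(range_block)
        return [d for d in domains if abs(block_entropy(d) - target) <= lam]
    ```

    Only the surviving candidates go through the expensive rms-error comparison, which is where the encoding-time savings come from.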

  3. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry


    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of coded apertures must account for saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing saturation levels. The saturation is attenuated between snapshots using an adaptive filter that updates the entries of the grayscale coded aperture based on previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show that the proposed method improves image reconstruction by up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  4. High Speed and Area Efficient 2D DWT Processor based Image Compression" Signal & Image Processing

    Kaur, Sugreev


    This paper presents a high-speed and area-efficient DWT processor design for image compression applications. In the proposed design, a pipelined, partially serial architecture is used to enhance speed while making optimal use of the resources available on the target FPGA. The proposed model has been designed and simulated using Simulink and System Generator blocks, synthesized with the Xilinx Synthesis Tool (XST), and implemented on Spartan-2 and Spartan-3 based XC2S100-5tq144 and XC3S500E-4fg320 target devices. The results show that the proposed design can operate at a maximum frequency of 231 MHz on the Spartan-3, consuming 117 mW at a junction temperature of 28 °C. Comparison of results shows an improvement of 15% in speed.

  5. Application study of image segmentation methods on pattern recognition in the course of wood across-compression

    曹军; 孙丽萍; 张冬妍; 姜宇


    Image segmentation is one of the important steps in pattern recognition research on wood across-compression. By comparing and studying processing methods for finding cell spaces and cell walls, this paper puts forward image segmentation methods that are suitable for the study of cell images of wood under cross-grained compression. Spline function fitting was used for linking cell edges, which improves pattern recognition in the course of wood across-compression.

  6. A New Chaos-Based Image-Encryption and Compression Algorithm

    Somaya Al-Maadeed


    We propose a new and efficient method to develop secure image-encryption techniques. The new algorithm combines two techniques: encryption and compression. A wavelet transform is used to decompose the image and decorrelate its pixels into approximation and detail components. The more important component (the approximation component) is encrypted using a chaos-based encryption algorithm. This algorithm produces a cipher of the test image that has good diffusion and confusion properties. The remaining components (the detail components) are compressed using a wavelet transform. The proposed algorithm was verified to provide a high security level, and a complete specification of the algorithm is given. Several test images are used to demonstrate the validity of the proposed algorithm. The results of several experiments show that the proposed algorithm provides an efficient and secure approach to real-time image encryption and transmission.
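    A common building block for such chaos-based encryption is a keystream generated by the logistic map; the sketch below illustrates the idea with a simple XOR stream cipher applied to the image bytes, and is not the paper's exact algorithm.

    ```python
    import numpy as np

    def logistic_keystream(n, x0=0.3579, r=3.99):
        """Byte keystream from the chaotic logistic map x -> r*x*(1-x).
        x0 and r act as the secret key (r near 4 keeps the map chaotic)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def xor_encrypt(data, x0=0.3579, r=3.99):
        """XOR the data with the keystream; applying it twice decrypts."""
        data = np.asarray(data, dtype=np.uint8)
        ks = logistic_keystream(data.size, x0, r).reshape(data.shape)
        return data ^ ks
    ```

    The extreme sensitivity of the map to x0 and r is what gives the cipher its diffusion and confusion properties: a tiny key change produces an entirely different keystream.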

  7. Image compression with embedded wavelet coding via vector quantization

    Katsavounidis, Ioannis; Kuo, C.-C. Jay


    In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands, and different codebook sizes, so that more bits are assigned to the subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve performance. We investigate the performance of the proposed method together with the 7-9 tap bi-orthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.

  8. Joint Denoising / Compression of Image Contours via Shape Prior and Context Tree

    Zheng, Amin; Cheung, Gene; Florencio, Dinei


    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a...

  9. Computed Tomography Diagnosis Utilizing Compressed Image Data: An ROC Analysis Using Acute Appendicitis as a Model


    Using receiver-operating characteristic (ROC) methodology, the ability to diagnose acute appendicitis with computed tomography (CT) images displayed at varying levels of lossy compression was evaluated. Nine sequential images over the ileocecal region were obtained from 53 consecutive patients with right lower quadrant pain who were clinically suspected to have acute appendicitis. Thirty were proven surgically to have acute appendicitis, alternative diagnoses confirmed in 23. The image sets w...

  10. New Contribution on Compression Color Images: Analysis and Synthesis for Telemedicine Applications

    Beladgham Mohammed


    Wavelets are a recent tool for multiscale signal analysis. They give rise to many applications in various fields such as geophysics, astrophysics, telecommunications, imaging, and video coding, and are the basis of new techniques for signal analysis and synthesis, with attractive applications to general problems such as compression. This paper introduces an application of color medical image compression based on the wavelet transform coupled with the SPIHT coding algorithm. In order to evaluate the compression achieved by this algorithm, we compared the results obtained when the wavelet transform is applied to natural, medical, and satellite color images. For this purpose, we evaluated two parameters known for their calculation speed: the first is the PSNR; the second is the MSSIM (structural similarity).

  11. Compression of compound images and video for enabling rich media in embedded systems

    Said, Amir


    It is possible to improve the features supported by devices with embedded systems by increasing processor computing power, but this always results in higher costs, complexity, and power consumption. An interesting alternative is to use the growing networking infrastructure for remote processing and visualization, with the embedded system mainly responsible for communications and user interaction. This enables devices to appear much more "intelligent" to users, at very low cost and power. In this article we explain how compression can make some of these solutions more bandwidth-efficient, enabling devices to simply decompress very rich graphical information and user interfaces that were rendered elsewhere. The mixture of natural images and video with text, graphics, and animations in the same frame is called compound video. We present a new method for compression of compound images and video, which efficiently identifies the different components during compression and applies an appropriate coding method to each. Our system uses lossless compression for graphics and text, and lossy compression with dynamically varying quality for natural images and highly detailed parts. Since it was designed for embedded systems with very limited resources, it has a small executable size and low complexity for classification, compression, and decompression. Other compression methods (e.g., MPEG) can handle the same content, but are very inefficient for compound content. High-level graphics languages can be bandwidth-efficient, but are much less reliable (e.g., for supporting Asian fonts) and are many orders of magnitude more complex. Numerical tests show very significant compression gains for these systems.

  12. Lossless image compression with projection-based and adaptive reversible integer wavelet transforms.

    Deever, Aaron T; Hemami, Sheila S


    Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
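    The fixed lifting steps that the projection technique optimizes can be seen in the reversible LeGall 5/3 integer transform; the sketch below uses periodic boundary extension for brevity (JPEG2000 specifies symmetric extension) and assumes an even-length signal.

    ```python
    import numpy as np

    def lift_53_forward(x):
        """One level of the reversible LeGall 5/3 integer wavelet (lifting form)."""
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict step: detail = odd - floor(mean of even neighbors).
        d = odd - ((even + np.roll(even, -1)) >> 1)
        # Update step: approximation = even + floor((d_left + d + 2) / 4).
        s = even + ((np.roll(d, 1) + d + 2) >> 2)
        return s, d

    def lift_53_inverse(s, d):
        """Exact inverse: undo the lifting steps in reverse order."""
        even = s - ((np.roll(d, 1) + d + 2) >> 2)
        odd = d + ((even + np.roll(even, -1)) >> 1)
        out = np.empty(even.size + odd.size, dtype=np.int64)
        out[0::2], out[1::2] = even, odd
        return out
    ```

    Because each lifting step is undone exactly by subtracting the same integer quantity it added, the transform is perfectly reversible regardless of the rounding inside the steps, which is the property lossless coders rely on.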

  13. A mixed transform approach for efficient compression of medical images.

    Ramaswamy, A; Mikhael, W B


    A novel technique is presented to compress medical data employing two or more mutually nonorthogonal transforms. Both lossy and lossless compression implementations are considered. The signal is first resolved into subsignals such that each subsignal is compactly represented in a particular transform domain. An efficient lossy representation of the signal is achieved by superimposing the dominant coefficients corresponding to each subsignal. The residual error, which is the difference between the original signal and the reconstructed signal is properly formulated. Adaptive algorithms in conjunction with an optimization strategy are developed to minimize this error. Both two-dimensional (2-D) and three-dimensional (3-D) approaches for the technique are developed. It is shown that for a given number of retained coefficients, the discrete cosine transform (DCT)-Walsh mixed transform representation yields a more compact representation than using DCT or Walsh alone. This lossy technique is further extended for the lossless case. The coefficients are quantized and the signal is reconstructed. The resulting reconstructed signal samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted employing a lossless technique such as the Huffman coding. It is shown that for a given number of retained coefficients, the mixed transforms again produces the smaller rms-modified residual error. The first-order entropy of the error is also smaller for the mixed-transforms technique than for the DCT, thus resulting in smaller length Huffman codes.

  14. Lossless compression of RNAi fluorescence images using regional fluctuations of pixels.

    Karimi, Nader; Samavi, Shadrokh; Shirani, Shahram


    RNA interference (RNAi) is considered one of the most powerful genomic tools, allowing the study of drug discovery and the understanding of complex cellular processes through high-content screens. This field of study, which was the subject of the 2006 Nobel Prize in medicine, has drastically changed the conventional methods of gene analysis. A large number of images have been produced by RNAi experiments. Even though a number of capable special-purpose methods have been proposed recently for the processing of RNAi images, there is no customized compression scheme for these images; hence, highly proficient tools are required to compress them. In this paper, we propose a new, efficient lossless compression scheme for RNAi images, with a new predictor specifically designed for these images. It is shown that pixels can be classified into three categories based on their intensity distributions. Using a classification of pixels based on the intensity fluctuations among a pixel's neighbors, a context-based method is designed. Comparisons of the proposed method with existing state-of-the-art lossless compression standards and well-known general-purpose methods demonstrate its efficiency.

  15. An Adaptive Two-Stage BPNN–DCT Image Compression Technique

    Dr. Tarun Kumar


    Neural networks offer the potential for a novel solution to the problem of data compression through their ability to generate an internal data representation. This network, an application of the back-propagation network, accepts a large amount of image data, compresses it for storage or transmission, and subsequently restores it when desired. A new approach for reducing training time by reconstructing representative vectors is also proposed. Performance of the network has been evaluated using standard real-world images. After decomposing an image using the Discrete Cosine Transform (DCT), a two-stage neural network may be able to represent the DCT coefficients in less space than the coefficients themselves. After splitting the image and decomposing it using several methods, neural networks were trained to represent the image blocks. By saving the weights and bias of each neuron, and using the inverse DCT (IDCT) coefficient mechanism, an image segment can be approximately recreated. Compression can thus be achieved using neural networks. Current results have been promising except for the amount of time needed to train a network; one method of speeding up code execution is discussed, and plenty of future research work remains in this area. It is shown that the developed architecture and training algorithm provide a high compression ratio and low distortion while maintaining the ability to generalize, and are very robust as well.

  16. Multispectral image compression methods for improvement of both colorimetric and spectral accuracy

    Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun


    We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, spectral-colorimetric clustering is first used to build a sparse equivalent representation by generating a spatial basis. Principal component analysis (PCA) is subsequently applied to the spatial basis to remove spectral redundancy. An error compensation mechanism then produces a predicted difference image, which is finally combined with a visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to demonstrate the model's validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which the Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied to compress the created image. The latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of the rebuilt images under various illumination conditions, and generally achieve a satisfactory peak signal-to-noise ratio at the same compression ratio. The OLCP-X methods consistently ensure superior spectrum reconstruction. Furthermore, our model has excellent performance in user interaction.

  17. Fast algorithm for exploring and compressing of large hyperspectral images

    Kucheryavskiy, Sergey


    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...
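One generic way to downsample pixels while preserving their structure in feature space is to bin pixels along a principal direction and sample from every occupied bin, so that rare spectral signatures survive the reduction. The sketch below illustrates that idea only; it is not the paper's algorithm, and the bin count and per-bin sample size are assumptions:

```python
import numpy as np

def structured_downsample(pixels, n_bins=8, per_bin=10, seed=0):
    """Downsample (N, B) hyperspectral pixels while keeping coverage of
    feature space: bin pixels along the first principal direction and
    draw up to `per_bin` pixels from every occupied bin."""
    rng = np.random.default_rng(seed)
    X = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # vt[0] = first PC
    proj = X @ vt[0]
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    idx = np.clip(np.digitize(proj, edges) - 1, 0, n_bins - 1)
    keep = []
    for b in range(n_bins):
        members = np.flatnonzero(idx == b)
        if members.size:
            keep.append(rng.choice(members, min(per_bin, members.size),
                                   replace=False))
    return np.concatenate(keep)  # indices of retained pixels
```

Unlike uniform random sampling, this keeps small but distinct pixel populations represented in the reduced set.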

  18. Segmentation of Natural Images by Texture and Boundary Compression

    Mobahi, Hossein; Yang, Allen Y; Sastry, Shankar S; Ma, Yi


    We present a novel algorithm for segmentation of natural images that harnesses the principle of minimum description length (MDL). Our method is based on observations that a homogeneously textured region of a natural image can be well modeled by a Gaussian distribution and the region boundary can be effectively coded by an adaptive chain code. The optimal segmentation of an image is the one that gives the shortest coding length for encoding all textures and boundaries in the image, and is obtained via an agglomerative clustering process applied to a hierarchy of decreasing window sizes as multi-scale texture features. The optimal segmentation also provides an accurate estimate of the overall coding length and hence the true entropy of the image. We test our algorithm on the publicly available Berkeley Segmentation Dataset. It achieves state-of-the-art segmentation results compared to other existing methods.
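The Gaussian part of the coding-length criterion can be sketched with the standard approximation that N d-dimensional texture vectors cost about (N/2) log2 det(2*pi*e*Sigma) bits under a fitted Gaussian. This simplified illustration omits the boundary chain-code term and the multi-scale windows of the actual method:

```python
import numpy as np

def gaussian_coding_length(X, eps=1e-6):
    """Approximate bits to encode N d-dim texture vectors under a fitted
    Gaussian: (N/2) * log2 det(2*pi*e * (Sigma + eps*I))."""
    n, d = X.shape
    sigma = np.cov(X, rowvar=False) + eps * np.eye(d)  # regularized covariance
    _, logdet = np.linalg.slogdet(2 * np.pi * np.e * sigma)
    return 0.5 * n * logdet / np.log(2)  # natural log -> bits

def merge_gain(Xa, Xb):
    """Bits saved by coding two regions with one Gaussian instead of two;
    agglomerative MDL clustering greedily applies merges with positive gain."""
    separate = gaussian_coding_length(Xa) + gaussian_coding_length(Xb)
    merged = gaussian_coding_length(np.vstack([Xa, Xb]))
    return separate - merged
```

Merging two regions with the same texture statistics roughly breaks even, while merging dissimilar regions inflates the pooled covariance and lengthens the code, which is what stops the agglomeration.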

  19. Image Compression and Resizing for Retinal Implant in Bionic Eye



    Full Text Available One field where computer-related image processing technology shows great promise for the future is bionic implants, such as cochlear implants and retinal implants. Retinal implants are being developed around the world in hopes of restoring useful vision for patients suffering from certain types of diseases, such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP). In these diseases the photoreceptor cells slowly degenerate, leading to blindness. However, many of the inner retinal neurons that transmit signals from the photoreceptors to the brain are preserved to a large extent for a prolonged period of time. The retinal prosthesis aims to provide partial vision by electrically activating the remaining cells of the retina. The epiretinal prosthesis system is composed of two units, an extraocular unit and an intraocular implant, connected by a telemetric inductive link. The extraocular unit consists of a CCD camera, an image processor, an encoder, and a transmitter built on the eyeglasses. The high-resolution image from the CCD camera is reduced by the image processor to a lower resolution matching the array of electrodes, and is then encoded into a bit stream. Each electrode in an implant corresponds to one pixel in an image. The bit stream is modulated onto a 22 MHz carrier and transmitted wirelessly to the implant inside the eye. This paper mainly discusses two approaches in image processing that reduce the size of the image without loss of object detection rate relative to the original image. The first covers the related image processing algorithms, including image resizing, color erasing, edge enhancement, and edge detection. The second is to generate the saliency map for an image.
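The resizing step, one image pixel per electrode, can be sketched as simple block averaging of the camera frame down to the electrode grid. The grid dimensions and the test frame below are illustrative assumptions, not the paper's actual array size:

```python
import numpy as np

def to_electrode_grid(image, rows, cols):
    """Block-average a high-resolution grayscale image down to the
    resolution of an electrode array (one output value per electrode)."""
    h, w = image.shape
    bh, bw = h // rows, w // cols
    trimmed = image[:rows * bh, :cols * bw].astype(float)  # drop remainder pixels
    return trimmed.reshape(rows, bh, cols, bw).mean(axis=(1, 3))

frame = np.tile(np.linspace(0, 255, 64), (64, 1))  # 64x64 horizontal ramp
grid = to_electrode_grid(frame, 10, 10)
print(grid.shape)  # (10, 10)
```

Each value in `grid` would then be quantized and encoded into the bit stream that drives the corresponding electrode.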

  20. Development of a fast electromagnetic shutter for compressive sensing imaging in scanning transmission electron microscopy

    Béché, Armand; Freitag, Bert; Verbeeck, Jo


    The concept of compressive sensing was recently proposed to significantly reduce the electron dose in scanning transmission electron microscopy (STEM) while still maintaining the main features in the image. Here, an experimental setup based on an electromagnetic shutter placed in the condenser plane of a STEM is proposed. The shutter blanks the beam following a random pattern while the scanning coils are moving the beam in the usual scan pattern. Experimental images at both medium scale and high resolution are acquired and then reconstructed based on a discrete cosine algorithm. The obtained results confirm the predicted usefulness of compressive sensing in experimental STEM even though some remaining artifacts need to be resolved.
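The reconstruction idea can be sketched as inpainting by iterative hard thresholding in the DCT domain: the unmeasured pixels are estimated by repeatedly sparsifying the DCT coefficients while re-imposing the measured pixels. This is a generic illustration rather than the authors' algorithm; the mask density, `keep`, and iteration count are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] *= 1 / np.sqrt(2)
    return D * np.sqrt(2 / n)

def cs_reconstruct(samples, mask, keep, n_iter=50):
    """Fill in unmeasured pixels of a square image: alternate between
    keeping the `keep` largest 2-D DCT coefficients and re-imposing
    the pixels actually measured while the shutter was open."""
    D = dct_matrix(mask.shape[0])
    x = samples.copy()
    for _ in range(n_iter):
        c = D @ x @ D.T
        thresh = np.sort(np.abs(c).ravel())[-keep]
        c[np.abs(c) < thresh] = 0.0       # enforce sparsity in the DCT domain
        x = D.T @ c @ D
        x[mask] = samples[mask]           # measured pixels are trusted
    return x

rng = np.random.default_rng(0)
n = 32
truth = np.outer(np.cos(np.linspace(0, np.pi, n)), np.ones(n))  # smooth scene
mask = rng.random((n, n)) < 0.3   # shutter open at ~30% of the dwell points
rec = cs_reconstruct(np.where(mask, truth, 0.0), mask, keep=20)
```

Because the smooth test scene is nearly sparse in the DCT basis, a random 30% of the pixels suffices to recover it far better than zero-filling the gaps, which mirrors the dose reduction the STEM setup targets.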