Sample records for fractal image compression

  1. Experimental Study of Fractal Image Compression Algorithm

    Directory of Open Access Journals (Sweden)

    Chetan R. Dudhagara


    Full Text Available Image compression applications have been increasing in recent years. Fractal compression is a lossy compression method for digital images based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image. Fractal encoding is a mathematical process that encodes a bitmap containing a real-world image as a set of mathematical data describing the fractal properties of the image; it relies on the fact that all natural, and most artificial, objects contain redundant information in the form of similar, repeating patterns called fractals. In this paper, a study of fractal-based image compression with fixed-size partitioning is made, analysed for performance and compared with a standard frequency-domain image compression standard, JPEG. Sample images are used to perform compression and decompression. Performance metrics such as compression ratio, compression time and decompression time are measured for both methods. The phenomenon of resolution/scale independence is also studied and described with examples.
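    As an illustrative aside, the range-domain search that underlies fractal encoding can be sketched in a few lines of Python. This is a minimal, hypothetical codec fragment (fixed-size partitioning, a non-overlapping domain pool, no isometries), not the implementation studied in the paper:

```python
import numpy as np

def affine_fit(r, d):
    """Least-squares contrast s and offset o so that s*d + o approximates r."""
    d = d.ravel().astype(float)
    r = r.ravel().astype(float)
    var = d.var()
    s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).mean() / var
    o = r.mean() - s * d.mean()
    err = ((s * d + o - r) ** 2).mean()
    return s, o, err

def encode(image, rsize=4):
    """Map every range block to its best-matched, downsampled domain block."""
    h, w = image.shape
    dsize = 2 * rsize
    domains = []  # non-overlapping double-size blocks, averaged down to range size
    for y in range(0, h - dsize + 1, dsize):
        for x in range(0, w - dsize + 1, dsize):
            d = image[y:y + dsize, x:x + dsize].astype(float)
            domains.append(((y, x), d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))))
    codes = []  # the "fractal codes": (range pos, domain pos, contrast, offset)
    for y in range(0, h, rsize):
        for x in range(0, w, rsize):
            r = image[y:y + rsize, x:x + rsize]
            s, o, err, pos = min((affine_fit(r, d) + (p,) for p, d in domains),
                                 key=lambda t: t[2])
            codes.append(((y, x), pos, s, o))
    return codes
```

    Decoding would iterate these mappings from an arbitrary start image; the point here is only that each code entry stores a source region and an affine map rather than pixel data.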

  2. A Fast Fractal Image Compression Coding Method

    Institute of Scientific and Technical Information of China (English)


    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best-matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme, implemented on a personal computer and based on matching in a range block's neighbourhood, is presented. Experimental results show that the proposed algorithm is simple to implement, fast in encoding time and high in compression ratio, while its PSNR is almost the same as that of Barnsley's fractal block coding.

  3. A Novel Fractal Wavelet Image Compression Approach

    Institute of Scientific and Technical Information of China (English)

    SONG Chun-lin; FENG Rui; LIU Fu-qiang; CHEN Xi


    By investigating the limitations of existing wavelet-tree-based image compression methods, we propose a novel wavelet fractal image compression method in this paper. Briefly, initial error tolerances are assigned according to the importance accorded to the wavelet coefficients of each frequency subband: higher-frequency subbands are assigned larger initial errors. The sizes of sublevel blocks and super blocks, and the matching between them, are then adapted according to the permitted errors and compression rates. Systematic analyses are performed and the experimental results demonstrate that the proposed method provides satisfactory performance, with a clearly increased compression rate and encoding speed without reducing the SNR or the quality of the decoded images. Simulation results show that our method is superior to traditional wavelet-tree-based methods of fractal image compression.

  4. Hybrid Prediction and Fractal Hyperspectral Image Compression

    Directory of Open Access Journals (Sweden)

    Shiping Zhu


    Full Text Available The data size of a hyperspectral image is too large for storage and transmission, and this has become a bottleneck restricting its applications, so it is necessary to study a high-efficiency compression method for hyperspectral images. Prediction encoding is easy to realize and has been studied widely in the hyperspectral image compression field. Fractal coding has the advantages of a high compression ratio, resolution independence, and a fast decoding speed, but its application to hyperspectral image compression is not widespread. In this paper, we propose a novel algorithm for hyperspectral image compression based on hybrid prediction and fractal coding. Intraband prediction is applied to the first band, and all remaining bands are encoded by a modified fractal coding algorithm. The proposed algorithm can effectively exploit the spectral correlation in the hyperspectral image, since each range block is approximated by the domain block in the adjacent band that is of the same size as the range block. Experimental results indicate that the proposed algorithm provides very promising performance at low bitrates. Compared to other algorithms, the encoding complexity is lower, the decoding quality is greatly enhanced, and the PSNR can be increased by about 5 dB to 10 dB.

  5. Image compression with a hybrid wavelet-fractal coder. (United States)

    Li, J; Kuo, C J


    A hybrid wavelet-fractal coder (WFC) for image compression is proposed. The WFC uses fractal contractive mappings to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encodes the prediction residue with a bitplane wavelet coder. The fractal prediction is adaptively applied only to regions where the rate saving offered by fractal prediction justifies its overhead. A rate-distortion criterion is derived to evaluate the fractal rate saving and is used to select the optimal fractal parameter set for the WFC. The superior performance of the WFC is demonstrated with extensive experimental results.
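    The adaptive fractal/wavelet decision can be illustrated with a generic Lagrangian rate-distortion comparison. Note this is a schematic assumption: the paper derives its own criterion and parameter set, and the multiplier `lam` here is illustrative:

```python
def choose_mode(d_fractal, r_fractal, d_wavelet, r_wavelet, lam=1.0):
    """Pick, per region, the mode with the smaller Lagrangian cost J = D + lam*R.
    Fractal prediction wins only when its rate saving justifies its overhead."""
    j_fractal = d_fractal + lam * r_fractal
    j_wavelet = d_wavelet + lam * r_wavelet
    return "fractal" if j_fractal < j_wavelet else "wavelet"
```

    With `lam` fixed by the target bitrate, a region switches to fractal prediction only when the distortion/rate pair it offers beats plain wavelet coding of the same region.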

  6. Structured-light Image Compression Based on Fractal Theory

    Institute of Scientific and Technical Information of China (English)


    The method of fractal image compression is introduced and applied to compress line structured-light images. Exploiting the self-similarity of the structured-light image, we attain a satisfactory compression ratio and a high peak signal-to-noise ratio (PSNR). The experimental results indicate that this method achieves high performance.

  7. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera


    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both coding and decoding time and the effectiveness of parallelization.

  8. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein


    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for the contrast scaling factor, S, instead of searching...

  9. Greylevel Difference Classification Algorithm in Fractal Image Compression

    Institute of Scientific and Technical Information of China (English)

    陈毅松; 卢坚; 孙正兴; 张福炎


    This paper proposes the notion of a greylevel difference classification algorithm in fractal image compression. An example of the greylevel difference classification algorithm is then given as an improvement of the quadrant greylevel and variance classification in the quadtree-based encoding algorithm. The algorithm incorporates the frequency feature in spatial analysis using the notion of the average quadrant greylevel difference, leading to an enhancement in terms of encoding time, PSNR value and compression ratio.

  10. Dynamic Fractal Transform with Applications to Image Data Compression

    Institute of Scientific and Technical Information of China (English)

    王舟; 余英林


    A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic grayscale image compression algorithm, which is referred to as a typical static fractal transform based algorithm in this paper. Using this algorithm, an image can be condensely described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is obtained. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration, and thus differs from static transform operators. The new transform improves coding efficiency and shows better convergence in the decoder.
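    The static transform's fixed-point behaviour can be demonstrated with a small decoder sketch. The code below is hypothetical (codes are assumed to tile the image, each mapping a double-size domain block down to a range block with contrast |s| < 1, which guarantees convergence):

```python
import numpy as np

def decode(codes, shape, rsize=4, n_iter=30):
    """Iterate the fractal transform from an all-zero image; contractivity
    drives any start image towards the same attractor (the decoded image)."""
    img = np.zeros(shape)
    dsize = 2 * rsize
    for _ in range(n_iter):
        new = np.empty(shape)
        for (ry, rx), (dy, dx), s, o in codes:  # codes must cover every range block
            d = img[dy:dy + dsize, dx:dx + dsize]
            d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))  # 2x2 averaging
            new[ry:ry + rsize, rx:rx + rsize] = s * d + o
        img = new
    return img
```

    A dynamic transform in the sense of this record would vary the mappings between iterations of this loop instead of keeping them fixed.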

  11. An improved fast fractal image compression using spatial texture correlation

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Wang Yuan-Xing; Yun Jiao-Jiao


    This paper utilizes spatial texture correlation and the intelligent classification algorithm (ICA) search strategy to speed up the encoding process and improve the bit rate of fractal image compression. Texture features are among the most important properties for the representation of an image. Entropy and the maximum entry of co-occurrence matrices are used to represent the texture features of an image. For a range block, the domain blocks of neighbouring range blocks with similar texture features can be searched. In addition, domain blocks with similar texture features are searched in the ICA search process. Experiments show that, in comparison with some typical methods, the proposed algorithm significantly speeds up the encoding process and achieves a higher compression ratio, with a slight diminution in the quality of the reconstructed image; in comparison with a spatial correlation scheme, the proposed scheme spends much less encoding time while the compression ratio and the quality of the reconstructed image are almost the same.
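    A minimal sketch of the co-occurrence entropy feature mentioned above (horizontal neighbour pairs only, 8 grey levels; the paper's exact feature set and offsets are not specified here):

```python
import numpy as np

def cooccurrence_entropy(block, levels=8):
    """Entropy of the grey-level co-occurrence matrix of horizontal neighbours."""
    q = (block.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))
```

    A flat block scores 0 bits, a checkerboard exactly 1 bit, and irregular texture scores higher; blocks can then be grouped by such scores before the domain search.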

  12. A simple method for estimating the fractal dimension from digital images: The compression dimension

    CERN Document Server

    Chamorro-Posada, P


    The fractal structure of real world objects is often analyzed using digital images. In this context, the compression fractal dimension is put forward. It provides a simple method for the direct estimation of the dimension of fractals stored as digital image files. The computational scheme can be implemented using readily available free software. Its simplicity also makes it very interesting for introductory elaborations of basic concepts of fractal geometry, complexity, and information theory. A test of the computational scheme using limited-quality images of well-defined fractal sets obtained from the Internet and free software has been performed.

  13. Intelligent fuzzy approach for fast fractal image compression (United States)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila


    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, ranges and domains are arranged based on their edge property. In the second, the imperialist competitive algorithm (ICA) is applied to the classified blocks. For maintaining the quality of the retrieved image and accelerating the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit performance better than genetic algorithm (GA)-based and Full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the Full-search algorithm, while the retrieved image quality did not change considerably.

  14. A simple method for estimating the fractal dimension from digital images: The compression dimension (United States)

    Chamorro-Posada, Pedro


    The fractal structure of real world objects is often analyzed using digital images. In this context, the compression fractal dimension is put forward. It provides a simple method for the direct estimation of the dimension of fractals stored as digital image files. The computational scheme can be implemented using readily available free software. Its simplicity also makes it very interesting for introductory elaborations of basic concepts of fractal geometry, complexity, and information theory. A test of the computational scheme using limited-quality images of well-defined fractal sets obtained from the Internet and free software has been performed. Also, a systematic evaluation of the proposed method using computer generated images of the Weierstrass cosine function shows an accuracy comparable to those of the methods most commonly used to estimate the dimension of fractal data sequences applied to the same test problem.
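    For comparison with the compression dimension, the classical box-counting estimate that such methods are benchmarked against can be sketched as follows (the Sierpinski-style test image and grid sizes are illustrative choices, not the paper's test data):

```python
import numpy as np

def box_count_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Box-counting dimension: minus the slope of log N(s) versus log s,
    where N(s) is the number of s-by-s boxes containing part of the set."""
    counts = []
    h, w = img.shape
    for s in sizes:
        view = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope

# Sierpinski-triangle test set: pixel (i, j) is set iff i & j == 0.
n = 256
i, j = np.ogrid[:n, :n]
d = box_count_dimension((i & j) == 0)  # theory predicts log(3)/log(2)
```

    The compression dimension replaces the box counts N(s) with a complexity proxy derived from compressed file sizes at each scale, which is what makes it implementable with off-the-shelf software.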

  15. A Lossless hybrid wavelet-fractal compression for welding radiographic images. (United States)

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud


    In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio compared to a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.

  16. A hyperspectral image compression algorithm based on wavelet transformation and fractal composition (AWFC)

    Institute of Scientific and Technical Information of China (English)

    HU; Xingtang; ZHANG; Bing; ZHANG; Xia; ZHENG; Lanfen; TONG; Qingxi


    Starting with a fractal-based image-compression algorithm based on wavelet transformation, the authors address hyperspectral images, whose many spectral bands are obtained with the help of hyperspectral remote sensing. Because large amounts of data and limited bandwidth complicate the storage and transmission of data measured at the TB level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS; conventional lossless compression algorithms cannot reach adequate compression ratios. Other lossy compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on wavelet transformation is superior to traditional compression methods, because it has high compression ratios and good image fidelity, and requires less computing time. To keep the spectral dimension invariant, the authors compared the results of two compression algorithms based on the BSQ and BIP storage-file structures, and improved the HV and quadtree partitioning and domain-range matching algorithms in order to accelerate their encode/decode efficiency. The authors' Hyperspectral Image Process and Analysis System (HIPAS) software used a VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.

  17. Using Triangular Function To Improve Size Of Population In Quantum Evolution Algorithm For Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Amin Qorbani


    Full Text Available Fractal image compression is a well-known problem in the class of NP-hard problems. The Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm which uses a probabilistic representation for solutions and is highly suitable for combinatorial problems like the Knapsack problem. Genetic algorithms are widely used for fractal image compression problems, but QEA has not yet been used for this kind of problem. This paper improves QEA with a change in population size and applies it to fractal image compression. Utilizing the self-similarity property of a natural image, the partitioned iterated function system (PIFS) is found to encode an image through the Quantum Evolutionary Algorithm (QEA) method. Experimental results show that our method has better performance than GA-based and conventional fractal image compression algorithms.

  18. An effective fractal image compression algorithm based on plane fitting

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Guo Xing; Zhang Dan-Dan


    A new method using plane fitting to decide whether a domain block is similar enough to a given range block is proposed in this paper. First, three coefficients are computed to describe each range and domain block. Then, the best-matched domain block for every range block is obtained by analysing the relation between their coefficients. Experimental results show that the proposed method shortens encoding time markedly, while the retrieved image quality is still acceptable. In the decoding step, a simple line fitting on block boundaries is used to reduce blocking effects. At the same time, the proposed method also achieves a high compression ratio.
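    The three plane-fitting coefficients per block can be computed with an ordinary least-squares fit. A minimal sketch (the paper's matching rule on the coefficients is not reproduced here):

```python
import numpy as np

def plane_coeffs(block):
    """Least-squares fit of z = a*x + b*y + c to the block's intensities;
    (a, b, c) is the three-coefficient descriptor used for matching."""
    h, w = block.shape
    y, x = np.mgrid[:h, :w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return coeffs
```

    Two blocks whose (a, b, c) descriptors are close are candidates for a match, so most of the domain pool is never compared pixel by pixel.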

  19. Fast Fractal Compression of Satellite and Medical Images Based on Domain-Range Entropy

    Directory of Open Access Journals (Sweden)

    Ramesh Babu Inampudi


    Full Text Available Fractal image compression is a lossy compression technique developed in the early 1990s. It makes use of the local self-similarity present in an image and finds a contractive affine mapping (the fractal transform) T such that the fixed point of T is close to the given image in a suitable metric. It has generated much interest due to its promise of high compression ratios with good decompression quality. Its other advantage is its multi-resolution property: an image can be decoded at higher or lower resolutions than the original without much degradation in quality. However, the encoding is computationally intensive. In this paper, a fast fractal image compression method based on domain-range entropy is proposed to reduce the encoding time while maintaining the fidelity and compression ratio of the decoded image. The method is a two-step process. First, domains that are similar, i.e. domains having nearly equal variances, are eliminated from the domain pool. Second, during the encoding phase, only domains and ranges having equal entropies (with an adaptive error threshold λdepth for each quadtree depth) are compared for a match within the rms error tolerance. As a result, many unqualified domains are removed from comparison and a significant reduction in encoding time is expected. The method is applied to the compression of satellite and medical images (512×512, 8-bit grey scale). Experimental results show that the proposed method yields superior performance over Fisher's classified search and other methods.
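    The entropy-based pruning of the domain pool can be sketched as below; the histogram entropy, bin count and fixed tolerance are illustrative stand-ins for the paper's adaptive threshold λdepth:

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of the block's grey-level histogram, in bits."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def candidate_domains(range_block, domains, tol=0.25):
    """Keep only domains whose entropy is within tol of the range block's;
    everything else is dropped before any rms comparison."""
    re = block_entropy(range_block)
    return [d for d in domains if abs(block_entropy(d) - re) <= tol]
```

    Only the surviving candidates are then compared against the range block within the rms error tolerance, which is where the encoding-time saving comes from.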


    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal


    Full Text Available The storage requirements for images can be excessive if true colour and a high perceived image quality are desired. An RGB image may be viewed as a stack of three grey-scale images that, when fed into the red, green and blue inputs of a colour monitor, produce a colour image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analysed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  1. Classification of vertebral compression fractures in magnetic resonance images using spectral and fractal analysis. (United States)

    Azevedo-Marques, P M; Spagnoli, H F; Frighetto-Pereira, L; Menezes-Reis, R; Metzner, G A; Rangayyan, R M; Nogueira-Barbosa, M H


    Fractures with partial collapse of vertebral bodies are generically referred to as "vertebral compression fractures" or VCFs. VCFs can have different etiologies comprising trauma, bone failure related to osteoporosis, or metastatic cancer affecting bone. VCFs related to osteoporosis (benign fractures) and to cancer (malignant fractures) are commonly found in the elderly population. In the clinical setting, the differentiation between benign and malignant fractures is complex and difficult. This paper presents a study aimed at developing a system for computer-aided diagnosis to help in the differentiation between malignant and benign VCFs in magnetic resonance imaging (MRI). We used T1-weighted MRI of the lumbar spine in the sagittal plane. Images from 47 consecutive patients (31 women, 16 men, mean age 63 years) were studied, including 19 malignant fractures and 54 benign fractures. Spectral and fractal features were extracted from manually segmented images of 73 vertebral bodies with VCFs. The classification of malignant vs. benign VCFs was performed using the k-nearest neighbor classifier with the Euclidean distance. The results obtained show that combinations of features derived from Fourier and wavelet transforms, together with the fractal dimension, were able to obtain a correct classification rate of up to 94.7%, with an area under the receiver operating characteristic curve of up to 0.95.
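    The classification step is a standard k-nearest-neighbour vote with Euclidean distance, as in this sketch (the feature vectors and labels below are toy values, not the paper's spectral/fractal features):

```python
import math
from collections import Counter

def knn_classify(sample, training, k=3):
    """Majority vote among the k training vectors nearest in Euclidean distance.
    training is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(training, key=lambda t: dist(sample, t[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

    In the study, each vertebral body contributes one feature vector (Fourier, wavelet and fractal-dimension features) and the vote decides between the benign and malignant classes.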

  2. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient. (United States)

    Wang, Jianji; Zheng, Nanning


    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from high computational complexity in encoding. Although many schemes have been published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to their APCCs with a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to the APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
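    The APCC itself is just the absolute Pearson correlation between two flattened blocks, which equals 1 exactly when one block is an affine function of the other, the equivalence the scheme rests on. A minimal sketch:

```python
import numpy as np

def apcc(a, b):
    """Absolute value of Pearson's correlation coefficient between two blocks."""
    return float(abs(np.corrcoef(a.ravel().astype(float),
                                 b.ravel().astype(float))[0, 1]))
```

    Blocks with APCC near 1 admit a good affine (contrast/brightness) match, so classifying and sorting by APCC concentrates the search on promising domains.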

  3. A Distributed Web GIS Application Based on Component Technology and Fractal Image Compression

    Institute of Scientific and Technical Information of China (English)

    HAN Jie


    Geographic information system (GIS) technology combines computer graphics and databases to store and process spatial information. According to users' demands, GIS exports exact geographic information and related information for users, with maps and descriptions, by associating geographic places with their related attributes. Based on existing popular technology, this paper presents a distributed web GIS application based on component technology and fractal image compression. It first presents the basic framework of the proposed system, then discusses the key technology for implementing the system, and finally designs a three-layer web GIS instance using VC++ ATL based on Geo Beans. The example suggests the proposed design is correct, feasible and valid.

  4. Algorithm of Fractal Image Compression on CUDA

    Institute of Scientific and Technical Information of China (English)



    In fractal image compression, the matching between range blocks and domain blocks can be executed in parallel. Therefore, in order to accelerate fractal image compression on a GPU, we apply the compute unified device architecture, CUDA. This paper presents a hybrid GPU/CPU quadtree compression approach, which accelerates the most time-consuming part, the distance calculation, on the GPU side, while quadtree division, initialization and so on are still handled on the CPU side. For the GPU part, we discuss two methods, single range block and multiple range blocks; analysis and experiments show that the latter achieves better parallel performance than the former. Compared with traditional pure-CPU methods, our approach improves fractal compression speed greatly.
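    The per-range-block independence that makes GPU acceleration possible can be illustrated on the CPU with a thread pool standing in for the CUDA grid (the mean-squared distance and the toy block contents are illustrative assumptions):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def match_one(args):
    """One range block's search: the distance computation offloaded in the paper."""
    r, domains = args
    errs = [float(np.mean((d - r) ** 2)) for d in domains]
    return int(np.argmin(errs))

def parallel_match(ranges, domains, workers=4):
    """Each range block is an independent task, so blocks map onto parallel
    workers exactly as they would map onto CUDA threads or thread blocks."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(match_one, ((r, domains) for r in ranges)))
```

    The paper's "multiple range blocks" variant corresponds to giving each worker a batch of range blocks rather than one, which improves occupancy on real GPU hardware.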

  5. Fractal image encoding based on adaptive search

    Institute of Scientific and Technical Information of China (English)

    Kya Berthe; Yang Yang; Huifang Bi


    Finding the optimal trade-off between an efficient encoding process and rate distortion is the main research problem in fractal image compression theory. A new method is proposed based on the optimization of the least-square error and orthogonal projection. A large number of domain blocks can be eliminated in order to speed up fractal image compression. Moreover, since the rate-distortion performance of most fractal image coders is not satisfactory, an efficient bit allocation algorithm to improve the rate distortion is also proposed. The implementation and a comparison with the feature extraction method have been carried out to prove the efficiency of the proposed method.

  6. A fast and efficient hybrid fractal-wavelet image coder. (United States)

    Iano, Yuzo; da Silva, Fernando Silvestre; Cruz, Ana Lúcia Mendes


    The excellent visual quality and compression rate of fractal image coding have seen limited application due to the exhaustive inherent encoding time. This paper presents a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding is applied to the remaining coefficients. Furthermore, image details and wavelet progressive transmission characteristics are maintained, no blocking effects from fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme provides an average 94% reduction in encoding-decoding time compared to pure accelerated fractal coding. The simulations also compare the results to SPIHT wavelet coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium and low bitrates.

  7. Fractal images induce fractal pupil dilations and constrictions. (United States)

    Moon, P; Muday, J; Raynor, S; Schirillo, J; Boydston, C; Fairbanks, M S; Taylor, R P


    Fractals are self-similar structures or patterns that repeat at increasingly fine magnifications. Research has revealed fractal patterns in many natural and physiological processes. This article investigates pupillary size over time to determine whether its oscillations demonstrate a fractal pattern. We predict that pupil size over time will fluctuate in a fractal manner, and this may be due to either the fractal neuronal structure or the fractal properties of the image viewed. We present evidence that low-complexity fractal patterns underlie pupillary oscillations as subjects view spatial fractal patterns. We also present evidence implicating the importance of the autonomic nervous system in these patterns. Using the variational method of the box-counting procedure, we demonstrate that low-complexity fractal patterns are found in changes within pupil size over time in millimeters (mm), and our data suggest that these pupillary oscillation patterns do not depend on the fractal properties of the image viewed.

  8. Trabecular architecture analysis in femur radiographic images using fractals. (United States)

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S


    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of the trabecular architectural alteration that manifest as loss of trabecular plates and connection has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify pattern heterogeneity and anisotropy of trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
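    Higuchi's method, used above on the delineated bone regions, estimates a fractal dimension from a 1-D sequence (e.g. an image intensity profile). A self-contained sketch:

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal: the slope of
    log L(k) against log(1/k), where L(k) is the mean normalised
    curve length at subsampling interval k."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):  # one subsampled curve per starting offset m
            pts = x[m::k]
            if len(pts) < 2:
                continue
            length = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            lengths.append(length * (n - 1) / ((len(pts) - 1) * k * k))
        logk.append(math.log(1.0 / k))
        logl.append(math.log(sum(lengths) / len(lengths)))
    mk = sum(logk) / len(logk)
    ml = sum(logl) / len(logl)
    num = sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
    return num / sum((a - mk) ** 2 for a in logk)
```

    A smooth straight-line profile yields a dimension of 1, while rougher, more space-filling profiles approach 2; features of this kind are what separate compressive from tensile trabecular regions in the study.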

  9. Fractal methods in image analysis and coding


    Neary, David


    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...

  10. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova


    Full Text Available In this paper some results of the implementation of digital watermarking methods into image coding based on fractal principles are presented. The paper focuses on two possible approaches to embedding digital watermarks into the fractal code of images: embedding digital watermarks into the parameters for the position of similar blocks, and into the coefficients of block similarity. Both algorithms were analyzed and verified on grey-scale static images.

  11. Fast Fractal Image Encoding Based on Special Image Features

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chao; ZHOU Yiming; ZHANG Zengke


    The fractal image encoding method has received much attention for its many advantages over other methods, such as high decoding quality at high compression ratios. However, because every range block must be compared to all domain blocks in the codebook to find the best-matched one during the coding procedure, baseline fractal coding (BFC) is quite time consuming. To speed up fractal coding, a new fast fractal encoding algorithm is proposed. This algorithm aims at reducing the size of the search window during the domain-range matching process to minimize the computational cost. A new theorem presented in this paper shows that a special feature of the image can be used to do this work. Based on this theorem, the most inappropriate domain blocks, whose features are not similar to that of the given range block, are excluded before matching. Thus, the best-matched block can be captured much more quickly than in the BFC approach. The experimental results show that the runtime of the proposed method is reduced greatly compared to the BFC method. At the same time, the new algorithm also achieves high reconstructed image quality. In addition, the method can be incorporated with other fast algorithms to achieve better performance. Therefore, the proposed algorithm has much better application potential than BFC.

  12. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja


    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit inherent long-range dependency, that is, a fractal property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. The multifractal spectra of the frame-size video traces showed that a higher compression ratio produces broader and less regular MF spectra, indicating a stronger multifractal nature and the existence of additive components in the video traces. Considering the individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frame types on the whole MF spectrum. Since compressed video occupies a major part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a derived MF spectrum of an observed signal it is possible to recognize and extract the parts of the signal that are characterized by particular values of the multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.
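
    The long-range dependence mentioned above is usually summarized by the Hurst exponent H. A minimal sketch of one standard estimator, the aggregated-variance method (shown as a generic illustration, not the method used in the paper), follows: for an LRD series the variance of m-aggregated block means decays as m^(2H-2), so the slope of a log-log regression gives H. The white-noise input is a stand-in for a real frame-size trace.

    ```python
    import numpy as np

    def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
        """Aggregated-variance Hurst estimator: the variance of block means of
        an LRD series scales as m^(2H-2); regress log-variance on log m."""
        logs_m, logs_v = [], []
        for m in scales:
            n = len(x) // m
            means = x[:n * m].reshape(n, m).mean(axis=1)
            logs_m.append(np.log(m))
            logs_v.append(np.log(means.var()))
        slope, _ = np.polyfit(logs_m, logs_v, 1)
        return 1.0 + slope / 2.0

    # White noise has no long-range dependence, so H should come out near 0.5;
    # a genuinely LRD frame-size trace would yield H > 0.5.
    rng = np.random.default_rng(42)
    h = hurst_aggregated_variance(rng.standard_normal(65536))
    ```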

  13. FIRE: fractal indexing with robust extensions for image databases. (United States)

    Distasi, Riccardo; Nappi, Michele; Tucci, Maurizio


    As already documented in the literature, fractal image encoding is a family of techniques that achieves a good compromise between compression and perceived quality by exploiting the self-similarities present in an image. Furthermore, because of its compactness and stability, the fractal approach can be used to produce a unique signature, thus obtaining a practical image indexing system. Since fractal-based indexing systems are able to deal with the images in compressed form, they are suitable for use with large databases. We propose a system called FIRE, which is then proven to be invariant under three classes of pixel intensity transformations and under geometrical isometries such as rotations by multiples of π/2 and reflections. This property makes the system robust with respect to a large class of image transformations that can happen in practical applications: the images can be retrieved even in the presence of illumination and/or color alterations. Additionally, the experimental results show the effectiveness of FIRE in terms of both compression and retrieval accuracy.

  14. Pre-Service Teachers' Concept Images on Fractal Dimension (United States)

    Karakus, Fatih


    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractals and fractal dimension. Therefore, this study aimed to investigate pre-service teachers' understandings of fractal dimension based on concept image. The…

  15. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.


    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is first presented using a single fractal image, and statistical analysis is performed. A general encryption system utilising multiple fractal images is then introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing, and independent horizontal or vertical shifts. The effect of each parameter is studied separately, and then they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.
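
    As an illustration of the diffusion/confusion idea (a toy sketch, not the system proposed in the study), the escape-time counts of a Mandelbrot rendering can serve as the "fractal image" hiding the data: diffusion XORs the plaintext with the fractal byte plane, and confusion permutes pixel positions under a key. The key scheme and parameters here are assumptions for demonstration only.

    ```python
    import numpy as np

    def mandelbrot_bytes(h, w, max_iter=255):
        # Escape-time counts of the Mandelbrot set act as a complex-looking
        # byte plane ("fractal image") for the diffusion step.
        ys, xs = np.mgrid[-1.2:1.2:h * 1j, -2.0:0.8:w * 1j]
        c = xs + 1j * ys
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=np.uint8)
        for _ in range(max_iter):
            mask = np.abs(z) <= 2
            z[mask] = z[mask] ** 2 + c[mask]
            counts[mask] += 1
        return counts

    def encrypt(img, key):
        ks = mandelbrot_bytes(*img.shape)                        # diffusion source
        perm = np.random.default_rng(key).permutation(img.size)  # confusion
        return (img ^ ks).ravel()[perm].reshape(img.shape)

    def decrypt(enc, key):
        perm = np.random.default_rng(key).permutation(enc.size)
        inv = np.empty_like(perm)
        inv[perm] = np.arange(perm.size)                         # invert permutation
        ks = mandelbrot_bytes(*enc.shape)
        return enc.ravel()[inv].reshape(enc.shape) ^ ks

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)
    enc = encrypt(img, key=1234)
    dec = decrypt(enc, key=1234)
    ```

    The XOR diffusion is exactly invertible and the permutation is undone by its inverse index array, so decryption with the correct key restores the image bit-for-bit.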

  16. An efficient fractal image coding algorithm using unified feature and DCT

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Yiming [Department of Automation, Tsinghua University, Beijing 100084 (China)]; Zhang Chao; Zhang Zengke [Department of Automation, Tsinghua University, Beijing 100084 (China)]


    Fractal image compression is a promising technique for improving the efficiency of image storage and transmission with high compression ratios; however, the huge time consumption of fractal image coding is a great obstacle to practical applications. In order to improve fractal image coding, efficient algorithms using a special unified feature and a DCT coder are proposed in this paper. First, based on a necessary condition for the best-matching search during fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it reduces the search space considerably and excludes most inappropriate matching subblocks before the best-matching search. Second, on the basis of the UFC algorithm, a DCT coder is incorporated to construct a hybrid fractal image algorithm (DUFC) that improves the quality of the reconstructed image. Experimental results show that the proposed algorithms obtain good quality of the reconstructed images and need much less time than the baseline fractal coding algorithm.

  17. Lossless Medical Image Compression

    Directory of Open Access Journals (Sweden)

    Nagashree G


    Full Text Available Image compression has become an important process in today's world of information exchange, and it helps in the effective utilization of high-speed network resources. Medical image compression is very important for the efficient archiving and transmission of images. In this paper, two different approaches for lossless image compression are proposed: one uses the combination of 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the integer wavelet transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and a comparison of both approaches is shown. We observed an increased compression ratio and higher PSNR values.

  18. Wavelet image compression

    CERN Document Server

    Pearlman, William A


    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S

  19. Fractal Image Editing with PhotoFrac

    Directory of Open Access Journals (Sweden)

    Tim McGraw


    Full Text Available In this paper, we describe the development and use of PhotoFrac, an application that allows artists and designers to turn digital images into fractal patterns interactively. Fractal equations are a rich source of procedural texture and detail, but controlling the patterns and incorporating traditional media has been difficult. Additionally, the iterative nature of fractal calculations makes implementation of interactive techniques on mobile devices and web apps challenging. We overcome these problems by using an image-coordinate-based orbit trapping technique that permits a user-selected image to be embedded into the fractal. Performance challenges are addressed by exploiting the processing power of the graphics processing unit (GPU) and precomputing some intermediate results for use on mobile devices. This paper presents results and qualitative analyses of the tool by four artists (the authors), who used the PhotoFrac application to create new artworks from original digital images. The final results demonstrate a fusion of traditional media with algorithmic art.
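
    A minimal sketch of image-coordinate orbit trapping follows (an illustrative reconstruction, not PhotoFrac's code): each pixel's orbit under the Julia iteration z → z² + c is watched, and the first time it enters a central trap square its position indexes into the user image, embedding the picture in the fractal. The constant c, the trap square, and the gradient stand-in image are all assumptions.

    ```python
    import numpy as np

    def orbit_trap_render(img, c=-0.8 + 0.156j, size=200, max_iter=40):
        """Julia-set iteration with an image-coordinate orbit trap: the first
        time an orbit lands inside the square [-0.5, 0.5]^2 its coordinates
        index into `img`, embedding the picture in the fractal."""
        h, w = img.shape
        ys, xs = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
        z = xs + 1j * ys
        out = np.zeros((size, size))
        trapped = np.zeros((size, size), dtype=bool)
        for _ in range(max_iter):
            # Freeze escaped orbits (|z| > 4) so values stay bounded.
            z = np.where(np.abs(z) <= 4, z * z + c, z)
            inside = (~trapped) & (np.abs(z.real) < 0.5) & (np.abs(z.imag) < 0.5)
            # Map trap coordinates to image pixel indices.
            iy = ((z.imag[inside] + 0.5) * (h - 1)).astype(int)
            ix = ((z.real[inside] + 0.5) * (w - 1)).astype(int)
            out[inside] = img[iy, ix]
            trapped |= inside
        return out

    # A small synthetic gradient stands in for the user-selected photo.
    photo = np.linspace(0, 1, 32 * 32).reshape(32, 32)
    canvas = orbit_trap_render(photo)
    ```

    On a GPU the same per-pixel loop runs in a fragment shader, which is the performance route the paper describes.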

  20. Multispectral image fusion based on fractal features (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua


    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imagery sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse environmental requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imagery sensor under the conditions of different circumstances, different backgrounds and different targets. Fortunately, the multi-sensor image fusion technique emerged as an important route to solve this problem, and image fusion has become one of the main technical routes used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so how to preserve the useful information to the utmost is always a very important concern of image fusion; that is, how to avoid the loss of useful information, or how to preserve the features helpful to detection, should be taken into account before designing the fusion schemes. In consideration of these issues, and of the fact that most detection problems are actually to distinguish man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed according to wavelet transform theories, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models that can well imitate natural objects. Special fusion operators are employed during the fusion of areas that contain man-made targets so that useful information is preserved and the features of targets are emphasized.
The final fused image is reconstructed from the

  1. Fast Fractal Image Encoding Using an Improved Search Scheme

    Institute of Scientific and Technical Information of China (English)


    Fractal image encoding algorithms can yield high-resolution reconstructed images at very high compression ratios and therefore have great potential for improving the efficiency of image storage and transmission. However, the baseline fractal encoding algorithm requires a great deal of time to complete the best-matching search between the range and domain blocks, which greatly limits practical applications of the algorithm. In order to solve this problem, a necessary condition of the best-matching search based on an image feature is proposed in this paper. The proposed method reduces the search space significantly and excludes the most inappropriate domain blocks for each range block before carrying out the best-matching search. Experimental results show that the proposed algorithm produces good-quality reconstructed images and requires much less time than the baseline encoding algorithm. Specifically, the new algorithm speeds up encoding by about 85 times at a cost of just 3 dB in peak signal-to-noise ratio (PSNR), and yields compression ratios close to 34.

  2. Determination of Uniaxial Compressive Strength of Ankara Agglomerate Considering Fractal Geometry of Blocks (United States)

    Coskun, Aycan; Sonmez, Harun; Ercin Kasapoglu, K.; Ozge Dinc, S.; Celal Tunusluoglu, M.


    The uniaxial compressive strength (UCS) of rock material is a crucial parameter to be used in the design stages of slopes, tunnels and foundations to be constructed in/on a geological medium. However, preparation of high-quality cores from geological mixtures or fragmented rocks such as melanges, fault rocks, coarse pyroclastic rocks, breccias and sheared serpentinites is often extremely difficult. According to the studies performed in the literature, these geological materials may be grouped as welded and unwelded bimrocks. Success in preparing core samples from welded bimrocks is slightly better than from unwelded ones. Therefore, some studies were performed on welded bimrocks to understand the mechanical behavior of geological mixture materials composed of stronger and weaker components (Gokceoglu, 2002; Sonmez et al., 2004; Sonmez et al., 2006; Kahraman et al., 2008). The overall strength of bimrocks generally depends on the strength contrast between blocks and matrix; the type and strength of the matrix; the type, size, strength, shape and orientation of the blocks; and the volumetric block proportion. In previously proposed prediction models, while the UCS of unwelded bimrocks may be determined by reducing the UCS of the matrix according to the volumetric block proportion, that of welded bimrocks can be predicted by considering the UCS of matrix and blocks together (Lindquist, 1994; Lindquist and Goodman, 1994; Sonmez et al., 2006; Sonmez et al., 2009). However, only a few attempts have addressed the effect of block shape and orientation on the strength of bimrocks (Lindquist, 1994; Kahraman et al., 2008). In this study, the Ankara agglomerate, which is composed of andesite blocks surrounded by a weak tuff matrix, was selected as the study material. Image analyses were performed on the bottom, top and side faces of cores to identify volumetric block proportions. In addition to the image analyses, the andesite blocks on the bottom, top and side faces were digitized for determination of fractal

  3. Pyramidal fractal dimension for high resolution images (United States)

    Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut


    Fractal analysis (FA) should yield reliable and fast results for high-resolution digital images to be applicable in fields that require immediate outcomes. Triggered by an efficient implementation of FA for binary images, we present three new approaches for fractal dimension (D) estimation of images that utilize image pyramids, namely the pyramid triangular prism, the pyramid gradient, and the pyramid differences method (PTPM, PGM, PDM). We evaluated the performance of the three new and five standard techniques when applied to images with sizes up to 8192 × 8192 pixels. By using artificial fractal images created by three different generator models as ground truth, we determined the scale ranges with minimum deviations between estimation and theory. All pyramidal methods (PM) resulted in reasonable D values for images of all generator models. Especially for images with sizes ≥1024 × 1024 pixels, the PMs are superior to the investigated standard approaches in terms of accuracy and computation time. A measure of the ability to differentiate images with different intrinsic D values showed not only that the PMs are well suited for all investigated image sizes, and preferable to standard methods especially for larger images, but also that the results of standard D estimation techniques are strongly influenced by the image size. The fastest results were obtained with the PDM and PGM, followed by the PTPM. In terms of absolute D values, the best-performing standard methods were orders of magnitude slower than the PMs. Concluding, the new PMs yield high-quality results in short computation times and are therefore eligible methods for fast FA of high-resolution images.

  4. Image compression for dermatology (United States)

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.


    Color 35mm photographic slides are commonly used in dermatology for education and patient records. An electronic storage and retrieval system for digitized slide images may offer advantages such as preservation and random access. We have integrated a personal computer (PC)-based system for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper contains an evaluation of one such compression method that uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.


    Institute of Scientific and Technical Information of China (English)


    A novel approach to printed circuit board (PCB) image locating is presented. Based on the rectangle mark image edge of the PCB, features are used to describe the image edge, and the fractal property of the image edge is analyzed. It is proved that the rectangle mark image edge of the PCB has some fractal features. A method of deleting unordinary curve noise and compensating the length of the fractal curve is put forward, which can obtain the fractal dimension value from one complex edge fractal property curve. The relation between the dimension of the fractal curve and the turning angle of the image can be acquired from an equation; as a result, the angle value of the PCB image is obtained exactly. A real image edge analysis result confirms that the method based on fractal theory is a new powerful tool for angle locating on PCBs and related image areas.

  6. Comparison and Analysis of Three Kinds of Optimizing Fractal Image Compression Algorithms

    Institute of Scientific and Technical Information of China (English)



    Fractal image compression (FIC) is an image compression algorithm based on partitioned iterated function systems (PIFS); i.e., the self-similarity of natural images is used to compress data. However, its huge time consumption limits its practical application. The time consumption of FIC mainly lies in the following aspects: the search for the optimal matched domain block for every range block is performed over all domain blocks, which takes a great deal of time; the calculation, quantization and storage of all affine transformation parameters; and the image partitioning process. In order to overcome the shortcoming of high computational cost, this paper uses optimization algorithms such as GA, ACO and PSO to reduce the search space for finding self-similarity in the given image and to speed up encoding. Experimental results show that the optimized FIC can effectively reduce encoding time while the peak signal-to-noise ratio is maintained.

  7. Image fractal coding algorithm based on complex exponent moments and minimum variance (United States)

    Yang, Feixia; Ping, Ziliang; Zhou, Suhua


    Image fractal coding possesses a very high compression ratio, but its main problem is the low speed of coding. An algorithm based on Complex Exponent Moments (CEM) and minimum variance is proposed to speed up fractal coding. The definition of CEM and its FFT algorithm are presented, and the multi-distortion invariance of CEM is discussed; this invariance fits the fractal property of an image. The optimal matching pairs of range blocks and domain blocks in an image are determined by minimizing the variance of their CEM. Theoretical analysis and experimental results prove that the algorithm can dramatically reduce the iteration time and speed up the image encoding and decoding process.

  8. Image data compression investigation (United States)

    Myrie, Carlos


    NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.

  9. An Approach to Extracting Fractal in Remote Sensing Image

    Institute of Scientific and Technical Information of China (English)

    ZHU Ji; LIN Ziyu; WANG Angsheng; CUI Peng


    In order to apply spatial structure information to remote sensing interpretation through fractal theory, an algorithm is introduced to compute the single-pixel fractal dimension in remote sensing images. After a computer program was written according to the algorithm, ETM+ images were processed by the program to obtain their fractal data. The algorithm has the following characteristics: the obtained fractal values indicate the complexity of the image and correlate positively with the complexity of the images and ground objects. Moreover, the algorithm is simple, reliable, and easy to implement.

  10. Estimating fractal dimension of medical images (United States)

    Penn, Alan I.; Loew, Murray H.


    Box counting (BC) is widely used to estimate the fractal dimension (fd) of medical images on the basis of a finite set of pixel data. The fd is then used as a feature to discriminate between healthy and unhealthy conditions. We show that BC is ineffective when used on small data sets and give examples of published studies in which researchers have obtained contradictory and flawed results by using BC to estimate the fd of data-limited medical images. We present a new method for estimating fd of data-limited medical images. In the new method, fractal interpolation functions (FIFs) are used to generate self-affine models of the underlying image; each model, upon discretization, approximates the original data points. The fd of each FIF is analytically evaluated. The mean of the fds of the FIFs is the estimate of the fd of the original data. The standard deviation of the fds of the FIFs is a confidence measure of the estimate. The goodness-of-fit of the discretized models to the original data is a measure of self-affinity of the original data. In a test case, the new method generated a stable estimate of fd of a rib edge in a standard chest x-ray; box counting failed to generate a meaningful estimate of the same image.

  11. Lossless wavelet compression on medical image (United States)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong


    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS) and telemedicine networks require the storage and transmission of this huge amount of medical image data, so efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1, or even more), they do not allow exact reconstruction of the original version of the input data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1, up to 4:1. In our paper, we use a lifting scheme to generate truly lossless, non-linear, integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm that produces an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image.
Therefore, a compression scheme generating an embedded code can
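
    The integer-to-integer lifting idea can be sketched with the classic S transform, a lifting realisation of the Haar wavelet (a generic illustration, not the authors' particular scheme): a prediction step forms integer differences, an update step forms integer averages using floor division, and both steps are exactly invertible, which is what makes the transform truly lossless.

    ```python
    import numpy as np

    def s_transform_fwd(x):
        """One level of the integer-to-integer S transform (lifting form of the
        Haar wavelet): both outputs stay integers."""
        a = x[0::2].astype(np.int64)
        b = x[1::2].astype(np.int64)
        d = b - a                      # prediction step: detail coefficients
        s = a + (d >> 1)               # update step: approximation (floor average)
        return s, d

    def s_transform_inv(s, d):
        # Undo the lifting steps in reverse order; exact in integer arithmetic.
        a = s - (d >> 1)
        b = a + d
        out = np.empty(s.size + d.size, dtype=np.int64)
        out[0::2], out[1::2] = a, b
        return out

    row = np.array([12, 15, 7, 7, 200, 3, 90, 91])
    s, d = s_transform_fwd(row)
    rec = s_transform_inv(s, d)
    ```

    Because each lifting step is simply subtracted back out, reconstruction is bit-exact; applying the same transform recursively to the approximation band builds a full lossless wavelet decomposition.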

  12. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin


    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of only a few seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window applies a precise phase delay at each cycle. After capturing enough points, we are able to assemble the whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that also denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; it is not easy to reconstruct a high-resolution image with only one sensor, while an array needs only to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  13. Fractal analysis: fractal dimension and lacunarity from MR images for differentiating the grades of glioma. (United States)

    Smitha, K A; Gupta, A K; Jayasree, R S


    Gliomas, the heterogeneous tumors originating from glial cells, generally exhibit varied grades and are difficult to differentiate using conventional MR imaging techniques. While this differentiation is crucial in disease prognosis and treatment, even advanced MR imaging techniques fail to provide a high discriminative power for differentiating malignant tumors from benign ones. A powerful image processing technique applied to these imaging modalities is expected to provide better differentiation. The present study focuses on the fractal analysis of fluid-attenuated inversion recovery MR images for the differentiation of glioma. For this, we considered the most important parameters of fractal analysis: fractal dimension and lacunarity. While the fractal dimension assesses the malignancy and complexity of a fractal object, lacunarity gives an indication of the empty space and the degree of inhomogeneity in the fractal object. The box counting method, with the preprocessing steps of binarization, dilation and outlining, was used to obtain the fractal dimension and lacunarity in glioma. Statistical analyses such as one-way analysis of variance and receiver operating characteristic (ROC) curve analysis helped to compare the means and to find the discriminative sensitivity of the results. It was found that the lacunarity of low- and high-grade gliomas varies significantly. ROC curve analysis between low- and high-grade glioma yielded 70.3% sensitivity and 66.7% specificity for fractal dimension, and 70.3% sensitivity and 88.9% specificity for lacunarity. The study observes that fractal dimension and lacunarity increase with the grade of glioma, and that lacunarity is helpful in identifying the most malignant grades.
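
    The two quantities used in the study can be sketched directly (a minimal illustration on synthetic binary masks, not the authors' pipeline): box counting regresses the log of the number of occupied boxes against log(1/s), and gliding-box lacunarity is the ratio of the second moment of box mass to the squared first moment.

    ```python
    import numpy as np

    def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        """Box-counting estimate of fractal dimension for a binary image:
        regress log N(s) on log(1/s)."""
        h, w = mask.shape
        counts = []
        for s in sizes:
            n = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    if mask[i:i + s, j:j + s].any():
                        n += 1
            counts.append(n)
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    def lacunarity(mask, box=8):
        """Gliding-box lacunarity: second moment of box mass over squared mean."""
        h, w = mask.shape
        masses = [mask[i:i + box, j:j + box].sum()
                  for i in range(h - box + 1) for j in range(w - box + 1)]
        m = np.array(masses, dtype=float)
        return (m ** 2).mean() / m.mean() ** 2

    # A filled square is space-filling and homogeneous: D near 2, lacunarity near 1.
    full = np.ones((64, 64), dtype=bool)
    d_full = box_count_dimension(full)
    lac_full = lacunarity(full)
    ```

    On real FLAIR-derived masks the interesting behaviour is the opposite direction: gappier, more inhomogeneous masks push the lacunarity well above 1.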

  14. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne;


    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...

  15. Displaying radiologic images on personal computers: image storage and compression--Part 2. (United States)

    Gillespy, T; Rowberg, A H


    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image using a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in relatively few of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
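
    The DPCM transformation described above is easy to demonstrate (a sketch on a synthetic image, not the article's test data): differencing each pixel against its left neighbour concentrates the histogram near zero, so a generic lossless byte coder (zlib here, standing in for Huffman/LZW) compresses the differential image noticeably better than the raw one.

    ```python
    import zlib
    import numpy as np

    def dpcm_forward(img):
        """Horizontal DPCM: each pixel becomes its difference from the left
        neighbour, stored modulo 256 (first column kept verbatim)."""
        d = img.astype(np.int16)
        d[:, 1:] -= img.astype(np.int16)[:, :-1]
        return (d % 256).astype(np.uint8)

    def dpcm_inverse(diff):
        # Cumulative sum along rows undoes the differencing (mod 256).
        return (np.cumsum(diff.astype(np.int64), axis=1) % 256).astype(np.uint8)

    # Smooth synthetic "radiograph": a gentle gradient plus mild noise.
    rng = np.random.default_rng(0)
    img = (np.add.outer(np.arange(128), np.arange(128)) // 2
           + rng.integers(0, 4, (128, 128))).astype(np.uint8)

    raw_size = len(zlib.compress(img.tobytes(), 9))
    dpcm_size = len(zlib.compress(dpcm_forward(img).tobytes(), 9))
    ```

    The differential image has the same number of bytes as the original but far lower entropy, which is exactly why the article reports LZW-after-DPCM as the best-performing lossless combination.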

  16. Image Compression Algorithms Using Dct

    Directory of Open Access Journals (Sweden)

    Er. Abhishek Kaushik


    Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented using Matlab code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, and varying coefficients for compression were used to show the resulting image and the error image derived from the original images. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image. The inverse DCT is performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed image) is displayed. The error value for every image is calculated over various values of DCT coefficients as selected by the user and is displayed at the end to assess the accuracy and compression of the resulting image; the resulting performance parameter is reported in terms of MSE, i.e., mean square error.
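The 8-by-8 block DCT round-trip described above can be sketched as follows (NumPy, with an orthonormal DCT-II matrix built by hand rather than MATLAB toolbox blocks; the coefficient selection here is a crude low-pass subset purely for illustration):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: C @ x applies the 1-D DCT."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 pixel block

coeffs = C @ block @ C.T                 # forward 2-D DCT
kept = coeffs.copy()
kept[4:, :] = 0                          # keep only a low-frequency subset
kept[:, 4:] = 0
approx = C.T @ kept @ C                  # inverse 2-D DCT of the subset

exact = C.T @ coeffs @ C                 # inverse with all coefficients
assert np.allclose(exact, block)         # full round-trip is (near-)lossless
error = block - approx                   # the "error image" from the abstract
mse = np.mean(error ** 2)
```

Increasing the number of retained coefficients drives the MSE toward zero, which is the trade-off the abstract measures.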

  17. An image retrieval system based on fractal dimension

    Institute of Scientific and Technical Information of China (English)


    This paper presents a new kind of image retrieval system which obtains the feature vectors of images by estimating their fractal dimension and, at the same time, establishes a tree-structured image database. After preprocessing and feature extraction, a given image is matched with the standard images in the image database using a hierarchical method of image indexing.

  18. An image retrieval system based on fractal dimension. (United States)

    Yao, Min; Yi, Wen-Sheng; Shen, Bin; Dai, Hong-Hua


    This paper presents a new kind of image retrieval system which obtains the feature vectors of images by estimating their fractal dimension and, at the same time, establishes a tree-structured image database. After preprocessing and feature extraction, a given image is matched with the standard images in the image database using a hierarchical method of image indexing.

  19. Image edge detection based on multi-fractal spectrum analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Shao-yuan; WANG Yao-nan


    In this paper, an image edge detection method based on multi-fractal spectrum analysis is presented. The coarse-grain Hölder exponent of the image pixels is first computed; then its multi-fractal spectrum is estimated by the kernel estimation method. Finally, image edge detection is performed by means of the different multi-fractal spectrum values. Simulation results show that this method is efficient and has better locality than traditional edge detection methods such as the Sobel method.


    Directory of Open Access Journals (Sweden)

    Hynek Lauschmann


    Full Text Available The morphology of a fatigue fracture surface (caused by constant-cycle loading) is strictly related to crack growth rate. This relation may be expressed, among other methods, by means of fractal analysis. Fractal dimension as a single numerical value is not sufficient. Two types of fractal feature vectors are discussed: multifractal and multiparametric. For the analysis of images, the box-counting method for 3D is applied with respect to the non-homogeneity of dimensions (two in space, one in brightness). Examples of application are shown: images of several fracture surfaces are analyzed and related to crack growth rate.
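The box-counting estimate underlying this and several other entries can be sketched for the simpler 2-D binary-image case (the abstract's 3-D variant adds a brightness axis; the helper name and box sizes below are illustrative): count occupied boxes at several scales and fit the slope of log N(s) against log(1/s).

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count occupied s x s boxes for several s, then fit log N(s) vs log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # collapse each s x s tile to 1 if any pixel in it is set
        tiles = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-D, a straight line is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_square = box_counting_dimension(square)
d_line = box_counting_dimension(line)
assert abs(d_square - 2.0) < 0.05 and abs(d_line - 1.0) < 0.05
```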

  1. Laser image denoising technique based on multi-fractal theory (United States)

    Du, Lin; Sun, Huayan; Tian, Weiqing; Wang, Shuai


    The noise of laser images is complex, including both additive and multiplicative noise. Considering the features of laser images and the basic processing capacity and defects of common algorithms, this paper introduces fractal theory into the research of laser image denoising. The denoising is implemented mainly through the analysis of the singularity exponent of each pixel in fractal space and the features of the multi-fractal spectrum. According to quantitative and qualitative evaluation of the processed images, the laser image processing technique based on fractal theory not only effectively removes the complicated noise of laser images obtained by a range-gated laser active imaging system, but also maintains the detail information during denoising. For different laser images, the multi-fractal denoising technique can increase the SNR of the laser image by at least 1-2 dB compared with other denoising techniques, which basically meets the needs of laser image denoising.

  2. Compressive sensing in medical imaging. (United States)

    Graff, Christian G; Sidky, Emil Y


    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  3. Image quality (IQ) guided multispectral image compression (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik


    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image to the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
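Two of the IQ metrics named above (RMSE and PSNR) are easy to compute directly; SSIM is more involved and omitted here. A minimal sketch, with hypothetical helper names and a toy "decompressed" image:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher means better fidelity)."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

original = np.full((16, 16), 100, dtype=np.uint8)
decompressed = original.copy()
decompressed[:, ::2] += 2            # every other column off by 2 -> MSE = 2
print(round(rmse(original, decompressed), 3))   # sqrt(2) ≈ 1.414
```

A lookup table of such metric values versus compression parameters is exactly what the paper's regression models are fitted to.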

  4. Static Image Compression Based on Fractal Theory

    Institute of Scientific and Technical Information of China (English)



  5. Image compression in local helioseismology

    CERN Document Server

    Löptien, Björn; Gizon, Laurent; Schou, Jesper


    Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...

  6. Study on Fractal Characteristics of Cracks and Pore Structure of Concrete based on Digital Image Technology


    Xianyu Jin; Bei Li; Ye Tian; Nanguo Jin; An Duan


    Based on fractal theory, this study presents a numerical analysis of the fractal characteristics of cracks and the pore structure of concrete with the help of digital image technology. The results show that concrete cracks and the micro-pore distribution of concrete exhibit fractal characteristics, with fractal dimensions ranging from 1 to 2. The fractal characteristics of pores in cracked and un-cracked concrete are similar, and the former fractal dimension of the micro pore structure ...

  7. Image Compression using GSOM Algorithm

    Directory of Open Access Journals (Sweden)



    Full Text Available … image compression. Conventional techniques such as Huffman coding, the Shannon-Fano method, the LZ method, run-length coding and LZ-77 are established methods for data compression. A traditional approach to reducing the large amount of data would be to discard some data redundancy and introduce some noise after reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique that may be a reliable and efficient way to achieve vector quantization; a typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes which obtain a better compression rate than classical methods such as JPEG without reducing image quality. Experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.
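The vector-quantization idea at the core of this entry can be sketched without the full GSOM machinery. The code below is a deliberate simplification: it learns a codebook with plain k-means (not a growing self-organizing map), and all names and sizes are illustrative.

```python
import numpy as np

def train_codebook(blocks, k=4, iters=20, seed=0):
    """Learn a small codebook for image blocks with k-means, a simplified
    stand-in for the growing self-organizing map described above."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each block to its nearest codeword
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = blocks[labels == j].mean(axis=0)
    return codebook, labels

# 2x2 blocks drawn from two flat gray levels; VQ stores one index per block.
blocks = np.vstack([np.full((10, 4), 30.0), np.full((10, 4), 200.0)])
codebook, labels = train_codebook(blocks, k=2)
reconstructed = codebook[labels]      # decoding is just a table lookup
```

Compression comes from transmitting the short codeword indices (plus the codebook) instead of the raw blocks.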

  8. Compressive Sensing for Quantum Imaging (United States)

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. 
The technique gives a theoretical speedup of N²/log N for N-dimensional entanglement over the standard raster scanning technique.

  9. Fractal-based image texture analysis of trabecular bone architecture. (United States)

    Jiang, C; Pitt, R E; Bertram, J E; Aneshansley, D J


    Fractal-based image analysis methods are investigated to extract textural features related to the anisotropic structure of trabecular bone from X-ray images of cubic bone specimens. Three methods are used to quantify image textural features: power spectrum, Minkowski dimension and mean intercept length. The global fractal dimension is used to describe the overall roughness of the image texture. The anisotropic features formed by the trabeculae are characterised by a fabric ellipse, whose orientation and eccentricity reflect the textural anisotropy of the image. Tests of these methods with synthetic images of known fractal dimension show that the Minkowski dimension provides a more accurate and consistent estimate of global fractal dimension. Tests on bone X-ray images (eccentricity range 0.25-0.80) indicate that the Minkowski dimension is more sensitive to changes in textural orientation. The results suggest that the Minkowski dimension is a better measure for characterising trabecular bone anisotropy in X-ray images of thick specimens.

  10. Compressive passive millimeter wave imager (United States)

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C


    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
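The Hadamard-mask scheme in this patent entry can be sketched in a few lines: each measurement correlates the scene with one Hadamard pattern, and with the full pattern set the scene is recovered exactly by the inverse transform. The subset/least-squares step below is illustrative only; a real compressive reconstruction would add a sparsity prior.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16                                  # a 16-pixel "scene", flattened
H = hadamard(n)
scene = np.arange(n, dtype=float)

measurements = H @ scene                # one value per Hadamard mask pattern
recovered = (H.T @ measurements) / n    # H is orthogonal up to a factor of n
assert np.allclose(recovered, scene)

# Compressive variant (illustrative): keep a subset of mask patterns and
# solve a least-squares problem for an approximate image.
rows = np.arange(0, n, 2)               # sample half of the patterns
partial, _, _, _ = np.linalg.lstsq(H[rows], measurements[rows], rcond=None)
```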

  11. A New Approach in Cryptographic Systems Using Fractal Image Coding

    Directory of Open Access Journals (Sweden)

    Nadia M.G. Al-Saidi


    Full Text Available Problem statement: With the rapid development of communications and information transmission there is a growing demand for new approaches that increase the security of cryptographic systems. Approach: Emerging theories, such as fractals, can therefore be adopted to provide a contribution toward this goal. In this study we proposed a new cryptographic system utilizing fractal theories; this approach exploits the main feature of fractals generated by IFS techniques. Results: Double enciphering and double deciphering methods were performed to enhance the security of the system. The encrypted data represent the attractor generated by the IFS transformation; the collage theorem was used to find the IFSM for decrypting data. Conclusion/Recommendations: The proposed method makes it possible to hide the maximum amount of data in an image representing the attractor of the IFS without degrading its quality, and to make the hidden data robust enough to withstand known cryptographic attacks and image processing techniques that do not change the appearance of the image.

  12. Liver ultrasound image classification by using fractal dimension of edge (United States)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita


    Medical ultrasound image edge detection is an important component of many segmentation applications and hence has been the subject of many studies in the literature. In this study, we classified liver ultrasound (US) images by combining Canny and Sobel edge detectors with fractal analysis in order to provide an indicator of US image roughness. We intend to provide a rule for classifying focal liver lesions as cirrhotic liver, liver hemangioma or healthy liver. For edge detection, the Canny and Sobel operators were used. Fractal analysis was applied for texture analysis and classification of focal liver lesions according to the fractal dimension (FD) determined using the box-counting method. To assess the performance and accuracy rate of the proposed method, the contrast-to-noise ratio (CNR) is analyzed.

  13. Morphological Transform for Image Compression

    Directory of Open Access Journals (Sweden)

    Luis Pastor Sanchez Fernandez


    Full Text Available A new method for image compression based on morphological associative memories (MAMs) is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using some transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is processing speed, whereas the compression rate and the signal-to-noise ratio are competitive with those of conventional transforms.

  14. Fractal image perception provides novel insights into hierarchical cognition. (United States)

    Martins, M J; Fischmeister, F P; Puig-Waldmüller, E; Oh, J; Geissler, A; Robinson, S; Fitch, W T; Beisteiner, R


    Hierarchical structures play a central role in many aspects of human cognition, prominently including both language and music. In this study we addressed hierarchy in the visual domain, using a novel paradigm based on fractal images. Fractals are self-similar patterns generated by repeating the same simple rule at multiple hierarchical levels. Our hypothesis was that the brain uses different resources for processing hierarchies depending on whether it applies a "fractal" or a "non-fractal" cognitive strategy. We analyzed the neural circuits activated by these complex hierarchical patterns in an event-related fMRI study of 40 healthy subjects. Brain activation was compared across three different tasks: a similarity task, and two hierarchical tasks in which subjects were asked to recognize the repetition of a rule operating transformations either within an existing hierarchical level, or generating new hierarchical levels. Similar hierarchical images were generated by both rules and target images were identical. We found that when processing visual hierarchies, engagement in both hierarchical tasks activated the visual dorsal stream (occipito-parietal cortex, intraparietal sulcus and dorsolateral prefrontal cortex). In addition, the level-generating task specifically activated circuits related to the integration of spatial and categorical information, and with the integration of items in contexts (posterior cingulate cortex, retrosplenial cortex, and medial, ventral and anterior regions of temporal cortex). These findings provide interesting new clues about the cognitive mechanisms involved in the generation of new hierarchical levels as required for fractals.

  15. Fractal Image Filters for Specialized Image Recognition Tasks (United States)


    In The Fractal Geometry of Nature [24], Mandelbrot argues that random fractals provide geometrical models for naturally occurring shapes and forms…

  16. Chaos-based encryption for fractal image coding

    Institute of Scientific and Technical Information of China (English)

    Yuen Ching-Hung; Wong Kwok-Wo


    A chaos-based cryptosystem for fractal image coding is proposed. The Rényi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Rényi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.

  17. Image Encryption using chaos functions and fractal key

    Directory of Open Access Journals (Sweden)

    Houman Kashanian


    Full Text Available Many images in recent years are transmitted via the internet and stored on it, and maintaining the confidentiality of these data has become a major issue; encryption algorithms, which permit only authorized users to access data, are a proper solution to this problem. This paper presents a novel scheme for image encryption. First, a two-dimensional logistic mapping is applied to permute the relations between image pixels. We used a fractal image as the encryption key. Given chaotic mapping properties such as extreme sensitivity to initial values, random behavior, non-periodicity and determinism, we used these mappings to select the fractal key for encryption. Experimental results show that the proposed image encryption algorithm has many desirable features: owing to its large key space, low correlation between pixels of the encrypted image, high sensitivity to the key and high security, it can effectively protect the security of the encrypted image.
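The chaotic-permutation idea can be sketched with a 1-D logistic map (the paper uses a 2-D mapping plus a fractal key, both omitted here; x0, r and all helper names are illustrative): the map's trajectory drives both a pixel permutation and an XOR mask, and the receiver regenerates both from the same key.

```python
import numpy as np

def logistic_keystream(length, x0=0.613, r=3.99):
    """Chaotic keystream from the logistic map x -> r*x*(1-x).
    x0 and r act as the secret key; the values here are illustrative."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(image, x0=0.613, r=3.99):
    flat = image.ravel()
    ks = logistic_keystream(flat.size, x0, r)
    perm = np.argsort(ks)                     # chaotic pixel permutation
    mask = (ks * 256).astype(np.uint8)        # chaotic XOR mask
    return (flat[perm] ^ mask).reshape(image.shape)

def decrypt(cipher, x0=0.613, r=3.99):
    flat = cipher.ravel()
    ks = logistic_keystream(flat.size, x0, r)  # same key -> same keystream
    perm = np.argsort(ks)
    mask = (ks * 256).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ mask                 # undo XOR, then the permutation
    return plain.reshape(cipher.shape)

image = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(image)
assert np.array_equal(decrypt(cipher), image)
```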

  18. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣


    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed coarsely in macroscopic regions and then thoroughly analyzed in microscopic regions. With this method, an image is divided into regions according to the different fractal characters of the image edges, and the fuzzy regions containing image edges are detected; image edges are then identified with the Sobel operator and fitted by the least-squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments verify that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  19. The Generating of Fractal Images using MathCAD Program

    Directory of Open Access Journals (Sweden)

    Laura Stefan


    Full Text Available This paper presents the graphic representation in the z-plane of the first three iterations of the algorithm that generates the Sierpinski gasket. It analyses the influence of the f(z) map on the represented fractal images.
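The paper does this in MathCAD; a rough Python equivalent generates the same attractor via the chaos game (repeated midpoint jumps toward random triangle vertices), which is an alternative to the explicit iteration scheme the paper plots. Names and point counts below are illustrative.

```python
import numpy as np

def sierpinski_points(n=5000, seed=1):
    """Chaos-game rendering of the Sierpinski gasket: repeatedly jump
    halfway toward a randomly chosen vertex of a triangle."""
    rng = np.random.default_rng(seed)
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    p = np.array([0.25, 0.25])            # any starting point inside works
    pts = np.empty((n, 2))
    for i in range(n):
        p = (p + vertices[rng.integers(3)]) / 2.0
        pts[i] = p
    return pts

pts = sierpinski_points()
# Every generated point stays inside the triangle's bounding box.
assert pts[:, 0].min() >= 0.0 and pts[:, 0].max() <= 1.0
```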

  20. Fractal Loop Heat Pipe Performance Comparisons of a Soda Lime Glass and Compressed Carbon Foam Wick (United States)

    Myre, David; Silk, Eric A.


    This study compares the heat flux performance of a Loop Heat Pipe (LHP) wick structure fabricated from compressed carbon foam with that of a wick structure fabricated from sintered soda lime glass. Each wick was used in an LHP containing a fractal-based evaporator. The Fractal Loop Heat Pipe (FLHP) was designed and manufactured by Mikros Manufacturing Inc. The compressed carbon foam wick structure was manufactured by ERG Aerospace Inc., and machined to specifications comparable to those of the initial soda lime glass wick structure. Machining of the compressed foam as well as performance testing was conducted at the United States Naval Academy. Performance testing with the sintered soda lime glass wick structures was conducted at NASA Goddard Space Flight Center. Heat input for both wick structures was supplied via cartridge heaters mounted in a copper block. The copper heater block was placed in contact with the FLHP evaporator, which had a circular cross-sectional area of 0.88 cm². Twice-distilled, deionized water was used as the working fluid in both sets of experiments. Thermal performance data were obtained for three different condenser/subcooler temperatures under degassed conditions. Both wicks demonstrated comparable heat flux performance, with a maximum of 75 W/cm² observed for the soda lime glass wick and 70 W/cm² for the compressed carbon foam wick.

  1. Brain image Compression, a brief survey

    Directory of Open Access Journals (Sweden)

    Saleha Masood


    Full Text Available Brain image compression is a subfield of image compression that enables deep analysis and measurement of brain images in different modes. Brain images are compressed so that they can be analyzed and diagnosed effectively while reducing image storage space. This survey describes the different existing techniques for brain image compression, which fall under several categories, and discusses each category.

  2. Fractal Dimension-Based Damage Imaging for Composites

    Directory of Open Access Journals (Sweden)

    Li Zhou


    Full Text Available In this paper, a damage imaging algorithm based on fractal dimension is developed for quantitative damage detection of composite structures. The box-counting dimension, a typical fractal dimension, is employed to analyze differences between Lamb wave signals, extract damage features and define a damage index. An enhanced reconstruction algorithm for probabilistic inspection of damage is developed for damage imaging. Experimental investigation on a composite laminate and a stiffened composite panel shows that the developed algorithm can quantitatively predict the location and size of single as well as multiple damage sites. The influence of the algorithm's parameters on imaging quality and accuracy is studied, and reference values for the parameters are presented.

  3. Comparing image compression methods in biomedical applications

    Directory of Open Access Journals (Sweden)

    Libor Hargas


    Full Text Available Compression methods suitable for image processing in biomedical applications are described in this article. Compression is often realized by reducing irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared on the basis of fidelity criteria.

  4. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li


    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
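The quantization step this entry describes can be illustrated with the basic uniform quantizer (the paper's universal quantization is more elaborate; step size and helper names below are illustrative): each CS measurement is mapped to a cell index, and round-to-nearest bounds the reconstruction error by half a step.

```python
import numpy as np

def quantize(y, step):
    """Uniform quantizer: map each measurement to the index of its cell;
    no prior knowledge of the captured image is required."""
    return np.round(y / step).astype(int)

def dequantize(idx, step):
    return idx * step

measurements = np.array([0.12, -3.4, 7.81, 2.0])   # toy CS measurements
step = 0.5
restored = dequantize(quantize(measurements, step), step)
# Round-to-nearest keeps the error within half a quantization step.
assert np.all(np.abs(restored - measurements) <= step / 2 + 1e-12)
```

The integer indices are what would actually be entropy-coded and transmitted.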

  5. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems. (United States)

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  6. Image Quality Meter Using Compression

    Directory of Open Access Journals (Sweden)

    Muhammad Ibrar-Ul-Haque


    Full Text Available This paper proposes a new technique to measure compressed-image blockiness/blurriness in the frequency domain through an edge detection method based on the Fourier transform. In image processing, boundaries are characterized by edges, so edge detection is a problem of fundamental importance: the edges have to be identified and computed thoroughly in order to retrieve the complete illustration of the image. Our novel edge detection scheme for blockiness and blurriness shows an improvement of 60 and 100 blocks for high-frequency components, respectively, over other detection techniques.

  7. BPCS steganography using EZW lossy compressed images


    Spaulding, Jeremiah; Noda, Hideki; Shirazi, Mahdad N.; Kawaguchi, Eiji


    This paper presents a steganography method based on an embedded zerotree wavelet (EZW) compression scheme and bit-plane complexity segmentation (BPCS) steganography. The proposed steganography enables us to use lossy compressed images as dummy files in bit-plane-based steganographic algorithms. Large embedding rates of around 25% of the compressed image size were achieved with little noticeable degradation in image quality.
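The bit-plane embedding at the heart of BPCS can be sketched in its simplest form (a deliberate simplification: real BPCS embeds only in visually complex blocks of several bit-planes, and the EZW coupling is omitted; all helper names are illustrative):

```python
import numpy as np

def embed_in_lsb_plane(cover, payload_bits):
    """Simplified bit-plane embedding: overwrite the least-significant
    bit-plane with payload bits. BPCS additionally restricts embedding
    to visually complex blocks, which is omitted here."""
    flat = cover.ravel().copy()
    flat[:len(payload_bits)] = (flat[:len(payload_bits)] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def extract_from_lsb_plane(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)   # dummy cover image
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_in_lsb_plane(cover, bits)
assert np.array_equal(extract_from_lsb_plane(stego, 8), bits)
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1
```

Each pixel changes by at most one gray level, which is why such embedding causes little visible degradation.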

  8. Beyond maximum entropy: Fractal Pixon-based image reconstruction (United States)

    Puetter, Richard C.; Pina, R. K.


    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  9. Fractal coding of wavelet image based on human vision contrast-masking effect (United States)

    Wei, Hai; Shen, Lansun


In this paper, a fractal-based compression approach for wavelet images is presented. The scheme tries to make full use of the sensitivity features of the human visual system. With the wavelet-based multi-resolution representation of the image, detail vectors of each high-frequency sub-image are constructed according to its spatial orientation in order to capture the edge information to which a human observer is sensitive. Then a multi-level selection algorithm based on the contrast-masking effect of human vision is proposed to decide whether a detail vector is coded or not. Vectors below the contrast threshold are discarded without introducing visual artifacts, since human vision ignores them. As for the redundancy of the retained vectors, different fractal-based methods are employed to decrease the correlation within a single sub-image and between sub-images of different resolutions with the same orientation. Experimental results suggest the efficiency of the proposed scheme: on the standard test image, our approach outperforms the EZW algorithm and the JPEG method.
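The selection step can be illustrated with a single-level Haar decomposition and a fixed visibility threshold; both are simplifications of the paper's multi-resolution decomposition and contrast-masking model, and the threshold value here is arbitrary:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns the approximation and the
    three detail sub-images (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def mask_details(sub, threshold):
    """Discard detail coefficients below a visibility threshold -- a crude
    stand-in for the paper's contrast-masking decision."""
    out = sub.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(16, 16))
ll, lh, hl, hh = haar2d(img)
kept = sum(np.count_nonzero(mask_details(s, 10.0)) for s in (lh, hl, hh))
total = lh.size + hl.size + hh.size
print(f"{kept}/{total} detail coefficients survive the threshold")
```

In the actual scheme the threshold would vary per sub-band and per local contrast rather than being one global constant.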

  10. A Novel Semi-blind Watermarking Algorithm Based on Fractal Dimension and Image Feature

    Institute of Scientific and Technical Information of China (English)

    NIRongrong; RUANQiuqi


This paper presents a novel semi-blind watermarking algorithm based on fractal dimension and image features. An original image is divided into blocks of fixed size. Following the idea of second-generation watermarking [1], the image is analyzed using fractal dimension to obtain feature blocks containing edges and textures, which are used in the later embedding process and to form a feature label. The watermark, a fusion of the feature label and a binary copyright symbol, not only represents the copyright symbol but also reflects the features of the image. The Arnold iteration transform is employed to increase the security of the watermark. Then the DCT (Discrete Cosine Transform) is applied to the feature blocks. The secure watermark, adaptive to the individual image, is embedded into the relations between middle-frequency coefficients and the corresponding DC coefficients. The detection and extraction procedure is semi-blind: it does not use the original image, only the watermark. Only those who have the original watermark and the key can detect and extract the right watermark, which makes the approach authentic and gives it a high security level. Experimental results show that this algorithm achieves good perceptual invisibility, adaptability, and security, and is robust against cropping, scribbling, low- or high-pass filtering, noise addition, and JPEG compression.
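The Arnold iteration transform used to scramble the watermark can be sketched directly; the (x, y) → (x + y, x + 2y) mod N form is the standard cat map, and because the map is periodic, descrambling is simply further iteration (for a 4x4 image the period is 3):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold's cat map on a square image: (x, y) -> (x + y, x + 2y) mod N.
    Repeated application scrambles the watermark; the map is periodic,
    so the inverse is just more iterations."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out

wm = np.arange(16).reshape(4, 4)
print(arnold(wm, 1))        # scrambled watermark
print(arnold(wm, 3))        # back to the original (period 3 for N = 4)
```

The number of iterations acts as part of the key: without it, the scrambled watermark cannot be restored.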

  11. Image Compression Using Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Ryan Rey M. Daga


Full Text Available Image compression techniques are important and useful for data storage and image transmission over the Internet. These techniques eliminate redundant information in an image, which minimizes its physical storage requirement. Numerous types of image compression algorithms have been developed, but the resulting images are still less than optimal. The harmony search algorithm (HSA), a meta-heuristic optimization algorithm inspired by the music improvisation process of musicians, was applied as the underlying algorithm for image compression. Experimental results show that it is feasible to use the harmony search algorithm for image compression. The HSA-based image compression technique was able to compress colored and grayscale images with minimal visual information loss.
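A minimal harmony search loop, shown here minimizing a toy objective as a stand-in for whatever rate-distortion cost the paper optimizes; the parameter values (memory size, HMCR, PAR, bandwidth) are illustrative defaults, not the authors':

```python
import numpy as np

def harmony_search(objective, dim, bounds, memory_size=10, hmcr=0.9,
                   par=0.3, bandwidth=0.1, iterations=2000, seed=0):
    """Minimal harmony search: each new solution ('harmony') mixes values
    drawn from the harmony memory with random pitches, occasionally
    adjusted by a small bandwidth, and replaces the worst member if better."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    memory = rng.uniform(lo, hi, size=(memory_size, dim))
    scores = np.array([objective(h) for h in memory])
    for _ in range(iterations):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                      # pick from memory
                new[d] = memory[rng.integers(memory_size), d]
                if rng.random() < par:                   # pitch adjustment
                    new[d] += rng.uniform(-bandwidth, bandwidth)
            else:                                        # random pitch
                new[d] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        score = objective(new)
        worst = np.argmax(scores)
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = np.argmin(scores)
    return memory[best], scores[best]

# Toy use: minimise a sphere function standing in for a compression cost.
best, cost = harmony_search(lambda x: float(np.sum(x ** 2)), dim=3,
                            bounds=(-5.0, 5.0))
print(best, cost)
```

For image compression the decision variables would encode quantization or block parameters and the objective would trade off file size against distortion.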

  12. Image Compression Using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohammad Mozammel Hoque Chowdhury


Full Text Available Image compression is a key technology in the transmission and storage of digital images because of the vast data associated with them. This research suggests a new image compression scheme with a pruning proposal based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been justified over some real images, and its performance has been compared with other common compression standards. The algorithm has been implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared with other compression techniques.
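The transform-prune-reconstruct pipeline can be sketched with a single-level orthonormal Haar transform and a hard threshold; this is a simplification of the paper's DWT scheme, and the threshold value is arbitrary:

```python
import numpy as np

def haar_forward(x):
    """Single-level orthonormal 1-D Haar transform on the last axis."""
    s = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return np.concatenate([s, d], axis=-1)

def haar_inverse(c):
    """Exact inverse of haar_forward."""
    n = c.shape[-1] // 2
    s, d = c[..., :n], c[..., n:]
    x = np.empty_like(c)
    x[..., 0::2] = (s + d) / np.sqrt(2)
    x[..., 1::2] = (s - d) / np.sqrt(2)
    return x

def psnr(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(8, 64))                  # stand-in image rows
coeffs = haar_forward(img)
pruned = np.where(np.abs(coeffs) < 20.0, 0.0, coeffs)    # prune small details
recon = haar_inverse(pruned)
print(round(psnr(img, recon), 1), "dB")
```

Pruned (zeroed) coefficients are what an entropy coder then stores cheaply; the PSNR quantifies the fidelity cost of the pruning.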

  13. Digital image compression in dermatology: format comparison. (United States)

    Guarneri, F; Vaccaro, M; Guarneri, C


Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Experts Group (JPEG), and JPEG2000, on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors, and using TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), and JPEG2000 has both a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and of encoding various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  14. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive...

  15. Detection of Glaucomatous Eye via Color Fundus Images Using Fractal Dimensions

    Directory of Open Access Journals (Sweden)

    J. Jan


    Full Text Available This paper describes a method for glaucomatous eye detection based on fractal description, followed by classification. Two methods for fractal dimensions estimation, which give a different image/tissue description, are presented. The fundus color images are used, in which the areas with retinal nerve fibers are analyzed. The presented method shows that fractal dimensions can be used as features for retinal nerve fibers losses detection, which is a sign of glaucomatous eye.

  16. Simultaneous denoising and compression of multispectral images (United States)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.


    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  17. Region-Based Image-Fusion Framework for Compressive Imaging

    Directory of Open Access Journals (Sweden)

    Yang Chen


Full Text Available A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the image fusion. Firstly, compressed sensing theory and normalized cut theory are introduced. Then the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.

  18. Fractal dimension metric for quantifying noise texture of computed tomography images (United States)

    Khobragade, P.; Fan, Jiahua; Rupcich, Franco; Crotty, Dominic J.; Gilat Schmidt, Taly


This study investigated a fractal dimension algorithm for noise texture quantification in CT images. Quantifying noise in CT images is important for assessing image quality. Noise is typically quantified by calculating the noise standard deviation and the noise power spectrum (NPS). Different reconstruction kernels and iterative reconstruction approaches affect both the noise magnitude and the noise texture. The shape of the NPS can be used as a noise texture descriptor. However, the NPS requires numerous images for calculation and is a vector quantity. This study proposes the fractal dimension metric to quantify noise texture, because fractal dimension is a single scalar metric calculated from a small number of images. Fractal dimension measures the complexity of a pattern. In this study, the ACR CT phantom was scanned and images were reconstructed using filtered back-projection with three reconstruction kernels: bone, soft, and standard. Regions of interest were extracted from the uniform section of the phantom for NPS and fractal dimension calculation. The results demonstrated a mean fractal dimension of 1.86 for the soft kernel, 1.92 for the standard kernel, and 2.16 for the bone kernel. Increasing fractal dimension corresponded to a shift of the NPS towards higher spatial frequencies and a grainier noise appearance. A stable fractal dimension was calculated from two ROIs, compared with the more than 250 ROIs used for NPS calculation. The scalar fractal dimension metric may be a useful noise texture descriptor for evaluating or optimizing reconstruction algorithms.
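The box-counting estimate of fractal dimension that underlies this kind of texture metric can be sketched as follows on a binary pattern; the choice of box sizes is illustrative:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary pattern by box counting:
    fit log(count) against log(1/size); the slope is the dimension."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # count boxes of side s containing at least one foreground pixel
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is ~2-D, a straight line is ~1-D.
print(round(box_counting_dimension(np.ones((64, 64), dtype=bool)), 2))
print(round(box_counting_dimension(np.eye(64, dtype=bool)), 2))
```

For grainy CT noise the input would be a thresholded noise ROI; a coarser, blobbier texture fills boxes differently than fine grain, which is what the scalar dimension captures.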

  19. Studies on image compression and image reconstruction (United States)

    Sayood, Khalid; Nori, Sekhar; Araj, A.


During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Master's thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan-predictive vector quantizer. The scan-predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have the common property that they use past data to encode future data, either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number requested! A paper describing these results is included.

  20. MR imaging and osteoporosis: fractal lacunarity analysis of trabecular bone. (United States)

    Zaia, Annamaria; Eleonori, Roberta; Maponi, Pierluigi; Rossi, Roberto; Murri, Roberto


    We develop a method of magnetic resonance (MR) image analysis able to provide parameter(s) sensitive to bone microarchitecture changes in aging, and to osteoporosis onset and progression. The method has been built taking into account fractal properties of many anatomic and physiologic structures. Fractal lacunarity analysis has been used to determine relevant parameter(s) to differentiate among three types of trabecular bone structure (healthy young, healthy perimenopausal, and osteoporotic patients) from lumbar vertebra MR images. In particular, we propose to approximate the lacunarity function by a hyperbola model function that depends on three coefficients, alpha, beta, and gamma, and to compute these coefficients as the solution of a least squares problem. This triplet of coefficients provides a model function that better represents the variation of mass density of pixels in the image considered. Clinical application of this preliminary version of our method suggests that one of the three coefficients, beta, may represent a standard for the evaluation of trabecular bone architecture and a potentially useful parametric index for the early diagnosis of osteoporosis.
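The least-squares fit of the lacunarity function can be sketched under the assumption that the hyperbola model has the form L(x) ≈ α + β/x^γ, which makes the problem linear in (α, β) for each fixed γ; the paper's exact model function may differ:

```python
import numpy as np

def fit_hyperbola(x, y, gammas=np.linspace(0.1, 3.0, 30)):
    """Fit y ~= alpha + beta / x**gamma by least squares: for each candidate
    gamma the problem is linear in (alpha, beta), so solve it with lstsq
    and keep the gamma with the smallest residual."""
    best = None
    for g in gammas:
        A = np.column_stack([np.ones_like(x), x ** (-g)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((A @ coef - y) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], g)
    _, alpha, beta, gamma = best
    return alpha, beta, gamma

# Synthetic lacunarity curve with known coefficients.
x = np.arange(1.0, 20.0)
y = 1.5 + 4.0 / x ** 1.0
alpha, beta, gamma = fit_hyperbola(x, y)
print(round(alpha, 2), round(beta, 2), round(gamma, 2))
```

In the clinical setting x would be the box size of the lacunarity analysis and the fitted β would be the candidate index for trabecular bone quality.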

  1. Development of Wavelet Image Compression Technique to Particle Image Velocimetry

    Institute of Scientific and Technical Information of China (English)



In order to reduce the noise in the images and the physical storage, the wavelet-based image compression technique was applied to PIV processing in this paper. To study the effect of the wavelet bases, the standard PIV images were compressed with several known wavelet families (Daubechies, Coifman, and Beylkin) at various compression ratios. It was found that a higher-order wavelet base provides good compression performance for PIV images. The error analysis of the velocity field obtained indicates that high compression ratios, even up to 64:1, can be realized without losing significant flow information in PIV processing. The wavelet compression technique was applied to experimental images of jet flow and showed excellent performance; the number of erroneous vectors can be reduced by varying the compression ratio. It can be said that the wavelet image compression technique is very effective in PIV systems.

  2. Still image and video compression with MATLAB

    CERN Document Server

    Thyagarajan, K


    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  3. Improved Fractal Method for Singularity Detection in Fingerprint Images

    Institute of Scientific and Technical Information of China (English)


A new technique that uses Discrete Fractal Brownian Motion to describe a fingerprint is presented. By computing certain fractal parameters, a fingerprint's core and delta fields can be roughly detected. Experimental results demonstrate this method to be not only more efficient than the single-fractal-dimension method, but also more noise-resistant than traditional schemes.

  4. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh


    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.

  5. An efficient medical image compression scheme. (United States)

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen


In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.
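The two stages can be illustrated with horizontal DPCM and a first-order entropy estimate (a lower bound on what a Huffman coder can achieve); this is a sketch of the general technique, not the authors' codec:

```python
import numpy as np

def dpcm_residual(img):
    """Horizontal DPCM: predict each pixel by its left neighbour and keep
    the prediction error, which is far more compressible than raw pixels."""
    res = img.astype(np.int16)
    res[:, 1:] -= img[:, :-1].astype(np.int16)
    return res

def entropy(values):
    """First-order entropy in bits/symbol -- a lower bound for Huffman."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth ramp image: raw entropy is high, residual entropy collapses.
img = np.tile(np.arange(256, dtype=np.uint8), (16, 1))
print(entropy(img), entropy(dpcm_residual(img)))
```

The residual alphabet is small and sharply peaked around zero, which is exactly what keeps the Huffman table (and the coded stream) short.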

  6. Edge Polynomial Fractal Compression Algorithm for High Quality Video Transmission. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Freddie


    In this final report, Physical Optics Corporation (POC) provides a review of its Edge Polynomial Autonomous Compression (EPAC) technology. This project was undertaken to meet the need for low bandwidth transmission of full-motion video images. In addition, this report offers a synopsis of the logical data representation study that was performed to compress still images and video. The mapping singularities and polynomial representation of 3-D surfaces were found to be ideal for very high image compression. Our efforts were then directed to extending the EPAC algorithm for the motion of singularities by tracking the 3-D coordinates of characteristic points and the development of system components. Finally, we describe the integration of the software with the hardware components. This process consists of acquiring and processing each separate camera view, combining the information from different cameras to calculate the location of an object in three dimensions, and tracking the information history and the behavior of the objects.

  7. Image Processing by Compression: An Overview



International audience; This article aims to present the various applications of data compression in image processing. For some time now, several research groups have been developing methods based on different data compression techniques to classify, segment, filter, and detect digital image fakery. It is necessary to analyze the relationships between the different methods and put them into a framework to better understand and better exploit the possibilities that compression provides us respect...

  8. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.


    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  9. Fractal Dimension of Urban Expansion Based on Remote Sensing Images

    Directory of Open Access Journals (Sweden)



    Full Text Available Fractal Dimension of Urban Expansion Based on Remote Sensing Images: In the city of Cluj-Napoca the process of urbanization has accelerated over the years, and the involvement of local authorities reflects a relevant planning policy. A good urban planning framework should take into account the demands of society and should also satisfy the natural conditions of the local environment. The expansion of anthropic areas can be approached by introducing 5D variables into the process (time as a sequence of stages; space as x, y, z; and the magnitude of the phenomenon), which allows us to analyse and extract the roughness of the city shape. Thus, to improve the decision factor, we take a different approach in this paper, looking at geometry and scale composition. Using remote sensing (RS) and GIS techniques we extracted a sequence of built-up areas (from 1980 to 2012) and used the result as input for modelling the spatio-temporal changes of urban expansion, with fractal theory used to analyse the geometric features. Taking time as a parameter, we can observe behaviour and changes in the urban landscape; this condition is known as self-organized: in the first stage the system was without any turbulence (before the anthropic factor), and over time it tends to approach chaotic behaviour (an entropy state) without causing a disequilibrium in the main system.

  10. Simple and Modular Architecture for Fractal Image Compression using Quad-Tree Multi-Resolution


    Alejandro Martínez; Alejandro Díaz; Mónico Linares; Javier Vega


    This paper presents a simple and fast architecture for fractal image compression based on a multi-resolution fractal image compression method, using pyramidal quad-tree partitioning and a block classification scheme according to block size and contrast. The use of expanded range blocks, non-contracted domain blocks, and checkerboard-style block decimation compensates for losses in image quality and makes it possible to obtain the paramet...

  11. Semantic Source Coding for Flexible Lossy Image Compression

    National Research Council Canada - National Science Library

    Phoha, Shashi; Schmiedekamp, Mendel


    Semantic Source Coding for Lossy Video Compression investigates methods for mission-oriented lossy image compression by developing methods to use different compression levels for different portions...

  12. Image compression algorithm using wavelet transform (United States)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory


    Within the framework of multi-resolution analysis, the image compression algorithm using the Haar wavelet has been studied. We have studied the dependence of the image quality on the compression ratio, and the variation of the compression level of the studied images has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of images. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.

  13. Digital Image Compression Using Artificial Neural Networks (United States)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.


    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, a comparison is performed between the compression results obtained from digital astronomical images by the NNCTC and by the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, which is based on the H-transform.

  14. Review Article: An Overview of Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    M. Marimuthu


    Full Text Available To store an image, large quantities of digital data are required. Due to limited bandwidth, an image must be compressed before transmission. However, image compression reduces image fidelity when an image is compressed at low bit rates, and the compressed images suffer from block artifacts. To address this, several compression schemes have been developed in image processing. This study presents an overview of compression techniques for image applications. It covers the lossy and lossless compression algorithms used for still images and other applications. The focus of this article is an overview of VLSI DCT architectures for image compression. Further, this new approach may provide better results.

  15. Review on Lossless Image Compression Techniques for Welding Radiographic Images

    Directory of Open Access Journals (Sweden)

    B. Karthikeyan


    Full Text Available Recent developments in image processing allow us to apply it in different domains. The radiography image of a weld joint is one area where image processing techniques can be applied: they can be used to identify the quality of the weld joint. For this, the image has to be stored and processed later in the labs. In order to optimize the use of disk space, compression is required. The aim of this study is to find a suitable and efficient lossless compression technique for radiographic weld images. Image compression is a technique by which the amount of data required to represent information is reduced; hence image compression is effectively carried out by removing redundant data. This study compares different ways of compressing radiography images using combinations of lossless compression techniques such as RLE and Huffman coding.

  16. Fractal Movies. (United States)

    Osler, Thomas J.


    Because fractal images are by nature very complex, it can be inspiring and instructive to create the code in the classroom and watch the fractal image evolve as the user slowly changes some important parameter or zooms in and out of the image. Uses programming language that permits the user to store and retrieve a graphics image as a disk file.…

  17. Segmentation-based CT image compression (United States)

    Thammineni, Arunoday; Mukhopadhyay, Sudipta; Kamath, Vidya


    The existing image compression standards, like JPEG and JPEG 2000, compress the whole image as a single frame. This makes the system simple but inefficient. The problem is acute for applications where lossless compression is mandatory, viz. medical image compression. If the spatial characteristics of the image are considered, a more efficient coding scheme results. For example, CT-reconstructed images have a uniform background outside the field of view (FOV). Even the portion within the FOV can be divided into anatomically relevant and irrelevant parts, which have distinctly different statistics; hence coding them separately results in more efficient compression. Segmentation is done based on thresholding, and shape information is stored using an 8-connected differential chain code. Simple 1-D DPCM is used as the prediction scheme. The experiments show that the first-order entropies of images fall by more than 11% when each segment is coded separately. For simplicity and speed of decoding, Huffman coding is chosen for entropy coding. Segment-based coding has an overhead of one table per segment, but the overhead is minimal. Lossless compression based on segmentation reduced the bit rate by 7%-9% compared to lossless compression of the whole image as a single frame by the same prediction coder. The segmentation-based scheme also has the advantage of natural ROI-based progressive decoding. If the diagnostically irrelevant portions may be deleted, the bit budget can go down by as much as 40%. This concept can be extended to other modalities.
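The entropy argument for segment-wise coding can be checked on synthetic data: when segments have distinct statistics, the size-weighted per-segment entropy is lower than the whole-image entropy. The pixel values below are invented for illustration only:

```python
import numpy as np

def entropy(values):
    """First-order entropy in bits/symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic CT-like data: uniform background outside the FOV, textured
# anatomy inside. Coding the segments separately beats coding the whole.
rng = np.random.default_rng(2)
background = np.zeros(3000, dtype=np.uint8)                 # air outside FOV
anatomy = rng.integers(100, 140, size=1000).astype(np.uint8)
whole = np.concatenate([background, anatomy])

h_whole = entropy(whole)
n = whole.size
h_split = (background.size * entropy(background) +
           anatomy.size * entropy(anatomy)) / n
print(round(h_whole, 3), round(h_split, 3))
```

This is the information-theoretic reason per-segment coding wins: conditioning on the segment cannot increase entropy, and here the background segment costs essentially nothing.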

  18. Compressive Imaging via Approximate Message Passing with Image Denoising


    Tan, Jin; Ma, Yanting; Baron, Dror


    We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) framework. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reco...


    Directory of Open Access Journals (Sweden)

    K. Thamizhchelvy


    Full Text Available We propose a fractal generation method that produces different types of fractals using chaos theory. The fractals are generated with the Iterated Function System (IFS) technique. Chaos theory describes the unpredictable behavior that arises in dynamical systems; chaos in turn explains nonlinearity and randomness. Chaotic behavior depends on the initial condition, called the "seed" or "key". A Pseudo-Random Number Generator (PRNG) fixes the initial condition from the difference equations. The system uses the PRNG value to generate the fractals, and the scheme is hard to break. We apply the rules to generate the fractals. Different types of fractals are generated for the same data because of the great sensitivity to the initial condition. The result can be used as a digital signature in online applications such as e-banking and online shopping.
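IFS rendering via the chaos game can be sketched as follows; the Sierpinski-triangle maps are a textbook example, and the PRNG seed plays the role of the "key" described above (a different seed gives a different orbit over the same attractor):

```python
import numpy as np

def ifs_attractor(maps, probs, n_points=20000, seed=42):
    """Chaos-game rendering of an IFS: repeatedly apply a randomly chosen
    affine map (x, y) -> (a x + b y + e, c x + d y + f); the orbit settles
    onto the fractal attractor. The PRNG seed acts as the 'key'."""
    rng = np.random.default_rng(seed)
    pt = np.zeros(2)
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        a, b, c, d, e, f = maps[rng.choice(len(maps), p=probs)]
        pt = np.array([a * pt[0] + b * pt[1] + e,
                       c * pt[0] + d * pt[1] + f])
        pts[i] = pt
    return pts

# Sierpinski triangle: three half-scale contractions toward the corners.
sierpinski = [(0.5, 0, 0, 0.5, 0.0, 0.0),
              (0.5, 0, 0, 0.5, 0.5, 0.0),
              (0.5, 0, 0, 0.5, 0.25, 0.5)]
pts = ifs_attractor(sierpinski, [1 / 3] * 3)
print(pts.shape, pts.min(), pts.max())
```

Plotting `pts` as a scatter reveals the triangle; changing the seed or the map coefficients (e.g. to Barnsley's fern) changes the rendered fractal, which is what makes the output usable as a signature.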

  20. Lossless Compression on MRI Images Using SWT. (United States)

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G


    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.

  1. Fractal Characteristics of Rock Fracture Surface under Triaxial Compression after High Temperature

    Directory of Open Access Journals (Sweden)

    X. L. Xu


    Full Text Available Scanning electron microscopy (SEM) tests on 30 pieces of fractured granite were carried out with an S250MK III SEM after triaxial compression at different temperatures (25~1000°C) and confining pressures (0~40 MPa). The results show that (1) the change of the fractal dimension (FD) of the rock fracture with temperature is closely related to confining pressure and falls into two categories. In the first category, at confining pressures of 0~30 MPa, FD follows a cubic polynomial curve with temperature, reaching its maximum at 600°C. In the second category, at confining pressures of 30~40 MPa, FD fluctuates with temperature. (2) The variation of FD with confining pressure is likewise closely related to temperature and falls into three categories. In the first category, FD fluctuates with confining pressure at 25°C, 400°C, and 800°C. In the second category, it increases exponentially at 200°C and 1000°C. In the third category, it decreases exponentially at 600°C. (3) It is found that 600°C is the critical temperature and 30 MPa the critical confining pressure of granite: the rock undergoes a brittle-to-plastic transition when temperature exceeds 600°C and confining pressure exceeds 30 MPa.

  2. Fractal analysis of scatter imaging signatures to distinguish breast pathologies (United States)

    Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.


    Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has already been explored in the study of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed using the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples yields separated ranges of fractal dimension that could support a classifier combining the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct for surgeons.
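
The box-counting technique referenced here (and in several other records below) reduces to counting occupied boxes at several scales and taking the slope of a log-log regression. The following stdlib sketch runs on a binary point set; the box sizes and helper names are illustrative, not from the paper.

```python
import math

# Box-counting estimate of fractal dimension on a set of (x, y) points.

def box_count(points, box):
    """Number of boxes of side `box` containing at least one point."""
    return len({(int(x // box), int(y // box)) for x, y in points})

def box_dimension(points, boxes=(1, 2, 4, 8, 16)):
    """Least-squares slope of log N(box) versus log(1/box)."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(points, b)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity checks: a filled line has dimension ~1, a filled square ~2.
line = [(i, 0) for i in range(256)]
square = [(i, j) for i in range(64) for j in range(64)]
```

A map of fractal dimension over scattering-power values would apply the same counting to thresholded image regions instead of synthetic point sets.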

  3. Fractal analysis of AFM images of the surface of Bowman's membrane of the human cornea. (United States)

    Ţălu, Ştefan; Stach, Sebastian; Sueiras, Vivian; Ziebarth, Noël Marysa


    The objective of this study is to further investigate the ultrastructural details of the surface of Bowman's membrane of the human cornea using atomic force microscopy (AFM) images. One representative image acquired of Bowman's membrane of a human cornea was investigated. The three-dimensional (3-D) surface of the sample was imaged using AFM in contact mode while the sample was completely submerged in Optisol solution. Height and deflection images were acquired at multiple scan lengths using the MFP-3D AFM system software (Asylum Research, Santa Barbara, CA), based in IGOR Pro (WaveMetrics, Lake Oswego, OR). A novel approach, based on computational algorithms for fractal analysis of surfaces applied to AFM data, was utilized to analyze the surface structure. The surfaces revealed a fractal structure at the nanometer scale. The fractal dimension, D, provided quantitative values that characterize the scale properties of the surface geometry. Detailed characterization of the surface topography was obtained using statistical parameters, in accordance with ISO 25178-2:2012. Results obtained by fractal analysis confirm the relationship between the value of the fractal dimension and the statistical surface roughness parameters. The surface structure of Bowman's membrane of the human cornea is complex. The analyzed AFM images confirm a fractal nature of the surface, which is not captured by classical statistical surface parameters. The surface fractal dimension could be useful in ophthalmology to quantify corneal architectural changes associated with different disease states and further our understanding of disease evolution.

  4. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped-stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  5. Matlab Simulation of Jacquin Fractal Image Coding

    Institute of Scientific and Technical Information of China (English)

    李丹; 张梁斌; 梁世斌


    Fractal image coding, with its potentially high compression ratio and simple decoding, has been a research focus in lossy image coding over the past decade. This paper describes the mathematical basis of fractal image coding and the encoding and decoding principles of the traditional Jacquin fractal coding method, and presents an experimental simulation of Jacquin fractal image coding using the Matlab tool. The experimental results show that Jacquin fractal coding requires a long time to search for the best-matching domain block, while image decoding is simple and fast. Improving encoding speed will be the main direction of future improvements to the Jacquin fractal method.
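
The costly search step the abstract mentions can be sketched as follows: for each range block, every downsampled domain block is tried with a least-squares grey-level map s*D + o, and the best match is stored as the fractal code. Block sizes, the exhaustive search, and all names here are illustrative simplifications of Jacquin-style coding, not the paper's Matlab implementation.

```python
# Toy Jacquin-style encoder: exhaustive domain search per range block.

def blocks(img, size, step):
    h, w = len(img), len(img[0])
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield r, c, [row[c:c + size] for row in img[r:r + size]]

def shrink(block):
    """2:1 downsample a domain block by averaging 2x2 cells."""
    n = len(block) // 2
    return [[(block[2*i][2*j] + block[2*i][2*j+1] +
              block[2*i+1][2*j] + block[2*i+1][2*j+1]) / 4
             for j in range(n)] for i in range(n)]

def fit(dom, rng_):
    """Least-squares contrast s and brightness o for s*dom + o ~ rng_."""
    d = [v for row in dom for v in row]
    r = [v for row in rng_ for v in row]
    n = len(d)
    md, mr = sum(d) / n, sum(r) / n
    var = sum((v - md) ** 2 for v in d)
    s = 0.0 if var == 0 else sum((dv - md) * (rv - mr)
                                 for dv, rv in zip(d, r)) / var
    o = mr - s * md
    err = sum((s * dv + o - rv) ** 2 for dv, rv in zip(d, r))
    return s, o, err

def encode(img, rsize=4):
    """One (domain position, s, o) code per non-overlapping range block."""
    codes = []
    for rr, rc, rng_ in blocks(img, rsize, rsize):
        best = min((fit(shrink(dom), rng_) + (dr, dc)
                    for dr, dc, dom in blocks(img, 2 * rsize, rsize)),
                   key=lambda t: t[2])
        codes.append((rr, rc, best[3], best[4], best[0], best[1]))
    return codes
```

Even this toy version makes the asymmetry visible: encoding scans every domain block per range block, while decoding would only iterate the stored affine maps.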

  6. Region-Based Fractal Image Coding with Freely-Shaped Partition

    Institute of Scientific and Technical Information of China (English)

    SUN Yunda; ZHAO Yao; YUAN Baozong


    In Fractal image coding (FIC), a partitioning of the original image into ranges and domains is required, which greatly affects the coding performance. Usually, the more adaptive to the image content the partition is, the higher the performance that can be achieved. Some region-based fractal coders (RBFC) using a split-and-merge strategy achieve better adaptivity and performance than traditional rectangular block partitions; however, the region contours are still piecewise linear. In this paper, we present a Freely-shaped region-based fractal coder (FS-RBFC) using a two-step partitioning, i.e. coarse partitioning based on fractal dimension and fine partitioning based on region growing, which yields freely-shaped regions. Our highly image-adaptive scheme achieves a better rate-distortion curve than the conventional scheme, and more visually pleasing results at the same rate.

  7. Cloud Optimized Image Format and Compression (United States)

    Becker, P.; Plesea, L.; Maurer, T.


    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIFF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  8. Lossless compression of VLSI layout image data. (United States)

    Dai, Vito; Zakhor, Avideh


    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  9. L-system fractals

    CERN Document Server

    Mishra, Jibitesh


    The book covers all the fundamental aspects of generating fractals through L-systems. It also provides insight into various research efforts on generating fractals through the L-system approach and estimating their dimensions, and discusses various applications of L-system fractals. Key features: - Fractals generated from L-systems, including hybrid fractals - Dimension calculation for L-system fractals - Images and codes for L-system fractals - Research directions in the area of L-system fractals - Usage of various freely downloadable tools in this area

  10. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop, Daniel A.


    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  11. Iris Recognition: The Consequences of Image Compression (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig


    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  12. The fractal measurement of experimental images of supersonic turbulent mixing layer

    Institute of Scientific and Technical Information of China (English)

    ZHAO YuXin; YI ShiHe; TIAN LiFeng; HE Lin; CHENG ZhongYu


    Flow visualization of a supersonic mixing layer has been studied using the high spatiotemporal resolution Nano-based Planar Laser Scattering (NPLS) method in the SML-1 wind tunnel. The corresponding images distinctly reproduce the flow structure of the laminar, transitional and turbulent regions, from which fractal measurement can be implemented. Two methods of measuring fractal dimension were introduced and compared. The fractal dimensions of the transitional region and the fully developed turbulent region of the supersonic mixing layer were measured with the box-counting method. In the transitional region, the fractal dimension increases with turbulent intensity. In the fully developed turbulent region, the fractal dimension does not vary appreciably for different flow structures, which embodies the self-similarity of supersonic turbulence.

  13. The fractal measurement of experimental images of supersonic turbulent mixing layer

    Institute of Scientific and Technical Information of China (English)


    Flow visualization of a supersonic mixing layer has been studied using the high spatiotemporal resolution Nano-based Planar Laser Scattering (NPLS) method in the SML-1 wind tunnel. The corresponding images distinctly reproduce the flow structure of the laminar, transitional and turbulent regions, from which fractal measurement can be implemented. Two methods of measuring fractal dimension were introduced and compared. The fractal dimensions of the transitional region and the fully developed turbulent region of the supersonic mixing layer were measured with the box-counting method. In the transitional region, the fractal dimension increases with turbulent intensity. In the fully developed turbulent region, the fractal dimension does not vary appreciably for different flow structures, which embodies the self-similarity of supersonic turbulence.

  14. Image quality, compression and segmentation in medicine. (United States)

    Morgan, Pam; Frankish, Clive


    This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.

  15. Compression Techniques for Image Processing Tasks



    International audience; This article aims to present an overview of the different applications of data compression techniques in the image processing field. For some time, several research groups around the world have been developing various methods based on different data compression techniques to classify, segment, filter and detect digital image fakery. In this sense, it is necessary to analyze and clarify the relationship between the different methods and put them into a framework to bette...

  16. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Michallek, Florian; Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite - Universitaetsmedizin Berlin, Medical School, Department of Radiology, Berlin (Germany)


    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  17. Image Compression using Space Adaptive Lifting Scheme

    Directory of Open Access Journals (Sweden)

    Ramu Satyabama


    Full Text Available Problem statement: Digital images play an important role both in daily life applications and in areas of research and technology. Due to the increasing traffic caused by multimedia information and the digitized representation of images, image compression has become a necessity. Approach: The wavelet transform has demonstrated excellent image compression performance. New algorithms based on lifting-style implementations of wavelet transforms are presented in this study. Adaptivity is introduced into the lifting by choosing the prediction operator based on the local properties of the image. The prediction filters are chosen based on edge detection and the relative local variance: in regions where the image is locally smooth we use higher-order predictors, and near edges we reduce the order and thus the length of the predictor. Results: We applied the adaptive prediction algorithms to test images. The original image is transformed using the adaptive lifting-based wavelet transform and compressed using the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, and the performance is compared with the popular 9/7 wavelet transform. The Peak Signal to Noise Ratio (PSNR) of the reconstructed image is computed. Conclusion: The proposed adaptive algorithms outperform the popular 9/7 wavelet transform. Lifting allows us to incorporate adaptivity and nonlinear operators into the transform. The proposed methods efficiently represent edges and appear promising for image compression, reducing edge artifacts and ringing and giving improved PSNR for edge-dominated images.
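
The predict/update structure this abstract builds on can be shown with a minimal Haar lifting step; the adaptive scheme would swap in higher-order predictors in smooth regions, but the invertibility argument is identical. This is a generic lifting sketch, not the paper's filters.

```python
# Minimal one-level lifting transform with a Haar predictor.

def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def lift_inverse(approx, detail):
    """Undo the steps in reverse order; lifting is invertible
    by construction, whatever predictor is used."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

sig = [2.0, 4.0, 6.0, 6.0, 5.0, 3.0]
a, d = lift_forward(sig)
assert lift_inverse(a, d) == sig
```

Adaptivity enters by replacing the `o - e` predictor per sample based on local variance or edge detection; since the inverse replays the same choice, reconstruction stays exact.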

  18. Hyperspectral image data compression based on DSP (United States)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin


    The huge data volume of hyperspectral images challenges their transmission and storage, so an effective compression method is needed. Through analysis and comparison of current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform and the embedded zero-tree wavelet (EZW) is proposed in this paper. We adopt a high-performance digital signal processor (DSP), the TMS320DM642, to implement the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method achieves nearly real-time compression with excellent image quality and compression performance.

  19. Information preserving image compression for archiving NMR images. (United States)

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y


    This paper presents results on information-preserving compression of NMR images for archival purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, Lynch-Davisson coding with a block size of 64, applied to the prediction error sequences in the Gray-code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. Predictive coding with a third-order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for the 54 images under test, with a maximum compression ratio of 3.8:1. This result is a further step, albeit a small one, toward improving information-preserving image compression for medical applications.
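
The predictive pipeline described here can be sketched in a few lines: a third-order linear predictor turns smooth data into near-zero residuals, and an optimal Huffman code over those residuals yields the compression ratio. The predictor weights, the 8-bit source assumption, and the 1-D signal are illustrative, not the paper's exact configuration.

```python
import heapq
from collections import Counter

def residuals(samples, w=(3, -3, 1)):
    """Third-order predictor: p[n] = 3x[n-1] - 3x[n-2] + x[n-3];
    the first three samples are passed through unpredicted."""
    out = list(samples[:3])
    for n in range(3, len(samples)):
        p = w[0]*samples[n-1] + w[1]*samples[n-2] + w[2]*samples[n-3]
        out.append(samples[n] - p)
    return out

def huffman_bits(symbols):
    """Total code length of an optimal Huffman code for `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return len(symbols)            # degenerate case: 1 bit/symbol
    heap = [(f, i) for i, f in enumerate(freq.values())]
    heapq.heapify(heap)
    total = 0
    next_id = len(heap)
    while len(heap) > 1:
        f1, _ = heapq.heappop(heap)
        f2, _ = heapq.heappop(heap)
        total += f1 + f2               # each merge deepens its leaves by 1 bit
        heapq.heappush(heap, (f1 + f2, next_id))
        next_id += 1
    return total

sig = [40, 42, 44, 46, 48, 50, 52, 54, 56, 58]   # smooth ramp
res = residuals(sig)
ratio = (8 * len(sig)) / max(1, huffman_bits(res))
```

On the smooth ramp the predictor is exact from the fourth sample on, so the residual alphabet collapses and the estimated ratio comfortably exceeds the raw 8 bits per sample.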

  20. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand


    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without compression, transmitting a 10 MB card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts that appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that is better grounded in theory. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications beyond fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.

  1. Backpropagation Neural Network Implementation for Medical Image Compression

    Directory of Open Access Journals (Sweden)

    Kamil Dimililer


    Full Text Available Medical images require compression before transmission or storage due to constrained bandwidth and storage capacity. An ideal image compression system must yield a high-quality compressed image at a high compression ratio. In this paper, the Haar wavelet transform and the discrete cosine transform are considered, and a neural network is trained to relate X-ray image contents to their ideal compression method and optimum compression ratio.

  2. An efficient adaptive arithmetic coding image compression technology

    Institute of Scientific and Technical Information of China (English)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei


    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. Combined with an adaptive probability model and predictive coding, the algorithm increases the compression rate while ensuring the quality of the decoded image. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image from the codebook information. The adoption of adaptive arithmetic coding greatly improves the image compression rate, and the results show that it is an effective compression technology.

  3. Border extrapolation using fractal attributes in remote sensing images (United States)

    Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.


    Precise and up-to-date information is essential for the management, monitoring and rational use of natural resources. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high-resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long-term monitoring, while the use of less expensive or freely available imagery imposes a limit on the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used to establish high-quality measurements with low-resolution imagery. The method is based on Richardson's original theory, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length on scale is computed, which then allows a precise extrapolation of the expected length at scales much finer than those originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
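
The Richardson-style regression described above amounts to fitting log L = a + b log s (with b = 1 - D for a fractal border of dimension D) and evaluating the fit at a finer scale. The scales and the synthetic length law below are illustrative assumptions, not the paper's data.

```python
import math

def fit_richardson(scales, lengths):
    """Least-squares fit of log L = a + b*log s; b < 0 for a fractal border."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def extrapolate(a, b, scale):
    """Predicted border length at a (possibly finer) measurement scale."""
    return math.exp(a + b * math.log(scale))

# Synthetic measurements following L(s) = 100 * s**(-0.26),
# i.e. D ~ 1.26, close to a Koch-like coastline.
scales = [8.0, 4.0, 2.0, 1.0]
lengths = [100.0 * s ** -0.26 for s in scales]
a, b = fit_richardson(scales, lengths)
fine = extrapolate(a, b, 0.25)   # predict length at a 4x finer scale
```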

  4. Issues in multiview autostereoscopic image compression (United States)

    Shah, Druti; Dodgson, Neil A.


    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two- year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three- dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
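
The DPCM finding above (nearby pixels within a view predict better than corresponding pixels in adjacent views) can be echoed with a toy comparison; the synthetic "views" below, modelled as rows related by a small disparity shift and offset, are illustrative assumptions.

```python
# Compare intra-view and inter-view DPCM residual energy.

def dpcm_residuals(signal, predictor):
    return [x - predictor(i) for i, x in enumerate(signal)]

def energy(res):
    return sum(v * v for v in res)

view_a = [10, 11, 12, 14, 15, 15, 16, 18]
view_b = [v + 2 for v in view_a[1:]] + [20]   # shifted, offset view

# Intra-view predictor: the previous pixel in the same view.
intra = dpcm_residuals(view_b[1:], lambda i: view_b[i])
# Inter-view predictor: the co-located pixel in the adjacent view.
inter = dpcm_residuals(view_b[1:], lambda i: view_a[i + 1])
```

Because the uncompensated disparity and offset leak into every inter-view residual, the intra-view residual energy comes out far smaller, which is why the paper moves on to disparity estimation before cross-view prediction pays off.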

  5. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante


    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of computational complexity to allow implementations on limited hardware (i.e., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.

  6. Gated viewing laser imaging with compressive sensing. (United States)

    Li, Li; Wu, Lei; Wang, Xingbin; Dang, Ersheng


    We present a prototype of gated viewing laser imaging with compressive sensing (GVLICS). Through the new framework of compressive sensing, it is possible to perform laser imaging using a single-pixel detector while still obtaining transverse spatial resolution. Moreover, by combining compressive sensing with gated viewing, the three-dimensional (3D) scene can be reconstructed using the time-slicing technique. Simulations were carried out to evaluate the characteristics of the proposed GVLICS prototype. Qualitative analysis of Lissajous-type eye-pattern figures indicates that the range accuracy of the reconstructed 3D images is affected by the sampling rate, the image noise, and the complexity of the scenes.

  7. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)


    A novel compression method for video-teleconference applications is presented. Semantic coding based on human image features is realized, with the features adopted as coding parameters. Model-based coding and the concept of vector coding are combined with work on image-feature extraction to obtain the result.

  8. Lossless image compression technique for infrared thermal images (United States)

    Allred, Lloyd G.; Kelly, Gary E.


    The authors have achieved a 6.5-to-one image compression technique for thermal images (640 X 480, 1024 colors deep). Using a combination of new and more traditional techniques, the combined algorithm is computationally simple, enabling 'on-the-fly' compression and storage of an image in less time than it takes to transcribe the original image to or from a magnetic medium. Similar compression has been achieved on visual images by virtue of the fact that all optical devices possess a modulation transfer function. As a consequence of this property, the difference in color between adjacent pixels is usually a small number, often between -1 and +1 graduations for a meaningful color scheme. By differentiating adjacent rows and columns, the original image can be expressed in terms of these small numbers. A simple compression algorithm for these small numbers achieves a four-to-one image compression. By piggy-backing this technique with LZW compression or a fixed Huffman coding, an additional 35% image compression is obtained, resulting in a 6.5-to-one lossless image compression. Because traditional noise-removal operators tend to minimize the color graduations between adjacent pixels, an additional 20% reduction can be obtained by preprocessing the image with a noise-removal operator. Although noise-removal operators are not lossless, their application may prove crucial in applications requiring high compression, such as the storage or transmission of a large number of images. The authors are working with the Air Force Photonics Technology Application Program Management office to apply this technique to the transmission of optical images from satellites.
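    The differencing-then-entropy-coding pipeline described above can be sketched as follows. This is a hedged illustration: the synthetic "thermal" image and the use of zlib (DEFLATE) as a stand-in for the paper's LZW/Huffman back end are assumptions for the demo.

```python
import zlib
import numpy as np

rng = np.random.default_rng(2)
# Synthetic smooth "thermal" image: gentle gradients plus small noise,
# so differences between adjacent rows are small numbers.
yy, xx = np.mgrid[0:128, 0:128]
img = (40 + 0.3 * yy + 0.2 * xx + rng.integers(-1, 2, (128, 128))).astype(np.int16)

# Differentiate adjacent rows (keeping the first row) so most values are near zero.
diff = img.copy()
diff[1:, :] = img[1:, :] - img[:-1, :]

raw_bytes = img.tobytes()
packed = zlib.compress(diff.tobytes(), 9)   # generic entropy-coding stand-in
ratio = len(raw_bytes) / len(packed)

# Lossless round trip: a cumulative sum restores the original image exactly.
restored = np.cumsum(diff, axis=0, dtype=np.int16)
assert np.array_equal(restored, img)
```

    Because the differenced values concentrate near zero, the generic compressor does far better on them than on the raw pixels, mirroring the paper's two-stage gain.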

  9. Visual Evaluation of the Morphologic Structure of Nonwovens Using Image Analysis and Fractal Geometry

    Institute of Scientific and Technical Information of China (English)

    杨旭红; 李栋高


    Nonwovens are fiber materials based on nonwoven technologies. Because of the complexity and randomness of nonwoven morphologic structures, it is difficult to express them effectively using classical methods. Fractal geometry gives us a new idea and a powerful tool for studying the irregularity of geometric objects. Therefore, we studied the pore size, pore shape, pore size distribution and fiber orientation distribution of real nonwovens using fractal geometry combined with computer image analysis to evaluate nonwovens' morphologic structures.

  10. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla


    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  11. MRC for compression of Blake Archive images (United States)

    Misic, Vladimir; Kraus, Kari; Eaves, Morris; Parker, Kevin J.; Buckley, Robert R.


    The William Blake Archive is part of an emerging class of electronic projects in the humanities that may be described as hypermedia archives. It provides structured access to high-quality electronic reproductions of rare and often unique primary source materials, in this case the work of poet and painter William Blake. Due to the extensive high-frequency content of Blake's paintings (namely, colored engravings), they are not suitable for very efficient compression that meets both rate and distortion criteria at the same time. To resolve that problem, the authors utilized a modified Mixed Raster Content (MRC) compression scheme -- originally developed for compression of compound documents -- for the compression of colored engravings. In this paper, for the first time, we have been able to demonstrate the successful use of the MRC compression approach for the compression of colored, engraved images. Additional, but no less important, benefits of the MRC image representation for Blake scholars are presented: because the applied segmentation method can essentially lift the color overlay of an impression, it provides the student of Blake the unique opportunity to recreate the underlying copperplate image, model the artist's coloring process, and study them separately.

  12. Gradient-based compressive image fusion

    Institute of Scientific and Technical Information of China (English)

    Yang CHEN‡; Zheng QIN


    We present a novel image fusion scheme based on gradient and scrambled block Hadamard ensemble (SBHE) sampling for compressive sensing imaging. First, source images are compressed by compressive sensing, to facilitate the transmission of the sensor. In the fusion phase, the image gradient is calculated to reflect the abundance of its contour information. By compositing the gradient of each image, gradient-based weights are obtained, with which compressive sensing coefficients are achieved. Finally, inverse transformation is applied to the coefficients derived from fusion, and the fused image is obtained. Information entropy (IE), Xydeas’s and Piella’s metrics are applied as non-reference objective metrics to evaluate the fusion quality in line with different fusion schemes. In addition, different image fusion application scenarios are applied to explore the scenario adaptability of the proposed scheme. Simulation results demonstrate that the gradient-based scheme has the best performance, in terms of both subjective judgment and objective metrics. Furthermore, the gradient-based fusion scheme proposed in this paper can be applied in different fusion scenarios.
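    The gradient-based weighting step can be illustrated with a small sketch. For simplicity the weighted combination below is done directly in the image domain, whereas the paper composites compressive-sensing coefficients; the two synthetic source images are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
img_a = rng.normal(0, 1, (64, 64)).cumsum(axis=1)   # stand-in source image
img_b = rng.normal(0, 3, (64, 64)).cumsum(axis=1)   # richer contours (larger gradients)

def gradient_energy(img):
    """Mean gradient magnitude, used as a proxy for contour abundance."""
    gy, gx = np.gradient(img)
    return float(np.mean(np.hypot(gx, gy)))

e_a = gradient_energy(img_a)
e_b = gradient_energy(img_b)

# Normalized gradient-based weights: the image with more contour
# information contributes more to the fused result.
w_a = e_a / (e_a + e_b)
w_b = e_b / (e_a + e_b)
fused = w_a * img_a + w_b * img_b
```

    The design choice here is simply that weights are proportional to gradient energy and sum to one, so the fused image favors whichever source carries more contour detail.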

  13. On the fractal distribution of primes and prime-indexed primes by the binary image analysis (United States)

    Cattani, Carlo; Ciancio, Armando


    In this paper, the distribution of primes and prime-indexed primes (PIPs) is studied by mapping primes into a binary image which visualizes their distribution. These images show that the distribution of primes (and PIPs) is similar to a Cantor dust; moreover, the self-similarity with respect to the order of PIPs (already proven in Batchko (2014)) can be seen as an invariance of the binary images. The index of primes plays the same role as the scale for fractals, so that with respect to the index the distribution of prime-indexed primes is characterized by self-similarity, like any other fractal. In particular, in order to single out the scale dependence, the PIP fractal distribution is evaluated using two parameters, fractal dimension (δ) and lacunarity (λ), that are usually used to measure the fractal nature. Because of the invariance of the corresponding binary plots, the fractal dimension and lacunarity of the prime distribution are invariant with respect to the index of PIPs.
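    The box-counting estimate of fractal dimension used for such binary images can be sketched as below; the sanity-check patterns are synthetic (a filled square and a line), not prime data.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the capacity (box-counting) dimension of a square binary image."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        # Tile the image with s-by-s boxes and count the occupied ones.
        blocks = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log N(s) versus log(1/s) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

full = np.ones((64, 64), dtype=bool)   # a filled plane has dimension ~2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                     # a straight line has dimension ~1
d_full = box_counting_dimension(full)
d_line = box_counting_dimension(line)
```

    A Cantor-dust-like prime plot would land strictly between these two reference values.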

  14. Image characterization by fractal descriptors in variational mode decomposition domain: Application to brain magnetic resonance (United States)

    Lahmiri, Salim


    The main purpose of this work is to explore the usefulness of fractal descriptors estimated in multi-resolution domains to characterize biomedical digital image texture. In this regard, three multi-resolution techniques are considered: the well-known discrete wavelet transform (DWT), the empirical mode decomposition (EMD), and the newly introduced variational mode decomposition (VMD). The original image is decomposed by the DWT, EMD, and VMD into different scales. Then, Fourier-spectrum-based fractal descriptors are estimated at specific scales and directions to characterize the image. A support vector machine (SVM) was used to perform supervised classification. The empirical study was applied to the problem of distinguishing between normal brain magnetic resonance images (MRI) and abnormal ones affected by Alzheimer's disease (AD). Our results demonstrate that fractal descriptors estimated in the VMD domain outperform those estimated in the DWT and EMD domains, and also those estimated directly from the original image.

  15. Compressive Sensing Image Sensors-Hardware Implementation

    Directory of Open Access Journals (Sweden)

    Shahram Shirani


    Full Text Available The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.

  16. Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images (United States)

    Cumbrera, Ramiro; Tarquis, Ana M.; Gascó, Gabriel; Millán, Humberto


    Image analysis could be a useful tool for investigating the spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to define apparent soil moisture patterns from vertical planes of Vertisol pit images and (ii) to describe the scaling of the apparent soil moisture distribution using fractal parameters. Twelve soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. Six of them were excavated in April 2011 and six pits were established in May 2011 after 3 days of a moderate rainfall event. Digital photographs were taken of each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈373 μm of the photographed soil pit. Each soil image was analyzed using two fractal scaling exponents, the box counting (capacity) dimension (DBC) and the interface fractal dimension (Di), and three prefractal scaling coefficients: the total number of boxes intercepting the foreground pattern at a unit scale (A), fractal lacunarity at the unit scale (Λ1) and Shannon entropy at the unit scale (S1). All the scaling parameters identified significant differences between the two sets of spatial patterns. Fractal lacunarity was the best discriminator between apparent soil moisture patterns. Soil image interpretation with fractal exponents and prefractal coefficients can be incorporated within a site-specific agriculture toolbox. While fractal exponents convey information on the space-filling characteristics of the pattern, prefractal coefficients represent the investigated soil property as seen through a higher resolution microscope. In spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used in connection with traditional soil moisture sampling, which always yields point estimates.
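    Lacunarity, the discriminator highlighted above, measures how box-to-box mass varies for a gliding box of a given size. A minimal gliding-box sketch on synthetic binary patterns (not the soil data) is:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lacunarity(mask, r):
    """Gliding-box lacunarity: second moment over squared first moment
    of the box masses (Λ = 1 for a perfectly homogeneous pattern)."""
    boxes = sliding_window_view(mask.astype(float), (r, r))
    masses = boxes.sum(axis=(2, 3)).ravel()
    return float(np.mean(masses ** 2) / np.mean(masses) ** 2)

rng = np.random.default_rng(4)
uniform = rng.random((64, 64)) < 0.5            # homogeneous random pattern
clumped = np.zeros((64, 64), dtype=bool)
clumped[:16, :16] = True                        # strongly clustered pattern

lac_uniform = lacunarity(uniform, 4)
lac_clumped = lacunarity(clumped, 4)
```

    Clustered ("gappy") patterns yield much larger lacunarity than homogeneous ones at the same box size, which is why the statistic separates wet-pattern from dry-pattern images well.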

  17. The New CCSDS Image Compression Recommendation (United States)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph


    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.

  18. Fractal signature and lacunarity in the measurement of the texture of trabecular bone in clinical CT images. (United States)

    Dougherty, G; Henebry, G M


    Fractal analysis is a method of characterizing complex shapes such as the trabecular structure of bone. Numerous algorithms for estimating fractal dimension have been described, but the Fourier power spectrum method is particularly applicable to self-affine fractals, and facilitates corrections for the effects of noise and blurring in an image. We found that it provided accurate estimates of fractal dimension for synthesized fractal images. For natural texture images fractality is limited to a range of scales, and the fractal dimension as a function of spatial frequency presents as a fractal signature. We found that the fractal signature was more successful at discriminating between these textures than either the global fractal dimension or other metrics such as the mean width and root-mean-square width of the spectral density plots. Different natural textures were also readily distinguishable using lacunarity plots, which explicitly characterize the average size and spatial organization of structural sub-units within an image. The fractal signatures of small regions of interest (32×32 pixels), computed in the frequency domain after corrections for imaging system noise and MTF, were able to characterize the texture of vertebral trabecular bone in CT images. Even small differences in texture due to acquisition slice thickness resulted in measurably different fractal signatures. These differences were also readily apparent in lacunarity plots, which indicated that a slice thickness of 1 mm or less is necessary if essential architectural information is not to be lost. Since lacunarity measures gap size and is not predicated on fractality, it may be particularly useful for characterizing the texture of trabecular bone.
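    The core of the Fourier power spectrum method is the slope of the radially averaged log power spectrum versus log frequency (the spectral exponent β; for self-affine surfaces β is then mapped to a fractal dimension, e.g. D = (8 − β)/2 in one common convention). A hedged sketch of the slope estimation on synthetic textures:

```python
import numpy as np

def spectral_slope(img):
    """Slope of the radially averaged log power spectrum vs. log frequency."""
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    # Average the power over rings of constant radial frequency.
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, n // 2)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:n // 2]), 1)
    return slope

rng = np.random.default_rng(5)
white = rng.normal(0, 1, (128, 128))
# Moving-average smoothing shifts power toward low frequencies.
kernel = np.ones((5, 5)) / 25.0
smooth = np.real(np.fft.ifft2(np.fft.fft2(white) * np.fft.fft2(kernel, (128, 128))))

beta_white = -spectral_slope(white)    # near 0 for white noise
beta_smooth = -spectral_slope(smooth)  # clearly positive: spectrum falls off
```

    Evaluating the slope within frequency bands rather than globally is what turns this estimate into the "fractal signature" discussed above.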

  19. Quantum Discrete Cosine Transform for Image Compression

    CERN Document Server

    Pang, C Y; Guo, G C; Pang, Chao Yang; Zhou, Zheng Wei; Guo, Guang Can


    Discrete Cosine Transform (DCT) is very important in image compression. The classical 1-D DCT and 2-D DCT have time complexities O(NlogN) and O(N²logN), respectively. This paper presents a quantum DCT iteration, and constructs quantum 1-D and 2-D DCT algorithms for image compression using this iteration. The presented 1-D and 2-D DCTs have time complexities O(sqrt(N)) and O(N), respectively. In addition, the method presented in this paper generalizes the famous Grover's algorithm to solve complex unstructured search problems.
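    The quantum algorithm itself is beyond a short sketch, but the classical separable 2-D DCT it accelerates, and why it compresses, can be shown directly. The smooth test image and the keep-top-10%-of-coefficients rule below are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; C @ x computes the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

n = 32
C = dct_matrix(n)
rng = np.random.default_rng(6)
img = rng.normal(0, 1, (n, n)).cumsum(axis=0).cumsum(axis=1)  # smooth test image

coeffs = C @ img @ C.T                 # separable 2-D DCT
kept = coeffs.copy()
thresh = np.quantile(np.abs(coeffs), 0.90)
kept[np.abs(kept) < thresh] = 0        # keep only the largest ~10% of coefficients
recon = C.T @ kept @ C                 # inverse 2-D DCT (C is orthogonal)

# Relative RMS error stays modest because the DCT compacts the energy
# of smooth images into a few low-frequency coefficients.
err = float(np.sqrt(np.mean((recon - img) ** 2)) / np.std(img))
```

    This energy-compaction property is exactly what both the classical and the proposed quantum DCT exploit for compression.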

  20. Time Series Analysis OF SAR Image Fractal Maps: The Somma-Vesuvio Volcanic Complex Case Study (United States)

    Pepe, Antonio; De Luca, Claudio; Di Martino, Gerardo; Iodice, Antonio; Manzo, Mariarosaria; Pepe, Susi; Riccio, Daniele; Ruello, Giuseppe; Sansosti, Eugenio; Zinno, Ivana


    The fractal dimension is a significant geophysical parameter describing natural surfaces; it represents the distribution of roughness over different spatial scales. In the case of volcanic structures, it has been related to the specific nature of materials and to the effects of active geodynamic processes. In this work, we present an analysis of the temporal behavior of the fractal dimension estimates generated from multi-pass SAR images relevant to the Somma-Vesuvio volcanic complex (South Italy). To this aim, we consider a Cosmo-SkyMed data-set of 42 stripmap images acquired from ascending orbits between October 2009 and December 2012. Starting from these images, we generate a three-dimensional stack composed of the corresponding fractal maps (ordered according to the acquisition dates), after a proper co-registration. The time-series of the pixel-by-pixel estimated fractal dimension values show that, over invariant natural areas, the fractal dimension values do not reveal significant changes; on the contrary, over urban areas, the estimate correctly assumes values outside the natural-surface fractality range and shows strong fluctuations. As a final result of our analysis, we generate a fractal map that includes only the areas where the fractal dimension is considered reliable and stable (i.e., whose standard deviation computed over the time series is reasonably small). The so-obtained fractal dimension map is then used to identify areas that are homogeneous from a fractal viewpoint. Indeed, the analysis of this map reveals the presence of two distinctive landscape units corresponding to the Mt. Vesuvio and Gran Cono. The comparison with the (simplified) geological map clearly shows the presence in these two areas of volcanic products of different age. The presented fractal dimension map analysis demonstrates the ability to assess the degree of evolution of the monitored volcanic edifice and can be profitably extended in the future to other volcanic systems with

  1. Data compression of scanned halftone images

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Kim S.


    A new method for coding scanned halftone images is proposed. It is information-lossy but preserves image quality; compression rates of 16-35 have been achieved for a typical test image scanned on a high resolution scanner. The bi-level halftone images are filtered, in phase...... with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution...... scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods, giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized...

  2. A Method for Generating Super Large Fractal Images useful for Decoration Art

    Institute of Scientific and Technical Information of China (English)

    Huajie LIU; Jun LUO


    Many authors have reported techniques to iterate nonlinear equations on the complex plane, but generally the size of an image calculated in the usual VGA mode (640×480) is too small to meet the needs of high-quality publications or decorative patterns. We describe a universal method for generating and storing (in *.GIF format) fractal images as large as needed, such as a 5000×5000 256-color image (25,000,774 bytes ≈ 23.8 MB), which can thoroughly display the intricate beauty of fractals.

  3. Virtually Lossless Compression of Astrophysical Images

    Directory of Open Access Journals (Sweden)

    Alparone Luciano


    Full Text Available We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data, simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on the user's requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results on lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomers' community.
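    The near-lossless property, a locally bounded maximum absolute error, comes from uniformly quantizing the prediction residuals. A minimal sketch of such a quantizer (in a real DPCM loop the predictor would of course operate on reconstructed values; the step size and test data here are assumptions):

```python
import numpy as np

def quantize(residual, delta):
    """Uniform quantizer with step 2*delta + 1: the reconstruction error
    of any integer residual is bounded by delta in absolute value."""
    return np.round(residual / (2 * delta + 1)).astype(int)

def dequantize(q, delta):
    return q * (2 * delta + 1)

rng = np.random.default_rng(7)
residual = rng.integers(-50, 51, 10_000)     # stand-in DPCM residuals
delta = 3                                    # user-chosen maximum absolute error
recon = dequantize(quantize(residual, delta), delta)
max_err = int(np.max(np.abs(recon - residual)))
```

    Setting delta from an estimate of the background noise, as the paper does, makes the quantization error invisible relative to the noise floor, which is the "virtually lossless" rationale.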

  4. Compressive Deconvolution in Medical Ultrasound Imaging. (United States)

    Chen, Zhouye; Basarab, Adrian; Kouamé, Denis


    The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. Following the different application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. Nevertheless, RF image spatial resolution, contrast and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and the physical phenomena related to US wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance the ultrasound images. In this paper, we propose a novel framework, named compressive deconvolution, that reconstructs enhanced RF images from compressed measurements. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of our approach is the joint data volume reduction and image quality improvement. The proposed optimization method, based on the Alternating Direction Method of Multipliers, is evaluated on both simulated and in vivo data.

  5. JPEG2000 Image Compression on Solar EUV Images (United States)

    Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke


    For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
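    Of the two quality metrics compared in the study above, PSNR is simple enough to sketch directly (MSSIM requires a windowed structural comparison and is omitted here); the test images are synthetic assumptions.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(8)
ref = rng.integers(0, 256, (64, 64))
# Two "compressed" versions with different distortion levels.
mild = np.clip(ref + rng.integers(-5, 6, ref.shape), 0, 255)
harsh = np.clip(ref + rng.integers(-20, 21, ref.shape), 0, 255)

p_mild = psnr(ref, mild)
p_harsh = psnr(ref, harsh)
```

    A caveat the paper makes implicitly: PSNR penalizes all pixel errors alike, whereas MSSIM tracks structural degradation, which matters for judging whether coronal-loop features survive compression.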


    Institute of Scientific and Technical Information of China (English)


    Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission, and it has high channel-error-resisting capability and good reconstructed image quality. The main shortcoming of the original BTC algorithm is the high bit rate (normally 2 bits/pixel). In order to reduce the bit rate, an efficient BTC image compression algorithm was presented in this paper. In the proposed algorithm, a simple look-up-table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane with some extra distortion. The test results prove the effectiveness of the proposed algorithm.
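    The baseline BTC scheme the paper improves on is compact enough to sketch. This is an AMBTC-style variant (bit plane plus two group means, which preserves the block mean exactly); the 4×4 block and random data are assumptions. With a 16-bit plane plus two 8-bit levels per 4×4 block, the rate is (16 + 16)/16 = 2 bits/pixel, matching the "normally 2 bits/pixel" figure above.

```python
import numpy as np

def btc_encode(block):
    """AMBTC-style encoding of one block: a bit plane and two levels."""
    mean = block.mean()
    plane = block >= mean                       # 1 bit per pixel
    hi = block[plane].mean() if plane.any() else mean
    lo = block[~plane].mean() if (~plane).any() else mean
    return plane, hi, lo

def btc_decode(plane, hi, lo):
    """Reconstruct the block from the bit plane and the two levels."""
    return np.where(plane, hi, lo)

rng = np.random.default_rng(9)
block = rng.integers(0, 256, (4, 4)).astype(float)
plane, hi, lo = btc_encode(block)
recon = btc_decode(plane, hi, lo)

# This variant reconstructs the block mean exactly.
mean_err = abs(float(recon.mean()) - float(block.mean()))
```

    The paper's contribution then attacks exactly the two cost terms visible here: the two levels (via a look-up table) and the bit plane (via prediction).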

  7. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose code...

  8. Listless zerotree image compression algorithm (United States)

    Lian, Jing; Wang, Ke


    In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image qualities. Moreover, the lists in SPIHT are replaced by flag maps, and lifting scheme is adopted to realize wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.

  9. Performance visualization for image compression in telepathology (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace


    The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio and computational load. Evaluation of performance is, however, a much more complex and many-sided issue. It is necessary to consider more deeply the requirements of the applications. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm. The measurement of this characteristic is subjective, because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user-based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space, taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second-opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.

  10. Compressive imaging using fast transform coding (United States)

    Thompson, Andrew; Calderbank, Robert


    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  11. Compressive sensing for high resolution radar imaging

    NARCIS (Netherlands)

    Anitori, L.; Otten, M.P.G.; Hoogeboom, P.


    In this paper we present some preliminary results on the application of Compressive Sensing (CS) to high resolution radar imaging. CS is a recently developed theory which allows reconstruction of sparse signals with a number of measurements much lower than what is required by the Shannon sampling th

  12. Imaging through diffusive layers using speckle pattern fractal analysis and application to embedded object detection in tissues (United States)

    Tremberger, George, Jr.; Flamholz, A.; Cheung, E.; Sullivan, R.; Subramaniam, R.; Schneider, P.; Brathwaite, G.; Boteju, J.; Marchese, P.; Lieberman, D.; Cheung, T.; Holden, Todd


    The absorption effect of the back surface boundary of a diffuse layer was studied via laser-generated reflection speckle patterns. The spatial speckle intensity provided by a laser beam was measured. The speckle data were analyzed in terms of fractal dimension (computed by NIH ImageJ software via the box counting fractal method) and weak localization theory based on Mie scattering. Bar code imaging was modeled as binary absorption contrast, and scanning resolution in the millimeter range was achieved for diffusive layers up to thirty transport mean free paths thick. Samples included alumina, porous glass and chicken tissue. Computer simulation was used to study the effect of the speckle spatial distribution, and observed fractal dimension differences were ascribed to variance-controlled speckle sizes. Fractal dimension suppressions were observed in samples whose thickness dimensions were around ten transport mean free paths. Computer simulation suggested a maximum fractal dimension of about 2 and that subtracting information could lower the fractal dimension. The fractal dimension was shown to be sensitive to sample thickness up to about fifteen transport mean free paths, and embedded objects that modified 20% or more of the effective thickness were shown to be detectable. The box counting fractal method was supplemented with the Higuchi data-series fractal method, and application to architectural-distortion mammograms was demonstrated. The use of fractals in diffusive analysis would provide a simple language for a dialog between optics experts and mammography radiologists, facilitating the applications of laser diagnostics in tissues.

  13. Lossless Image Compression Using New Biorthogonal Wavelets

    Directory of Open Access Journals (Sweden)

    M. Santhosh


    Full Text Available Even though a large number of wavelets exist, one needs new wavelets for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction. Hence a need for orthogonality and symmetry arises. The solution comes in the form of biorthogonal wavelets, which preserve the perfect-reconstruction condition. Though a number of biorthogonal wavelets have been proposed in the literature, in this paper four new biorthogonal wavelets are proposed which give better compression performance. The new wavelets are compared with traditional wavelets using the design metrics Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was utilized to perform the compression of images.

  14. Automatic Method to Classify Images Based on Multiscale Fractal Descriptors and Paraconsistent Logic (United States)

    Pavarino, E.; Neves, L. A.; Nascimento, M. Z.; Godoy, M. F.; Arruda, P. F.; Neto, D. S.


    In this study an automatic method is presented to classify images using fractal descriptors as decision rules, such as multiscale fractal dimension and lacunarity. The proposed methodology was divided into three steps: quantification of the regions of interest with fractal dimension and lacunarity, techniques under a multiscale approach; definition of reference patterns, which are the limits of each studied group; and classification of each group, considering the combination of the reference patterns with signal maximization (an approach commonly considered in paraconsistent logic). The proposed method was used to classify histological prostatic images, aiming at the diagnosis of prostate cancer. The accuracy levels were significant, surpassing those obtained with Support Vector Machine (SVM) and Best-First Decision Tree (BFTree) classifiers. The proposed approach allows patterns to be recognized and classified, offering the advantage of giving comprehensible results to specialists.

  15. Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

    Directory of Open Access Journals (Sweden)

    J. de Castro


    Full Text Available The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters which is uniparametric in the case of the classical problem, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges in the case of a single parameter a. These fractal filters lose less energy in every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm's threshold values, and conclude that our algorithm produces reliable test results.
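
The classical length-5 pyramidal filter with a single parameter a is usually written w = (1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2); a = 0.4 gives the Gaussian-like member of the family. Below is a one-dimensional sketch of a single Laplacian-pyramid level built from this kernel; the function names and the sine test signal are illustrative assumptions, not the paper's filters:

```python
import numpy as np

def burt_kernel(a=0.4):
    """Classical 5-tap generating kernel; sums to 1 for any a."""
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def laplacian_level(signal, a=0.4):
    """One Laplacian-pyramid level for a 1D signal: low-pass filter,
    downsample, upsample, and subtract; the residual is the detail band."""
    k = burt_kernel(a)
    low = np.convolve(signal, k, mode='same')
    coarse = low[::2]
    # Upsample by zero insertion, then smooth with twice the kernel
    up = np.zeros_like(signal, dtype=float)
    up[::2] = coarse
    up = np.convolve(up, 2 * k, mode='same')
    return signal - up, coarse

sig = np.sin(np.linspace(0, 4 * np.pi, 64))
band, coarse = laplacian_level(sig)
print(band.shape, coarse.shape)  # (64,) (32,)
```

Because the kernel weights sum to one, the low-pass step preserves the mean, which is the energy-preservation property the fractal range of a exploits.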

  16. Detection of Rice Leaf Diseases Using Chaos and Fractal Dimension in Image Processing

    Directory of Open Access Journals (Sweden)



    Full Text Available A novel method for detecting rice leaf diseases using image-processing techniques based on fractal dimension and chaos theory is proposed in this paper. The analysis of a diseased leaf is carried out according to its image pattern and fractal dimension; in particular, box-counting ratio calculation and chaos are applied to identify the self-similarity of the disease pattern and to recreate the fractal. The self-similar region of the image is the disease-infected one, whose pattern matches that of a fully infected leaf. This method is proposed as preliminary information for the development of an early detection system, or for developing a knowledge-based expert system or decision support system.

  17. Compressive Imaging via Approximate Message Passing (United States)


    ...[20] uses an adaptive Wiener filter [21] for 2D denoising. Another option is to use a more sophisticated 2D image denoiser such as BM3D [22] within AMP... [22] "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process., vol. 16, no. 8, pp. 2080-2095, Aug. 2007. [23] J. Tan, Y. Ma, H. Rueda, D. Baron, and G. Arce, "Application of..." ... Jin Tan, Yanting Ma, and Dror Baron, "Compressive Imaging via Approximate Message Passing," IEEE Journal of Selected Topics in Signal Processing, June 2015.

  18. Lossless compression for three-dimensional images (United States)

    Tang, Xiaoli; Pearlman, William A.


    We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study -- 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, the performances obtained using different sizes of coding units are compared; increasing the size of the coding unit improves performance somewhat. Second, the performances using different integer wavelet transforms are compared for AT-3DSPIHT, 3D-SPECK and 3D-CB-EZW; none of the considered filters always performs best for all data sets and algorithms. Finally, we compare the different lossless compression algorithms by applying an integer wavelet transform to the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs best almost all the time, achieving an average 12% decrease in file size compared with JPEG2000 multi-component, the second-best performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs best, yielding average decreases in file size of 5.8% and 8.9% compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than the 2D algorithms.

  19. Combining image-processing and image compression schemes (United States)

    Greenspan, H.; Lee, M.-C.


    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection schemes, can gain from the added image resolution via the enhancement.

  20. Compressed imaging by sparse random convolution. (United States)

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien


    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.
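
The measurement model discussed in this record can be illustrated with a toy one-dimensional analogue: circular convolution with a sparse non-negative (here 0/1) kernel, followed by subsampling. This is a simplified sketch of the idea, not the authors' optical setup; the function name, kernel support, and sizes are all assumptions:

```python
import numpy as np

def sparse_rc_measure(x, kernel_support=8, keep=16, seed=0):
    """Toy sparse random-convolution measurement: circularly convolve the
    signal with a sparse 0/1 kernel, then keep a random subset of samples."""
    rng = np.random.default_rng(seed)
    h = np.zeros(len(x))
    h[rng.choice(len(x), size=kernel_support, replace=False)] = 1.0
    # Circular convolution via FFT
    y_full = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    idx = rng.choice(len(x), size=keep, replace=False)
    return y_full[idx], idx

x = np.zeros(64)
x[5], x[40] = 1.0, -0.5          # a 2-sparse test signal
y, idx = sparse_rc_measure(x)    # 16 measurements of a 64-sample signal
print(y.shape)                   # (16,)
```

A CS solver (e.g. l1 minimization) would then recover x from y given the kernel and sample positions; the non-negative 0/1 kernel is what addresses the CNR limitation in the optical setting.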


    Directory of Open Access Journals (Sweden)

    V. K. Sudha


    Full Text Available This paper analyses the performance of multiwavelets - a variant of the wavelet transform - on compression of medical images. To do so, two processes, namely transformation for decorrelation and encoding, are carried out. In the transformation stage medical images are subjected to the multiwavelet transform using multiwavelets such as Geronimo-Hardin-Massopust, Chui-Lian, Cardinal 2 Balanced (Cardbal2), and the orthogonal symmetric/antisymmetric multiwavelet (SA4). The Set Partitioned Embedded Block Coder is used as a common platform for encoding the transformed coefficients. Peak Signal-to-Noise Ratio, bit rate and Structural Similarity Index are used as metrics for performance analysis. For the experiments we have used various medical images such as Magnetic Resonance, Computed Tomography and X-ray images.

  2. Efficient lossless compression scheme for multispectral images (United States)

    Benazza-Benyahia, Amel; Hamdi, Mohamed; Pesquet, Jean-Christophe


    Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge which requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems however more appropriate to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition according to the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent works, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and the spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose an extension to the vector case of Shapiro's embedded zerotrees of wavelet coefficients (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images confirm the superior performance of our global compression scheme.

  3. Compressive Hyperspectral Imaging and Anomaly Detection (United States)


    Examples include the discrete cosine basis and various wavelet-based bases. They have been thoroughly studied and widely considered in applications... To obtain the desired jointly sparse a's, one shall adjust a and b. 4.4 Hyperspectral Image Reconstruction and Denoising: We apply the model x* = Da' + e... "...iteration for compressive sensing and sparse denoising," Communications in Mathematical Sciences, 2008; W. Yin, "Analysis and generalizations of..."

  4. Lossless Astronomical Image Compression and the Effects of Random Noise (United States)

    Pence, William


    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
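
The quantization idea in this record - discarding useless noise bits by storing scaled integers - can be demonstrated with a toy experiment, using zlib in place of Rice or GZIP. The scale choice sigma/q below is a simplification of the per-tile noise estimation that fpack actually performs, and the Gaussian test image is illustrative:

```python
import zlib
import numpy as np

def quantize_to_int(img, q=4.0):
    """Store a float image as scaled integers: stored = round(img / scale),
    with scale = sigma / q so that about 1/q of the noise amplitude survives."""
    sigma = np.std(img)
    scale = sigma / q
    return np.round(img / scale).astype(np.int32), scale

rng = np.random.default_rng(1)
floats = rng.normal(1000.0, 10.0, size=(128, 128)).astype(np.float32)
ints, scale = quantize_to_int(floats)

# The noisy float mantissas are nearly incompressible; the scaled integers
# span a narrow range and compress far better.
raw_size = len(zlib.compress(floats.tobytes()))
quant_size = len(zlib.compress(ints.astype(np.int16).tobytes()))
print(quant_size < raw_size)  # True
```

Decompression multiplies the integers back by `scale`, losing only sub-noise detail, which is why the record reports factors of 4 or more on floating point data.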

  5. Image Segmentation, Registration, Compression, and Matching (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina


    A novel computational framework was developed of a 2D affine invariant matching exploiting a parameter space. Named as affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters need to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here are a novel, volumetric surface modeling and compression technique that provide both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  6. On the fractal geometry of DNA by the binary image analysis. (United States)

    Cattani, Carlo; Pierro, Gaetano


    The multifractal analysis of binary images of DNA is studied in order to define a methodological approach to the classification of DNA sequences. This method is based on the computation of some multifractality parameters on a suitable binary image of DNA, which takes into account the nucleotide distribution. The binary image of DNA is obtained by a dot-plot (recurrence plot) of the indicator matrix. The fractal geometry of these images is characterized by fractal dimension (FD), lacunarity, and succolarity. These parameters are compared with some other coefficients such as complexity and Shannon information entropy. It will be shown that the complexity parameters are more or less equivalent to FD, while the parameters of multifractality have different values in the sense that sequences with higher FD might have lower lacunarity and/or succolarity. In particular, the genome of Drosophila melanogaster has been considered by focusing on the chromosome 3r, which shows the highest fractality with a corresponding higher level of complexity. We will single out some results on the nucleotide distribution in 3r with respect to complexity and fractality. In particular, we will show that sequences with higher FD also have a higher frequency distribution of guanine, while low FD is characterized by the higher presence of adenine.

  7. Lossless Digital Image Compression Method for Bitmap Images

    CERN Document Server

    Meyyappan, Dr T; Nachiaban, N M Jeya; 10.5121/ijma.2011.3407


    In this research paper, the authors propose a new approach to digital image compression using crack coding. The method starts with the original image and develops crack codes in a recursive manner, marking the pixels visited earlier and expanding the entropy in four directions. The proposed method is experimented with sample bitmap images and the results are tabulated. The method is implemented on a uniprocessor machine using C language source code.

  8. Fast Lossless Compression of Multispectral-Image Data (United States)

    Klimesh, Matthew


    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.

  9. [Fractal analysis of trabecular architecture: with special reference to slice thickness and pixel size of the image]. (United States)

    Tomomitsu, Tatsushi; Mimura, Hiroaki; Murase, Kenya; Tamada, Tsutomu; Sone, Teruki; Fukunaga, Masao


    Many analyses of bone microarchitecture using three-dimensional micro-CT (microCT) images have been reported recently. However, since extirpated bone is the subject of measurement on microCT, various kinds of information are not available clinically. Our aim is to evaluate the usefulness of fractal dimension as an index of bone strength, distinct from bone mineral density, in vivo, where microCT cannot be applied. In this fundamental study, the relation between the pixel size and the slice thickness of images was examined when fractal analysis was applied to clinical images. We examined 40 lumbar spine specimens extirpated from 16 male cadavers (30-88 years; mean age, 60.8 years). Three-dimensional images of the trabeculae of 150 slices were obtained by a microCT system under the following conditions: matrix size, 512 x 512; slice thickness, 23.2 microm; and pixel size, 18.6 microm. Based on the images of 150 slices, images of four different matrix sizes and nine different slice thicknesses were made using public-domain software (NIH Image). The threshold value for image binarization, and the relation between the pixel size and slice thickness of the images used for two-dimensional and three-dimensional fractal analyses, were studied. The box counting method was used for fractal analysis. A value of 145 was most suitable as the threshold for image binarization on the 256 gray levels. The correlation coefficients between the two-dimensional fractal dimensions of processed images and the three-dimensional fractal dimensions of original images were more than 0.9 for small pixel sizes. Between the two-dimensional fractal dimension of processed images and the three-dimensional fractal dimension of original images, when pixel size was less than 74.4 microm, a correlation coefficient of more than 0.9 was obtained even for the maximal slice thickness (1.74 mm) examined in this study.

  10. Unsupervised regions of interest extraction for color image compression

    Institute of Scientific and Technical Information of China (English)

    Xiaoguang Shao; Kun Gao; Lili L(U); Guoqiang Ni


    A novel unsupervised approach for regions of interest (ROI) extraction that combines a modified visual attention model and a clustering analysis method is proposed. A non-uniform color image compression algorithm then compresses the ROI and the other regions with different compression ratios through the JPEG image compression algorithm. The reconstruction algorithm for the compressed image is similar to that of the JPEG algorithm. Experimental results show that the proposed method has better performance in terms of compression ratio and fidelity when compared with other traditional approaches.

  11. Fractal analysis of granular ore media based on computed tomography image processing

    Institute of Scientific and Technical Information of China (English)

    WU Ai-xiang; YANG Bao-hua; ZHOU Xu


    The cross-sectional images of nine groups of ore samples were obtained by an X-ray computed tomography (CT) scanner. Based on CT image analysis, the fractal dimensions of the solid matrix, pore space and matrix/pore interface of each sample were measured using the box counting method. The correlation of the three fractal dimensions with particle size, porosity, and seepage coefficient was investigated. The results show that for all images of these samples, the matrix phase has the highest dimension, followed by the pore phase, and the dimension of the matrix-pore interface has the smallest value; the dimensions of the matrix phase and matrix-pore interface are negatively and linearly correlated with porosity, while the dimension of the pore phase relates positively and linearly with porosity; the fractal dimension of the matrix-pore interface relates negatively and linearly with the seepage coefficient. A larger fractal dimension of the matrix/pore interface indicates more irregular, complicated channels for solution flow, resulting in low permeability.

  12. Fractals everywhere

    CERN Document Server

    Barnsley, Michael F


    "Difficult concepts are introduced in a clear fashion with excellent diagrams and graphs." - Alan E. Wessel, Santa Clara University. "The style of writing is technically excellent, informative, and entertaining." - Robert McCarty. This new edition of a highly successful text constitutes one of the most influential books on fractal geometry. An exploration of the tools, methods, and theory of deterministic geometry, the treatment focuses on how fractal geometry can be used to model real objects in the physical world. Two sixteen-page full-color inserts contain fractal images, and a bonus CD of

  13. Compressed sensing imaging techniques for radio interferometry

    CERN Document Server

    Wiaux, Y; Puy, G; Scaife, A M M; Vandergheynst, P


    Radio interferometry probes astrophysical signals through incomplete and noisy Fourier measurements. The theory of compressed sensing demonstrates that such measurements may actually suffice for accurate reconstruction of sparse or compressible signals. We propose new generic imaging techniques based on convex optimization for global minimization problems defined in this context. The versatility of the framework notably allows introduction of specific prior information on the signals, which offers the possibility of significant improvements of reconstruction relative to the standard local matching pursuit algorithm CLEAN used in radio astronomy. We illustrate the potential of the approach by studying reconstruction performances on simulations of two different kinds of signals observed with very generic interferometric configurations. The first kind is an intensity field of compact astrophysical objects. The second kind is the imprint of cosmic strings in the temperature field of the cosmic microwave backgroun...

  14. An image retrieval system based on fractal dimension

    Institute of Scientific and Technical Information of China (English)

    姚敏; 易文晟; 沈斌; DAI Hong-hua


    This paper presents a new kind of image retrieval system which obtains the feature vectors of images by estimating their fractal dimension, and at the same time establishes a tree-structure image database. After preprocessing and feature extraction, a given image is matched with the standard images in the image database using a hierarchical method of image indexing.

  15. The spatial distribution of β-carotene impregnated in apple slices determined using image and fractal analysis



    Changes in the concentration profiles of β-carotene caused by diffusion through parenchymatic dried apple tissue were characterized by image and fractal analysis. Apple slices were dried by convection, and then impregnated with an aqueous β-carotene solution. Scanning electron microscopy images of dried apple slices were captured and the fractal dimension (FD) values of the textures of the images were obtained (FDSEM). It was observed that the microstructure of the foodstuff being impregnated...

  16. Reduction of Transmitted Information Using Similarities between Range Blocks in Fractal Image Coding


    Hu, Xiaotong; Qiu, Shuping; Kuroda, Hideo


    Fractal image coding uses the similarities between the best matching domain blocks and range blocks to reconstruct the image. In the transmitted information, the information about the best matching domain blocks occupies a large percentage, so reducing the information about the best matching domain blocks is the most effective way to reduce the quantity of transmitted information. On the other hand, there are similarities between range blocks. So, when range blocks are simil...

  17. Fpack and Funpack User's Guide: FITS Image Compression Utilities

    CERN Document Server

    Pence, William; White, Rick


    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format. The associated funpack program restores the compressed image file back to its original state (if a lossless compression algorithm is used). (An experimental method for compressing FITS binary tables is also available; see section 7.) These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression options.


    Directory of Open Access Journals (Sweden)



    Full Text Available In image compression, the researcher's aim is to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies. Recently wavelet packets have emerged as a popular technique for image compression. This paper proposes a wavelet-based compression scheme that is able to operate in lossy as well as lossless mode. First we describe the integer wavelet transform (IWT) and the integer wavelet packet transform (IWPT) as applications of the lifting scheme (LS). After analyzing and implementing results for IWT and IWPT, another method combining DPCM and IWPT is implemented using Huffman coding for grey-scale images. Then we propose to implement the same for color images using the Shannon source coding technique. We measure the level of compression by the compression ratio (CR) and compression factor (CF). Compared with IWT and IWPT, the DPCM-IWPT shows better performance in image compression.
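
The DPCM-plus-Huffman stage described in this record can be sketched directly: predict each sample from its left neighbour, then entropy-code the small residuals. The textbook Huffman builder and the toy scan line below are illustrative assumptions, not the paper's implementation:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from observed data."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): '0'}
    codes = {s: '' for s in freq}
    # Heap entries: (frequency, unique tiebreaker, symbols in subtree)
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            codes[s] = '0' + codes[s]
        for s in syms2:
            codes[s] = '1' + codes[s]
        heapq.heappush(heap, (f1 + f2, tiebreak, syms1 + syms2))
        tiebreak += 1
    return codes

def dpcm(row):
    """Left-neighbour DPCM: residual[i] = row[i] - row[i-1]."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

row = [100, 101, 101, 102, 104, 104, 103, 103]
resid = dpcm(row)
codes = huffman_code(resid)
bits = sum(len(codes[r]) for r in resid)
print(resid)          # [100, 1, 0, 1, 2, 0, -1, 0]
print(bits < 8 * 8)   # True: far fewer bits than 8 bits/sample raw
```

DPCM concentrates the histogram around zero, which is exactly what makes the Huffman stage effective; the paper applies the same idea to IWPT coefficients rather than raw pixels.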

  19. Compressive Imaging with Iterative Forward Models

    CERN Document Server

    Liu, Hsiou-Yuan; Liu, Dehong; Mansour, Hassan; Boufounos, Petros T


    We propose a new compressive imaging method for reconstructing 2D or 3D objects from their scattered wave-field measurements. Our method relies on a novel, nonlinear measurement model that can account for the multiple scattering phenomenon, which makes the method preferable in applications where linear measurement models are inaccurate. We construct the measurement model by expanding the scattered wave-field with an accelerated-gradient method, which is guaranteed to converge and is suitable for large-scale problems. We provide explicit formulas for computing the gradient of our measurement model with respect to the unknown image, which enables image formation with a sparsity-driven numerical optimization algorithm. We validate the method both analytically and with numerical simulations.

  20. Discrete directional wavelet bases for image compression (United States)

    Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar


    The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently, and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps simplicity in design and computation, but is not capable of properly capturing all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas such as denoising, non-linear approximation and compression. The results on non-linear approximation and denoising show very interesting gains compared to the standard two-dimensional analysis.

  1. Image and video processing in the compressed domain

    CERN Document Server

    Mukhopadhyay, Jayanta


    As more images and videos are becoming available in compressed formats, researchers have begun designing algorithms for different image operations directly in their domains of representation, leading to faster computation and lower buffer requirements. Image and Video Processing in the Compressed Domain presents the fundamentals, properties, and applications of a variety of image transforms used in image and video compression. It illustrates the development of algorithms for processing images and videos in the compressed domain. Developing concepts from first principles, the book introduces po

  2. Fractal Dimension Calculation of a Manganese-Chromium Bimetallic Nanocomposite Using Image Processing

    Directory of Open Access Journals (Sweden)

    Amir Lashgari


    Full Text Available Bimetallic materials, which have the ability to convert heat change into mechanical movement, normally consist of two bonded strips of dissimilar metals that expand at different rates. We describe how we made a manganese-chromium (Mn-Cr) bimetallic nanocomposite using the centrifuge method and a low-to-high approach. We conducted scanning electron microscope (SEM) imaging, energy-dispersive X-ray spectroscopy (EDX) analysis, and X-ray diffraction spectroscopy of the nanocomposite to prove its identity. We examined how centrifuge speed, process time, and the use of an "intruder agent" affected the properties of the material. The fractal dimension is a significant factor that can be used to approximate the surface roughness, the texture segmentation, and an image of the studied compounds. We calculated the fractal dimension using image-processing values and a histogram plot of the SEM image of the Mn-Cr bimetallic nanocomposite with MATLAB software. We applied the Statistical Package for the Social Sciences software to the statistics extracted from the SEM image of the nanocomposite and obtained the following results for the fractal dimension of the SEM image: mean = 1.778, median = 1.770, max = 1.98, min = 1.60, skewness = 0.177, range = 0.38, and harmonic mean = 1.771.





    Image compression is applied to many fields such as television broadcasting, remote sensing, and image storage. Digitized images are compressed by techniques which exploit the redundancy of the images, so that the number of bits required to represent the image can be reduced with acceptable degradation of the decoded image. The acceptable degradation of image quality depends on the application. There are various applications where accuracy is of major concern. To achieve the objective of p...

  4. Research on compressive fusion for remote sensing images (United States)

    Yang, Senlin; Wan, Guobin; Li, Yuanyuan; Zhao, Xiaoxia; Chong, Xin


    A compressive fusion of remote sensing images is presented based on the block compressed sensing (BCS) and non-subsampled contourlet transform (NSCT). Since the BCS requires small memory space and enables fast computation, firstly, the images with large amounts of data can be compressively sampled into block images with structured random matrix. Further, the compressive measurements are decomposed with NSCT and their coefficients are fused by a rule of linear weighting. And finally, the fused image is reconstructed by the gradient projection sparse reconstruction algorithm, together with consideration of blocking artifacts. The field test of remote sensing images fusion shows the validity of the proposed method.

  5. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique. (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq


    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods in terms of compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
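The LZW step itself is standard and can be sketched as follows; the payload below is a hypothetical stand-in for the ROI-plus-key watermark described in the paper:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of byte sequences, emit codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Rebuild the dictionary on the fly; lossless by construction."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

watermark = b"ABABABABABABABAB"   # hypothetical repetitive payload
codes = lzw_compress(watermark)
```

Repetitive payloads compress well, which is why LZW suits the ROI-plus-key watermark.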

  6. On-board image compression for the RAE lunar mission (United States)

    Miller, W. H.; Lynch, T. J.


    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
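The run-length component can be sketched as follows — plain, non-adaptive RLE on a hypothetical mostly-empty scan line; the flight system's adaptive variant and scan-line skipping are not reproduced:

```python
def run_length_encode(line):
    """Code one scan line as (value, count) run pairs."""
    runs = []
    for v in line:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def run_length_decode(runs):
    """Expand run pairs back into the original scan line."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

# A sparse line, as for imagery of thin antenna booms on dark sky.
line = [0] * 12 + [1] * 3 + [0] * 9
runs = run_length_encode(line)
```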


    Directory of Open Access Journals (Sweden)



    Full Text Available Image compression is applied to many fields such as television broadcasting, remote sensing, and image storage. Digitized images are compressed by a technique which exploits the redundancy of the images so that the number of bits required to represent the image can be reduced with acceptable degradation of the decoded image. The degradation of the image quality is limited with respect to the application. There are various applications where accuracy is of major concern. To achieve the objective of improved decoded-picture quality and compression ratio compared to existing image compression techniques, an image compression technique using hybrid neural networks is proposed, combining two different learning networks: the autoassociative multi-layer perceptron and the self-organizing feature map.

  8. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    Vinayak K Bairagi; Ashok M Sapkal


    Many classes of images contain spatial regions which are more important than other regions. Compression methods capable of delivering higher reconstruction quality for the important parts are attractive in this situation. For medical images, only a small portion of the image might be diagnostically useful, but the cost of a wrong interpretation is high. Hence, the Region Based Coding (RBC) technique is significant for medical image compression and transmission. Lossless compression schemes with secure transmission play a key role in telemedicine applications that help in accurate diagnosis and research. In this paper, we propose lossless scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images based on the Integer Wavelet Transform (IWT), with a distortion-limiting compression technique for the other regions of the image. The main objective of this work is to reject the noisy background and reconstruct the image portions losslessly. The compressed image can be accessed and sent over a telemedicine network using a personal digital assistant (PDA) or mobile device.
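An integer wavelet transform of the kind IWT-based lossless coders rely on can be sketched with the LeGall 5/3 lifting scheme (an assumption — the paper does not specify its filter); the lifting structure guarantees exact integer reconstruction:

```python
def iwt53_forward(x):
    """One level of the LeGall 5/3 integer wavelet transform via
    lifting; returns (approximation, detail) integer sequences."""
    n = len(x)
    d = [0] * (n // 2)
    s = [0] * (n // 2)
    for i in range(n // 2):                       # predict step
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d[i] = x[2 * i + 1] - (left + right) // 2
    for i in range(n // 2):                       # update step
        dl = d[i - 1] if i > 0 else d[i]
        s[i] = x[2 * i] + (dl + d[i] + 2) // 4
    return s, d

def iwt53_inverse(s, d):
    """Undo update then predict with identical integer arithmetic,
    so reconstruction is exact (lossless)."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                       # undo update
        dl = d[i - 1] if i > 0 else d[i]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):                       # undo predict
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x
```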

  9. The impact of lossless image compression to radiographs (United States)

    Lehmann, Thomas M.; Abel, Jürgen; Weiss, Claudia


    The increasing number of digital imaging modalities results in data volumes of several terabytes per year that must be transferred and archived in a typical hospital. Hence, data compression is an important issue for picture archiving and communication systems (PACS). The effect of lossy image compression is frequently analyzed with respect to images from a certain modality supporting a certain diagnosis. However, novel compression schemes allowing efficient but lossless compression have been developed recently. In this study, we compare the lossless compression schemes embedded in the tagged image file format (TIFF), the graphics interchange format (GIF), and JPEG 2000 with the Burrows-Wheeler compression algorithm (BWCA) with respect to image content and origin. Repeated-measures ANOVA was based on 1,200 images in total. Statistically significant effects of image content and origin were found. The highest compression factors were obtained for radiographs of the head, while the lowest factor of 1.05 (7.587 bpp) resulted from the TIFF PackBits algorithm applied to digitally captured pelvis images. Overall, the BWCA is slightly but significantly more effective than JPEG 2000. Both compression schemes reduce the required bits per pixel (bpp) below 3. Also, secondarily digitized images are more compressible than directly digital ones. Interestingly, JPEG outperforms BWCA for directly digital images regardless of image content, while BWCA performs better than JPEG on secondarily digitized radiographs. In conclusion, efficient lossless image compression schemes are available for PACS.

  10. Multiple snapshot colored compressive spectral imager (United States)

    Correa, Claudia V.; Hinojosa, Carlos A.; Arce, Gonzalo R.; Arguello, Henry


    The snapshot colored compressive spectral imager (SCCSI) is a recent compressive spectral imaging (CSI) architecture that senses the spatial and spectral information of a scene in a single snapshot by means of a colored mosaic FPA detector and a dispersive element. Commonly, CSI architectures allow multiple snapshot acquisition, yielding improved reconstructions of spatially detailed and spectrally rich scenes, with each snapshot captured using a different coding pattern. In principle, SCCSI does not admit multiple snapshots, since the pixelated tiling of optical filters is directly attached to the detector. This paper extends the concept of SCCSI to a system admitting multiple snapshot acquisition by rotating the dispersive element, so the dispersed spatio-spectral source is coded and integrated at different detector pixels in each rotation. Thus, a different set of coded projections is captured using the same optical components of the original architecture. The mathematical model of the multishot SCCSI system is presented along with several simulations. Results show that a gain of up to 7 dB in peak signal-to-noise ratio is achieved when four SCCSI snapshots are compared to a single-snapshot reconstruction. Furthermore, a gain of up to 5 dB is obtained with respect to the state-of-the-art architecture, the multishot CASSI.

  11. Fractal and multifractal analysis of PET-CT images of metastatic melanoma before and after treatment with ipilimumab

    CERN Document Server

    Breki, Christina-Marina; Hassel, Jessica; Theoharis, Theoharis; Sachpekidis, Christos; Pan, Leyun; Provata, Astero


    PET/CT with F-18-Fluorodeoxyglucose (FDG) images of patients suffering from metastatic melanoma have been analysed using fractal and multifractal analysis to assess the impact of monoclonal-antibody ipilimumab treatment with respect to therapy outcome. Our analysis shows that the fractal dimensions, which describe the tracer dispersion in the body, decrease consistently with the deterioration of the patient's therapeutic outcome. In 20 out of 24 cases the fractal analysis results match those of the medical records, while 7 cases are considered special cases because the patients have non-tumour-related medical conditions or side effects which affect the results. The decrease in the fractal dimensions with the deterioration of the patients' conditions (in terms of disease progression) is attributed to the hierarchical localisation of the tracer, which accumulates in the affected lesions and does not spread homogeneously throughout the body. Fractality emerges as a result of the migration patterns which t...

  12. Detecting abrupt dynamic change based on changes in the fractal properties of spatial images (United States)

    Liu, Qunqun; He, Wenping; Gu, Bin; Jiang, Yundi


    Many abrupt climate change events often cannot be detected in a timely manner by conventional abrupt-change detection methods until a few years after these events have occurred. The reason for this lag is that abundant, long-term observational data are required for accurate abrupt-change detection by these methods, especially for the detection of a regime shift; so these methods cannot help us understand and forecast the evolution of the climate system in a timely manner. Spatial images generated by a coupled spatiotemporal dynamical model contain more information about a dynamic system than a single time series, and we find that such spatial images show fractal properties. The fractal properties of spatial images can be quantitatively characterized by the Hurst exponent, which can be estimated by two-dimensional detrended fluctuation analysis (TD-DFA). Based on this, TD-DFA is used to detect an abrupt dynamic change in a coupled spatiotemporal model. The results show that the TD-DFA method can effectively detect abrupt parameter changes in the coupled model by monitoring changes in the fractal properties of spatial images. The present method provides a new way to detect abrupt dynamic changes in a timely and efficient manner.

  13. Wavelet-based Image Compression using Subband Threshold (United States)

    Muzaffar, Tanzeem; Choi, Tae-Sun


    Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we try to discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a threshold process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to study the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
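The subband-weighting idea can be sketched with a one-level Haar decomposition and mean-absolute-coefficient weights (both assumptions; the paper builds on EZW's own wavelet and weight definitions):

```python
import numpy as np

def haar2d_level(img):
    """One 2-D Haar analysis step -> LL, HL, LH, HH subbands."""
    a = img[0::2, :] + img[1::2, :]          # row sums
    b = img[0::2, :] - img[1::2, :]          # row differences
    return {
        "LL": (a[:, 0::2] + a[:, 1::2]) / 4.0,
        "HL": (a[:, 0::2] - a[:, 1::2]) / 4.0,
        "LH": (b[:, 0::2] + b[:, 1::2]) / 4.0,
        "HH": (b[:, 0::2] - b[:, 1::2]) / 4.0,
    }

def subband_weights(bands):
    """Weight each subband by its mean absolute coefficient; the
    minimum-weight band is the one the scheme would threshold."""
    return {k: float(np.abs(v).mean()) for k, v in bands.items()}

rng = np.random.default_rng(1)
weights = subband_weights(haar2d_level(rng.random((16, 16))))
min_band = min(weights, key=weights.get)     # candidate for thresholding
```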

  14. Discriminating between photorealistic computer graphics and natural images using fractal geometry

    Institute of Scientific and Technical Information of China (English)

    PAN Feng; CHEN JiongBin; HUANG JiWu


    Rendering technology in computer graphics (CG) is now capable of producing highly photorealistic images, giving rise to the problem of how to distinguish CG images from natural images. Several methods have been proposed to solve this problem. In this paper, we give a novel method from a new point of view: image perception. Although photorealistic CG images are very similar to natural images, they are surrealistic and smoother than natural images, leading to a difference in perception. Part of the features are derived from the fractal dimension to capture the difference in color perception between CG images and natural images, and several generalized dimensions are used as the remaining features to capture the difference in coarseness. The effect of these features is verified by experiments. The average accuracy is over 91.2%.

  15. DSP Implementation of Image Compression by Multiresolutional Analysis

    Directory of Open Access Journals (Sweden)

    K. Vlcek


    Full Text Available Wavelet algorithms allow considerably higher compression rates than Fourier-transform-based methods. The key property exploited is that most of the image information is captured in a few wavelet coefficients, which has led to successful applications in the compression of single images and of image sequences, in both the space and time dimensions. Compression algorithms exploit the multi-scale nature of the wavelet transform.


  17. Image and video compression fundamentals, techniques, and applications

    CERN Document Server

    Joshi, Madhuri A; Dandawate, Yogesh H; Joshi, Kalyani R; Metkar, Shilpa P


    Image and video signals require large transmission bandwidth and storage, leading to high costs. The data must be compressed without a loss or with a small loss of quality. Thus, efficient image and video compression algorithms play a significant role in the storage and transmission of data. Image and Video Compression: Fundamentals, Techniques, and Applications explains the major techniques for image and video compression and demonstrates their practical implementation using MATLAB® programs. Designed for students, researchers, and practicing engineers, the book presents both basic principles

  18. Medical Image Compression using Wavelet Decomposition for Prediction Method

    CERN Document Server

    Ramesh, S M


    This paper offers a simple lossless compression method for medical images. The method is based on wavelet decomposition of the medical images followed by correlation analysis of the coefficients. The correlation analyses form the basis of a prediction equation for each subband. Predictor variable selection is performed through a coefficient graphic method to avoid the multicollinearity problem and to achieve high prediction accuracy and compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.

  19. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi


    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model. In the proposed scheme, a neural network is employed to improve the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms, and show that the proposed algorithm works well.



    V. Sutha Jebakumari; P. Arockia Jansi Rani


    Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms such as block truncation coding, EZW and SPIHT are studied and analyzed; their underlying ideas and algorithmic steps are given. The parameters of all these algorithms are analyzed and the best parameter for each compression algorithm is determined.


    Directory of Open Access Journals (Sweden)

    V. Sutha Jebakumari


    Full Text Available Wavelet analysis plays a vital role in signal processing, especially in image compression. In this paper, various compression algorithms such as block truncation coding, EZW and SPIHT are studied and analyzed; their underlying ideas and algorithmic steps are given. The parameters of all these algorithms are analyzed and the best parameter for each compression algorithm is determined.

  2. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging (United States)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.


    Non-Gaussian (anomalous) diffusion is widespread in biological tissues, where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach by introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional-order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model to the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched exponential, the fractional derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the
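The fractal time derivative referred to here can be sketched as follows (the definition is Chen's fractal derivative; the single-mode decay shown is a standard consequence of that definition, not a result quoted from this abstract):

```latex
% Fractal time derivative of order \alpha (Chen):
\frac{\partial f}{\partial t^{\alpha}}
  \;=\; \lim_{t_1 \to t} \frac{f(t_1) - f(t)}{t_1^{\alpha} - t^{\alpha}}
  \;=\; \frac{t^{1-\alpha}}{\alpha}\,\frac{\partial f}{\partial t},
  \qquad \alpha > 0 .

% Inserting it into Fick's second law, a single spatial mode
% C(x,t) = A(t)\,e^{iqx} obeys
\frac{\partial C}{\partial t^{\alpha}} = D\,\frac{\partial^{2} C}{\partial x^{2}}
  \;\Longrightarrow\;
  A(t) = A(0)\,\exp\!\left(-D\,q^{2}\,t^{\alpha}\right),
```

i.e. a stretched-exponential decay, which reduces to the ordinary mono-exponential decay at α = 1 and exhibits the 'long tail' behavior for α < 1.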

  3. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    N' Diaye, Mambaye [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France); Degeratu, Cristinel [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France); University Politehnica of Bucharest, Faculty of Applied Chemistry and Materials Science, Department of Bioresources and Polymer Science, Calea Victoriei 149, 010072, Sector 1, Bucharest (Romania); Bouler, Jean-Michel [Inserm UMR 791, LIOAD, University of Nantes, 44000 Nantes (France); Chappard, Daniel, E-mail: [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France)


    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski–Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Highlights: ► Interconnected porosity is important in the development of bone substitutes. ► Porosity was evaluated by 2D and 3D morphometry on microCT images. ► Euclidean and fractal descriptors measure interconnectivity on 2D microCT images. ► Lacunarity and succolarity were evaluated on a series of porous biomaterials.

  4. Oncologic image compression using both wavelet and masking techniques. (United States)

    Yin, F F; Gao, Q


    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown.

  5. Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Nair, M.K.; Nair, U.P. [Dept. of Oral and Maxillofacial Radiology, Univ. of Pittsburgh, PA (United States); Seyedain, A. [Dept. of Periodontics, Univ. of Pittsburgh, PA (United States); Webber, R.L. [Dept. of Dentistry, Wake Forest University School of Medicine, Winston-Salem (United States); Piesco, N.P.; Agarwal, S.; Mooney, M.P. [Dept. of Oral Biology, School of Dental Medicine, Univ. of Pittsburgh, PA (United States); Groendahl, H.G. [Dept. of Oral and Maxillofacial Radiology, Goteborg Univ. (Sweden)


    The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses on conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or the osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed up over time and images acquired at 3, 6, 9, and 12 weeks post-surgery. Fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p<0.0001) were noted based on imaging modality, with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p<0.0001). The highest MFD based on time was recorded at 3 weeks and differed significantly from 12 weeks (p<0.035). Correlation of FD with the histomorphometric data was high (r=0.79; p<0.001). The FD computed on TACT-IR showed the highest correlation with the histomorphometric data, thus establishing that TACT is a more efficient and accurate imaging modality for quantification of osseous changes within healing bony defects. (orig.)

  6. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image (United States)

    Safitri, Diah Wahyu; Juniati, Dwi


    Diabetes mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin or the body resists insulin action, so the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus, characterized by microaneurysms, hemorrhages, hard exudates, cotton wool spots, and venous changes. Diabetic retinopathy is graded according to the abnormalities present in the eye fundus: grade 1 if there is a microaneurysm only; grade 2 if there are a microaneurysm and a hemorrhage; and grade 3 if there are microaneurysms, hemorrhages, and neovascularization. This study proposes a method for processing eye fundus images to classify diabetic retinopathy using fractal analysis and K-Nearest Neighbor (KNN) classification. The first phase was an image segmentation process using the green channel, CLAHE, morphological opening, matched filtering, masking, and morphological opening of the binary image. After segmentation, the fractal dimension was calculated using the box-counting method, and the fractal dimension values were analyzed to classify the diabetic retinopathy. Tests were carried out using k-fold cross-validation with k=5, and each test used 10 different values of K for the KNN classifier. The best accuracy of this method is 89.17%, obtained with K=3 or K=4. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN has good performance.
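The box-counting estimate used here can be sketched as follows — a generic implementation for square, power-of-two binary images, not the authors' code:

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the fractal dimension of a binary image by box
    counting: count occupied boxes N(s) over box sizes s, then fit
    log N(s) = -D log s + c and return D."""
    n = binary_img.shape[0]          # assumes square, power-of-two side
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if binary_img[i:i + s, j:j + s].any():
                    count += 1
        sizes.append(s)
        counts.append(count)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square is 2-D.
filled = np.ones((64, 64), dtype=bool)
```

On a segmented vessel map, the estimate falls between 1 (a curve) and 2 (a filled region).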

  7. Comparison of two SVD-based color image compression schemes. (United States)

    Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli


    Color image compression is a commonly used process to represent image data with as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with the quaternion compression scheme, performing the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows more prominent advantages in both operation time and PSNR.
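The rank-k SVD approximation at the core of both schemes can be sketched with NumPy; how the R, G, B components are stacked into the matrix C is only indicated in the comment, and the matrix sizes are illustrative:

```python
import numpy as np

def svd_compress(matrix, k):
    """Rank-k approximation: keep the k largest singular values and
    the corresponding singular vectors.  In the 'real compression
    scheme', `matrix` would be the real matrix C formed from the
    R, G, B channels of the color image."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(2)
img = rng.random((64, 48))               # stand-in for the matrix C
err5 = np.linalg.norm(img - svd_compress(img, 5))
err20 = np.linalg.norm(img - svd_compress(img, 20))
```

Keeping more singular values lowers the reconstruction error (higher PSNR) at the cost of a lower compression ratio.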

  8. Zone Specific Fractal Dimension of Retinal Images as Predictor of Stroke Incidence

    Directory of Open Access Journals (Sweden)

    Behzad Aliahmad


    Full Text Available Fractal dimensions (FDs) are frequently used for summarizing the complexity of the retinal vasculature. However, previous techniques on this topic were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with future episodes of stroke was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and the box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H=5.80, P=0.016, α=0.05) compared with SFD (H=0.51, P=0.475, α=0.05) and BC (H=0.41, P=0.520, α=0.05), with an overall lower median value for the cases compared to the control group. This work has shown that there is a significant association between the zone-specific FDC of eye fundus images and future episodes of stroke, while this difference is not significant when other FD methods are employed.

  9. Texture image classification using multi-fractal dimension

    Institute of Scientific and Technical Information of China (English)

    LIU Zhuo-fu; SANG En-fang


    This paper presents a supervised texture classification method based on multi-fractal analysis. In the feature-extraction stage, the image is transformed and multi-fractal dimension features are obtained. In the classifier-construction part, the Learning Vector Quantization (LVQ) network is adopted as the classifier. Experiments on sonar image classification were carried out with satisfactory results, which verify the effectiveness of this method.

  10. Lossless compression of medical images using Hilbert scan (United States)

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang


    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels in a medical image are decorrelated with differential pulse code modulation (DPCM); the error image is then rearranged using a Hilbert scan, and finally five coding schemes are applied: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that the scheme applying DPCM followed by a Hilbert scan and then arithmetic coding gives the best compression result, and indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.
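The Hilbert scan order can be sketched with the standard distance-to-coordinate conversion — a generic implementation for 2^k x 2^k images, not the authors' code:

```python
def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve of side 2**order to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(img):
    """Reorder pixels of a 2**k x 2**k image along the Hilbert curve,
    so spatially close pixels stay close in the 1-D stream."""
    n = len(img)
    order = n.bit_length() - 1
    return [img[y][x] for x, y in
            (hilbert_d2xy(order, d) for d in range(n * n))]
```

Consecutive points on the curve are always adjacent pixels, which is the locality property that helps the subsequent entropy coder.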

  11. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Directory of Open Access Journals (Sweden)

    Arpita C. Raut


    Full Text Available Existing prediction-based lossless image compression schemes predict image data from its spatial neighborhood, a technique that cannot predict high-frequency image structure components such as edges, patterns, and textures very well, which limits compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed, able to compress high-frequency structure components from the grayscale image. The motivation behind the proposed prediction approach is taken from motion prediction in video coding: it attempts to find an optimal prediction of structure components within the previously encoded image regions. This prediction approach is efficient for image regions with significant structure components in terms of compression ratio and bit rate, compared with CALIC (context-based adaptive lossless image coding).

  12. Compressive optical image watermarking using joint Fresnel transform correlator architecture (United States)

    Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie


    A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image using the joint Fresnel transform correlator architecture. The watermarked image is then compressed to a much smaller volume of data using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed via compressive sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the technique is effective and suitable for optical image security transmission in future all-optical networks, owing to its completely optical implementation and the greatly reduced hologram data volume.

  13. Texture-based medical image retrieval in compressed domain using compressive sensing. (United States)

    Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A


    Content-based image retrieval has gained considerable attention as a useful tool in many applications, and texture is one of its key features. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields affected most, since image databases are huge and retrieving the relevant image is a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples, does not require a close match between the sampling pattern and the characteristic image structure, increases acquisition speed, and enhances image quality.

  14. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC (United States)

    Poupat, Jean-Luc; Vitulli, Raffaele


    The space market is more and more demanding in terms of image compression performance. The resolution, agility, and swath of Earth observation satellite instruments are continuously increasing, multiplying the volume of imagery acquired per orbit by a factor of ten. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, leader on the market of combined compression-and-memory solutions for space applications, has developed a new image compression ASIC, which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, has a SpaceWire interface for configuring and controlling the device, and is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data compression. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  15. Structure Assisted Compressed Sensing Reconstruction of Undersampled AFM Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Arildsen, Thomas; Larsen, Torben


    The use of compressed sensing in atomic force microscopy (AFM) can potentially speed up image acquisition, lower probe-specimen interaction, or enable super-resolution imaging. The idea in compressed sensing for AFM is to spatially undersample the specimen, i.e. only acquire a small fraction...

  16. Phase Imaging: A Compressive Sensing Approach

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Sebastian; Stevens, Andrew; Browning, Nigel D.; Pohl, Darius; Nielsch, Kornelius; Rellinghaus, Bernd


    Since Wolfgang Pauli posed the question in 1933 whether the probability densities |Ψ(r)|² (real-space image) and |Ψ(q)|² (reciprocal-space image) uniquely determine the wave function Ψ(r) [1], the so-called Pauli problem has sparked numerous methods in all fields of microscopy [2, 3]. Reconstructing the complete wave function Ψ(r) = a(r)e^(−iφ(r)), with amplitude a(r) and phase φ(r), from the recorded intensity makes it possible to directly study the electric and magnetic properties of the sample through the phase. In transmission electron microscopy (TEM), electron holography is by far the most established method for phase reconstruction [4]. Requiring a high stability of the microscope, in addition to the installation of a biprism in the TEM, holography cannot be applied to any microscope straightforwardly. Recently, a phase retrieval approach was proposed using conventional TEM electron diffractive imaging (EDI). Using the SAD aperture as a reciprocal-space constraint, a localized sample structure can be reconstructed from its diffraction pattern and a real-space image using the hybrid input-output algorithm [5]. We present an alternative approach using compressive phase retrieval [6]. Our approach does not require a real-space image. Instead, random complementary pairs of checkerboard masks are cut into a 200 nm Pt foil covering a conventional TEM aperture (cf. Figure 1). Used as SAD apertures, these masks allow diffraction patterns to be recorded sequentially from the same sample area, with every mask blocking different parts of gold particles on a carbon support (cf. Figure 2). The compressive sensing problem has the following formulation. First, we note that the complex-valued reciprocal-space wave function is the Fourier transform of the (also complex-valued) real-space wave function, Ψ(q) = F[Ψ(r)], so the diffraction pattern image is given by |Ψ(q)|² = |F[Ψ(r)]|². We want to find Ψ(r) given a few differently coded diffraction pattern measurements yn

  17. Multi-view compression using the LDI (Layered Depth Images) representation


    Jantet, Vincent


    This thesis presents an advanced framework for multi-view plus depth video processing and compression based on the concept of layered depth image (LDI). Several contributions are proposed for both depth-image based rendering and LDI construction and compression. The first contribution is a novel virtual view synthesis technique called Joint Projection Filling (JPF). This technique takes as input any image plus depth content and provides a virtual view in general position and performs image wa...

  18. On-line structure-lossless digital mammogram image compression (United States)

    Wang, Jun; Huang, H. K.


    This paper proposes a novel on-line structure lossless compression method for digital mammograms during the film digitization process. The structure-lossless compression segments the breast and the background, compresses the former with a predictive lossless coding method and discards the latter. This compression scheme is carried out during the film digitization process and no additional time is required for the compression. Digital mammograms are compressed on-the-fly while they are created. During digitization, lines of scanned data are first acquired into a small temporary buffer in the scanner, then they are transferred to a large image buffer in an acquisition computer which is connected to the scanner. The compression process, running concurrently with the digitization process in the acquisition computer, constantly checks the image buffer and compresses any newly arrived data. Since compression is faster than digitization, data compression is completed as soon as digitization is finished. On-line compression during digitization does not increase overall digitizing time. Additionally, it reduces the mammogram image size by a factor of 3 to 9 with no loss of information. This algorithm has been implemented in a film digitizer. Statistics were obtained based on digitizing 46 mammograms at four sampling distances from 50 to 200 microns.

  19. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.


    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for the digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  20. [Lossless compression of hyperspectral image for space-borne application]. (United States)

    Li, Jin; Jin, Long-xu; Li, Guo-ning


    To resolve the difficulties of hardware implementation, low compression ratio, and long runtime in whole-image hyperspectral lossless compression algorithms based on prediction, transform, vector quantization, and their combinations, a hyperspectral image lossless compression algorithm for space-borne application is proposed in the present paper. Intra-band prediction is used only for the first image along the spectral line, using a median predictor; inter-band prediction is applied to the other band images. A two-step, bidirectional prediction algorithm is proposed for the inter-band prediction. In the first step, a proposed bidirectional second-order predictor is used to obtain a prediction reference value, and a proposed improved LUT prediction algorithm is used to obtain four LUT prediction values; the final prediction is then obtained by comparing these with the prediction reference. Finally, verification experiments for the proposed compression algorithm were carried out using the compression system test equipment of the XX-X space hyperspectral camera. The results showed that the compression system works fast and stably. The average compression result reached 3.05 bpp. Compared with traditional approaches, the proposed method improves the average result by 0.14-2.94 bpp, effectively improving the lossless compression ratio and resolving the difficulty of hardware implementation of whole-image wavelet-based compression schemes.
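The abstract does not spell out which median predictor is used; a common choice in lossless image coding is the median edge detector (MED) of LOCO-I/JPEG-LS, sketched here as an assumption for the intra-band stage:

```python
import numpy as np

def med_predict(a, b, c):
    """MED predictor: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)       # horizontal or vertical edge above
    if c <= min(a, b):
        return max(a, b)
    return a + b - c           # smooth-region (planar) prediction

def intra_band_residuals(band):
    """Prediction residuals for one band; out-of-image neighbours are 0."""
    band = band.astype(np.int32)
    h, w = band.shape
    resid = np.zeros_like(band)
    for i in range(h):
        for j in range(w):
            a = band[i, j - 1] if j > 0 else 0
            b = band[i - 1, j] if i > 0 else 0
            c = band[i - 1, j - 1] if (i > 0 and j > 0) else 0
            resid[i, j] = band[i, j] - med_predict(a, b, c)
    return resid
```

On a perfectly flat band every pixel except the first is predicted exactly, which is why prediction residuals entropy-code so much better than raw intensities.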

  1. Compressing industrial computed tomography images by means of contour coding (United States)

    Jiang, Haina; Zeng, Li


    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piece-wise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray-scale images usually needs two steps, contour extraction and then compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction, so the two steps of the traditional contour-based compression method are simplified into one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.
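The Freeman idea referenced above replaces each contour point after the first with a 3-bit direction symbol. A minimal sketch (the 2-D-IMCE extraction itself is not reproduced; the contour is assumed already traced as an ordered pixel list):

```python
# 8-connected Freeman directions, counter-clockwise starting from "east"
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def freeman_chain(points):
    """Chain-code an ordered contour: keep the start point and emit one
    direction symbol (0-7) per unit step instead of full coordinates."""
    codes = [CODE[(r1 - r0, c1 - c0)]
             for (r0, c0), (r1, c1) in zip(points, points[1:])]
    return points[0], codes
```

The resulting symbol stream is a natural input for the Huffman stage mentioned in the abstract, since the direction distribution of piece-wise constant images is highly skewed.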

  2. Fractal Analysis of Elastographic Images for Automatic Detection of Diffuse Diseases of Salivary Glands: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Alexandru Florin Badea


    Full Text Available The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases, such as medical imaging diagnosis, these details are very important, since they might contain hidden information about the possible existence of pathological lesions such as tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. Fractal analysis is of great importance for a quantitative evaluation of “real-time” elastography, a procedure considered operator-dependent in current clinical practice. Mathematical analysis reveals significant discrepancies between normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology.

  3. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes

    NARCIS (Netherlands)

    Wilkinson, M.H.F.


    A run-length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as those used in the tag image file format. These schemes are Lempel-Ziv-Welch (LZW), Macintosh PackBits, and CCITT

  4. Data delivery system for MAPPER using image compression (United States)

    Yang, Jeehong; Savari, Serap A.


    The data delivery throughput of electron beam lithography systems can be improved by applying lossless image compression to the layout image and using an electron beam writer that can decode the compressed image on-the-fly. In earlier research we introduced the lossless layout image compression algorithm Corner2, which assumes a somewhat idealized writing strategy, namely row-by-row with a raster order. The MAPPER system has electron beam writers positioned in a lattice formation and each electron beam writer writes a designated block in a zig-zag order. We introduce Corner2-MEB, which redesigns Corner2 for MAPPER systems.

  5. Image encryption and compression based on kronecker compressed sensing and elementary cellular automata scrambling (United States)

    Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You


    Because compressed sensing (CS) performs encryption and compression in a single simple step, it can be utilized to encrypt and compress an image. Differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by this difference in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels, and a simple approximate evaluation method is introduced to test sparsity uniformity. Owing to its low computational complexity and storage, KCS is adopted in the second stage of encryption to encrypt and compress the scrambled, sparsely transformed image, where a small measurement matrix is constructed from a piece-wise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance, and flexibility.

  6. [Hyperspectral image compression technology research based on EZW]. (United States)

    Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun


    With the development of hyperspectral remote sensing, hyperspectral imaging technology has been applied in aviation and spaceflight. It differs from multispectral imaging in that it images the target continuously with spectral bands of nanoscale width, so the spectral resolution is very high. However, as the number of bands increases, the quantity of spectral data grows, and the storage and transmission of these data is a problem that must be faced. With the development of wavelet compression technology, many researchers have adopted and improved EZW in the field of image compression. The present paper applies the method to compression in the spatial dimensions of hyperspectral images, but does not involve compression along the spectral dimension. The hyperspectral image compression and reconstruction results are good, whether judged by the peak signal-to-noise ratio (PSNR) and spectral curves or by subjective comparison of the source and reconstructed images. The authors believe the effect would be better if the image were first compressed along the spectral dimension and then in the spatial dimensions.

  7. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging (United States)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong


    Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and accessible method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method based on fractal theory was applied to characterize the pore structure of artificial cores. The microstructure, or pore distribution, in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed-bed model were selected as the experimental material to investigate the influence of box size, threshold value, and image resolution on the fractal analysis. To avoid the influence of box size, a sequence of divisors of the image was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence) in terms of partitioning the image completely and yielding the least fitting error. Manual and automatic threshold selection showed that the threshold plays an important role in image binarization and that the minimum-error method can be used to obtain an appropriate one. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution: higher image resolution can detect more of the pore structure and increases its measured irregularity. Taking these influence factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can distinguish the different kinds of artificial cores, and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a − b·ln(x + c).
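The box-counting estimator used in this study has a compact formulation: count the occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). A sketch under the assumption of a binary (already thresholded) image, with power-of-two box sizes as one of the partitioning sequences the paper compares:

```python
import numpy as np

def box_counting_dim(binary_img, sizes=None):
    """Box-counting fractal dimension of a 2-D binary image.

    N(s) = number of s x s boxes containing at least one foreground
    pixel; D is the slope of log N(s) versus log(1/s)."""
    a = np.asarray(binary_img, dtype=bool)
    n = min(a.shape)
    if sizes is None:
        sizes = [2 ** k for k in range(int(np.log2(n)))]   # 1, 2, 4, ...
    counts = []
    for s in sizes:
        h = (a.shape[0] // s) * s                 # crop to a multiple of s
        w = (a.shape[1] // s) * s
        blocks = a[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

Sanity checks match the classical values: a filled square gives D = 2 and a straight line gives D = 1.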

  8. Compressing subbanded image data with Lempel-Ziv-based coders (United States)

    Glover, Daniel; Kwatra, S. C.


    A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
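A compact sketch of the idea (transform, quantize, then a Lempel-Ziv stage; zlib stands in for the LZ coder, and the run-length step is folded into it, so this is an illustration rather than the paper's exact pipeline):

```python
import zlib
import numpy as np

def hadamard(n):
    """n x n Walsh-Hadamard matrix, n a power of two (Sylvester construction)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def subband_lz_compress(img, q=16):
    """2-D WHT subband transform, coarse quantization (the lossy step),
    then a Lempel-Ziv back end; quantization produces the long zero runs
    the LZ coder exploits."""
    img = np.asarray(img, dtype=np.int64)
    n = img.shape[0]                        # square image, side a power of two
    H = hadamard(n)
    coeffs = H @ img @ H.T                  # unnormalized 2-D transform
    quant = (coeffs // (q * n)).astype(np.int16)
    return zlib.compress(quant.tobytes())
```

On smooth content almost all quantized coefficients are zero, so the LZ stage collapses the coefficient plane to a few bytes.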

  9. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.


    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  10. PCNN-Based Image Fusion in Compressed Domain

    Directory of Open Access Journals (Sweden)

    Yang Chen


    Full Text Available This paper addresses a novel image fusion method for different application scenarios, employing compressive sensing (CS) as the image sparse representation method and a pulse-coupled neural network (PCNN) as the fusion rule. First, source images are compressed through the scrambled block Hadamard ensemble (SBHE) for its compression capability and computational simplicity on the sensor side. The local standard deviation is input to motivate the PCNN, and coefficients with large firing times are selected as the fusion coefficients in the compressed domain. The fusion coefficients are smoothed by a sliding window to avoid blocking effects. Experimental results demonstrate that the proposed fusion method outperforms other fusion methods in the compressed domain and is effective and adaptive in different image fusion applications.

  11. Fractal lacunarity of trabecular bone and magnetic resonance imaging: New perspectives for osteoporotic fracture risk assessment. (United States)

    Zaia, Annamaria


    Osteoporosis represents one major health condition for our growing elderly population. It accounts for severe morbidity and increased mortality in postmenopausal women and it is becoming an emerging health concern even in aging men. Screening of the population at risk for bone degeneration and treatment assessment of osteoporotic patients to prevent bone fragility fractures represent useful tools to improve quality of life in the elderly and to lighten the related socio-economic impact. Bone mineral density (BMD) estimate by means of dual-energy X-ray absorptiometry is normally used in clinical practice for osteoporosis diagnosis. Nevertheless, BMD alone does not represent a good predictor of fracture risk. From a clinical point of view, bone microarchitecture seems to be an intriguing aspect to characterize bone alteration patterns in aging and pathology. The widening into clinical practice of medical imaging techniques and the impressive advances in information technologies together with enhanced capacity of power calculation have promoted proliferation of new methods to assess changes of trabecular bone architecture (TBA) during aging and osteoporosis. Magnetic resonance imaging (MRI) has recently arisen as a useful tool to measure bone structure in vivo. In particular, high-resolution MRI techniques have introduced new perspectives for TBA characterization by non-invasive non-ionizing methods. However, texture analysis methods have not found favor with clinicians as they produce quite a few parameters whose interpretation is difficult. The introduction in biomedical field of paradigms, such as theory of complexity, chaos, and fractals, suggests new approaches and provides innovative tools to develop computerized methods that, by producing a limited number of parameters sensitive to pathology onset and progression, would speed up their application into clinical practice. Complexity of living beings and fractality of several physio-anatomic structures suggest

  12. Designing robust sensing matrix for image compression. (United States)

    Li, Gang; Li, Xiao; Li, Sheng; Bai, Huang; Jiang, Qianru; He, Xiongxiong


    This paper deals with designing sensing matrices for compressive sensing systems. Traditionally, the optimal sensing matrix is designed so that the Gram of the equivalent dictionary is as close as possible to a target Gram with small mutual coherence. A novel design strategy is proposed in which, unlike the traditional approaches, the measure considers the mutual coherence behavior of the equivalent dictionary as well as the sparse representation errors of the signals. The optimal sensing matrix is defined as the one that minimizes this measure and is hence expected to be more robust against sparse representation errors. A closed-form solution is derived for the optimal sensing matrix with a given target Gram. An alternating minimization-based algorithm is also proposed for addressing the same problem with the target Gram searched within a set of relaxed equiangular tight frame Grams. Experiments were carried out, and the results show that the sensing matrix obtained using the proposed approach outperforms existing ones that use a fixed dictionary, in terms of signal reconstruction accuracy for synthetic data and peak signal-to-noise ratio for real images.

  13. Image compression and transmission based on LAN (United States)

    Huang, Sujuan; Li, Yufeng; Zhang, Zhijiang


    In this work an embedded system is designed which implements MPEG-2 LAN transmission of CVBS or S-video signals. The hardware consists of three parts. The first digitizes the analog CVBS or S-video (Y/C) inputs from TV or VTR sources. The second performs MPEG-2 compression coding, primarily with a single-chip MPEG-2 audio/video encoder whose output is an MPEG-2 system PS/TS. The third part handles data stream packing, LAN access, and system control, based on an ARM microcontroller: it packs the encoded stream into Ethernet data frames for the LAN, and it accepts Ethernet data packets bearing control information from the network and decodes the corresponding commands to control digitization, coding, and other operations. To raise the network transmission rate enough for the MPEG-2 data stream, an efficient TCP/IP network protocol stack is built directly on the network hardware of the embedded system, instead of using an ordinary operating system for embedded systems. To obtain a high LAN transmission rate on a low-end ARM, a special transmission channel is opened in the protocol stack for the MPEG-2 stream. The designed system has been tested on an experimental LAN. The experiment shows a maximum LAN transmission rate of up to 12.7 Mbps with good sound and image quality and satisfactory system reliability.

  14. Fractal analysis of en face tomographic images obtained with full field optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wanrong; Zhu, Yue [Department of Optical Engineering, Nanjing University of Science and Technology, Jiangsu (China)


    The quantitative modeling of the imaging signal of pathological and healthy areas is necessary to improve the specificity of diagnosis with tomographic en face images obtained with full-field optical coherence tomography (FFOCT). In this work, we propose to use the depth-resolved change in the fractal parameter as a quantitative, specific biomarker of the stages of disease. The idea is based on the fact that tissue is a random medium and only statistical parameters that characterize tissue structure are appropriate. We successfully relate the imaging signal in FFOCT to the tissue structure in terms of the scattering function and the coherent transfer function of the system. The formula is then used to analyze the ratio of the Fourier transforms of the cancerous tissue to the normal tissue. We found that when the tissue changes from normal to cancerous, the ratio of the spectrum of the index inhomogeneities takes the form of an inverse power law, and the change in the fractal parameter can be determined by estimating slopes of the spectra of the ratio plotted on a log-log scale. Fresh normal and cancerous liver tissues were imaged to demonstrate the potential diagnostic value of the method at early stages, when there are no significant changes in tissue microstructure. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  15. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong


    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
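The cycle-shift stage is easy to illustrate. In the sketch below a logistic map stands in for the paper's hyper-chaotic system, and the shifts are applied row-wise; both are our assumptions for illustration only:

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Chaotic sequence in (0, 1) from the logistic map x <- mu*x*(1-x)."""
    xs = np.empty(n)
    for i in range(n):
        x0 = mu * x0 * (1 - x0)
        xs[i] = x0
    return xs

def cycle_shift_encrypt(img, key=0.3567):
    """Row-wise cyclic shifts with key-derived chaotic offsets."""
    h, w = img.shape
    shifts = (logistic_sequence(key, h) * w).astype(int)
    enc = np.stack([np.roll(row, s) for row, s in zip(img, shifts)])
    return enc, shifts

def cycle_shift_decrypt(enc, shifts):
    """Invert the shifts; with the key, shifts are regenerated identically."""
    return np.stack([np.roll(row, -s) for row, s in zip(enc, shifts)])
```

Because the shift offsets are fully determined by the key, only the key needs to be shared, which is the "simplified key distribution" point made in the abstract.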


    Directory of Open Access Journals (Sweden)

    Rohit Kumar Gangwar


    Full Text Available With the increase in demand, multimedia production is growing fast, which strains network bandwidth and memory storage. Image compression is therefore significant for reducing data redundancy to save memory and transmission bandwidth. An efficient compression technique has been proposed which combines rough fuzzy logic with Huffman coding. While normalizing image pixels, each pixel value belonging to the image foreground is characterized and interpreted. The image is subdivided into pixels, which are then characterized by a pair of approximation sets. Encoding uses Huffman codes, which are statistically independent and produce an efficient code for compression; decoding uses rough fuzzy logic to rebuild the image pixels. The method used here is the rough fuzzy logic with Huffman coding algorithm (RFHA). Different compression techniques are compared with Huffman coding, and fuzzy logic is applied to the Huffman-reconstructed image. Results show that high compression rates are achieved, with visually negligible difference between the compressed and original images.
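
    The Huffman stage of such a scheme can be sketched as follows (a generic Huffman coder, not the RFHA pipeline itself; the rough-fuzzy stage is omitted and all names are illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman prefix-code table from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: single distinct symbol
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def huffman_encode(data, codes):
    return "".join(codes[s] for s in data)

def huffman_decode(bits, codes):
    """Greedy decode works because the code is prefix-free."""
    inverse = {c: s for s, c in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```

    Frequent symbols receive short codes, so the encoded bit string is shorter than a fixed 8-bit-per-symbol representation whenever the symbol distribution is skewed.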

  17. Discrete-cosine-transform-based image compression applied to dermatology (United States)

    Cookson, John P.; Sneiderman, Charles; Rivera, Christopher


    The research reported in this paper concerns an evaluation of the impact of compression on the quality of digitized color dermatologic images. 35 mm slides of four morphologic types of skin lesions were captured at 1000 pixels per inch (ppi) in 24-bit RGB color, to give an approximately 1K X 1K image. The discrete cosine transform (DCT) algorithm was applied to the resulting image files to achieve compression ratios of about 7:1, 28:1, and 70:1. The original scans and the decompressed files were written to a 35 mm film recorder. Together with the original photo slides, the slides resulting from digital images were evaluated in a study of morphology recognition and image quality assessment. A panel of dermatologists was asked to identify the morphology depicted and to rate the image quality of each slide. The images were shown in a progression from the highest level of compression to the original photo slides. We conclude that the use of DCT file compression yields acceptable performance for skin lesion images, since differences in morphology recognition performance do not correlate significantly with the use of original photos versus compressed versions. Additionally, image quality evaluation does not correlate significantly with the level of compression.
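
    The underlying DCT compression principle, keeping low-frequency coefficients and discarding high-frequency ones, can be illustrated with a small orthonormal-DCT sketch; this is a simplification of the study's actual codec, and the function names are assumptions:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: entry (k, m) of an n x n basis."""
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1)
               * np.arange(n)[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def dct2(block):
    """2D DCT of a square block via separable matrix products."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeff):
    C = dct_matrix(coeff.shape[0])
    return C.T @ coeff @ C

def compress_block(block, keep):
    """Zero all but the top-left keep x keep DCT coefficients,
    then reconstruct; higher compression means smaller `keep`."""
    c = dct2(block)
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1
    return idct2(c * mask)
```

    With all coefficients kept the transform is exactly invertible; discarding the high-frequency block trades reconstruction fidelity for a smaller coefficient set.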

  18. Lossless compression of hyperspectral images using hybrid context prediction. (United States)

    Liang, Yuan; Li, Jianping; Guo, Ke


    In this letter, a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband prediction. Intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. Interband prediction uses hybrid context prediction, which is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of the hybrid context prediction is coded by arithmetic coding. We compare the proposed lossless compression algorithm with some of the existing algorithms for hyperspectral images, such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM) and JPEG-LS. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.
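
    The intraband median predictor referred to here is commonly the MED (median edge detector) predictor of JPEG-LS; under that assumption, spatial prediction residuals can be sketched as follows (illustrative names, and a simple zero context at the borders):

```python
import numpy as np

def med_predict(a, b, c):
    """MED predictor: a = left, b = above, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def spatial_residuals(img):
    """Predict each pixel from its causal neighbours and return the
    prediction residuals (what the coding stage would entropy-code)."""
    h, w = img.shape
    res = np.zeros_like(img, dtype=int)
    for y in range(h):
        for x in range(w):
            a = int(img[y, x - 1]) if x > 0 else 0
            b = int(img[y - 1, x]) if y > 0 else 0
            c = int(img[y - 1, x - 1]) if (x > 0 and y > 0) else 0
            res[y, x] = int(img[y, x]) - med_predict(a, b, c)
    return res
```

    On smooth regions the residuals concentrate near zero, which is what makes the subsequent arithmetic coding effective.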

  19. CMOS low data rate imaging method based on compressed sensing (United States)

    Xiao, Long-long; Liu, Kun; Han, Da-peng


    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low-data-rate imaging approach that implements compressed sensing (CS). Under the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical current outputs from the pixels in a column are combined, with weights specified by voltages, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM): each element of the VMM takes the total value of each column as input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm, and this percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase before storage, and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.
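
    The separable projection onto a 2D basis set can be simulated in software as a two-sided matrix measurement (a numerical simulation only: the paper performs these computations in analog hardware, and the Gaussian measurement matrices and names here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_2d(X, m_rows, m_cols):
    """Separable 2D compressive measurement Y = Phi_r @ X @ Phi_c.T,
    reducing an h x w image to m_rows x m_cols coefficients."""
    h, w = X.shape
    Phi_r = rng.standard_normal((m_rows, h)) / np.sqrt(m_rows)
    Phi_c = rng.standard_normal((m_cols, w)) / np.sqrt(m_cols)
    Y = Phi_r @ X @ Phi_c.T
    return Y, (Phi_r, Phi_c)
```

    Recovering the image from Y requires a sparse-recovery optimization algorithm, which is not shown here; the measurement step alone illustrates how the compression ratio (h * w) / (m_rows * m_cols) is set by the matrix sizes.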

  20. Survey for Image Representation Using Block Compressive Sensing For Compression Applications

    Directory of Open Access Journals (Sweden)

    Ankita Hundet


    Full Text Available Compressive sensing (CS) theory has proved favourable for evolving data compression techniques, though it was originally put forward with the objective of achieving dimension-reduced sampling to save data sampling cost. In this paper two sampling methods are explored for block CS (BCS) with discrete cosine transform (DCT) based image representation for compression applications: (a) coefficient random permutation (CRP) and (b) adaptive sampling (AS). The CRP method can balance the sparsity of sampled vectors in the DCT domain of the image, thereby improving CS sampling efficiency. To attain AS, we design an adaptive measurement matrix used in CS based on the energy distribution characteristics of the image in the DCT domain, which markedly improves CS performance. Our experimental results reveal that the proposed methods are efficacious in reducing the dimension of the BCS-based image representation and/or improving the recovered image quality. The proposed BCS-based image representation scheme could be an efficient alternative for applications of encrypted image compression and/or robust image compression.

  1. CMOS Image Sensor with On-Chip Image Compression: A Review and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Milin Zhang


    Full Text Available Demand for high-resolution, low-power sensing devices with integrated image processing capabilities, especially compression capability, is increasing. CMOS technology enables the integration of image sensing and image processing, making it possible to improve overall system performance. This paper reviews the current state of the art in CMOS image sensors featuring on-chip image compression. Firstly, typical sensing systems consisting of separate image-capturing and image-compression processing units are reviewed, followed by systems that integrate focal-plane compression. The paper also provides a thorough review of a new design paradigm, in which image compression is performed during the image-capture phase prior to storage, referred to as compressive acquisition. High-performance sensor systems reported in recent years are also introduced. Performance analysis and comparison of the reported designs using different design paradigms are presented at the end.

  2. An Efficient Image Compression Technique Based on Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Prof. Rajendra Kumar Patel


    Full Text Available The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition visual content, has increased the need for effective and standardized image compression techniques. Digital images play a very important role in describing detailed information. The key obstacle for many applications is the vast amount of data required to represent a digital image directly. The various processes of digitizing images to obtain the best quality, for clearer and more accurate information, lead to a requirement for more storage space and better storage and access mechanisms in the form of hardware or software. In this paper we concentrate mainly on this flaw, so as to reduce storage while preserving the best image quality. State-of-the-art techniques can compress typical images from 1/10 to 1/50 of their uncompressed size without visibly affecting image quality. From our study we observe that there is a need for a good image compression technique which provides better reduction in terms of storage and quality. Arithmetic coding is an effective way of reducing encoded data, so in this paper we propose an arithmetic coding with Walsh transformation based image compression technique, which is an efficient means of reduction.
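
    To make the arithmetic-coding step concrete, here is a minimal static arithmetic coder using exact rationals (a teaching sketch: practical coders use finite-precision integer arithmetic and adaptive models, and the Walsh-transform stage of the proposed method is not shown; all names are illustrative):

```python
from collections import Counter
from fractions import Fraction

def build_model(data):
    """Assign each symbol a sub-interval [low, high) of [0, 1)
    proportional to its frequency."""
    freq = Counter(data)
    total = sum(freq.values())
    model, low = {}, Fraction(0)
    for sym in sorted(freq):
        p = Fraction(freq[sym], total)
        model[sym] = (low, low + p)
        low += p
    return model

def arith_encode(data, model):
    """Narrow the interval once per symbol; any number in the final
    interval identifies the whole message."""
    low, width = Fraction(0), Fraction(1)
    for sym in data:
        s_low, s_high = model[sym]
        low += width * s_low
        width *= (s_high - s_low)
    return low + width / 2

def arith_decode(code, model, length):
    """Walk the intervals back out; `length` symbols are recovered."""
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in model.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)
                break
    return "".join(out)
```

    Because Fractions are exact, the decoder recovers the message perfectly; the information content of the message is reflected in how many bits are needed to write the final fraction.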

  3. Wavelet based hierarchical coding scheme for radar image compression (United States)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng


    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into image blocks of different frequency bands by a 2-D wavelet transformation, and each block is quantized and coded by the Huffman coding scheme. A demonstration system was developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  4. Implementation of Novel Medical Image Compression Using Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Mohammad Al-Rababah


    Full Text Available Medical image processing is one of the most important areas of research in medical applications of digitized medical information, and medical images have large sizes. Since the advent of digital medical information, an important challenge has been handling the transmission and storage requirements of huge data volumes, including medical images. Compression is considered one of the necessary techniques to solve this problem, and a large number of medical images must be compressed using lossless compression. This paper proposes a new medical image compression algorithm founded on the lifting wavelet transform CDF 9/7 combined with the SPIHT coding algorithm; the algorithm applies the lifting structure to confirm the benefits of the wavelet transform. To evaluate the proposed algorithm, the outcomes are compared with those of other compression algorithms such as the JPEG codec. Experimental results prove that the proposed algorithm is superior to the other algorithms in both lossy and lossless compression for all medical images tested. The Wavelet-SPIHT algorithm provides very high PSNR values for MRI images.

  5. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes. (United States)

    Wilkinson, M H


    A run length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as those used in the Tag Image File Format (TIFF). These schemes are Lempel-Ziv-Welch (LZW), Macintosh PackBits, and CCITT Group 3 Facsimile 1-dimensional modified Huffman run length code. In a set of 25 images consisting of full microscopic fields of view of bacterial slides, the method gave a 10.3-fold compression: 1.074 times better than LZW. In a second set of images of single areas of interest within each field of view, compression ratios of over 600 were obtained, 12.8 times that of LZW. The drawback of the system is its poor worst-case performance. The method could be used in any application requiring storage of binary images of relatively small objects with fairly large spaces in between.
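
    A run-length code of comparable simplicity for binary images can be sketched as follows (an illustrative scheme, not necessarily the exact code used in the morphometry system; the names are assumptions):

```python
import numpy as np

def rle_encode(binary_img):
    """Encode a binary image as (first_value, run_lengths) over the
    raster-scanned pixel sequence."""
    flat = np.asarray(binary_img, dtype=np.uint8).ravel()
    first = int(flat[0])
    # indices where the value flips mark run boundaries
    changes = np.flatnonzero(np.diff(flat)) + 1
    bounds = np.concatenate(([0], changes, [flat.size]))
    runs = np.diff(bounds)
    return first, runs

def rle_decode(first, runs, shape):
    """Rebuild the image: runs alternate between the two values."""
    vals = np.resize([first, 1 - first], len(runs))
    return np.repeat(vals, runs).reshape(shape).astype(np.uint8)
```

    Images of small objects on large uniform backgrounds produce few, long runs, which matches the very high ratios reported above; a checkerboard-like image is the worst case, since every pixel starts a new run.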

  6. Lossy Compression Color Medical Image Using CDF Wavelet Lifting Scheme

    Directory of Open Access Journals (Sweden)

    M. beladghem


    Full Text Available As the coming era is that of digitized medical information, an important challenge to deal with is the storage and transmission requirements of enormous data, including color medical images. Compression is one of the indispensable techniques to solve this problem. In this work, we propose an algorithm for color medical image compression based on a biorthogonal wavelet transform CDF 9/7 coupled with the SPIHT coding algorithm, to which we applied the lifting structure to improve on the drawbacks of the wavelet transform. In order to assess the compression achieved by our algorithm, we compared the results obtained with wavelet-based filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested color images. Our algorithm provides very high PSNR and MSSIM values for color medical images.

  7. Dynamic CT perfusion image data compression for efficient parallel processing. (United States)

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A


    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
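
    One simple way to exploit temporal redundancy while preserving O(1) random reads directly from the compressed representation is to store a base frame plus narrow per-frame differences. This is a CPU-side sketch of that idea, not the authors' GPU algorithm, and the function names are assumptions:

```python
import numpy as np

def compress_ctp(frames):
    """Store the first frame plus per-frame differences from it.
    CTP voxel values change little over time, so the differences
    often fit a narrower integer type than the raw data."""
    frames = np.asarray(frames, dtype=np.int16)
    base = frames[0]
    diffs = frames[1:] - base
    if np.abs(diffs).max() <= 127:
        diffs = diffs.astype(np.int8)  # halve storage when range allows
    return base, diffs

def read_voxel(base, diffs, t, idx):
    """O(1) random read of frame t, voxel idx, without decompressing
    anything else (lossless: integer arithmetic only)."""
    if t == 0:
        return int(base[idx])
    return int(base[idx]) + int(diffs[t - 1][idx])
```

    The fixed layout is what makes the read random-access: the location of every element is computable without scanning, which is the property the paper needs for direct GPU processing of compressed data.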

  8. New Methods for Lossless Image Compression Using Arithmetic Coding. (United States)

    Howard, Paul G.; Vitter, Jeffrey Scott


    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…

  9. Architecture for hardware compression/decompression of large images (United States)

    Akil, Mohamed; Perroton, Laurent; Gailhard, Stephane; Denoulet, Julien; Bartier, Frederic


    In this article, we present a popular lossless compression/decompression algorithm, GZIP, and a study of its implementation on an FPGA-based architecture. The algorithm is lossless and applied to 'bi-level' images of large size. It ensures a minimum compression rate for the images we are considering. The proposed architecture for the compressor is based on a hash table, and the decompressor is based on a parallel decoder of the Huffman codes.
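
    The GZIP compressor's core, LZ77 matching via a hash table on short prefixes, can be sketched in software as follows (a greedy token-level sketch of the principle the hardware implements, without the Huffman stage or bit packing; names are illustrative):

```python
def lz77_compress(data, window=4096, min_match=3):
    """Greedy LZ77 with a hash table keyed on 3-byte prefixes
    (GZIP-style). Emits literals and (distance, length) matches."""
    out, table, i = [], {}, 0
    while i < len(data):
        key = data[i:i + min_match]
        j = table.get(key, -1)
        if len(key) == min_match:
            table[key] = i
        if j >= 0 and i - j <= window:
            length = min_match
            while (i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            out.append(("match", i - j, length))
            i += length
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def lz77_decompress(tokens):
    """Replay literals and back-references byte by byte, which also
    handles overlapping matches correctly."""
    buf = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            buf.append(tok[1])
        else:
            _, dist, length = tok
            for _ in range(length):
                buf.append(buf[-dist])
    return bytes(buf)
```

    The hash table gives constant-time candidate lookup, which is why it maps well to hardware; in real GZIP (DEFLATE) the emitted tokens are then entropy-coded with Huffman codes.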

  10. Effect of Embedding Watermark on Compression of the Digital Images

    CERN Document Server

    Aggarwal, Deepak


    Image compression plays a very important role in image processing, especially when we are to send an image over the internet. The threat to information on the internet increases, and images are no exception. Generally an image is sent over the internet in compressed form, to make optimal use of network bandwidth. But while on the network, the image can be changed at any intermediate node, intentionally or unintentionally. To make sure that the correct image is delivered at the other end, we embed a watermark in the image. The watermarked image is then compressed and sent over the network. When the image is decompressed at the other end, we can extract the watermark and make sure that the image is the same one that was sent. Though watermarking increases the size of the uncompressed image, this has to be done to achieve a high degree of robustness, i.e. how well an image withstands attacks on it. The present paper is an attempt to make transmission of the images secure from...

  11. Image Denoising of Wavelet based Compressed Images Corrupted by Additive White Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Shyam Lal


    Full Text Available In this study an efficient algorithm is proposed for the removal of additive white Gaussian noise from compressed natural images in the wavelet domain. First, the natural image is compressed by the discrete wavelet transform, and then the proposed hybrid filter is applied for denoising of compressed images corrupted by additive white Gaussian noise (AWGN). The proposed hybrid filter (HMCD) is a combination of a non-linear fourth-order partial differential equation and a bivariate shrinkage function. It provides better results in terms of noise suppression while keeping edge blurring to a minimum, as compared to other existing denoising techniques for wavelet-based compressed images. Simulation and experimental results on benchmark test images demonstrate that the proposed hybrid filter attains competitive denoising performance compared with other state-of-the-art image denoising algorithms. It is particularly effective for highly corrupted images in the wavelet-based compressed domain.

  12. Imaging industry expectations for compressed sensing in MRI (United States)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob


    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 

  13. An Improved Interpolative Vector Quantization Scheme for Image Compression

    Directory of Open Access Journals (Sweden)

    Ms. Darshana Chaware


    Full Text Available The aim of this paper is to develop a new image compression scheme by introducing visual patterns into interpolative vector quantization (IVQ). In this scheme, input images are first down-sampled by an ideal filter. Then, the down-sampled images are compressed lossily by JPEG and transmitted to the decoder. On the decoder side, the decoded images are first up-sampled to the original resolution. The codebook is designed using the LBG algorithm, and we introduce visual patterns in designing the codebook. Experimental results show that our scheme achieves much better performance than JPEG in terms of visual quality and PSNR.

  14. Applications of chaos theory to lossy image compression (United States)

    Perrone, A. L.


    The aim of this paper is to show that the theoretical issues presented elsewhere (Perrone, Lecture Notes in Computer Science 880 (1995) 9-52) and relative to a new technique of stabilization of chaotic dynamics can be partially implemented to develop a new efficient prototype for lossy image compression. The results of the comparison between the performances of this prototype and the usual algorithms for image compression will also be discussed. The tests were performed on standard test images of the European Space Agency (E.S.A.). These images were obtained from a Synthetic Aperture Radar (S.A.R.) device mounted on an ERS-1 satellite.

  15. Grayscale Image Compression Based on Min Max Block Truncating Coding

    Directory of Open Access Journals (Sweden)

    Hilal Almarabeh


    Full Text Available This paper presents an image compression technique based on block truncation coding. In this work, a min-max block truncation coding (MM_BTC) scheme is presented for grayscale image compression, which relies on dividing the image into non-overlapping blocks. MM_BTC differs from other block truncation coding schemes, such as classical block truncation coding (BTC), in the way the quantization levels are selected in order to remove redundancy. Objective measures such as bit rate (BR), mean square error (MSE), peak signal-to-noise ratio (PSNR), and redundancy (R) were used to present a detailed evaluation of the image quality of MM_BTC.
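
    Under the assumption that the two quantization levels are the block minimum and maximum (a plausible reading of "min max"; classical BTC instead chooses levels that preserve the block mean and variance), the encoding can be sketched as:

```python
import numpy as np

def mmbtc_encode(block):
    """Min-max BTC: keep the block min, max and a one-bit plane
    choosing the nearer of the two levels for each pixel."""
    lo, hi = int(block.min()), int(block.max())
    bits = block >= (lo + hi) / 2.0
    return lo, hi, bits

def mmbtc_decode(lo, hi, bits):
    """Reconstruct using only the two stored levels."""
    return np.where(bits, hi, lo).astype(np.uint8)
```

    Each n-pixel block then costs two bytes plus n bits instead of n bytes, i.e. close to 1 bit per pixel for large blocks; blocks containing only two gray levels are reconstructed exactly.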

  16. Realization of Fractal Affine Transformation

    Institute of Scientific and Technical Information of China (English)


    This paper gives the definition of fractal affine transformation and presents a specific method for its realization, together with the corresponding mathematical equations, which are essential in fractal image construction.
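
    A standard realization of contractive affine maps w(x) = Ax + b in fractal image construction is the chaos game over an iterated function system; the Sierpinski-triangle maps below are a textbook example rather than the paper's specific equations, and the names are illustrative:

```python
import numpy as np

# Three contractive affine maps w(x) = A @ x + b whose attractor is
# the Sierpinski triangle: A halves distances, b shifts toward a vertex.
MAPS = [
    (np.eye(2) * 0.5, np.array([0.0, 0.0])),
    (np.eye(2) * 0.5, np.array([0.5, 0.0])),
    (np.eye(2) * 0.5, np.array([0.25, 0.5])),
]

def chaos_game(n_points=10000, seed=0):
    """Iterate randomly chosen affine maps; because every map is a
    contraction, the orbit settles onto the attractor of the IFS."""
    rng = np.random.default_rng(seed)
    pts = np.empty((n_points, 2))
    x = np.array([0.1, 0.1])
    for i in range(n_points):
        A, b = MAPS[rng.integers(3)]
        x = A @ x + b
        pts[i] = x
    return pts
```

    Plotting the returned points reveals the fractal; changing the matrices A and offsets b changes the attractor, which is the degree of freedom fractal image construction exploits.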

  17. Lossless Compression of Medical Images Using 3D Predictors. (United States)

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio


    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), which is one of the state-of-the-art lossless compression technologies, presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on MRP algorithm.

  18. Lossless/Lossy Compression of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren


    We present a general and robust method for lossless/lossy coding of bi-level images. The compression and decompression method is analogous to JBIG, the current international standard for bi-level image compression, and is based on arithmetic coding and a template to determine the coding state. Loss......-too-low rate. The current flipping algorithm is intended for relatively fast encoding and moderate latency. By this method, many halftones can be compressed at perceptually lossless quality at a rate which is half of what can be achieved with (lossless) JBIG. The (de)coding method is proposed as part of JBIG-2......, an emerging international standard for lossless/lossy compression of bi-level images....

  19. DCT and DST Based Image Compression for 3D Reconstruction (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.


    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts, where the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to a 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, with perceptual quality equivalent to JPEG2000.
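
    The two-transform front end can be sketched with explicit (non-normalised) DCT-II and DST-II basis matrices; quantization and the entropy-coding stages are omitted, and the function names are assumptions:

```python
import numpy as np

def dct2_matrix(n):
    """Non-normalised DCT-II basis; row k, sample m."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return np.cos(np.pi * (2 * m + 1) * k / (2 * n))

def dst2_matrix(n):
    """Non-normalised DST-II basis; row k, sample m."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return np.sin(np.pi * (2 * m + 1) * (k + 1) / (2 * n))

def forward(img):
    """Step 1: DCT along each row. Step 2: DST along each column."""
    C = dct2_matrix(img.shape[1])
    S = dst2_matrix(img.shape[0])
    return S @ (img @ C.T)

def inverse(coeff):
    """Undo both transforms; without quantization this is exact."""
    C = dct2_matrix(coeff.shape[1])
    S = dst2_matrix(coeff.shape[0])
    return np.linalg.inv(S) @ coeff @ np.linalg.inv(C).T
```

    Inverting explicitly with np.linalg.inv sidesteps normalisation conventions; a production codec would use orthonormal transforms so the inverse is just the transpose.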

  20. Spatial and radiometric characterization of multi-spectrum satellite images through multi-fractal analysis (United States)

    Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.


    Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images, obtained by high-resolution satellites, presently give a lot of information on these indexes, based on the data of several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four band images have been considered, which are involved in these vegetation indexes and were taken by the Ikonos-2 and Landsat-7 satellites over the same geographic location, to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as on two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). In order to do so, a multi-fractal analysis of these multi-spectral images was applied to each of these bands and the two indexes derived. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in the blue and green bands than in the red and near-infrared bands. The NDVI showed a higher sensitivity to radiometric resolution than the EVI; both were equally affected by spatial resolution. Of the two factors, spatial resolution has the greater impact on the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.

  1. Secure and Faster Clustering Environment for Advanced Image Compression

    Directory of Open Access Journals (Sweden)



    Full Text Available Cloud computing provides ample opportunity in many areas, such as fast image transmission and secure, efficient imaging as a service. In general, users need a faster and more secure service, yet image compression algorithms usually do not run fast, and in spite of several ongoing research efforts, conventional compression algorithms might not be able to run faster. So, we perform a comparative study of three image compression algorithms and their variety of features and factors to choose the best among them for cluster processing. The best one can then be applied in a cluster computing environment to run parallel image compression for faster processing. This paper is a real-time implementation of distributed image compression on a cluster of nodes. In cluster computing, security is also an important factor, so we propose a distributed intrusion detection system that monitors all the nodes in the cluster. If an intrusion occurs in node processing, a prevention step is taken based on the RIC (Robust Intrusion Control) method. We demonstrate the effectiveness and feasibility of our method on a set of satellite images for defense forces. The efficiency ratio of this computation process is 91.20.

  2. High resolution remote sensing image segmentation based on graph theory and fractal net evolution approach (United States)

    Yang, Y.; Li, H. T.; Han, Y. S.; Gu, H. Y.


    Image segmentation is the foundation of further object-oriented image analysis, understanding and recognition. It is one of the key technologies in high resolution remote sensing applications. In this paper, a new fast image segmentation algorithm for high resolution remote sensing imagery is proposed, which is based on graph theory and the fractal net evolution approach (FNEA). Firstly, an image is modelled as a weighted undirected graph, where nodes correspond to pixels and edges connect adjacent pixels. An initial object layer can be obtained efficiently from graph-based segmentation, which runs in time nearly linear in the number of image pixels. Then FNEA starts from the initial object layer and pairwise merges neighbouring objects with the aim of minimizing the resulting summed heterogeneity. Furthermore, according to the character of different features in high resolution remote sensing images, three different merging criteria for image objects, based on spectral and spatial information, are adopted. Finally, compared with the commercial remote sensing software eCognition, the experimental results demonstrate that the efficiency of the algorithm is significantly improved, while the result maintains good feature boundaries.

  3. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images. (United States)

    N'Diaye, Mambaye; Degeratu, Cristinel; Bouler, Jean-Michel; Chappard, Daniel


    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski-Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. K-cluster-valued compressive sensing for imaging

    Directory of Open Access Journals (Sweden)

    Xu Mai


    Full Text Available Abstract The success of compressive sensing (CS) implies that an image can be compressed directly at acquisition, with fewer measurements over the whole image than the image has pixels. In this paper, we extend existing CS by including the prior knowledge that only K cluster values are available for the pixels or wavelet coefficients of an image. To model such prior knowledge, we propose a K-cluster-valued CS approach for imaging, incorporating the K-means algorithm into the CoSaMP recovery algorithm. One significant advantage of the proposed approach over conventional CS is its capability of reducing the number of measurements required for accurate image reconstruction. Finally, the performance of conventional CS and K-cluster-valued CS is evaluated using natural images and background subtraction images.
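
    The K-cluster prior in a nutshell: snap estimated values to K cluster centers found by k-means. In the approach above this projection is folded into the CoSaMP iterations; the sketch below shows it on its own with a tiny 1-D k-means, and K, the iteration count, and the data are purely illustrative.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    # naive 1-D k-means: alternate nearest-center assignment and mean update
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

def project_to_clusters(values, k):
    # replace every value by the center of its cluster
    centers, labels = kmeans_1d(values, k)
    return centers[labels]

x = np.array([0.1, -0.2, 0.0, 9.8, 10.1, 10.0, 5.2, 4.9])
print(project_to_clusters(x, k=3))   # each value snapped to its cluster mean
```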

  5. A novel psychovisual threshold on large DCT for image compression. (United States)

    Abu, Nur Azman; Ernawan, Ferda


    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance, to reduce the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order forms the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the required quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides significant improvement in the quality of the output images. Experimentally, the large quantization tables derived from the psychovisual threshold produce output images largely free of artifacts. Moreover, the experimental results show that the psychovisual threshold produces better image quality at higher compression rates than JPEG image compression.
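
    The core mechanism above, a quantization table whose step sizes follow a frequency-dependent visual threshold, in miniature. The linear ramp used for the table is a hypothetical stand-in for the psychovisually derived thresholds of the paper.

```python
import numpy as np

def make_qtable(n=8, base=10.0, slope=4.0):
    # quantization step grows with frequency order u + v:
    # coarser steps where the visual threshold is assumed higher
    u, v = np.indices((n, n))
    return base + slope * (u + v)

def quantize(coeffs, qtable):
    return np.round(coeffs / qtable)

def dequantize(q, qtable):
    return q * qtable

coeffs = np.full((8, 8), 100.0)       # pretend DCT coefficients
qt = make_qtable()
err = np.abs(dequantize(quantize(coeffs, qt), qt) - coeffs)
print(err[0, 0], err[7, 7])           # low frequencies survive more faithfully
```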


    Directory of Open Access Journals (Sweden)

    Ferda Ernawan


    Full Text Available An extension of standard JPEG image compression, known as JPEG-3, allows rescaling of the quantization matrix to achieve a certain output image quality. Recently, the Tchebichef Moment Transform (TMT) has been introduced in the field of image compression and has been shown to perform better than standard JPEG image compression. This study presents an adaptive TMT image compression, achieved by generating custom quantization tables for low, medium and high output quality levels based on a psychovisual model. The psychovisual model is developed to approximate the visual threshold on Tchebichef moments from the image reconstruction error. The contribution of each moment is investigated and analyzed in a quantitative experiment. The sensitivity of the TMT basis functions can be measured by evaluating their contributions to image reconstruction at each moment order. The psychovisual threshold model allows a developer to design several custom TMT quantization tables for a user to choose from according to his or her target output preference. Consequently, these quantization tables produce a lower average Huffman code length while retaining higher image quality than the extended JPEG scaling scheme.

  7. A Novel Psychovisual Threshold on Large DCT for Image Compression (United States)


    A psychovisual experiment prescribes the quantization values in image compression. The quantization process is used as a threshold of the human visual system's tolerance, to reduce the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order forms the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks, which is used to automatically generate the required quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides significant improvement in the quality of the output images. Experimentally, the large quantization tables derived from the psychovisual threshold produce output images largely free of artifacts. Moreover, the experimental results show that the psychovisual threshold produces better image quality at higher compression rates than JPEG image compression. PMID:25874257

  8. Optimization of wavelet decomposition for image compression and feature preservation. (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T


    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet (whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast, and the figure of merit, in the wavelet lossy compression scheme. By analyzing the spectrum of the wavelet filters, we can characterize the compression outcomes and feature-preservation characteristics as a function of the wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
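
    One level of a 2-D Haar analysis/synthesis pair, used here to measure the mean-square error after discarding the detail subbands. This is a crude stand-in for the error/efficiency trade-off the framework above searches over; the Haar filter is simply the shortest member of the wavelet families it evaluates.

```python
import numpy as np

def haar2d(x):
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0      # row lowpass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0      # row highpass
    ll = (lo[0::2] + lo[1::2]) / 2.0          # then the same along columns
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

img = np.add.outer(np.arange(8.0), np.arange(8.0))    # smooth ramp image
ll, lh, hl, hh = haar2d(img)
assert np.allclose(ihaar2d(ll, lh, hl, hh), img)      # perfect reconstruction
zero = np.zeros_like(lh)
rec = ihaar2d(ll, zero, zero, zero)                   # keep only the LL band
print(np.mean((rec - img) ** 2))                      # small MSE on this ramp
```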

  9. Perceptually tuned JPEG coder for echocardiac image compression. (United States)

    Al-Fahoum, Amjed S; Reza, Ali M


    In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, owing to Digital Imaging and Communications in Medicine (DICOM) requirements, is limited to the standard discrete cosine transform-based JPEG (Joint Photographic Experts Group) format. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main issue of concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting the gain factor. This goal is achieved by considering the region of interest, optimum bit allocation, human visual system constraints, and an optimum coding technique. These parameters are jointly optimized to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance. The proposed approach objectively exhibits a 2.16-dB improvement in peak signal-to-noise ratio and subjectively a 25% improvement over the most widely used compression techniques.

  10. Watermarking of ultrasound medical images in teleradiology using compressed watermark. (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq


    The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as the watermark, losslessly compressed, and embedded into the RONI part of the image in the pixels' least significant bits (LSBs). Lossless compression of the watermark and embedding in the LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression, and their performance was compared in terms of bit reduction and compression ratio. LZW was found to perform best and was used to develop the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes.
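
    A minimal sketch of the ROI/RONI idea above: hash the ROI bytes and hide the hash bits in the least significant bits of RONI pixels. Lossless compression of the watermark (LZW in the paper) and recovery data are omitted for brevity; the image sizes and slices are illustrative.

```python
import hashlib
import numpy as np

def embed(image, roi_slice, roni_slice):
    # hash the ROI and write the hash bits into the RONI pixels' LSBs
    digest = hashlib.sha256(image[roi_slice].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    roni = image[roni_slice].ravel()     # basic slicing: writes reach the image
    assert bits.size <= roni.size, "RONI too small for the watermark"
    roni[:bits.size] = (roni[:bits.size] & 0xFE) | bits
    return image

def verify(image, roi_slice, roni_slice):
    # recompute the ROI hash and compare with the bits stored in the RONI
    digest = hashlib.sha256(image[roi_slice].tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    extracted = image[roni_slice].ravel()[:bits.size] & 1
    return bool(np.array_equal(bits, extracted))

img = np.zeros((32, 32), dtype=np.uint8)
img[:8, :8] = 100                                   # pretend this block is the ROI
embed(img, np.s_[:8, :8], np.s_[8:, :])
print(verify(img, np.s_[:8, :8], np.s_[8:, :]))     # True
img[0, 0] += 1                                      # tamper with the ROI
print(verify(img, np.s_[:8, :8], np.s_[8:, :]))     # False
```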

  11. Fractal analysis of SEM images and mercury intrusion porosimetry data for the microstructural characterization of microcrystalline cellulose-based pellets

    Energy Technology Data Exchange (ETDEWEB)

    Gomez-Carracedo, A.; Alvarez-Lorenzo, C.; Coca, R.; Martinez-Pacheco, R.; Concheiro, A. [Departamento de Farmacia y Tecnologia Farmaceutica, Universidad de Santiago de Compostela, Santiago de Compostela 15782 (Spain); Gomez-Amoza, J.L. [Departamento de Farmacia y Tecnologia Farmaceutica, Universidad de Santiago de Compostela, Santiago de Compostela 15782 (Spain)], E-mail:


    The microstructure of theophylline pellets prepared from microcrystalline cellulose, carbopol and dicalcium phosphate dihydrate, according to a mixture design, was characterized using textural analysis of gray-level scanning electron microscopy (SEM) images and thermodynamic analysis of the cumulative pore volume distribution obtained by mercury intrusion porosimetry. Surface roughness evaluated in terms of gray-level non-uniformity and fractal dimension of pellet surface depended on agglomeration phenomena during extrusion/spheronization. Pores at the surface, mainly 1-15 µm in diameter, determined both the mechanism and the rate of theophylline release, and a strong negative correlation between the fractal geometry and the b parameter of the Weibull function was found for pellets containing >60% carbopol. Theophylline mean dissolution time from these pellets was about two to four times greater. Textural analysis of SEM micrographs and fractal analysis of mercury intrusion data are complementary techniques that enable complete characterization of multiparticulate drug dosage forms.

  12. Efficient Short Boundary Detection & Key Frame Extraction using Image Compression

    Directory of Open Access Journals (Sweden)

    Shilpa R. Jadhav; Anup V. Kalaskar; Shruti Bhargava


    Full Text Available This paper presents a novel algorithm for efficient shot boundary detection and key frame extraction using image compression. The algorithm differs from conventional methods mainly in its use of image segmentation and an attention model. The matching difference between two consecutive frames is computed with different weights. Shot boundaries are detected with an automatic threshold, and key frames are extracted using a reference-frame-based approach. Experimental results show improved shot boundary detection with the proposed algorithm, key frames that represent shot content well, and satisfactory compression of the resulting frames.
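
    Shot-boundary detection in its simplest form: declare a cut when the difference between consecutive frame histograms exceeds an automatic threshold (here mean plus a multiple of the standard deviation of all differences). The weighting scheme and attention model of the paper are omitted; bin count and the constant c are illustrative.

```python
import numpy as np

def detect_cuts(frames, c=2.0):
    # per-frame gray-level histograms
    hists = [np.histogram(f, bins=16, range=(0, 256))[0] for f in frames]
    # absolute histogram difference between consecutive frames
    diffs = np.array([np.abs(hists[i + 1] - hists[i]).sum()
                      for i in range(len(hists) - 1)])
    threshold = diffs.mean() + c * diffs.std()   # automatic threshold
    return [i + 1 for i in np.nonzero(diffs > threshold)[0]]

# five dark frames followed by five bright frames: one cut, at frame 5
frames = [np.full((8, 8), 20)] * 5 + [np.full((8, 8), 200)] * 5
print(detect_cuts(frames))   # [5]
```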

  13. Three-Dimensional Image Compression With Integer Wavelet Transforms (United States)

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.


    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  14. Fast Adaptive Wavelet for Remote Sensing Image Compression

    Institute of Scientific and Technical Information of China (English)

    Bo Li; Run-Hai Jiao; Yuan-Cheng Li


    Remote sensing images are hard to compress at high ratios because of their rich texture. By analyzing the influence of wavelet properties on image compression, this paper proposes wavelet construction rules and builds a new parameterized biorthogonal wavelet construction model. The model parameters are optimized using a genetic algorithm with energy compaction as the objective function. In addition, to resolve the computational complexity of online construction, wavelets are constructed for different classes of images according to the image classification rule proposed in this paper, yielding the fast adaptive wavelet selection algorithm (FAWS). Experimental results show that the wavelet bases of FAWS achieve better compression performance than Daubechies 9/7.

  15. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)


    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
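
    A uniform scalar quantizer with a dead zone is the building block of the wavelet/scalar quantization (WSQ) method described above. The sketch below uses the common textbook form; the step size is illustrative, not one of the FBI-specified subband bin widths.

```python
import numpy as np

def deadzone_quantize(x, step):
    # the zero bin spans (-step, step), twice the width of the other bins,
    # so small (mostly noise) coefficients are discarded entirely
    return np.sign(x) * np.floor(np.abs(x) / step)

def deadzone_dequantize(q, step):
    # reconstruct nonzero bins at their midpoints
    return np.where(q == 0, 0.0, np.sign(q) * (np.abs(q) + 0.5) * step)

coeffs = np.array([-3.7, -0.4, 0.0, 0.6, 2.5])
q = deadzone_quantize(coeffs, step=1.0).astype(int)
print(q.tolist())                                  # [-3, 0, 0, 0, 2]
print(deadzone_dequantize(q, step=1.0).tolist())   # [-3.5, 0.0, 0.0, 0.0, 2.5]
```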

  16. Compression of grayscale scientific and medical image data

    Directory of Open Access Journals (Sweden)

    F Murtagh


    Full Text Available A review of issues in image compression is presented, with a strong focus on the wavelet transform and other closely related multiresolution transforms. The roles of information content, resolution scale, and image capture noise, are discussed. Experimental and practical results are reviewed.

  17. Optimal context quantization in lossless compression of image data sequences

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl


    In image compression, context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates to model high-order statistical dependency of the pixels and the problem of context dilution due to in...

  18. Image Compression and Watermarking scheme using Scalar Quantization

    CERN Document Server

    Swamy, Kilari Veera; Reddy, Y V Bhaskar; Kumar, S Srinivas; 10.5121/ijngn.2010.2104


    This paper presents a new compression technique and image watermarking algorithm based on the Contourlet Transform (CT). For image compression, an energy-based quantization is used; scalar quantization is explored for image watermarking. A double filter bank structure is used in CT: the Laplacian Pyramid (LP) is used to capture point discontinuities, followed by a Directional Filter Bank (DFB) to link them. The coefficients of the down-sampled low-pass version of the LP-decomposed image are re-ordered in a pre-determined manner, and a prediction algorithm is used to reduce entropy (bits/pixel). In addition, the coefficients of CT are quantized based on the energy in the particular band. The superiority of the proposed algorithm over JPEG is observed in terms of reduced blocking artifacts. The results are also compared with the wavelet transform (WT); CT is superior to WT when the image contains more contours. The watermark image is embedded in the low pass image of contourlet decomposition. ...

  19. Compressive SAR imaging with joint sparsity and local similarity exploitation. (United States)

    Shen, Fangfang; Zhao, Guanghui; Shi, Guangming; Dong, Weisheng; Wang, Chenglong; Niu, Yi


    Compressive sensing-based synthetic aperture radar (SAR) imaging has shown its superior capability in high-resolution image formation. However, most of those works focus on the scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack adaptivity in characterizing varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further promote the capability and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme by iterative SAR imaging and updating of the weighted autoregressive model to solve this problem. Eventually, experimental results demonstrated the validity and generality of the proposed approach.

  20. Prior image constrained compressed sensing: a quantitative performance evaluation (United States)

    Thériault Lauzier, Pascal; Tang, Jie; Chen, Guang-Hong


    The appeal of compressed sensing (CS) in the context of medical imaging is undeniable. In MRI, it could enable shorter acquisition times while in CT, it has the potential to reduce the ionizing radiation dose imparted to patients. However, images reconstructed using a CS-based approach often show an unusual texture and a potential loss in spatial resolution. The prior image constrained compressed sensing (PICCS) algorithm has been shown to enable accurate image reconstruction at lower levels of sampling. This study systematically evaluates an implementation of PICCS applied to myocardial perfusion imaging with respect to two parameters of its objective function. The prior image parameter α was shown here to yield an optimal image quality in the range 0.4 to 0.5. A quantitative evaluation in terms of temporal resolution, spatial resolution, noise level, noise texture, and reconstruction accuracy was performed.
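
    For reference, the PICCS objective whose prior-image parameter α is evaluated above is usually written as follows (notation is the generic presentation of PICCS, not copied from this study: Ψ1 and Ψ2 are sparsifying transforms, x_P the prior image, and Ax = y the data-consistency constraint):

```latex
\min_{x} \;\; \alpha \,\bigl\lVert \Psi_1 (x - x_P) \bigr\rVert_1
          + (1 - \alpha)\,\bigl\lVert \Psi_2\, x \bigr\rVert_1
\quad \text{subject to} \quad A x = y
```

    Setting α = 0 reduces to a plain CS reconstruction, while larger α pulls the solution toward the prior image; the study above finds α between 0.4 and 0.5 optimal for myocardial perfusion imaging.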

  1. Fractal Bread. (United States)

    Esbenshade, Donald H., Jr.


    Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures as other breads and sponges. (MDH)

  3. Compression of 3D integral images using wavelet decomposition (United States)

    Mazri, Meriem; Aggoun, Amar


    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
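
    The viewpoint-extraction step above (one pixel per microlens) can be sketched in a few lines for a unidirectional integral image; the microlens width and image size below are illustrative.

```python
import numpy as np

def extract_viewpoints(uii, lens_w):
    # column p of every microlens (each lens_w columns wide) forms
    # viewpoint image p
    return [uii[:, p::lens_w] for p in range(lens_w)]

uii = np.arange(4 * 12).reshape(4, 12)   # 4 rows, 4 microlenses of width 3
views = extract_viewpoints(uii, lens_w=3)
print(len(views), views[0].shape)        # 3 (4, 4)
```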

  4. Onboard low-complexity compression of solar stereo images. (United States)

    Wang, Shuang; Cui, Lijuan; Cheng, Samuel; Stanković, Lina; Stanković, Vladimir


    We propose an adaptive distributed compression solution using particle filtering that tracks correlation, as well as performing disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's Solar TErrestrial RElations Observatory (STEREO) project. Our experimental results show improved compression performance with respect to a benchmark compression scheme, accurate correlation estimation by our proposed particle-based belief propagation algorithm, and significant peak signal-to-noise ratio improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.

  5. The Generation of a Sort of Fractal Graphs

    Institute of Scientific and Technical Information of China (English)

    Zhang Bo; Zhang Ling; et al.


    We present an approach for generating a sort of fractal graphs by a simple probabilistic logic neuron network and show that the graphs can be represented by a set of compressed codings. An algorithm for quickly finding the codings, i.e., recognizing the corresponding graphs, is given. The codings are shown to be optimal. These results may offer a clue for studying image compression and pattern recognition.

  6. Fast-adaptive near-lossless image compression (United States)

    He, Kejing


    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Besides, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
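
    Golomb-Rice coding of prediction residuals is the entropy stage FAIC relies on instead of a full arithmetic coder. A minimal encoder (the parameter k is fixed here for illustration; FAIC adapts its coding as it scans the image, and the decoder is omitted):

```python
def zigzag(r):
    # interleave signed residuals into unsigned: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4
    return (r << 1) if r >= 0 else (((-r) << 1) - 1)

def rice_encode(value, k):
    # quotient in unary (terminated by 0), remainder in k binary digits
    return "1" * (value >> k) + "0" + format(value & ((1 << k) - 1), f"0{k}b")

residuals = [0, -1, 3, 2]
code = "".join(rice_encode(zigzag(r), k=1) for r in residuals)
print(code)   # 0001111001100
```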

  7. Feature preserving compression of high resolution SAR images (United States)

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing


    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in the original images, especially in smooth areas and at edges with low contrast; this is known as the "smoothing effect", and it makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that reduces the smoothing effect, based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modelled as non-stationary information sources, a SAR image is partitioned into overlapped blocks, and each block is then transformed by an adaptive wavelet packet according to its statistical features. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that image quality at compression ratios up to 16:1 is improved significantly and that more weak information is preserved.

  8. Medical image compression with embedded-wavelet transform (United States)

    Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz


    The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.

  9. 3D passive integral imaging using compressive sensing. (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram


    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.

  10. Improved vector quantization scheme for grayscale image compression (United States)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.


    This paper proposes an improved image coding scheme based on vector quantization (VQ). It is well known that the quality of a VQ-compressed image is poor when a small codebook is used. To solve this problem, the mean value of an image block is taken as an alternative block encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of the compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than plain vector quantization while keeping bit rates low.
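
    A sketch of the improved encoding rule above: a block is encoded by its best codeword unless it is "flat enough" that its mean value alone reproduces it well. The flatness test and all numbers are hypothetical simplifications of the scheme.

```python
import numpy as np

def encode_block(block, codebook, mean_threshold=4.0):
    mean = block.mean()
    if np.abs(block - mean).max() < mean_threshold:
        return ("mean", round(mean))          # flat block: cheap mean encoding
    # otherwise: nearest codeword by squared Euclidean distance
    dists = ((codebook - block.ravel()) ** 2).sum(axis=1)
    return ("vq", int(np.argmin(dists)))

codebook = np.array([[0, 0, 0, 0], [255, 255, 255, 255], [0, 255, 0, 255]])
flat = np.array([[100, 101], [99, 100]])
edge = np.array([[0, 250], [5, 255]])
print(encode_block(flat, codebook))   # ('mean', 100)
print(encode_block(edge, codebook))   # ('vq', 2)
```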

  11. Effect of Image Linearization on Normalized Compression Distance (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
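
    NCD is easy to compute once the inputs are strings, which is exactly why the linearization step above matters. A minimal version with zlib as the stand-in compressor:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # C(x), C(y), C(xy): compressed lengths approximate Kolmogorov complexity
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abracadabra" * 50
b2 = b"abracadabra" * 50          # identical content
c = bytes(range(256)) * 3         # unrelated, poorly compressible content
print(ncd(a, b2) < ncd(a, c))     # True: similar inputs are closer
```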

  12. Lossless/Lossy Compression of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren


    …e.g. halftoning and text without any segmentation of the image. The decoding is analogous to the decoder of JBIG, which means that software implementations easily have a throughput of 1 Mpixel per second. In general, the flipping method can target the lossy image for a given not-too-large distortion or not-too…, an emerging international standard for lossless/lossy compression of bi-level images.

  13. A compression tolerant scheme for image authentication

    Institute of Scientific and Technical Information of China (English)

    Liu Baofeng; Zhang Wenjun; Yu Songyu


    Image authentication techniques are used to protect recipients against malicious forgery. In this paper, we propose a new image authentication technique based on digital signatures. Authentication is verified by comparing the features of each block in the tested image with the corresponding features of the block recorded in the digital signature. The proposed authentication scheme is capable of distinguishing visible but non-malicious changes due to common processing operations from malicious changes. Finally, our experimental results show that the proposed scheme not only protects image integrity effectively but also has low computational cost, making it feasible for practical applications.

  14. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization (United States)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.


    Compression of ultrasonic images, which are invariably corrupted by noise, tends to cause over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme called DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or bi-affected. Better denoising can then be realized by selectively thresholding the DWCs. In the "local quantization" stage, different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows noise suppressed and flaw characteristics preserved.

  15. Adaptive interference hyperspectral image compression with spectrum distortion control

    Institute of Scientific and Technical Information of China (English)

    Jing Ma; Yunsong Li; Chengke Wu; Dong Chen


    As one of the next-generation imaging spectrometers, the interferential spectrometer has received much attention. With traditional spectrum compression methods, the hyperspectral images generated by an interferential spectrometer can only be protected for good visual quality in the spatial domain, while their optical applications in the Fourier domain are often ignored. The relation between distortion in the Fourier domain and compression in the spatial domain is therefore analyzed in this letter. Based on this analysis, a novel coding scheme is proposed, which can compress data in the spatial domain while reducing distortion in the Fourier domain. The bitstream of set partitioning in hierarchical trees (SPIHT) is truncated by adaptively lifting the rate-distortion slopes of zerotrees according to the priorities of optical path difference (OPD), based on rate-distortion optimization theory. Experimental results show that the proposed scheme achieves better performance in the Fourier domain while maintaining image quality in the spatial domain.

  16. Password Authentication Based on Fractal Coding Scheme

    Directory of Open Access Journals (Sweden)

    Nadia M. G. Al-Saidi


    Full Text Available Password authentication is a mechanism used to authenticate a user's identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted, then captured together with the user identity using text-to-image mechanisms. The advantage of fractal image coding is that it can securely send the compressed image data through a non-secured communication channel to the server. Verification of the client information against the database system is performed at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after successful verification of the client identity, by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison shows that the proposed scheme provides essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.

  17. A specific measurement matrix in compressive imaging system (United States)

    Wang, Fen; Wei, Ping; Ke, Jun


    Compressed sensing, or compressive sampling (CS), is a framework for simultaneous data sampling and compression proposed by Candès, Donoho, and Tao several years ago. Ever since the advent of the single-pixel camera, one CS application, compressive imaging (CI, also referred to as feature-specific imaging), has attracted the interest of numerous researchers. However, choosing a simple and efficient measurement matrix for such a hardware system remains a challenging problem, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of an order-N Hadamard matrix, and discuss the validity of the matrix theoretically. The advantages of the matrix are its universality and easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Due to memory limitations of the hardware system and of the personal computer used to simulate the process, it is impossible to create a matrix large enough to process large-scale images directly. To solve this problem, a block-wise approach is introduced for large-scale images, and the experimental results demonstrate the validity of this method.
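An odd-row Hadamard measurement matrix of the kind described can be sketched as below. The Sylvester recursion and the 1-based reading of "odd rows" are assumptions; the abstract does not spell out the construction:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an order-n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def odd_row_measurement_matrix(n: int) -> np.ndarray:
    # "Odd rows" taken as rows 1, 3, 5, ... in 1-based indexing
    # (an assumption about the paper's convention).
    H = hadamard(n)
    return H[0::2, :]

Phi = odd_row_measurement_matrix(8)
print(Phi.shape)  # -> (4, 8): an n/2 x n matrix with +-1 entries
```

The ±1 entries are what make the matrix easy to realize optically, e.g. as on/off micromirror patterns with a differential measurement.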

  18. Spatial exemplars and metrics for characterizing image compression transform error (United States)

    Schmalz, Mark S.; Caimi, Frank M.


    The efficient transmission and storage of digital imagery increasingly requires compression to maintain effective channel bandwidth and device capacity. Unfortunately, in applications where high compression ratios are required, lossy compression transforms tend to produce a wide variety of artifacts in decompressed images. Image quality measures (IQMs) have been published that detect global changes in image configuration resulting from the compression or decompression process. Examples include statistical and correlation-based procedures related to mean-squared error, diffusion of energy from features of interest, and spectral analysis. Additional but sparsely-reported research involves local IQMs that quantify feature distortion in terms of objective or subjective models. In this paper, a suite of spatial exemplars and evaluation procedures is introduced that can elicit and measure a wide range of spatial, statistical, or spectral distortions from an image compression transform T. By applying the test suite to the input of T, performance deficits can be highlighted in the transform's design phase, versus discovery under adverse conditions in field practice. In this study, performance analysis is concerned primarily with the effect of compression artifacts on automated target recognition (ATR) algorithm performance. For example, featural distortion can be measured using linear, curvilinear, polygonal, or elliptical features interspersed with various textures or noise-perturbed backgrounds or objects. These simulated target blobs may themselves be perturbed with various types or levels of noise, thereby facilitating measurement of statistical target-background interactions. By varying target-background contrast, resolution, noise level, and target shape, compression transforms can be stressed to isolate performance deficits. Similar techniques can be employed to test spectral, phase and boundary distortions due to decompression. Applicative examples are taken from

  19. Cell type classifiers for breast cancer microscopic images based on fractal dimension texture analysis of image color layers. (United States)

    Jitaree, Sirinapa; Phinyomark, Angkoon; Boonyaphiphat, Pleumjit; Phukpattaranont, Pornchai


    Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis.
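The box-counting estimate of fractal dimension used for these features can be sketched as follows. The dyadic box sizes and the square power-of-two input are simplifying assumptions, and the Otsu binarization step mentioned in the abstract is omitted for brevity:

```python
import numpy as np

def box_count_dimension(binary: np.ndarray) -> float:
    """Estimate the fractal dimension of a square, power-of-two-sided 2-D
    binary image by box counting: count occupied boxes at dyadic box sizes
    and fit the slope of log N(s) versus log(1/s)."""
    n = binary.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Partition the image into (n/s) x (n/s) boxes of side s;
        # a box is "occupied" if any pixel inside it is set.
        blocks = binary.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(max(int(blocks.sum()), 1))
        s //= 2
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return float(slope)

# Sanity check: a completely filled square has dimension 2.
img = np.ones((64, 64), dtype=bool)
d = box_count_dimension(img)
print(round(d, 2))  # -> 2.0
```

In the classification pipeline described above, one such FD value would be computed per binarized color channel and fed to the decision tree as a feature.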

  20. An Image Coder for Lossless and Near Lossless Compression

    Institute of Scientific and Technical Information of China (English)

    MENChaoguang; LIXiukun; ZHAODebin; YANGXiaozong


    In this paper, we propose a new image coder (DACLIC) for lossless and near-lossless image compression. Redundancy removal in DACLIC (Direction and context-based lossless/near-lossless image coder) is achieved by block direction prediction and context-based error modeling. A quadtree coder and a postprocessing technique in DACLIC are also described. Experiments show that DACLIC has higher compression efficiency than the ISO standard LOCO-I (Low complexity lossless compression for images). For example, DACLIC is superior to LOCO-I by 0.12 bpp, 0.13 bpp and 0.21 bpp when the maximum absolute tolerant error n = 0, 5 and 10 for the 512 × 512 image "Lena". In terms of computational complexity, DACLIC has marginally higher encoding complexity than LOCO-I but is comparable to LOCO-I in decoding complexity.

  1. View compensated compression of volume rendered images for remote visualization. (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S


    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve real-time viewing. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  2. Compressive microscopic imaging with "positive-negative" light modulation (United States)

    Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Lan, Ruo-Ming; Wu, Ling-An; Zhai, Guang-Jie; Zhao, Qing


    An experiment on compressive microscopic imaging with a single-pixel detector and a single arm has been performed on the basis of "positive-negative" (differential) light modulation of a digital micromirror device (DMD). A magnified image of micron-sized objects illuminated by the microscope's own incandescent lamp has been successfully acquired. The image quality is improved by more than one order of magnitude compared with that obtained by the conventional single-pixel imaging scheme with normal modulation at the same sampling rate; moreover, the system is robust against instability of the light source and may be applied under very weak light conditions. The nature of the technique and an analysis of its noise sources are discussed in depth. The realization of this technique represents a big step toward practical applications of compressive microscopic imaging in the fields of biology and materials science.

  3. Pulse-compression ghost imaging lidar via coherent detection

    CERN Document Server

    Deng, Chenjin; Han, Shensheng


    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar, without decreasing the range resolution, can easily obtain high single-pulse energy with the use of a long pulse, and the mechanism of coherent detection can eliminate the influence of stray light, which can dramatically improve the detection sensitivity and detection range.

  4. Pulse-compression ghost imaging lidar via coherent detection. (United States)

    Deng, Chenjin; Gong, Wenlin; Han, Shensheng


    Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar, without decreasing the range resolution, can easily obtain high single-pulse energy with the use of a long pulse, and the mechanism of coherent detection can eliminate the influence of stray light, which is helpful in improving the detection sensitivity and detection range.
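The pulse-compression idea in entries 3 and 4, transmitting a long coded pulse and correlating the echo with a replica so that range resolution is set by the code bandwidth rather than the pulse length, can be sketched with a linear chirp. All parameters here are illustrative, not taken from the paper:

```python
import numpy as np

# A long linear-frequency-modulated (chirp) transmit pulse.
fs = 1000.0                      # samples per unit time (illustrative)
t = np.arange(0, 1.0, 1 / fs)    # 1000-sample pulse
chirp = np.cos(2 * np.pi * (10 * t + 40 * t ** 2))  # instantaneous f: 10 -> 90

# Received echo: the chirp delayed inside a longer observation window.
delay = 500
echo = np.zeros(3000)
echo[delay:delay + chirp.size] = chirp

# Pulse compression = cross-correlate the echo with the transmit replica;
# the correlation peak localizes the target despite the long pulse.
compressed = np.correlate(echo, chirp, mode="valid")
peak = int(np.argmax(np.abs(compressed)))
print(peak)  # -> 500: the recovered delay
```

In the coherent-detection GI lidar, the same correlation would additionally carry a Doppler phase, which is what lets the system estimate moving velocity.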

  5. Diagnostics of hemangioma by the methods of correlation and fractal analysis of laser microscopic images of blood plasma (United States)

    Boychuk, T. M.; Bodnar, B. M.; Vatamanesku, L. I.


    For the first time, complex correlation and fractal analysis was used to investigate microscopic images of both hemangioma tissues and liquids. A physical model is proposed to describe the formation of phase distributions of coherent radiation transformed by optically anisotropic biological structures. The phase maps of laser radiation in the boundary diffraction zone are used as the main information parameter. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient and autocorrelation function excess) and a fractal parameter (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the coordinate distributions of phase shifts in the points of laser images of histological sections of hemangioma, hemangioma blood smears and blood plasma with vascular system pathologies. Diagnostic criteria for hemangioma development are determined.

  6. An innovative lossless compression method for discrete-color images. (United States)

    Alzahir, Saif; Borici, Arber


    In this paper, we present an innovative method for lossless compression of discrete-color images, such as map images, graphics, and GIS images, as well as binary images. The method comprises two main components. The first is a fixed-size codebook encompassing 8×8-bit blocks of two-tone data along with their corresponding Huffman codes and their relative probabilities of occurrence. The probabilities, obtained from a very large set of discrete-color images, are also used for arithmetic coding. The second component is row-column reduction coding, which encodes those blocks that are not in the codebook. The proposed method has been successfully applied to two major image categories: 1) images with a predetermined number of discrete colors, such as digital maps, graphs, and GIS images, and 2) binary images. The results show that our method compresses images from both categories by 90% in most cases, and outperforms JBIG-2 by 5%-20% for binary images and by 2%-6.3% for discrete-color images on average.

  7. Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm

    Institute of Scientific and Technical Information of China (English)

    Mingjian Sun; Naizhang Feng; Yi Shen; Jiangang Li; Liyong Ma; Zhenghua Wu


    The photoacoustic tomography (PAT) method, based on compressive sensing (CS) theory, requires that, for CS reconstruction, the desired image has a sparse representation in a known transform domain. However, the sparsity of photoacoustic signals is destroyed because noise is always present. Therefore, the original sparse signal cannot be effectively recovered using a general reconstruction algorithm. In this study, Bayesian compressive sensing (BCS) is employed to obtain highly sparse representations of photoacoustic images based on a set of noisy CS measurements. Simulation results demonstrate that the BCS-reconstructed image achieves superior performance compared with other state-of-the-art CS reconstruction algorithms.


    Institute of Scientific and Technical Information of China (English)

    Yang Guoan; Zheng Nanning; Guo Shugang


    A new approach for designing the Biorthogonal Wavelet Filter Bank (BWFB) for image compression is presented in this letter. The approach is decomposed into two steps. First, an optimal filter bank is designed in a theoretical sense, based on Vaidyanathan's coding gain criterion for a SubBand Coding (SBC) system. Then the filter bank is optimized based on the Peak Signal-to-Noise Ratio (PSNR) criterion in the JPEG2000 image compression system, resulting in a BWFB in the practical application sense. With this approach, a series of BWFBs for a specific class of applications related to image compression, such as remote sensing images, can be quickly designed. Here, new 5/3 and 9/7 BWFBs are presented based on the above approach for remote sensing image compression applications. Experiments show that the two filter banks perform comparably to the CDF 9/7 and LT 5/3 filters in the JPEG2000 standard; at the same time, the coefficients and the lifting parameters of the lifting scheme are all rational, which brings computational advantages and eases VLSI implementation.

  9. Integer wavelet transform for embedded lossy to lossless image compression. (United States)

    Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M


    The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One possible implementation of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. It is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by using the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise, which is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the differences in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
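The point that rounding inside lifting steps still permits exact reconstruction can be illustrated with one level of the integer 5/3 (LeGall) transform. This is a minimal sketch: the periodic border extension via `np.roll` is a simplification (codecs typically use symmetric extension), and the even signal length is assumed:

```python
import numpy as np

def lift53_forward(x: np.ndarray):
    """One level of the integer 5/3 lifting transform on a 1-D signal of
    even length, with periodic (circular) border extension for simplicity."""
    x = x.astype(np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step with rounding: d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
    odd -= (even + np.roll(even, -1)) >> 1
    # Update step with rounding: s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)
    even += (np.roll(odd, 1) + odd + 2) >> 2
    return even, odd

def lift53_inverse(even: np.ndarray, odd: np.ndarray) -> np.ndarray:
    # Undo the steps in reverse order; the rounded terms cancel exactly.
    even = even - ((np.roll(odd, 1) + odd + 2) >> 2)
    odd = odd + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size * 2, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 9, 7, 3, 6, 8, 2, 4])
s, d = lift53_forward(x)
print(np.array_equal(lift53_inverse(s, d), x))  # -> True: lossless round trip
```

The floor operations are exactly the "rounding noise" that the paper models and propagates through the LS structure to predict lossy-mode degradation.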

  10. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro


    Full Text Available This paper deals with the application of distributed source coding (DSC theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  11. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)



    Full Text Available Abstract - The storage and transmission of imagery has become a more challenging task in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the requirements for storage media and transmission bandwidth. Besides improving performance, compression techniques must also converge quickly in order to be applied to real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed. The uses of existing works are also highlighted, for developing novel techniques that face the challenging task of image storage and transmission in multimedia applications.

  12. Improved zerotree coding algorithm for wavelet image compression (United States)

    Chen, Jun; Li, Yunsong; Wu, Chengke


    A listless minimum-zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands of a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  13. Fast lossless color image compression method using perceptron

    Institute of Scientific and Technical Information of China (English)

    贾克斌; 张延华; 庄新月


    The technique of lossless image compression plays an important role in image transmission and storage for high quality. At present, both compression ratio and processing speed should be considered in a real-time multimedia system. A novel lossless compression algorithm is investigated. A low-complexity predictive model is proposed using the correlation of pixels and color components. In the meantime, a perceptron, as used in neural networks, adaptively corrects the prediction values. This makes the prediction residuals smaller and confines them to a narrow dynamic range. A color space transform is also used, and good decorrelation is obtained in our algorithm. Comparative experimental results show that our algorithm performs noticeably better than traditional algorithms. Compared with the new standard JPEG-LS, this predictive model reduces computational complexity, and it is faster than JPEG-LS with negligible performance sacrifice.

  14. Digital image sequence processing, compression, and analysis

    CERN Document Server

    Reed, Todd R



  15. Robust SPIHT-based Image Compression

    Institute of Scientific and Technical Information of China (English)

    CHENHailin; YANGYuhang


    As a famous wavelet-based image coding technique, Set Partitioning In Hierarchical Trees (SPIHT) provides excellent rate-distortion performance and progressive display properties when images are transmitted over lossless networks. But due to its highly state-dependent properties, it performs poorly over lossy networks. In this paper, we propose an algorithm that reorganizes the wavelet transform coefficients according to the wavelet tree concept and codes each wavelet tree independently. Then, each coded bit-plane of each wavelet tree is packetized and transmitted over the network independently, with little header information. Experimental results show that the proposed algorithm greatly improves the robustness of the bitstream while preserving its progressive display properties.

  16. Hybrid coding for split gray values in radiological image compression (United States)

    Lo, Shih-Chung B.; Krasner, Brian; Mun, Seong K.; Horii, Steven C.


    Digital techniques are used more often than ever in a variety of fields. Medical information management is one of the largest digital technology applications. It is desirable to have both a large data storage resource and extremely fast data transmission channels for communication. On the other hand, it is also essential to compress these data into an efficient form for storage and transmission. A variety of data compression techniques have been developed to tackle a diversity of situations. A digital value decomposition method using splitting and remapping has recently been proposed for image data compression. This method attempts to employ error-free compression for the part of the digital value containing the highly significant bits and uses another method for the second part of the digital value. We have reported that the effect of this method is substantial for vector quantization and other spatial encoding techniques. In conjunction with DCT-type coding, however, the splitting method showed only a limited improvement over the nonsplitting method. With the latter approach, we used a nonoptimized method for the images possessing only the top three most significant bit values (3MSBV) and produced a compression ratio of approximately 10:1. Since the 3MSB images are highly correlated and the same values tend to aggregate together, the use of area or contour coding was investigated. In our experiment, we obtained average error-free compression ratios of 30:1 and 12:1 for 3MSB and 4MSB images, respectively, with alternate-value contour coding. With this technique, we clearly verified that the splitting method is superior to the nonsplitting method for finely digitized radiographs.
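The splitting step described above, separating the highly correlated most-significant bits from the remainder so each part can be coded differently, can be sketched as a reversible bit-plane split. This is a minimal illustration of the decomposition only, not the authors' full coder:

```python
import numpy as np

# Split 8-bit pixels into a 3-MSB part (highly correlated, suited to
# error-free area/contour coding) and a 5-bit remainder (coded separately).
pixels = np.array([200, 201, 198, 64, 66, 65], dtype=np.uint8)

msb3 = pixels >> 5          # top three bits: values 0..7
rest = pixels & 0b00011111  # low five bits:  values 0..31

# The split is exactly invertible, so the 3-MSB image can be compressed
# losslessly and recombined with the remainder on decoding.
recombined = (msb3 << 5) | rest
print(np.array_equal(recombined, pixels))  # -> True
```

Note how neighboring bright pixels (200, 201, 198) share the same 3-MSB value of 6; this aggregation of identical values is what makes contour coding of the MSB image so effective.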

  17. Accelerated MR imaging using compressive sensing with no free parameters. (United States)

    Khare, Kedar; Hardy, Christopher J; King, Kevin F; Turski, Patrick A; Marinelli, Luca


    We describe and evaluate a robust method for compressive sensing MRI reconstruction using an iterative soft thresholding framework that is data-driven, so that no tuning of free parameters is required. The approach described here combines a Nesterov type optimal gradient scheme for iterative update along with standard wavelet-based adaptive denoising methods, resulting in a leaner implementation compared with the nonlinear conjugate gradient method. Tests with T₂ weighted brain data and vascular 3D phase contrast data show that the image quality of reconstructions is comparable with those from an empirically tuned nonlinear conjugate gradient approach. Statistical analysis of image quality scores for multiple datasets indicates that the iterative soft thresholding approach as presented here may improve the robustness of the reconstruction and the image quality, when compared with nonlinear conjugate gradient that requires manual tuning for each dataset. A data-driven approach as illustrated in this article should improve future clinical applicability of compressive sensing image reconstruction.

  18. Image Compression Via a Fast DCT Approximation

    NARCIS (Netherlands)

    Bayer, F. M.; Cintra, R. J.


    Discrete transforms play an important role in digital signal processing. In particular, due to its transform domain energy compaction properties, the discrete cosine transform (DCT) is pivotal in many image processing problems. This paper introduces a numerical approximation method for the DCT based

  19. Wavelet-based pavement image compression and noise reduction (United States)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen


    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  20. Feasibility Study of Compressive Sensing Underwater Imaging Lidar (United States)


    …patterns generated using this scheme can significantly reduce the cost and complexity of the antenna design in such imaging systems… (Final Report, 03/28/2014; Grant Number N00014-12-1-0921.)

  1. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey


    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....
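    A minimal sketch of the downsample-then-project idea, assuming a toy low-rank hyperspectral cube and plain random pixel sampling (the paper's structure-preserving sampling in feature space is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "hyperspectral image": 100x100 pixels, 30 spectral bands, rank-3 structure.
n_pix, n_bands = 100 * 100, 30
scores = rng.standard_normal((n_pix, 3))
loadings = rng.standard_normal((3, n_bands))
X = scores @ loadings + 0.01 * rng.standard_normal((n_pix, n_bands))

# Step 1: heavy pixel downsampling (plain random sampling in this sketch).
subset = X[rng.choice(n_pix, 500, replace=False)]

# Step 2: compute the latent-variable space (principal components) on the subset.
subset_c = subset - subset.mean(axis=0)
_, _, Vt = np.linalg.svd(subset_c, full_matrices=False)
components = Vt[:3]                               # basis of the latent space

# Step 3: project (compress) the full image onto the latent space.
T = (X - subset.mean(axis=0)) @ components.T      # 10000 x 3 score image
X_hat = T @ components + subset.mean(axis=0)
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Only the subset is decomposed, yet the latent space it yields reconstructs the full cube almost perfectly, which is the speedup the abstract describes.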

  2. Improved method for predicting the peak signal-to-noise ratio quality of decoded images in fractal image coding (United States)

    Wang, Qiang; Bi, Sheng


    To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks can determine the dynamic range of their collage errors, which can also provide the minimum and the maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, APACEmin to APACEmax will be limited in a small range and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that compared with the previous similar method, the proposed method can predict the PSNR quality of decoded images more accurately and needs less computation time simultaneously.
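    For reference, the PSNR quality measure that the method predicts is the standard definition below (a generic sketch; the 8-bit test image is synthetic):

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    # Peak signal-to-noise ratio in dB for 8-bit images.
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
```

The paper's contribution is to predict this quantity from the accumulated collage error without actually decoding, exploiting the logarithmic relationship between the two.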

  3. Clinical evaluation of irreversible image compression: analysis of chest imaging with computed radiography. (United States)

    Ishigaki, T; Sakuma, S; Ikeda, M; Itoh, Y; Suzuki, M; Iwai, S


    To implement a picture archiving and communication system, clinical evaluation of irreversible image compression with a newly developed modified two-dimensional discrete cosine transform (DCT) and bit-allocation technique was performed for chest images with computed radiography (CR). CR images were observed on a cathode-ray-tube monitor in a 1,024 X 1,536 matrix. One original and five reconstructed versions of the same images with compression ratios of 3:1, 6:1, 13:1, 19:1, and 31:1 were ranked according to quality. Test images with higher spatial frequency were ranked better than those with lower spatial frequency and the acceptable upper limit of the compression ratio was 19:1. In studies of receiver operating characteristics for scoring the presence or absence of nodules and linear shadows, the images with a compression ratio of 25:1 showed a statistical difference as compared with the other images with a compression ratio of 20:1 or less. Both studies show that plain CR chest images with a compression ratio of 10:1 are acceptable and, with use of an improved DCT technique, the upper limit of the compression ratio is 20:1.

  4. Data Mining Un-Compressed Images from cloud with Clustering Compression technique using Lempel-Ziv-Welch

    Directory of Open Access Journals (Sweden)

    C. Parthasarathy


    Cloud computing is a highly discussed topic in the technical and economic world, and many of the big players of the software industry have entered the development of cloud services. Several companies and organizations want to explore the possibilities and benefits of incorporating such cloud computing services into their business, as well as the possibilities of offering their own cloud services. We mine uncompressed images from the cloud, group them using k-means clustering, and compress them with the Lempel-Ziv-Welch coding technique, so that the uncompressed images are compressed losslessly while their spatial redundancies are removed.
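    The Lempel-Ziv-Welch step can be sketched in its textbook form (generic LZW, not the paper's cloud pipeline; the sample string is illustrative):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Classic LZW: grow a dictionary of byte sequences, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)           # new phrase gets the next free code
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]   # KwKwK edge case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(sample)
```

Because the decoder rebuilds the same dictionary from the code stream alone, the transform is lossless, which is the "error-free" property the abstract claims.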

  5. Space, time, error, and power optimization of image compression transforms (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.


    The implementation of an image compression transform on one or more small, embedded processors typically involves stringent constraints on power consumption and form factor. Traditional methods of optimizing compression algorithm performance typically emphasize joint minimization of space and time complexity, often without significant consideration of arithmetic accuracy or power consumption. However, small autonomous imaging platforms typically require joint optimization of space, time, error (or accuracy), and power (STEP) parameters, which the authors call STEP optimization. In response to implementational constraints on space and power consumption, the authors have developed systems and techniques for STEP optimization that are based on recent research in VLSI circuit design, as well as extensive previous work in system optimization. Building on the authors' previous research in embedded processors as well as adaptive or reconfigurable computing, it is possible to produce system-independent STEP optimization that can be customized for a given set of system-specific constraints. This approach is particularly useful when algorithms for image and signal processing (ISP), computer vision (CV), or automated target recognition (ATR), expressed in a machine-independent notation, are mapped to one or more heterogeneous processors (e.g., digital signal processors or DSPs, SIMD mesh processors, or reconfigurable logic). Following a theoretical summary, this paper illustrates various STEP optimization techniques via case studies, for example, real-time compression of underwater imagery on board an autonomous vehicle. Optimization algorithms are taken from the literature, and error profiling/analysis methodologies developed in the authors' previous research are employed. This yields a more rigorous basis for the simulation and evaluation of compression algorithms on a wide variety of hardware models. In this study, image algebra is employed as the notation of choice.

  6. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images. (United States)

    Aldossari, M; Alfalou, A; Brosseau, C


    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene, but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  7. Lossless compression of multispectral images using spectral information (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng


    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmission and storage of these volumes of data require large time and memory resources. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique involves prediction of each image band using a number of bands along the image spectrum. Each pixel is predicted using information provided by pixels in the previous bands at the same spatial position. As done in JPEG-LS, the proposed coder also represents the mapped residuals by using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
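    A minimal sketch of interband prediction followed by Golomb-Rice coding of mapped residuals, assuming a toy 4-band cube and a fixed Rice parameter k (the paper's context modeling and adaptive parameter estimation are omitted):

```python
import numpy as np

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient, a 0 separator, then a k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def map_residual(e: int) -> int:
    # Fold signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4
    return 2 * e if e >= 0 else -2 * e - 1

rng = np.random.default_rng(0)
# Toy multispectral cube: 4 bands, strongly correlated along the spectrum.
base = rng.integers(0, 200, size=(16, 16))
cube = np.stack([base + b * 5 + rng.integers(-2, 3, size=base.shape)
                 for b in range(4)])

# Predict each band from the previous band at the same spatial position.
residuals = (cube[1:] - cube[:-1]).ravel()
bits = sum(len(rice_encode(map_residual(int(e)), k=2)) for e in residuals)
raw_bits = residuals.size * 8            # cost of storing each sample as 8 bits
```

Because the interband residuals are small, the geometric-looking residual distribution is exactly what Golomb-Rice codes compactly, and the coded size comes in well under the raw bit budget.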

  8. Spectrally Adaptable Compressive Sensing Imaging System (United States)


    Reconstructed spectral data cubes are shown as they would be viewed by a Stingray F-033C CCD Color Camera, with the desired bands indicated alongside the original bands; reconstructions are repeated multiple times and the mean PSNR is estimated.

  9. Wavelet-based image compression using fixed residual value (United States)

    Muzaffar, Tanzeem; Choi, Tae-Sun


    Wavelet-based compression is getting popular due to its promising compaction properties at low bitrate. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. Contrary to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for a given data set. The subordinate pass of EZW is eliminated and replaced with fixed residual value transmission for easy implementation. This modification simplifies the coding technique, speeds up the process, and retains the property of embeddedness.

  10. 2D image compression using concurrent wavelet transform (United States)

    Talukder, Kamrul Hasan; Harada, Koichi


    In recent years the wavelet transform (WT) has been widely used for image compression. As the WT is a sequential process, much time is required to transform data. Here a new approach is presented in which the transformation process is executed concurrently. As a result, the procedure runs faster and the transformation time is reduced. Multiple threads are used for the row and column transformations, and the communication among threads is managed effectively. Thus, the transformation time is reduced significantly. The proposed system provides a better compression ratio and PSNR value with lower time complexity.
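    The concurrent row/column transformation can be sketched as follows, assuming a single-level Haar transform and a thread pool (a simplified stand-in for the paper's thread management; in CPython, actual speedup depends on the workload releasing the GIL):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def haar_1d(v):
    # One level of the orthonormal 1D Haar transform: averages then differences.
    a, b = v[0::2], v[1::2]
    return np.concatenate([(a + b) / np.sqrt(2), (a - b) / np.sqrt(2)])

def haar_2d_concurrent(img, workers=4):
    """One 2D decomposition level: all rows are transformed concurrently,
    then all columns of the intermediate result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = np.array(list(pool.map(haar_1d, img)))
        cols = np.array(list(pool.map(haar_1d, rows.T))).T
    return cols

img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = haar_2d_concurrent(img)
```

The per-row (and per-column) transforms are independent of each other, which is exactly what makes the row and column passes safe to parallelize across threads.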

  11. JPIC-Rad-Hard JPEG2000 Image Compression ASIC (United States)

    Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov


    JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources from optical, panchromatic, and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]; and Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.

  12. Mechanical compression for contrasting OCT images of biotissues (United States)

    Kirillin, Mikhail Y.; Argba, Pavel D.; Kamensky, Vladislav A.


    The contrasting of biotissue layers in OCT images after application of mechanical compression is discussed. The study is performed ex vivo on samples of human rectum and in vivo on the skin of human volunteers. We show that mechanical compression provides contrasting of biotissue layer boundaries due to the different mechanical properties of the layers. We show that increasing the pressure from 0 up to 0.45 N/mm2 causes a contrast increase from 1 to 10 dB in OCT imaging of human rectum ex vivo. The results of the ex vivo studies are in good agreement with Monte Carlo simulations. Application of a pressure of 0.45 N/mm2 increases the contrast of the epidermis-dermis junction in OCT images of human skin in vivo by about 10 dB.

  13. Compressive Fluorescence Microscopy for Biological and Hyperspectral Imaging

    CERN Document Server

    Studer, Vincent; Chahid, Makhlad; Moussavi, Hamed; Candes, Emmanuel; Dahan, Maxime


    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices---especially in optics---have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher dimensional signals which typically exhibits extreme redund...

  14. Implementation of aeronautic image compression technology on DSP (United States)

    Wang, Yujing; Gao, Xueqiang; Wang, Mei


    According to the design characteristics and demands of an aeronautic image compression system, a lifting-scheme wavelet and the SPIHT algorithm were selected as the key parts of the software implementation, which is introduced in detail. In order to improve execution efficiency, border processing was reasonably simplified and the SPIHT (Set Partitioning in Hierarchical Trees) algorithm was partly modified. The results showed that the selected scheme has a 0.4 dB improvement in PSNR (peak signal-to-noise ratio) compared with Shapiro's classical scheme. To improve the operating speed, the hardware system was then designed based on a DSP, and many optimization measures were applied successfully. Practical tests showed that the system can meet the real-time demand with good reconstructed image quality, and it has been used in a practical aeronautic image compression system.
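    A lifting-scheme wavelet of the kind selected here can be sketched with the LeGall 5/3 integer lifting steps (a generic sketch; the periodic boundary handling below is an assumption, and the paper's simplified border processing is not reproduced):

```python
import numpy as np

def legall53_forward(x):
    """One level of the LeGall 5/3 lifting wavelet (integer-to-integer),
    the kind of lifting-scheme transform commonly paired with SPIHT."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left + right) / 2)
    odd -= (even + np.roll(even, -1)) >> 1
    # Update step: approx = even + floor((left_det + right_det + 2) / 4)
    even += (odd + np.roll(odd, 1) + 2) >> 2
    return even, odd

def legall53_inverse(even, odd):
    # Undo the lifting steps in reverse order; integer ops make this exact.
    even = even - ((odd + np.roll(odd, 1) + 2) >> 2)
    odd = odd + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([10, 12, 14, 13, 11, 9, 8, 10])
approx, detail = legall53_forward(signal)
```

Because each lifting step only adds or subtracts a value computed from the other half of the samples, inverting the steps in reverse order reconstructs the input exactly, even with integer rounding, which is what makes lifting attractive on a fixed-point DSP.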

  15. A Progressive Image Compression Method Based on EZW Algorithm (United States)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  16. Spatial compression algorithm for the analysis of very large multivariate images (United States)

    Keenan, Michael R.


    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  17. De l'image vers la compression


    Wagner, Charles


    The application domains involving video images keep expanding, driven by progress in signal processing and machine architecture, as well as by technological advances in component integration. In high-definition television this evolution is particularly noticeable, and one finds that the application, the algorithms employed, the transmission media used, and the standardization aspects are closely linked....

  18. Split Bregman's optimization method for image construction in compressive sensing (United States)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.


    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the problems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
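    The decoupling of the ℓ1 and ℓ2 energies via shrinkage can be sketched on a 1D total-variation denoising problem (a generic illustration, not the paper's sonar reconstruction; the parameters mu and lam and the step-signal test case are illustrative assumptions):

```python
import numpy as np

def shrink(x, t):
    # The soft-shrink operator that solves the decoupled l1 subproblem.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, mu=2.0, lam=1.0, n_iter=100):
    """Split Bregman for 1D TV denoising: min_u |Du|_1 + (mu/2)||u - f||^2.
    The constraint d = Du is enforced through the Bregman variable b."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)            # forward-difference matrix
    A = mu * np.eye(n) + lam * D.T @ D        # normal equations for the u-step
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # l2 subproblem
        d = shrink(D @ u + b, 1.0 / lam)                      # l1 subproblem
        b = b + D @ u - d                                     # Bregman update
    return u

rng = np.random.default_rng(0)
clean = np.where(np.arange(100) < 50, 0.0, 5.0)   # step "edge", like an object boundary
noisy = clean + 0.5 * rng.standard_normal(100)
denoised = split_bregman_tv1d(noisy)
```

The u-step is a cheap linear solve and the d-step is a closed-form shrink; alternating them recursively is the "solving the problems simultaneously" the abstract refers to.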

  19. A novel image fusion approach based on compressive sensing (United States)

    Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia


    Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. Firstly, directly fusing sensing measurements may bring uncertain results with high reconstruction error. Secondly, using a single fusion rule may result in the problems of blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve those problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. The dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate low-pass subbands, while an edge-retention based fusion rule is proposed to fuse high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed content and preserves the saliency structure. The results also indicate that our proposed method achieves better visual quality than the current state-of-the-art methods.


    Directory of Open Access Journals (Sweden)

    S. Manimurugan


    The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in the secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general-purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general-purpose and XML-specific compressors.
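    The Burrows-Wheeler transform underlying the technique can be sketched in its naive form (generic BWT with a 0x00 terminator assumed absent from the input; the paper's modifications and the downstream entropy coder are omitted). Note that the BWT itself does not compress: it reorders symbols so that runs of similar bytes emerge for a later coding stage.

```python
def bwt(s: bytes) -> bytes:
    """Burrows-Wheeler transform: last column of the sorted rotations.
    A terminator byte makes the transform invertible without a stored index."""
    s = s + b"\x00"                          # assume 0x00 is absent from the input
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def ibwt(last: bytes) -> bytes:
    # Naive inverse: repeatedly prepend the last column and re-sort the table.
    table = [b""] * len(last)
    for _ in range(len(last)):
        table = sorted(bytes([c]) + row for c, row in zip(last, table))
    row = next(r for r in table if r.endswith(b"\x00"))
    return row[:-1]

text = b"banana_bandana"
transformed = bwt(text)
```

Repetitive input such as XML markup ends up with long byte runs after the transform, which is what makes a run-length or entropy coding stage after the BWT effective.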

  1. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ju Seop; Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology and Institute of Oral Bio Science, School of Dentistry, Chonbuk National University, Chonju (Korea, Republic of)


    The aims of the present study were to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized in TIFF by a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results from a pilot study. The total number of images to be assessed was 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-rank scale, and each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity, and kappa values for diagnostic agreement were calculated. The area (Az) values under the ROC curve were also calculated, and paired t-tests and one-way ANOVA tests were performed. Thereafter, the transmission times of the image files at each compression level were compared with that of the original image files. No significant difference was found between the original and the corresponding images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information in the image. The 1:14 compressed image files may therefore be used instead of the original images, reducing storage needs and transmission time.

  2. Compression and Processing of Space Image Sequences of Northern Lights and Sprites

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Martins, Bo; Jensen, Ole Riis


    Compression of image sequences of auroral activity, such as northern lights and thunderstorms with sprites, is investigated.

  3. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing (United States)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua


    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored, or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. The random matrices used in the random pixel exchange are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
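    The key-controlled measurement matrix idea can be sketched as follows (a generic illustration: the seed and map parameter stand in for the key, and the row construction and normalization are assumptions, not the paper's exact scheme):

```python
import numpy as np

def logistic_sequence(x0, r, n, burn_in=100):
    """Logistic map x -> r*x*(1-x); the seed (x0, r) acts as the secret key."""
    x = x0
    for _ in range(burn_in):                 # discard transients
        x = r * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def keyed_measurement_matrix(m, n, key=(0.37, 3.99)):
    """Partial circulant matrix whose generating row is driven by the logistic
    map, so only the small key (not the whole matrix) needs to be shared."""
    row = 2 * logistic_sequence(*key, n) - 1     # map chaotic values to [-1, 1]
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)                   # circulant structure
    return C[:m] / np.sqrt(m)                    # keep m rows, normalize

Phi = keyed_measurement_matrix(32, 64)
```

The same key reproduces the same matrix exactly, while a tiny perturbation of the key yields a completely different matrix after the chaotic iterations, which is the sensitivity the encryption relies on.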

  4. Image compression using address-vector quantization (United States)

    Nasrabadi, Nasser M.; Feng, Yushu


    A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.

  5. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen


    This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f, since the gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image, as the gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides higher storage efficiency than lossless JPEG2000 does.
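    The median edge detector used in the first stage has the standard JPEG-LS form, sketched below on a synthetic gradient image (out-of-image neighbors are taken as 0, a simplification; the base-switching transformation and the multiple-table arithmetic coder are omitted):

```python
import numpy as np

def med_predict(a, b, c):
    """JPEG-LS median edge detector: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)          # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c              # smooth region: planar prediction

def med_residuals(img):
    # Predict every pixel from its causal neighbors and keep the residuals.
    img = img.astype(int)
    res = img.copy()
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            res[y, x] = img[y, x] - med_predict(a, b, c)
    return res

# A smooth gradient image: MED residuals are tiny, so the entropy rate drops.
img = np.add.outer(np.arange(16), np.arange(16)) * 4
res = med_residuals(img)
```

The residual image has a far smaller dynamic range than the original, which is precisely what makes the subsequent arithmetic coding stage effective.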

  6. Digital image compression for a 2f multiplexing optical setup (United States)

    Vargas, J.; Amaya, D.; Rueda, E.


    In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.

  7. Hybrid tenso-vectorial compressive sensing for hyperspectral imaging (United States)

    Li, Qun; Bernal, Edgar A.


    Hyperspectral imaging has a wide range of applications relying on remote material identification, including astronomy, mineralogy, and agriculture; however, due to the large volume of data involved, the complexity and cost of hyperspectral imagers can be prohibitive. The exploitation of redundancies along the spatial and spectral dimensions of a hyperspectral image of a scene has created new paradigms that overcome the limitations of traditional imaging systems. While compressive sensing (CS) approaches have been proposed and simulated with success on already acquired hyperspectral imagery, most of the existing work relies on the capability to simultaneously measure the spatial and spectral dimensions of the hyperspectral cube. Most real-life devices, however, are limited to sampling one or two dimensions at a time, which renders a significant portion of the existing work unfeasible. We propose a new variant of the recently proposed serial hybrid vectorial and tensorial compressive sensing (HCS-S) algorithm that, like its predecessor, is compatible with real-life devices both in terms of the acquisition and reconstruction requirements. The newly introduced approach is parallelizable, and we abbreviate it as HCS-P. Together, HCS-S and HCS-P comprise a generalized framework for hybrid tenso-vectorial compressive sensing, or HCS for short. We perform a detailed analysis that demonstrates the uniqueness of the signal reconstructed by both the original HCS-S and the proposed HCS-P algorithms. Last, we analyze the behavior of the HCS reconstruction algorithms in the presence of measurement noise, both theoretically and experimentally.

  8. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler


    Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images, whose processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the test signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRB (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.

  9. Compressive SAR Imaging with Joint Sparsity and Local Similarity Exploitation

    Directory of Open Access Journals (Sweden)

    Fangfang Shen


    Full Text Available Compressive sensing-based synthetic aperture radar (SAR) imaging has shown superior capability in high-resolution image formation. However, most of these works focus on scenes that can be sparsely represented in fixed spaces. When dealing with complicated scenes, these fixed spaces lack the adaptivity to characterize varied image contents. To solve this problem, a new compressive sensing-based radar imaging approach with adaptive sparse representation is proposed. Specifically, an autoregressive model is introduced to adaptively exploit the structural sparsity of an image. In addition, similarity among pixels is integrated into the autoregressive model to further improve its modeling capability, and thus an adaptive sparse representation facilitated by a weighted autoregressive model is derived. Since the weighted autoregressive model is inherently determined by the unknown image, we propose a joint optimization scheme that solves this problem by iterating between SAR imaging and updating of the weighted autoregressive model. Finally, experimental results demonstrate the validity and generality of the proposed approach.

  10. An Improved Fast SPIHT Image Compression Algorithm for Aerial Applications

    Directory of Open Access Journals (Sweden)

    Ning Zhang


    Full Text Available In this paper, an improved fast SPIHT algorithm is presented. SPIHT and NLS (No List SPIHT) are efficient compression algorithms, but their application in aviation is limited by poor error resistance and slow compression speed. In this paper, both the error resilience and the compression speed are improved. The remote sensing images are decomposed with the Le Gall 5/3 wavelet, and the wavelet coefficients are indexed, scanned and allocated by means of family blocks. The bit-plane importance is predicted by a bitwise OR, so N bit-planes can be encoded at the same time. Compared with the SPIHT algorithm, the improved algorithm is easily implemented in hardware, and the compression speed is improved. The PSNR of reconstructed images encoded by fast SPIHT is 0.3 to 0.9 dB higher than that of SPIHT and CCSDS, and the encoding process is 4-6 times faster than SPIHT. The algorithm meets the high speed and reliability requirements of aerial applications.
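The bitwise-OR bit-plane prediction mentioned above can be sketched in a few lines: OR-ing all coefficient magnitudes of a block yields a single mask whose set bits mark exactly the non-empty bit-planes, so an encoder knows in advance which planes can be processed or skipped. This is a minimal illustration of the idea, not the paper's hardware formulation; the function name is ours.

```python
import numpy as np

def significant_bitplanes(coeffs):
    """Return the indices of bit-planes that contain at least one set bit.

    OR-ing all coefficient magnitudes collapses the block into one mask;
    its set bits mark the non-empty bit-planes, so planes guaranteed to
    be all-zero can be skipped by the encoder.
    """
    mask = int(np.bitwise_or.reduce(np.abs(coeffs).astype(np.int64).ravel()))
    return [b for b in range(mask.bit_length()) if (mask >> b) & 1]

coeffs = np.array([[0, 3], [8, 1]])   # magnitudes 0b0000, 0b0011, 0b1000, 0b0001
print(significant_bitplanes(coeffs))  # [0, 1, 3] -- plane 2 is empty
```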

  11. Image Compression based on DCT and BPSO for MRI and Standard Images

    Directory of Open Access Journals (Sweden)

    D.J. Ashpin Pabi


    Full Text Available Nowadays, digital image compression has become a crucial factor in modern telecommunication systems. Image compression is the process of reducing the total number of bits required to represent an image by removing redundancies while preserving the image quality as much as possible. Various applications, including the internet, multimedia, satellite imaging and medical imaging, use image compression in order to store and transmit images efficiently. Selection of a compression technique is an application-specific process. In this paper, an improved compression technique based on Butterfly-Particle Swarm Optimization (BPSO) is proposed. BPSO is an intelligence-based iterative algorithm used to find an optimal solution from a set of possible values. The advantages of BPSO over other optimization techniques are its higher convergence rate, searching ability and overall performance. The proposed technique divides the input image into 8×8 blocks. The Discrete Cosine Transform (DCT) is applied to each block to obtain the coefficients. Then, threshold values are obtained from BPSO, and the coefficients are modified based on these thresholds. Finally, quantization followed by Huffman encoding is used to encode the image. Experimental results show the effectiveness of the proposed method over the existing method.
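The per-block pipeline (8×8 DCT, coefficient thresholding, reconstruction) can be sketched as below. The fixed `threshold` stands in for the value BPSO would search for, and the function names are illustrative, not the authors'.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: C[k, i] = a_k * cos(pi * (2i + 1) * k / (2n))
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def compress_block(block, threshold):
    """Threshold the 2D DCT coefficients of one block and reconstruct."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                # forward 2D DCT
    coeffs[np.abs(coeffs) < threshold] = 0  # discard small coefficients
    return C.T @ coeffs @ C                 # inverse 2D DCT

block = np.outer(np.arange(8.0), np.ones(8))  # smooth vertical ramp
rec = compress_block(block, threshold=0.5)
print(float(np.abs(rec - block).max()) < 1.0)  # True: reconstruction stays close
```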

  12. Performance Analysis of Multi Spectral Band Image Compression using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    S. S. Ramakrishnan


    Full Text Available Problem statement: Efficient and effective utilization of transmission bandwidth and storage capacity is a core area of research for remote sensing images; hence image compression is required for multi-band satellite imagery. In addition, image quality is an important factor after compression and reconstruction. Approach: In this investigation, the discrete wavelet transform is used to compress a Landsat 5 agriculture and forestry image using various wavelets, and the spectral signature graph is drawn. Results: The compressed image performance is analyzed using the Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR). The image compressed with the dmey wavelet is selected based on its Digital Number Minimum (DNmin) and Digital Number Maximum (DNmax). It is then classified using maximum likelihood classification, and the accuracy is determined using an error matrix, kappa statistics and overall accuracy. Conclusion: The proposed compression technique is well suited to compressing agriculture and forestry multi-band images.
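The two evaluation metrics used above are standard and easy to compute; a minimal sketch (function names are ours):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit imagery."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    # CR = uncompressed size / compressed size
    return raw_bytes / compressed_bytes

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                        # one pixel off by 10
print(round(psnr(a, b), 2))          # 10*log10(255^2 / 6.25) = 40.17
print(compression_ratio(1024, 128))  # 8.0
```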

  13. Empirical data decomposition and its applications in image compression

    Institute of Scientific and Technical Information of China (English)

    Deng Jiaxian; Wu Xiaoqin


    A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data, and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. The paper then discusses the applications of EDD in image compression, presents a 2-dimensional data decomposition framework, and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.

  14. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.


    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  15. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery (United States)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.


    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous-wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolution. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization-based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that the combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match the ground truth closely, even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  16. Remotely sensed image compression based on wavelet transform (United States)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.


    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
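The run-length step in the pipeline above is simple to illustrate: after a locality-preserving scan (Hilbert curve here, zig-zag in JPEG), equal values, typically zeros, cluster into long runs that collapse to (value, count) pairs. A generic sketch, not the paper's exact bitstream format:

```python
def run_length_encode(seq):
    """Collapse a scanned coefficient sequence into (value, run) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return [(v, n) for v, n in runs]

scanned = [5, 0, 0, 0, 0, 3, 3, 0]
print(run_length_encode(scanned))  # [(5, 1), (0, 4), (3, 2), (0, 1)]
```

The resulting pairs are then entropy-coded (Huffman, in the scheme above), which is where the actual bit savings come from.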

  17. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren


    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... version that without sacrificing speed brings it close to the multi-pass coders in compression performance...

  18. Processing and image compression based on the platform Arduino (United States)

    Lazar, Jan; Kostolanyova, Katerina; Bradac, Vladimir


    This paper focuses on the use of a minicomputer built on the Arduino platform for image compression and decompression. The Arduino serves as a control element that integrates the proposed algorithms. The solution is unusual in that no commonly available system performs such demanding graphical operations on hardware with this little computational power while remaining open to subsequent extension; because Arduino is open source, the system can be further extended and adjusted.

  19. Evaluation of color-embedded wavelet image compression techniques (United States)

    Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III


    Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.



    Narwaria, Manish; Perreira Da Silva, Matthieu; Le Callet, Patrick; Pépion, Romuald


    International audience; Tone mapping or range reduction is often used in High Dynamic Range (HDR) visual signal compression to take advantage of the existing image/video coding architectures. Thus, it is important to study the impact of tone mapping on the visual quality of decompressed HDR visual signals. To our knowledge, most of the existing studies focus only on the quality loss in the resultant low dynamic range (LDR) signal (obtained via tone mapping) and typically employ LDR displays f...

  1. Fractal Dimension Invariant Filtering and Its CNN-based Implementation


    Xu, Hongteng; Yan, Junchi; Persson, Nils; Lin, Weiyao; Zha, Hongyuan


    Fractal analysis has been widely used in computer vision, especially in texture image processing and texture analysis. The key concept of fractal-based image model is the fractal dimension, which is invariant to bi-Lipschitz transformation of image, and thus capable of representing intrinsic structural information of image robustly. However, the invariance of fractal dimension generally does not hold after filtering, which limits the application of fractal-based image model. In this paper, we...

  2. A geometric approach to multi-view compressive imaging (United States)

    Park, Jae Young; Wakin, Michael B.


    In this paper, we consider multi-view imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing encoding schemes wherein each sensor directly and independently compresses its image using randomized measurements. After these measurements and also perhaps the camera positions are transmitted to a central node, the key to an accurate reconstruction is to fully exploit the joint correlation among the signal ensemble. To capture such correlations, we propose a geometric modeling framework in which the image ensemble is treated as a sampling of points from a low-dimensional manifold in the ambient signal space. Building on results that guarantee stable embeddings of manifolds under random measurements, we propose a "manifold lifting" algorithm for recovering the ensemble that can operate even without knowledge of the camera positions. We divide our discussion into two scenarios, the near-field and far-field cases, and describe how the manifold lifting algorithm could be applied to these scenarios. At the end of this paper, we present an in-depth case study of a far-field imaging scenario, where the aim is to reconstruct an ensemble of satellite images taken from different positions with limited but overlapping fields of view. In this case study, we demonstrate the impressive power of random measurements to capture single- and multi-image structure without explicitly searching for it, as the randomized measurement encoding in conjunction with the proposed manifold lifting algorithm can even outperform image-by-image transform coding.

  3. Filtered gradient reconstruction algorithm for compressive spectral imaging (United States)

    Mejia, Yuri; Arguello, Henry


    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as is the case for the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function comprising a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.

  4. High-resolution three-dimensional imaging with compressive sensing (United States)

    Wang, Jingyi; Ke, Jun


    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition burden and relax the requirements on the detection device. To apply the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source, and a photodetector is used to receive the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM by half a pixel each time, so that the position illuminated by the modulated light changes accordingly. We repeat this, moving the SLM in four different directions, to obtain four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene with a linear minimum-mean-square-error algorithm. By combining compressive sensing and multiframe image restoration, we reduce the data-analysis burden and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.

  5. Infrastructural Fractals

    DEFF Research Database (Denmark)

    Bruun Jensen, Casper


    . Instead, I outline a fractal approach to the study of space, society, and infrastructure. A fractal orientation requires a number of related conceptual reorientations. It has implications for thinking about scale and perspective, and (sociotechnical) relations, and for considering the role of the social...... and a fractal social theory....


    Institute of Scientific and Technical Information of China (English)

    Jiang Lai; Huang Cailing; Liao Huilian; Ji Zhen


    In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is then encoded by a 2D DCT coding scheme. This reduces the dimension of the vectors used as input to a generalized VQ scheme, so the introduction of the DCT step reduces the VQ encoding time. The experimental results demonstrate the efficiency of the proposed method.
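The generalized-Lloyd core of LBG codebook training can be sketched with plain k-means-style iterations. This omits the codebook-splitting initialization of the full LBG algorithm, and in the scheme above the training vectors would be the DCT-reduced block vectors; the setup here is a toy assumption.

```python
import numpy as np

def lbg_codebook(vectors, size, iters=20, seed=0):
    """Train a VQ codebook by alternating nearest-codeword assignment
    and centroid updates (the Lloyd iteration inside LBG)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size, replace=False)].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # assign each training vector to its nearest codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each codeword to the centroid of its cell
        for k in range(size):
            if np.any(labels == k):
                codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook, labels

data = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
codebook, labels = lbg_codebook(data, size=2)
print(labels[0] != labels[2])  # True: the two clusters get distinct codewords
```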

  7. Block-based adaptive lifting schemes for multiband image compression (United States)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe


    In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture the spatial and spectral redundancies simultaneously. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method w.r.t. the non-adaptive one.

  8. Compressive imaging system design using task-specific information. (United States)

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A


    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.



    P. Arockia Jansi Rani; V. Sadasivam


    Image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. In this paper, a still image compression scheme driven by Self-Organizing Map with polynomial regression modeling and entropy coding, employed within the wavelet framework is presented. The image compressibility and interpretability are improved by incorporating noise reduction into the compression scheme. The implementation begins with the classical wavelet decomposition, q...

  10. Relationship between necrotic patterns in glioblastoma and patient survival: fractal dimension and lacunarity analyses using magnetic resonance imaging. (United States)

    Liu, Shuai; Wang, Yinyan; Xu, Kaibin; Wang, Zheng; Fan, Xing; Zhang, Chuanbao; Li, Shaowu; Qiu, Xiaoguang; Jiang, Tao


    Necrosis is a hallmark feature of glioblastoma (GBM). This study investigated the prognostic role of necrotic patterns in GBM using fractal dimension (FD) and lacunarity analyses of magnetic resonance imaging (MRI) data and evaluated the role of lacunarity in the biological processes leading to necrosis. We retrospectively reviewed clinical and MRI data of 95 patients with GBM. FD and lacunarity of the necrosis on MRI were calculated by fractal analysis and subjected to survival analysis. We also performed gene ontology analysis in 32 patients with available RNA-seq data. Univariate analysis revealed that FD lacunarity > 0.46 significantly correlated with poor progression-free survival (p = 0.006 and p = 0.012, respectively) and overall survival (p = 0.008 and p = 0.005, respectively). Multivariate analysis revealed that both parameters were independent factors for unfavorable progression-free survival (p = 0.001 and p = 0.015, respectively) and overall survival (p = 0.002 and p = 0.007, respectively). Gene ontology analysis revealed that genes positively correlated with lacunarity were involved in the suppression of apoptosis and necrosis-associated biological processes. We demonstrate that the fractal parameters of necrosis in GBM can predict patient survival and are associated with the biological processes of tumor necrosis.
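A common way to compute the FD used above is the box-counting estimator: count the boxes of side s that intersect the structure and fit the slope of log N(s) against log(1/s). A minimal sketch of that standard estimator, not the authors' exact MRI pipeline:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary mask.

    N(s) = number of s-by-s boxes containing any foreground pixel;
    the FD is the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in sizes:
        h, w = mask.shape
        boxed = mask[: h - h % s, : w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)

filled = np.ones((16, 16), dtype=bool)           # a filled square is 2-dimensional
print(round(box_counting_dimension(filled), 2))  # 2.0
```

For genuinely fractal boundaries the estimate falls strictly between 1 and 2, which is what makes it a useful shape descriptor for necrotic regions.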

  11. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed


    Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive, multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing, such as edge detection, preserving, smoothing and filtering. In this paper, color image compression analysis and synthesis based on Haar and modified Haar transforms is presented. The standard Haar wavelet transform with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd-based numbers (i.e., N=3 and N=5) to generate many solution sets, which are tested using the energy function or a numerical method to obtain the optimum one. The Haar transform is simple, efficient in memory usage due to its high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and modified Haar techniques in attaining an efficient compression ratio (CR) and a higher peak signal-to-noise ratio (PSNR), with resulting images that are much smoother than standard JPEG, especially at high CR. Finally, a comparison between standard JPEG, Haar and modified Haar confirms that modified Haar has the highest capability among them.
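The standard N=2 Haar step described above splits rows and then columns into scaled sums (low-pass) and differences (high-pass), producing the four LL/LH/HL/HH sub-bands; a minimal one-level sketch:

```python
import numpy as np

def haar2d(img):
    """One level of the standard N=2 orthonormal Haar transform.

    Column pairs and then row pairs are turned into scaled sums and
    differences; the four quadrants of the result are the LL, LH, HL
    and HH sub-bands.
    """
    rows = np.concatenate([img[:, 0::2] + img[:, 1::2],
                           img[:, 0::2] - img[:, 1::2]], axis=1) / np.sqrt(2.0)
    return np.concatenate([rows[0::2, :] + rows[1::2, :],
                           rows[0::2, :] - rows[1::2, :]], axis=0) / np.sqrt(2.0)

coeffs = haar2d(np.full((4, 4), 7.0))  # a constant image: energy only in LL
print(np.round(coeffs, 1))             # LL entries are 14, all detail sub-bands are 0
```

The transform is orthonormal, so it is exactly reversible, and the many zeros in the detail sub-bands of smooth images are what the entropy coder exploits.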

  12. An efficient BTC image compression algorithm with visual patterns

    Institute of Scientific and Technical Information of China (English)


    Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission, with high resistance to channel errors and good reconstructed image quality. Its main drawback is a high bit rate of 2 bits/pixel for a 256-gray-level image. To reduce the bit rate, this paper introduces a simple look-up-table method for coding the higher mean and the lower mean of a block, together with a set of 24 visual patterns used to encode the 4×4 bit plane of high-detail blocks. The proposed algorithm needs only 19 bits to encode a 4×4 high-detail block and 12 bits to encode a 4×4 low-detail block.
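Classic BTC, which the scheme above refines, keeps one bit per pixel plus two reconstruction levels per block; a minimal sketch of the textbook version, before the look-up-table and visual-pattern refinements:

```python
import numpy as np

def btc_block(block):
    """Encode one block as a bit plane plus a lower and a higher mean.

    Pixels at or above the block mean map to the 'higher mean', the
    rest to the 'lower mean'.
    """
    mean = block.mean()
    plane = block >= mean
    hi = block[plane].mean() if plane.any() else mean
    lo = block[~plane].mean() if (~plane).any() else mean
    return plane, lo, hi

def btc_decode(plane, lo, hi):
    return np.where(plane, hi, lo)

block = np.array([[10.0, 10.0], [90.0, 90.0]])
plane, lo, hi = btc_block(block)
print(btc_decode(plane, lo, hi))  # [[10. 10.] [90. 90.]]
```

The 2 bits/pixel figure follows directly: a 4×4 block costs 16 bits of bit plane plus two 8-bit means, i.e. 32 bits for 16 pixels, which is exactly what the paper's 19- and 12-bit encodings improve on.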

  13. Compressed sensing sparse reconstruction for coherent field imaging (United States)

    Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen


    Return signal processing and reconstruction play a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the traditional samples, finally acquiring the target image precisely. The results of numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from the Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).

  14. Compressed Sensing Inspired Image Reconstruction from Overlapped Projections

    Directory of Open Access Journals (Sweden)

    Lin Yang


    Full Text Available The key idea discussed in this paper is to reconstruct an image from overlapped projections so that the data acquisition process can be shortened while the image quality remains essentially uncompromised. To perform image reconstruction from overlapped projections, the conventional reconstruction approach (e.g., filtered backprojection (FBP) algorithms) cannot be directly used because of two problems. First, overlapped projections represent an imaging system in terms of summed exponentials, which cannot be transformed into a linear form. Second, the overlapped measurement carries less information than the traditional line integrals. To meet these challenges, we propose a compressive sensing (CS)-based iterative algorithm for reconstruction from overlapped data. This algorithm starts with a good initial guess, relies on adaptive linearization, and minimizes the total variation (TV). Finally, we demonstrate the feasibility of this algorithm in numerical tests.

  15. Fast Second Degree Total Variation Method for Image Compressive Sensing. (United States)

    Liu, Pengfei; Xiao, Liang; Zhang, Jun


    This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second-degree total variation (HDTV2) regularization. First, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second-degree image derivatives under the spectral decomposition framework. Second, using this equivalent formulation, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we provide a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.

  16. SAR Imaging of Moving Targets via Compressive Sensing

    CERN Document Server

    Wang, Jun; Zhang, Hao; Wang, Xiqin


    An algorithm based on compressive sensing (CS) is proposed for synthetic aperture radar (SAR) imaging of moving targets. The received SAR echo is decomposed into a sum of basis sub-signals, which are generated by discretizing the target spatial and velocity domains and synthesizing the SAR received data for every discretized spatial position and velocity candidate. In this way, the SAR imaging problem is converted into a sub-signal selection problem. In the case that moving targets are sparsely distributed in the observed scene, their reflectivities, positions and velocities can be obtained by using the CS technique. It is shown that, compared with traditional algorithms, the target image obtained by the proposed algorithm has higher resolution and lower side-lobes, while the required number of measurements can be an order of magnitude less than that required by Nyquist-rate sampling. Moreover, multiple targets with different speeds can be imaged simultaneously, so the proposed algorithm has higher eff...

  17. Edge-Based Image Compression with Homogeneous Diffusion (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
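    The decoding step described above, inpainting with the steady state of homogeneous diffusion, can be sketched with a simple Jacobi iteration on the Laplace equation; the mask-based interface below is an assumption for illustration (the actual codec uses the stored edge locations and grey values as boundary data).

```python
import numpy as np

def inpaint_homogeneous(img, mask, n_iter=2000):
    """Fill unknown pixels (mask == False) with the steady state of
    homogeneous diffusion, i.e. solve the Laplace equation with the
    known pixels (mask == True) as Dirichlet boundary data."""
    u = np.where(mask, img, img[mask].mean()).astype(float)
    for _ in range(n_iter):
        # replace each pixel by the average of its four neighbours
        # (edge pixels replicated, giving reflecting image borders)
        p = np.pad(u, 1, mode="edge")
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(mask, img, avg)   # keep known pixels fixed
    return u
```

    Between a known left boundary of 0 and a right boundary of 100, for example, the steady state is a linear ramp, which is exactly the smooth fill-in behaviour the decoder exploits between stored edge values.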

  18. Compressive sensing for direct millimeter-wave holographic imaging. (United States)

    Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang


    Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue applicability to near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variation, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear-array and two-dimensional-array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate.

  19. Progressive image data compression with adaptive scale-space quantization (United States)

    Przelaskowski, Artur


    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization and coding in a progressive fashion. Profitable coders with an embedded code form and rate-fixing abilities, like Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of the best filter bank selection; the most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was finally noticed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, although sometimes the most efficient solution must be found in an iterative way. The final results are competitive with the most efficient wavelet coders.

  20. A Double-Minded Fractal (United States)

    Simoson, Andrew J.


    This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)
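    The rotation and scaling matrices the activity builds on can be turned into a small iterated-function-system demo. The sketch below uses the chaos game to draw a Sierpinski-style attractor rather than the article's word-based fractal; all names and map parameters are illustrative assumptions.

```python
import numpy as np

def scale_rot(s, theta):
    """A 2x2 linear map: uniform scaling by s composed with rotation by theta."""
    c, sn = np.cos(theta), np.sin(theta)
    return s * np.array([[c, -sn], [sn, c]])

def chaos_game(maps, n_pts=20000, seed=0):
    """Iterate randomly chosen affine maps x -> M @ x + t (the chaos game);
    the resulting point cloud approximates the IFS attractor."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    pts = np.empty((n_pts, 2))
    for i in range(n_pts):
        M, t = maps[rng.integers(len(maps))]
        x = M @ x + t
        pts[i] = x
    return pts

# Three copies of the same scaling matrix (scale 1/2, rotation 0) with
# different translations generate the Sierpinski triangle.
maps = [(scale_rot(0.5, 0.0), np.array(t))
        for t in [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]]
pts = chaos_game(maps)
```

    Changing the angle passed to `scale_rot` immediately deforms the attractor, which is the kind of hands-on link between matrices and images the article describes.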

  1. A novel image compression-encryption hybrid algorithm based on the analysis sparse representation (United States)

    Zhang, Ye; Xu, Biao; Zhou, Nanrun


    Recent advances in compressive sensing theory have been invoked for image compression-encryption based on the synthesis sparse model. In this paper we concentrate on an alternative sparse representation model, i.e., the analysis sparse model, to propose a novel image compression-encryption hybrid algorithm. The analysis sparse representation of the original image is obtained with an overcomplete fixed dictionary whose atom order is scrambled, and this sparse representation can be considered an encrypted version of the image. Moreover, the sparse representation is compressed to reduce its dimension and simultaneously re-encrypted by compressive sensing. To enhance the security of the algorithm, a pixel-scrambling method is employed to re-encrypt the measurements of the compressive sensing. Various simulation results verify that the proposed image compression-encryption hybrid algorithm provides considerable compression performance with good security.

  2. Interactive decoding for the CCSDS recommendation for image data compression (United States)

    García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier


    In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such a modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers on the encoded stream. Second we compare the compression performance of our technique to JPEG2000.

  3. Entropy coders for image compression based on binary forward classification (United States)

    Yoo, Hoon; Jeong, Jechang


    Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and there have been many contributions to increasing entropy coder performance and reducing entropy coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but there is no change between the amount of input information and the total amount of classified output information, a property we prove in this paper. Using this property, we propose entropy coders consisting of the BFC followed by Golomb-Rice coders (BFC+GR) and the BFC followed by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results also show better performance than other entropy coders of similar complexity.
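    A minimal sketch of the Golomb-Rice stage used in the BFC+GR coder, assuming the usual unary-quotient plus k-bit-binary-remainder layout; this is generic Rice coding for illustration, not the authors' implementation.

```python
def rice_encode(values, k):
    """Golomb-Rice code with divisor 2**k: quotient in unary (ones
    terminated by a zero), remainder in k binary bits. Returns a bit string."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                        # unary quotient
        bits.append(format(r, f"0{k}b") if k else "")     # k-bit remainder
    return "".join(bits)

def rice_decode(bitstr, k, n):
    """Inverse of rice_encode for n values."""
    out, i = [], 0
    for _ in range(n):
        q = 0
        while bitstr[i] == "1":      # read the unary part
            q += 1
            i += 1
        i += 1                       # skip the terminating 0
        r = int(bitstr[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out
```

    The parameter k trades unary length against remainder length; choosing it per class is exactly the kind of per-context tuning a classification front end like the BFC enables.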

  4. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz. (United States)

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang


    The existing techniques for simultaneous encryption and compression of images rely on lossy compression. Their reconstruction performance does not meet the accuracy requirements of medical images, because most of them are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS, with measurement matrices encrypted by a discrete 3D Lorenz system. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristics of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and reliability of this scheme.

  5. Low Memory Low Complexity Image Compression Using HSSPIHT Encoder

    Directory of Open Access Journals (Sweden)



    Full Text Available Due to its large memory requirement and high computational complexity, JPEG2000 cannot be used in many settings, especially in memory-constrained equipment. The line-based wavelet transform was proposed and accepted because it requires less memory without affecting the result of the wavelet transform. In this paper, an improved lifting scheme is introduced to perform the wavelet transform, replacing the Mallat method used in the original line-based wavelet transform; a three-adder unit is adopted to realize the lifting scheme. It performs the wavelet transform with less computation and less memory than the Mallat algorithm. A corresponding HS_SPIHT coder is designed so that the proposed algorithm is more suitable for such equipment. We propose a highly scalable image compression scheme based on the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Our algorithm, called Highly Scalable SPIHT (HS_SPIHT), supports high compression efficiency and spatial and SNR scalability, and provides a bit stream that can be easily adapted to given bandwidth and resolution requirements by a simple transcoder (parser). HS_SPIHT adds the spatial scalability feature through the introduction of multiple resolution-dependent lists and a resolution-dependent sorting pass, without sacrificing the SNR embeddedness property found in the original SPIHT bit stream, and it keeps the important features of the original SPIHT algorithm such as compression efficiency, full SNR scalability and low complexity.

  6. Surface-enhanced Raman imaging of fractal shaped periodic metal nanostructures

    DEFF Research Database (Denmark)

    Beermann, Jonas; Novikov, Sergey Mikhailovich; Albrektsen, Ole;


    Surface-enhanced Raman scattering (SERS) from Rhodamine 6G (R6G) homogeneously adsorbed on fractal shaped 170-nm-period square arrays formed by 50-nm-high gold nanoparticles (diameters of 80, 100, or 120 nm, constant within each array), fabricated on a smooth gold film by electron-beam lithography.

  7. Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging (United States)




    Institute of Scientific and Technical Information of China (English)


    In this paper, a technique of quasi-lossless compression based on image restoration is presented. The compression technique described in the paper includes three steps, namely bit compression, correlation removal, and image restoration based on the theory of the modulation transfer function (MTF). The quasi-lossless compression achieves a high speed, and the quality of the restored reconstruction image meets the quasi-lossless standard at a higher compression ratio. Experiments on TM and SPOT images show that the technique is reasonable and applicable.

  9. FPGA Implementation of 5/3 Integer DWT for Image Compression

    Directory of Open Access Journals (Sweden)

    M Puttaraju


    Full Text Available The wavelet transform has emerged as a cutting-edge technology in the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. In this paper an approach is proposed for the compression of an image using the 5/3 (lossless) integer discrete wavelet transform (DWT). The proposed architecture is based on a new and fast lifting-scheme approach for the (5, 3) filter in the DWT. Here an attempt is made to establish a standard for a data compression algorithm applied to two-dimensional digital spatial image data from payload instruments.
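    The (5, 3) lifting steps referred to above can be sketched as follows, using the reversible integer lifting form familiar from JPEG2000. The boundary handling here is a simplification of the standard's symmetric extension, and the code is an illustrative software model, not the proposed FPGA architecture.

```python
import numpy as np

def dwt53_forward(x):
    """One level of the reversible 5/3 integer DWT in lifting form:
    a predict step produces detail coefficients d, an update step
    produces approximation coefficients s. len(x) must be even."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # predict: d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
    even_next = np.append(even[1:], even[-1])   # replicate at right edge
    d = odd - ((even + even_next) >> 1)
    # update: s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)
    d_prev = np.insert(d[:-1], 0, d[0])         # replicate at left edge
    s = even + ((d_prev + d + 2) >> 2)
    return s, d

def dwt53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - ((d_prev + d + 2) >> 2)
    even_next = np.append(even[1:], even[-1])
    odd = d + ((even + even_next) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

    Because each lifting step only adds or subtracts a rounded function of the other channel, the transform is exactly invertible in integer arithmetic, which is what makes the 5/3 filter suitable for lossless coding.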

  10. Application of strong zerotrees to compression of correlated MRI image sets (United States)

    Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.


    It is known that gainful interframe compression of a magnetic resonance (MR) image set is a quite difficult problem. Only a few authors have reported a performance gain for such compressors compared to separate compression of every MR image in the set (intraframe compression). Known reasons for this situation are the significant noise in MR images and the presence of only low-frequency correlations among the images of the set. Recently we suggested a new method of correlated image set compression based on the Karhunen-Loeve (KL) transform and a special EZW compression scheme with strong zerotrees (KLSEZW). The KLSEZW algorithm showed good results in the compression of video sequences with low and middle motion, even without motion compensation. The paper presents a successful application of the basic method and its modification to the interframe MR image compression problem.

  11. Research of Image Compression Based on Quantum BP Network

    Directory of Open Access Journals (Sweden)

    Hao-yu Zhou


    Full Text Available Quantum Neural Network (QNN), which integrates the characteristics of Artificial Neural Network (ANN) with quantum theory, is a new field of study. It takes advantage of ANN and quantum computing and has high theoretical value and potential applications. Based on a quantum neuron model with quantum inputs and outputs, together with artificial neural network theory, the QBP algorithm is proposed on the basis of the complex BP algorithm, and a 3-layer quantum BP network that implements image compression and image reconstruction is built. The simulation results show that QBP obtains reconstructed images of better quality than BP despite fewer learning iterations.

  12. Efficient image compression scheme based on differential coding (United States)

    Zhu, Li; Wang, Guoyou; Liu, Ying


    Embedded zerotree wavelet (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided; then some improvements of the SPIHT algorithm, based on experiments, are introduced. 1) To reduce the redundancy among coefficients in the wavelet domain, we propose a differential method applied during coding. 2) Meanwhile, based on the characteristic distribution of the coefficients in each subband, we adjust the sorting pass and optimize the differential coding in order to reduce redundant coding in each subband. 3) The image coding results, calculated at a certain threshold, show that through differential coding the compression ratio becomes higher and the quality of the reconstructed image is raised greatly; at bpp (bits per pixel) = 0.5, the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.

  13. Single image non-uniformity correction using compressive sensing (United States)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu


    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of single image, overcame the disadvantages of "ghost artifacts" and bulk calculating costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS on the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% pixels. A small difference was found between the correction results using 100% pixels and the reconstruction results using 40% pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  14. Degradative encryption: An efficient way to protect SPIHT compressed images (United States)

    Xiang, Tao; Qu, Jinyu; Yu, Chenyun; Fu, Xinwen


    Degradative encryption, a new selective image encryption paradigm, is proposed to encrypt only a small part of image data to make the detail blurred but keep the skeleton discernible. The efficiency is further optimized by combining compression and encryption. A format-compliant degradative encryption algorithm based on set partitioning in hierarchical trees (SPIHT) is then proposed, and the scheme is designed to work in progressive mode for gaining a tradeoff between efficiency and security. Extensive experiments are conducted to evaluate the strength and efficiency of the scheme, and it is found that less than 10% data need to be encrypted for a secure degradation. In security analysis, the scheme is verified to be immune to cryptographic attacks as well as those adversaries utilizing image processing techniques. The scheme can find its wide applications in online try-and-buy service on mobile devices, searchable multimedia encryption in cloud computing, etc.

  15. Edge-Oriented Compression Coding on Image Sequence

    Institute of Scientific and Technical Information of China (English)


    An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments; then a rectangular belt is constructed for each segment. Finally, the grey-value distribution in the region is fitted by normal-form polynomials. The model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region we use run-length scanning and linear approximation. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bits per pel.

  16. Coherent temporal imaging with analog time-bandwidth compression

    CERN Document Server

    Asghari, Mohammad H


    We introduce the concept of coherent temporal imaging and its combination with the anamorphic stretch transform. The new system can measure both the temporal profile of fast waveforms and their spectrum in real time and at high throughput. We show that the combination of coherent detection and warped time-frequency mapping also performs time-bandwidth compression. By reducing the temporal width without sacrificing spectral resolution, it addresses the Big Data problem in real-time instruments. The proposed method is the first application of the recently demonstrated anamorphic stretch transform to temporal imaging. Using this method, narrow spectral features beyond the spectrometer resolution can be captured. At the same time, the output bandwidth, and hence the record length, is minimized. Coherent detection allows the temporal imaging and dispersive Fourier transform systems to operate in the traditional far-field as well as near-field regimes.

  17. An RGB Image Encryption Supported by Wavelet-based Lossless Compression

    Directory of Open Access Journals (Sweden)

    Ch. Samson


    Full Text Available In this paper we have proposed a method for RGB image encryption supported by lifting-scheme-based lossless compression. First we compress the input color image using a 2-D integer wavelet transform. Then we apply lossless predictive coding to achieve additional compression. The compressed image is encrypted by using the Secure Advanced Hill Cipher (SAHC), involving a pair of involutory matrices, a function called Mix() and an operation called XOR. Decryption followed by reconstruction shows that there is no difference between the output image and the input image. The proposed method can be used for efficient and secure transmission of image data.

  18. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua


    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measure for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.


    Directory of Open Access Journals (Sweden)

    Benjamin Joseph


    Full Text Available The main contribution of this article is the introduction of an intelligent classifier to distinguish between benign and malignant areas of micro-calcification in a compressed mammogram image, which has not been proved or addressed elsewhere. The method does not require any manual processing technique for classification, so it can be adopted for identifying benign and malignant areas in an intelligent way; moreover, it gives good classification responses for compressed mammogram images. The goal of the proposed method is twofold: first, to preserve the details of the Region of Interest (ROI) at a low bit rate without affecting the diagnostically relevant information, and second, to classify and segment the micro-calcification area in the reconstructed mammogram image with high accuracy. The prime contribution of this work is that the details of ROI and non-ROI regions extracted using a multi-wavelet transform are coded at variable bit rates using the proposed Region Based Set Partitioning in Hierarchical Trees (RBSPIHT) before storing or transmitting the image. The image reconstructed during retrieval or at the receiving end is preprocessed to remove channel noise and to enhance the diagnostic contrast information. The preprocessed image is then classified as normal or abnormal (benign or malignant) using a probabilistic neural network. Segmentation of the cancerous region is done using the Fuzzy C-means Clustering (FCC) algorithm and the cancerous area is computed. The experimental results show that the proposed model performs well, achieving a high sensitivity of 97.27% and a specificity of 94.38% at an average compression rate of 0.5 bpp and a Peak Signal to Noise Ratio (PSNR) of 58 dB.

  20. Real-time Image Generation for Compressive Light Field Displays (United States)

    Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R.


    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  1. Multifrequency Bayesian compressive sensing methods for microwave imaging. (United States)

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea


    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.

  2. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos


    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and the image is reconstructed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, achieving high-quality image compression that is robust to noise and corruption.
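    The pipeline sketched above, matrix-multiply measurements on the satellite and reconstruction on the ground, can be illustrated with a random measurement matrix and a simple orthogonal matching pursuit decoder. OMP is a stand-in assumption here, since the abstract does not name the reconstruction algorithm.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of the
    measurement matrix Phi most correlated with the residual, then
    re-fit the selected coefficients by least squares.
    Recovers a `sparsity`-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# toy "compression": 32 random measurements of a 3-sparse length-128 signal
rng = np.random.default_rng(1)
Phi = rng.standard_normal((32, 128)) / np.sqrt(32)   # on-board measurement
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, sparsity=3)           # ground reconstruction
```

    Only the 32 measurements (a quarter of the signal length) need to be downlinked, which is the bandwidth saving the project targets.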

  3. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features (United States)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.


    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using computer texture analysis based on Haralick texture features, fractal dimension and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that distinguish healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction in OCT images. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient methodology, to increase the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Since malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, which suggests the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images including tumors such as Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and nevi). We obtain a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
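    The box-counting estimate of fractal dimension used above can be sketched as follows; the binary-mask interface and the particular box sizes are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary mask by box counting:
    count occupied s-by-s boxes at several scales s, then fit the slope
    of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional
filled = np.ones((64, 64), dtype=bool)
dim_filled = box_counting_dimension(filled)
```

    For tissue B-scans the mask would come from thresholding or segmentation, and the estimated dimension then serves as one texture feature alongside the Haralick set.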

  4. Resolution enhancement for ISAR imaging via improved statistical compressive sensing (United States)

    Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun


    The developing theory of compressed sensing (CS) reveals that optimal reconstruction of an unknown signal can be achieved from very limited observations by utilizing signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally constructed from a limited number of strong scattering centers, representing strong spatial sparsity. Such prior sparsity intrinsically paves the way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution, with a small scale on strong signal components and a large scale on noise, to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates the proposal can reconstruct a high-resolution image from only a limited number of pulses even at low SNR, which shows advantages over current super-resolution imaging methods.

  5. An investigation of image compression on NIIRS rating degradation through automated image analysis (United States)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe


    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, from which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. The steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD, and we investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. Extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.

  6. ZPEG: a hybrid DPCM-DCT based approach for compression of Z-stack images. (United States)

    Khire, Sourabh; Cooper, Lee; Park, Yuna; Carter, Alexis; Jayant, Nikil; Saltz, Joel


    Modern imaging technology permits obtaining images at varying depths along the thickness, or Z-axis, of the sample being imaged. A stack of multiple such images is called a Z-stack image. The focus capability offered by Z-stack images is critical for many digital pathology applications. A single Z-stack image may result in several hundred gigabytes of data, and needs to be compressed for archival and distribution purposes. Currently, the existing methods for compression of Z-stack images, such as JPEG and JPEG 2000, compress each focal plane independently and do not take advantage of the Z-signal redundancy. It is possible to achieve additional compression efficiency over the existing methods by exploiting the high Z-signal correlation during image compression. In this paper, we propose a novel algorithm for compression of Z-stack images, which we term ZPEG. ZPEG extends the popular discrete-cosine transform (DCT) based image encoder to compress Z-stack images. This is achieved by decorrelating the neighboring layers of the Z-stack image using differential pulse-code modulation (DPCM). PSNR measurements, as well as subjective evaluations by experts, indicate that ZPEG can encode Z-stack images at a higher quality than JPEG, JPEG 2000 and JP3D at compression ratios below 50:1.
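
    The DPCM-then-DCT pipeline can be sketched as follows. This is a simplified assumption of ZPEG's structure (plane-by-plane DPCM along Z followed by a blockwise 2-D DCT); quantization and entropy coding are omitted:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def zpeg_like_encode(stack):
    """Sketch: DPCM along Z to decorrelate focal planes, then a 2-D DCT on
    8x8 blocks of each residual plane (quantization/entropy coding omitted)."""
    residuals = np.empty_like(stack, dtype=float)
    residuals[0] = stack[0]
    residuals[1:] = stack[1:] - stack[:-1]   # predict plane z from plane z-1
    C = dct_matrix(8)
    h, w = stack.shape[1:]
    coeffs = np.empty_like(residuals)
    for z in range(stack.shape[0]):
        for i in range(0, h, 8):
            for j in range(0, w, 8):
                blk = residuals[z, i:i+8, j:j+8]
                coeffs[z, i:i+8, j:j+8] = C @ blk @ C.T   # 2-D DCT of the block
    return coeffs
```

On a strongly Z-correlated stack, the residual planes carry far less energy than the first plane, which is where the extra compression efficiency comes from.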

  7. High-performance JPEG image compression chip set for multimedia applications (United States)

    Razavi, Abbas; Shenberg, Isaac; Seltz, Danny; Fronczak, Dave


    By its very nature, multimedia includes images, text and audio stored in digital format. Image compression is an enabling technology essential to overcoming two bottlenecks: cost of storage and bus speed limitation. Storing 10 seconds of high resolution RGB (640 × 480) motion video (30 frames/sec) requires 277 MBytes and a bus speed of 28 MBytes/sec (which cannot be handled by a standard bus). With high quality JPEG baseline compression the storage and bus requirements are reduced to 12 MBytes of storage and a bus speed of 1.2 MBytes/sec. Moreover, since consumer video and photography products (e.g., digital still video cameras, camcorders, TV) will increasingly use digital (and therefore compressed) images because of quality, accessibility, and the ease of adding features, compressed images may become the bridge between the multimedia computer and consumer products. The image compression challenge can be met by implementing the discrete cosine transform (DCT)-based image compression algorithm defined by the JPEG baseline standard. Using the JPEG baseline algorithm, an image can be compressed by a factor of about 24:1 without noticeable degradation in image quality. Because motion video is compressed frame by frame (or field by field), system cost is minimized (no frame or field memories and interframe operations are required) and each frame can be edited independently. Since JPEG is an international standard, the compressed files generated by this solution can be readily interchanged with other users and processed by standard software packages. This paper describes a multimedia image compression board utilizing Zoran's 040 JPEG Image Compression chip set. The board includes digitization, video decoding and compression. While the original video is sent to the display ('video in a window'), it is also compressed and transferred to the computer bus for storage. During playback, the system receives the compressed sequence from the bus and displays it on the screen.

  8. A linear mixture analysis-based compression for hyperspectral image analysis

    Energy Technology Data Exchange (ETDEWEB)

    C. I. Chang; I. W. Ginsberg


    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that deal directly with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
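
    The abundance-generation step can be sketched with a sum-to-one constrained least-squares unmixing. This is a simplification: the paper's fully constrained method also enforces non-negativity, which is omitted here, and the endmember matrix and abundances below are synthetic:

```python
import numpy as np

def scls_abundances(M, pixel):
    """Sum-to-one constrained least squares unmixing via a Lagrange multiplier.
    M: (bands x endmembers) signature matrix; pixel: observed spectrum.
    (Simplified stand-in for fully constrained unmixing; no non-negativity.)"""
    MtM_inv = np.linalg.inv(M.T @ M)
    ones = np.ones(M.shape[1])
    a_ls = MtM_inv @ M.T @ pixel                      # unconstrained LS solution
    lam = (ones @ a_ls - 1.0) / (ones @ MtM_inv @ ones)
    return a_ls - lam * MtM_inv @ ones                # project onto sum-to-one
```

Encoding the resulting abundance images, rather than the raw gray levels, is what preserves the target information through compression.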

  9. Improving multispectral satellite image compression using onboard subpixel registration (United States)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin


    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoder (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have demonstrated a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel negates all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and on simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, both onboard and on the ground, and on the design of the instrument.
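
    Resampling a band at fractional offsets can be sketched with textbook bilinear interpolation. This is an assumption standing in for the simplified flight techniques, which the abstract does not specify in detail:

```python
import numpy as np

def bilinear_shift(img, dy, dx):
    """Resample img at fractional row/column offsets (dy, dx) by bilinear
    interpolation, clamping at the borders (illustrative, not the flight code)."""
    h, w = img.shape
    y = np.clip(np.arange(h) + dy, 0, h - 1)
    x = np.clip(np.arange(w) + dx, 0, w - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])
```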

  10. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Huichen Yan


    Full Text Available Matched field processing (MFP) is an effective method for underwater target imaging and localization, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence-optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high-coherence dictionary problem in special cases. Finally, numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
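
    The CCOOMP algorithm builds on orthogonal matching pursuit. A minimal OMP baseline is sketched below (the coherence-excluding modification is not reproduced here, and the dictionary is a synthetic stand-in):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the k atoms most correlated
    with the residual, re-solving a least-squares fit on the support each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```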

  11. Compressive dynamic range imaging via Bayesian shrinkage dictionary learning (United States)

    Yuan, Xin


    We apply the Bayesian shrinkage dictionary learning into compressive dynamic-range imaging. By attenuating the luminous intensity impinging upon the detector at the pixel level, we demonstrate a conceptual design of an 8-bit camera to sample high-dynamic-range scenes with a single snapshot. Coding strategies for both monochrome and color cameras are proposed. A Bayesian reconstruction algorithm is developed to learn a dictionary in situ on the sampled image, for joint reconstruction and demosaicking. We use global-local shrinkage priors to learn the dictionary and dictionary coefficients representing the data. Simulation results demonstrate the feasibility of the proposed camera and the superior performance of the Bayesian shrinkage dictionary learning algorithm.

  12. Pairwise KLT-Based Compression for Multispectral Images (United States)

    Nian, Yongjian; Liu, Yu; Ye, Zhen


    This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed globally on the multispectral images. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, in which the KLT is performed on only two bands at a time. First, the KLT is performed on the first two adjacent bands, yielding two principal components. Next, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new pair. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on a global KLT. Moreover, the pairwise KLT structure significantly reduces complexity compared with a global KLT.
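
    The pairwise decorrelation procedure described above can be sketched as follows (rate allocation and the EBCOT coding stage are omitted; the band data are synthetic):

```python
import numpy as np

def pairwise_klt(bands):
    """Sketch of the pairwise KLT: repeatedly apply a 2x2 KLT to the running
    principal component (larger eigenvalue) and the next band."""
    pc = bands[0].ravel().astype(float)
    outputs = []
    for b in bands[1:]:
        pair = np.stack([pc, b.ravel().astype(float)])
        pair -= pair.mean(axis=1, keepdims=True)   # zero-mean before the KLT
        cov = pair @ pair.T / pair.shape[1]
        w, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
        comps = V.T @ pair
        outputs.append(comps[0])                   # emit the small-eigenvalue component
        pc = comps[1]                              # carry the larger PC forward
    outputs.append(pc)
    return outputs
```

Each 2x2 KLT costs only a 2x2 eigendecomposition, which is why the pairwise structure is so much cheaper than a global KLT over all bands.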

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding (United States)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.


    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
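
    A Golomb-Rice code of the kind paired with the single-slope ADC encodes a non-negative residual as a unary quotient plus a k-bit binary remainder. A bit-string sketch (software illustration of what the hardware coder computes):

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with Rice parameter k:
    quotient in unary ('1'*q followed by '0'), remainder in k binary bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')
```

Small prediction residuals, which dominate in decorrelated image data, map to short codewords; that property is what makes the coder cheap enough to replicate per column.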

  14. Area and power efficient DCT architecture for image compression (United States)

    Dhandapani, Vaithiyanathan; Ramachandran, Seshasayanan


    The discrete cosine transform (DCT) is one of the major components in image and video compression systems. The final output of these systems is interpreted by the human visual system (HVS), which is not perfect. The limited perception of human visualization allows the algorithm to be numerically approximate rather than exact. In this paper, we propose a new matrix for the discrete cosine transform. The proposed 8 × 8 transformation matrix contains only zeros and ones, which requires only adders, thus avoiding the need for multiplication and shift operations. The new class of transform requires only 12 additions, which greatly reduces the computational complexity and achieves a performance in image compression comparable to that of the existing approximated DCT. Another important aspect of the proposed transform is that it provides efficient area and power optimization when implemented in hardware. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure in terms of speed, area, and power consumption, the model is implemented on a Xilinx Virtex 7 field programmable gate array (FPGA) device and synthesized with Cadence® RTL Compiler® using a UMC 90 nm standard cell library. The analysis obtained from the implementation indicates that the proposed structure is superior to existing approximation techniques, with a 30% reduction in power and a 12% reduction in area.

  15. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian


    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  16. High-performance compression and double cryptography based on compressive ghost imaging with the fast Fourier transform (United States)

    Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma


    To address the problems that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the fast Fourier transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. The receiver then decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced by the combination of compressive sensing and the FFT, and the security level of ghost imaging is improved, as evaluated against ciphertext-only attacks (COA), chosen-plaintext attacks (CPA), and noise attacks. This technique can be applied immediately to encryption and data storage, with the advantages of high security, fast transmission, and high quality of the reconstructed information.

  17. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach (United States)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi


    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the 1-norm instead of the 0-norm. In fact, optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a sequential quadratic programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.

  18. Weyl law for fat fractals

    CERN Document Server

    Spina, Maria E; Saraceno, Marcos


    It has been conjectured that for a class of piecewise linear maps the closure of the set of images of the discontinuity has the structure of a fat fractal, that is, a fractal with positive measure. An example of such maps is the sawtooth map in the elliptic regime. In this work we analyze this problem quantum mechanically in the semiclassical regime. We find that the fraction of states localized on the unstable set satisfies a modified fractal Weyl law, where the exponent is given by the exterior dimension of the fat fractal.

  19. Fractals in several electrode materials

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chunyong, E-mail: [Department of Chemistry, College of Science, Nanjing Agricultural University, Nanjing 210095 (China); Suzhou Key Laboratory of Environment and Biosafety, Suzhou Academy of Southeast University, Dushuhu lake higher education town, Suzhou 215123 (China); Wu, Jingyu [Department of Chemistry, College of Science, Nanjing Agricultural University, Nanjing 210095 (China); Fu, Degang [Suzhou Key Laboratory of Environment and Biosafety, Suzhou Academy of Southeast University, Dushuhu lake higher education town, Suzhou 215123 (China); State Key Laboratory of Bioelectronics, Southeast University, Nanjing 210096 (China)


    Highlights: • Fractal geometry was employed to characterize three important electrode materials. • The surfaces of all studied electrodes proved to be very rough. • The fractal dimensions of BDD and ACF were scale dependent. • The MMO film was more uniform than BDD and ACF in terms of fractal structure. - Abstract: In the present paper, the fractal properties of boron-doped diamond (BDD), mixed metal oxide (MMO) and activated carbon fiber (ACF) electrodes have been studied by SEM imaging at different scales. The three materials are self-similar, with mean fractal dimensions in the range of 2.6–2.8, confirming that they all exhibit very rough surfaces. Specifically, the MMO film is found to be more uniform in terms of fractal structure than BDD and ACF. These intriguing characteristics make the electrodes ideal candidates for high-performance decontamination processes.

  20. Diagnostic imaging of compression neuropathy; Bildgebende Diagnostik von Nervenkompressionssyndromen

    Energy Technology Data Exchange (ETDEWEB)

    Weishaupt, D.; Andreisek, G. [Universitaetsspital, Institut fuer Diagnostische Radiologie, Zuerich (Switzerland)


    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remains the cornerstone of the diagnostic work-up, in certain cases imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [German original, translated] Compression-induced damage to peripheral nerves can be the cause of persistent pain in the region of the ankle and foot. An early diagnosis is crucial in order to direct the patient to the correct therapy and to prevent or reduce potential damage. Although clinical examination and electrophysiological work-up are the most important elements in the diagnosis of peripheral nerve compression syndromes, imaging can be decisive when it comes to localizing the level of the nerve lesion or narrowing the differential diagnosis. In certain cases, imaging can even identify the cause of the nerve compression. In other cases, imaging is important for treatment planning, particularly when the lesion is to be addressed surgically.
Magnetic resonance imaging (MRI) and ultrasound enable a

  1. Probability of correct reconstruction in compressive spectral imaging

    Directory of Open Access Journals (Sweden)

    Samuel Eduardo Pinilla


    Full Text Available Coded Aperture Snapshot Spectral Imaging (CASSI) systems capture the 3-dimensional (3D) spatio-spectral information of a scene using a set of 2-dimensional (2D) random coded Focal Plane Array (FPA) measurements. A compressed sensing reconstruction algorithm is then used to recover the underlying spatio-spectral 3D data cube. The quality of the reconstructed spectral images depends exclusively on the CASSI sensing matrix, which is determined by the statistical structure of the coded apertures. The Restricted Isometry Property (RIP) of the CASSI sensing matrix is used to determine the probability of correct image reconstruction and provides guidelines for the minimum number of FPA measurement shots needed for image reconstruction. Further, the RIP can be used to determine the optimal structure of the coded projections in CASSI. This article describes the CASSI optical architecture and develops the RIP for the sensing matrix in this system. Simulations show the higher quality of spectral image reconstructions when the RIP is satisfied. Simulations also illustrate the higher performance of the optimally structured projections in CASSI.

  2. Oriented wavelet transform for image compression and denoising. (United States)

    Chappelier, Vivien; Guillemot, Christine


    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
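
    The lifting paradigm underlying this transform can be illustrated with its simplest instance, a Haar predict/update pair; the oriented transform above applies such steps along adaptively chosen directions on a quincunx grid, which this 1-D sketch omits. Perfect reconstruction follows directly from the reversibility of each lifting step:

```python
import numpy as np

def lift_forward(x):
    """One Haar lifting step: predict the odd samples from the even ones,
    then update the even samples to preserve the running mean."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict step: detail (high-pass) signal
    approx = even + detail / 2   # update step: approximation (low-pass) signal
    return approx, detail

def lift_inverse(approx, detail):
    """Invert the lifting steps in reverse order; reconstruction is exact."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```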

  3. Application of region selective embedded zerotree wavelet coder in CT image compression. (United States)

    Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping


    Compression is necessary in medical image preservation because of the huge data quantity. Medical images differ from common images because of their own characteristics; for example, part of the information in a CT image is useless, and saving it wastes storage. A region-selective EZW coder is proposed, in which only the useful part of the image is selected and compressed; tests on a sample image gave good results.

  4. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan


    Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results, which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM, or perceived quality, for both types of compression, provided that the retained energy after compression is the same.
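
    The point that equal MSE need not mean equal perceived quality can be reproduced with a toy example. The SSIM below is a single-window simplification of the standard locally windowed SSIM, with the usual stabilizing constants:

```python
import numpy as np

def mse(a, b):
    """Pointwise mean square error between two images."""
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, L=255.0):
    """Single-window SSIM (no sliding window; a simplification of local SSIM)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

A uniform brightness shift and a checkerboard noise pattern of the same power give identical MSE, yet the structure-preserving shift scores higher SSIM.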

  5. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy (United States)

    Matsuoka, R.


    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of the target center by the intensity-weighted centroid method. Six images of a white sheet with 30 × 20 filled black circles were used in the experiment. The images were acquired with a Canon EOS 20D digital camera. The image data were compressed using two compression parameter sets (a downsampling ratio, a quantization table and a Huffman code table) utilized in the EOS 20D. The experimental results clearly indicate that lossy JPEG compression of an image with chromatic aberrations has a significant effect on the measurement accuracy of the target center by the intensity-weighted centroid method. The maximum displacements of the red, green and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since displacements caused by lossy JPEG compression cannot be corrected, the author recommends that lossy JPEG compression not be applied before recording an image in a digital camera when highly precise image measurement is to be performed on color images acquired by a non-metric digital camera.
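
    The intensity-weighted centroid named above is simply a first moment of the intensities. A sketch, assuming the dark targets have already been inverted to bright blobs on a dark background:

```python
import numpy as np

def weighted_centroid(patch):
    """Intensity-weighted centroid (first moment) of a target blob.
    Assumes higher intensity = more target (invert dark-on-bright data first)."""
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    return (ys * patch).sum() / total, (xs * patch).sum() / total
```

Compression artifacts that redistribute intensity asymmetrically around the blob shift this moment, which is exactly the displacement the experiment measures.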

  6. Three-dimensional imaging reconstruction algorithm of gated-viewing laser imaging with compressive sensing. (United States)

    Li, Li; Xiao, Wei; Jian, Weijian


    Three-dimensional (3D) laser imaging combined with compressive sensing (CS) has the advantages of lower power consumption and fewer imaging sensors; however, it places an enormous computational burden on subsequent processing devices. In this paper we propose a fast 3D imaging reconstruction algorithm to deal with time-slice images sampled by single-pixel detectors. The algorithm performs 3D imaging reconstruction before CS recovery, thus saving much of the runtime of CS recovery. Several experiments were conducted to verify the performance of the algorithm. Simulation results demonstrate that the proposed algorithm is more efficient than an existing algorithm.

  7. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby


    Full Text Available The paper focuses on a statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods for low-memory systems requires the fewest bytes of memory. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the methods used in OCR algorithms for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. Saving space in the data stream is very important for such memory-limited systems, as is the time required to decode the compressed data. The success rates of the selected compression algorithms are evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which depict alphanumeric characters.
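
    Run-length coding is a typical low-memory candidate for such binary glyph images; the paper does not name its exact method set, so the following is an illustrative example, including the cheap decode that matters on a microcontroller:

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as (first_bit, list of run lengths)."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first, runs):
    """Expand the runs back into the original bit sequence."""
    out, bit = [], first
    for r in runs:
        out.extend([bit] * r)
        bit ^= 1                 # runs alternate between 0 and 1
    return out
```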

  8. Acquisition of STEM Images by Adaptive Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash; Stevens, Andrew; Browning, Nigel D.


    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of peak signal-to-noise ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However

  9. Study of fractal dimension in chest images using normal and interstitial lung disease cases (United States)

    Tucker, Douglas M.; Correa, Jose L.; Souto, Miguel; Malagari, Katerina S.


    A quantitative computerized method which provides accurate discrimination between chest radiographs with positive findings of interstitial disease patterns and normal chest radiographs may increase the efficacy of radiologic screening of the chest and the utility of digital radiographic systems. This report is a comparison of fractal dimension measured in normal chest radiographs and in radiographs with abnormal lungs having reticular, nodular, reticulonodular and linear patterns of interstitial disease. Six regions of interest (ROI's) from each of 33 normal chest radiographs and 33 radiographs with positive findings of interstitial disease were studied. Results indicate that there is a statistically significant difference between the distribution of the fractal dimension in normal radiographs and radiographs where disease is present.
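
    The report does not state its fractal-dimension estimator; a common choice for such texture analysis is box counting over a binarized structure mask, sketched here on a synthetic mask:

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of the fractal dimension of a binary mask:
    count occupied boxes at several scales and fit a log-log slope.
    (One common estimator; the report's exact method is unspecified.)"""
    n = mask.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        view = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())   # boxes containing structure
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A space-filling structure gives a dimension near 2, while sparser interstitial-like patterns give lower values, which is the kind of separation the study exploits.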

  10. On optimisation of wavelet algorithms for non-perfect wavelet compression of digital medical images

    CERN Document Server

    Ricke, J


    Aim: Optimisation of medical image compression and evaluation of wavelet filters for wavelet compression. Results: Applying filters of different complexity produced significant variations in the quality of image reconstruction after compression, specifically in the low-frequency information. Filters of high complexity proved advantageous despite heterogeneous results in the visual analysis. For high-frequency details, filter complexity had no significant impact on the reconstructed image.


  11. Adaptive lifting scheme with Particle Swarm Optimization for image compression

    Directory of Open Access Journals (Sweden)

    Nishat kanvel


    Full Text Available This paper presents an adaptive lifting scheme with a Particle Swarm Optimization (PSO) technique for image compression. PSO is used to improve the accuracy of the prediction function used in the lifting scheme. The scheme is applied to image compression, and parameters such as PSNR, compression ratio, and the visual quality of the image are calculated. The proposed scheme is compared with the existing methods.

  12. Effective palette indexing for image compression using self-organization of Kohonen feature map. (United States)

    Pei, Soo-Chang; Chuang, Yu-Ting; Chuang, Wei-Hong


    The process of limited-color image compression usually involves color quantization followed by palette re-indexing. Palette re-indexing can improve the compression of color-indexed images, but it is complicated and consumes extra time. Making use of the topology-preserving property of the self-organizing Kohonen feature map, we can generate a fairly good color index table that achieves both high image quality and high compression, without re-indexing. Promising experimental results are presented.
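The topology-preserving indexing idea can be illustrated with a minimal 1-D Kohonen map over RGB samples; the training schedule and function name are assumptions for illustration, not the authors' network:

```python
import numpy as np

def train_1d_som(colors, n_codes=16, n_iter=4000, seed=0):
    """Train a 1-D Kohonen feature map on RGB samples.  Neighbouring
    code cells are pulled toward the same winners, so similar colours
    end up at nearby palette indices (topology preservation) and no
    re-indexing pass is needed."""
    rng = np.random.default_rng(seed)
    codes = rng.random((n_codes, 3))
    idx = np.arange(n_codes)
    for t in range(n_iter):
        c = colors[rng.integers(len(colors))]
        w = np.argmin(((codes - c) ** 2).sum(axis=1))       # winning cell
        lr = 0.5 * (1.0 - t / n_iter)                       # decaying learning rate
        sigma = max(n_codes / 2.0 * (1.0 - t / n_iter), 0.5)
        h = np.exp(-((idx - w) ** 2) / (2.0 * sigma ** 2))  # neighbourhood kernel
        codes += lr * h[:, None] * (c - codes)
    return codes

rng = np.random.default_rng(1)
colors = rng.random((2000, 3))               # stand-in for an image's pixels
palette = train_1d_som(colors)
gaps = np.linalg.norm(np.diff(palette, axis=0), axis=1)  # adjacent-index colour gaps
```

After training, adjacent palette indices hold similar colours, which is exactly what palette re-indexing normally has to enforce as a separate pass.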

  13. Block Compressed Sensing of Images Using Adaptive Granular Reconstruction

    Directory of Open Access Journals (Sweden)

    Ran Li


    Full Text Available In the framework of block Compressed Sensing (CS), the reconstruction algorithm based on the Smoothed Projected Landweber (SPL) iteration can achieve good rate-distortion performance with low computational complexity, especially when Principal Components Analysis (PCA) is used to perform the adaptive hard-thresholding shrinkage. However, learning the PCA matrix while neglecting the stationary local structural characteristics of the image degrades the reconstruction performance of the Landweber iteration. To solve this problem, this paper first uses Granular Computing (GrC) to decompose an image into several granules depending on the structural features of patches. Then, we perform PCA to learn the sparse representation basis corresponding to each granule. Finally, hard-thresholding shrinkage is employed to remove the noise in patches. The patches in a granule share a stationary local structural characteristic, so our method can effectively improve the performance of hard-thresholding shrinkage. Experimental results indicate that the image reconstructed by the proposed algorithm has better objective quality than several traditional ones. The edge and texture details in the reconstructed image are better preserved, which guarantees better visual quality. Besides, our method still has a low computational complexity of reconstruction.

  14. Iliac vein compression syndrome: Clinical, imaging and pathologic findings

    Institute of Scientific and Technical Information of China (English)

    Katelyn N. Brinegar; Rahul A. Sheth; Ali Khademhosseini; Jemianne Bautista; Rahmi Oklu


    May-Thurner syndrome (MTS) is the pathologic compression of the left common iliac vein by the right common iliac artery, resulting in left lower extremity pain, swelling, and deep venous thrombosis. Though this syndrome was first described in 1851, there are currently no standardized criteria to establish the diagnosis of MTS. Since MTS is treated by a wide array of specialties, including interventional radiology, vascular surgery, cardiology, and vascular medicine, the need for an established diagnostic criterion is imperative in order to reduce misdiagnosis and inappropriate treatment. Although MTS has historically been diagnosed by the presence of pathologic features, the use of dynamic imaging techniques has led to a more radiologically based diagnosis. Thus, imaging plays an integral part in screening patients for MTS, and the utility of a wide array of imaging modalities has been evaluated. Here, we summarize the historical aspects of the clinical features of this syndrome. We then provide a comprehensive assessment of the literature on the efficacy of imaging tools available to diagnose MTS. Lastly, we provide clinical pearls and recommendations to aid physicians in diagnosing the syndrome through the use of provocative measures.

  15. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers (United States)

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel


    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841

  16. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers. (United States)

    López, Yuri Álvarez; Lorenzo, José Ángel Martínez


    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  17. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    Directory of Open Access Journals (Sweden)

    Yuri Álvarez López


    Full Text Available One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  18. Adaptive wavelet transform algorithm for lossy image compression (United States)

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio


    A new algorithm of locally adaptive wavelet transform based on the modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet filter coefficients in the case of (~N, N) lifting. By changing the wavelet filter order and the control parameters, one can obtain the desired filter frequency response. It is proposed to perform hard switching between different wavelet lifting filter outputs according to the local data activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images. The simulation results show that the visual and quantitative quality of the restored images is high, and distortions in the vicinity of high-spatial-activity details are smaller than with the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.

  19. Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector. (United States)

    Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R


    A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.

  20. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images. (United States)

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul


    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution of this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and a combination of Run-Length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method improves the compression ratio by 400% compared to traditional methods.
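The Run-Length plus Huffman stage can be sketched as follows. This is the generic textbook combination; `run_length_encode` and `huffman_code_lengths` are hypothetical helpers, not the paper's customized codec:

```python
import heapq
from collections import Counter

def run_length_encode(seq):
    """Run-length encode a pixel sequence into (value, count) pairs."""
    runs = []
    for v in seq:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, c) for v, c in runs]

def huffman_code_lengths(symbols):
    """Huffman code length per distinct symbol (enough to size the stream)."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate one-symbol alphabet
        return {next(iter(freq)): 1}
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)      # merge the two rarest subtrees,
        f2, _, d2 = heapq.heappop(heap)      # deepening every symbol in them
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

row = [0, 0, 0, 0, 7, 7, 0, 0, 0, 0, 0, 0]  # one scan line of an ROI
runs = run_length_encode(row)
lengths = huffman_code_lengths([v for v, _ in runs])
```

Long homogeneous runs, which dominate fluoroscopic backgrounds, collapse to single pairs before entropy coding, which is where the compression gain comes from.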

  1. Exploring Fractals. (United States)

    Dewdney, A. K.


    Explores the subject of fractal geometry focusing on the occurrence of fractal-like shapes in the natural world. Topics include iterated functions, chaos theory, the Lorenz attractor, logistic maps, the Mandelbrot set, and mini-Mandelbrot sets. Provides appropriate computer algorithms, as well as further sources of information. (JJK)

  2. Measuring Fractality

    Directory of Open Access Journals (Sweden)

    Tatjana Stadnitski


    Full Text Available When investigating fractal phenomena, the following questions are fundamental for the applied researcher: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent, the fractional differencing parameter d of the ARFIMA methodology, the power exponent of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constraints of the popular estimators are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short memory patterns) are demonstrated with empirical time series.

  3. Measuring fractality. (United States)

    Stadnitski, Tatjana


    When investigating fractal phenomena, the following questions are fundamental for the applied researcher: (1) What are essential statistical properties of 1/f noise? (2) Which estimators are available for measuring fractality? (3) Which measurement instruments are appropriate and how are they applied? The purpose of this article is to give clear and comprehensible answers to these questions. First, theoretical characteristics of a fractal pattern (self-similarity, long memory, power law) and the related fractal parameters (the Hurst coefficient, the scaling exponent α, the fractional differencing parameter d of the autoregressive fractionally integrated moving average methodology, the power exponent β of the spectral analysis) are discussed. Then, estimators of fractal parameters from different software packages commonly used by applied researchers (R, SAS, SPSS) are introduced and evaluated. Advantages, disadvantages, and constraints of the popular estimators ([Formula: see text] power spectral density, detrended fluctuation analysis, signal summation conversion) are illustrated by elaborate examples. Finally, crucial steps of fractal analysis (plotting time series data, autocorrelation, and spectral functions; performing stationarity tests; choosing an adequate estimator; estimating fractal parameters; distinguishing fractal processes from short-memory patterns) are demonstrated with empirical time series.
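Of the estimators mentioned, detrended fluctuation analysis (DFA) is straightforward to sketch; the scale choices here are illustrative, not the article's recommendations:

```python
import numpy as np

def dfa_exponent(x, scales=(4, 8, 16, 32)):
    """Detrended Fluctuation Analysis: integrate the series into a
    profile, linearly detrend windows of size s, and fit the slope of
    log F(s) vs log s.  alpha ~ 0.5 for white noise, ~ 1 for 1/f noise."""
    y = np.cumsum(x - np.mean(x))                 # the profile
    fluct = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(1)
alpha_white = dfa_exponent(rng.standard_normal(2048))   # expected near 0.5
```

Production analyses would use many more scales and longer series; this shows only the mechanics of the estimator.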

  4. Fractal Interpolation Function and its Dimension

    Institute of Scientific and Technical Information of China (English)

    马林涛; 陈德勇; 张琰


    Starting from the theory of fractal interpolation functions, the graphs of fractal interpolation functions are drawn with Matlab for both fixed and random vertical compression (scaling) factors, and the changes in the graph caused by varying the vertical compression factor are analyzed qualitatively. Finally, computation shows that the box dimension of the graph of a fractal interpolation function increases as the vertical compression factor increases.
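The construction the abstract describes, drawing a fractal interpolation function for given vertical compression factors, can be sketched with the chaos game on the associated affine IFS (NumPy here instead of Matlab; the function name is hypothetical):

```python
import numpy as np

def fif_chaos_game(pts, d, n_iter=20000, seed=0):
    """Points on the graph of the fractal interpolation function through
    `pts`, generated by the chaos game on its affine IFS.  `d` holds one
    vertical compression (scaling) factor per interval, |d| < 1; larger
    |d| gives a rougher graph (higher box dimension)."""
    x, y = np.array(pts, float).T
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    maps = []
    for n in range(1, len(x)):
        # w_n(x, y) = (a x + e, c x + d y + f) maps the whole graph onto
        # the piece over [x_{n-1}, x_n].
        a = (x[n] - x[n - 1]) / (xN - x0)
        e = (xN * x[n - 1] - x0 * x[n]) / (xN - x0)
        c = (y[n] - y[n - 1] - d[n - 1] * (yN - y0)) / (xN - x0)
        f = (xN * y[n - 1] - x0 * y[n] - d[n - 1] * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, d[n - 1], f))
    rng = np.random.default_rng(seed)
    px, py = x0, y0
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        a, e, c, dn, f = maps[rng.integers(len(maps))]
        px, py = a * px + e, c * px + dn * py + f
        out[i] = px, py
    return out

graph = fif_chaos_game([(0, 0), (0.5, 1), (1, 0)], d=[0.3, 0.3])
```

Increasing the factors in `d` toward 1 visibly roughens the plotted graph, matching the box-dimension trend the paper reports.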

  5. Compressive imaging for difference image formation and wide-field-of-view target tracking (United States)



    Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking

  6. Joint image encryption and compression scheme based on IWT and SPIHT (United States)

    Zhang, Miao; Tong, Xiaojun


    A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Encryption and compression are combined by exploiting the properties of IWT and SPIHT. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), which adds encryption within the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance security. The test results indicate that the proposed methods have high security and good lossless compression performance.

  7. Visually Improved Image Compression by using Embedded Zero-tree Wavelet Coding

    Directory of Open Access Journals (Sweden)

    Janaki R


    Full Text Available Image compression is very important for efficient transmission and storage of images. The Embedded Zero-tree Wavelet (EZW) algorithm is a simple yet powerful algorithm with the property that the bits in the stream are generated in order of their importance. Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. For image compression it is desirable that the transform reduce the size of the resultant data set relative to the source data set. EZW is computationally fast and among the best image compression algorithms known today. This paper proposes a technique for image compression which uses wavelet-based image coding. A large number of experimental results show that this method saves many bits in transmission and further enhances compression performance. This paper aims to determine the best threshold for compressing a still image at a particular decomposition level by using the Embedded Zero-tree Wavelet encoder. The Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR) are determined for threshold values ranging from 6 to 60 at decomposition level 8.
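The two metrics swept in these experiments, compression ratio and PSNR, are computed as follows (a generic sketch, not the paper's evaluation harness):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bits, coded_bits):
    """CR = size of the source data set over size of the coded stream."""
    return raw_bits / coded_bits

img = np.full((8, 8), 100, dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 110                        # a single pixel off by 10
p = psnr(img, noisy)                     # MSE = 100/64, so about 46.19 dB
```

Sweeping the EZW threshold trades these two numbers off: a larger threshold raises CR and lowers PSNR.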

  8. Wavelet Fractal-Based Image Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    叶俊勇; 汪同庆; 彭健; 杨波


    Owing to limitations of CT technology, images of the shoe leather lumen are not very satisfactory, and accurate image segmentation is the basis for obtaining accurate measurement data. After analyzing the characteristics of the images, an image segmentation algorithm based on wavelets and fractals is proposed: the image is decomposed by wavelet multi-resolution decomposition, and the fractal dimension is calculated from the decomposed image. For CT images of the shoe leather lumen, this approach is more satisfactory than general segmentation methods and segments the edge of the lumen accurately. Experiments prove that the approach is sound.

  9. Compressed Sensing on the Image of Bilinear Maps

    CERN Document Server

    Walk, Philipp


    For several communication models, the dispersive part of a communication channel is described by a bilinear operation $T$ between the possible sets of input signals and channel parameters. The received channel output then has to be identified from the image $T(X,Y)$ of the input signal difference sets $X$ and the channel state sets $Y$. The main goal of this contribution is to characterize the compressibility of $T(X,Y)$ with respect to an ambient dimension $N$. In this paper we show that a restricted norm multiplicativity of $T$ on all canonical subspaces $X$ and $Y$ with dimensions $S$ and $F$, respectively, is sufficient for the reconstruction of output signals with overwhelming probability from $\mathcal{O}((S+F)\log N)$ random sub-Gaussian measurements.

  10. Image compression with QM-AYA adaptive binary arithmetic coder (United States)

    Cheng, Joe-Ming; Langdon, Glen G., Jr.


    The Q-coder has been reported in the literature, and is a renorm-driven binary adaptive arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to more rapidly estimate the statistics in the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder, with a different state table, that offers balanced improvements to the QM probability estimation for the less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index change tables for Q-coder type adaptation is discussed.

  11. Novel Efficient De-blocking Method for Highly Compressed Images

    Institute of Scientific and Technical Information of China (English)

    SHI Min; YI Qing-ming; YANG Liang


    Due to coarse quantization, block-based discrete cosine transform (BDCT) compression methods usually suffer from visible blocking artifacts at the block boundaries. A novel efficient de-blocking method in the DCT domain is proposed. A specific criterion for edge detection is given; one-dimensional DCT is applied on each row of the adjacent blocks and the shifted block in the smooth region, and the transform coefficients of the shifted block are modified by weighting the average of three coefficients of the block. The mean square difference of slope criterion is used to judge the efficiency of the proposed algorithm. Simulation results show that the new method not only obtains satisfactory image quality but also maintains high-frequency information.

  12. Image compression with directional lifting on separated sections (United States)

    Zhu, Jieying; Wang, Nengchao


    A novel image compression scheme is presented in which directional sections are separated and transformed differently from the rest of the image. The discrete directions of anisotropic pixels are calculated and then grouped into compact directional sections. One-dimensional (1-D) adaptive directional lifting is applied continuously along the orientations of directional sections, rather than applying the 1-D wavelet transform alternately in two dimensions over the whole image. For the remaining sections, 2-D adaptive lifting filters are applied according to pixel positions. Our single embedded coding stream can be truncated exactly at any bit rate. Experiments have shown that large coefficients along directional sections are significantly reduced by our transform, which makes energy more compact than the traditional wavelet transform. Though rate-distortion (R-D) optimization is not exploited, the PSNR is still comparable to that of JPEG-2000 with 9/7 filters at high bit rates. At low bit rates, the visual quality is better than that of JPEG-2000, since along directional sections both blurring and ringing artifacts are avoided and edges are well preserved.

  13. Adaptive wavelet transform algorithm for image compression applications (United States)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo


    A new algorithm of locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme. It adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which makes it easy to obtain different wavelet coefficients in the case of (N~, N) lifting. It is proposed to perform hard switching between (2, 4) and (4, 4) lifting filter outputs according to an estimate of the local data activity: when the activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed; otherwise, in plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are simple, which permits implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect reconstruction of the processed data and possesses good energy compaction. The designed algorithm was tested on different images and can be used for lossless image/signal compression.
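The hard-switched predict step can be sketched in 1-D. The activity measure and threshold below are illustrative assumptions; the 4-tap weights are the standard interpolating predictor used in higher-order lifting:

```python
import numpy as np

def adaptive_predict(x, threshold=8.0):
    """One hard-switched lifting 'predict' pass: each odd sample is
    predicted from its even neighbours with a 2-tap (linear) filter in
    smooth areas, or the 4-tap (-1, 9, 9, -1)/16 filter where the local
    activity |even[i+1] - even[i]| exceeds the threshold."""
    even = x[::2].astype(float)
    odd = x[1::2].astype(float)
    ev = np.pad(even, 2, mode='edge')       # ev[j] == even[j - 2], edges replicated
    detail = np.empty_like(odd)
    for i in range(len(odd)):
        a, b, c, d = ev[i + 1], ev[i + 2], ev[i + 3], ev[i + 4]
        if abs(c - b) > threshold:          # high activity: 4-tap predictor
            pred = (-a + 9.0 * b + 9.0 * c - d) / 16.0
        else:                               # plain area: 2-tap predictor
            pred = (b + c) / 2.0
        detail[i] = odd[i] - pred
    return detail

d = adaptive_predict(np.arange(16))         # a ramp: interior details vanish
```

Because the switch depends only on even samples, which survive the transform, the decoder can reproduce the same decisions and invert the step exactly.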

  14. Stable and Robust Sampling Strategies for Compressive Imaging. (United States)

    Krahmer, Felix; Ward, Rachel


    In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence-the so-called local coherence-measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from sensing basis to sparsity basis-as opposed to bounded maximal coherence-as long as the sampling strategy is adapted accordingly.
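A variable-density mask drawn from an inverse square power-law density, as the theory prescribes, can be sketched over 1-D frequency indices (the function name is hypothetical):

```python
import numpy as np

def variable_density_mask(n, m, seed=0):
    """Choose m of n frequency rows without replacement, with
    probability following an inverse square power law in the distance
    from the zero frequency, so samples concentrate at low frequencies."""
    rng = np.random.default_rng(seed)
    k = np.minimum(np.arange(n), n - np.arange(n))   # wrap-around distance to DC
    p = 1.0 / np.maximum(k, 1) ** 2                  # inverse square law
    p = p / p.sum()
    rows = rng.choice(n, size=m, replace=False, p=p)
    mask = np.zeros(n, dtype=bool)
    mask[rows] = True
    return mask

mask = variable_density_mask(256, 32)
dist = np.minimum(np.arange(256), 256 - np.arange(256))
low = mask[dist <= 16].sum()                         # samples in the low band
```

Most of the probability mass sits within a small distance of DC, so the mask is dense at low frequencies and sparse at high ones, the behaviour the local-coherence analysis justifies.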

  15. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information (United States)

    Pence, William D.; White, R. L.; Seaman, R.


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
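The quantize-with-dither step can be sketched as follows. This mirrors the subtractive-dithering idea, not the exact fpack implementation, and the function name is hypothetical:

```python
import numpy as np

def quantize_dither(data, q, seed=0):
    """Quantize floating-point pixels into integer levels of width q,
    with uniform subtractive dither: the same random offsets are added
    before rounding and removed on restore, so coarse quantization
    stays unbiased on average."""
    rng = np.random.default_rng(seed)
    r = rng.random(data.shape)                     # dither offsets in [0, 1)
    levels = np.round(data / q + r - 0.5).astype(np.int64)
    restored = (levels - r + 0.5) * q              # same offsets on restore
    return levels, restored

rng = np.random.default_rng(42)
pix = rng.normal(1000.0, 2.0, 10000)               # synthetic noisy image data
levels, restored = quantize_dither(pix, q=4.0)
err = restored - pix                               # bounded by q/2 per pixel
```

The per-pixel error stays within half a quantization step, while the dither keeps its mean near zero even when q is comparable to the noise, which is why dithering preserves photometric precision at coarse quantization.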

  16. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang


    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. Especially, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions; the ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also given, and its hardware design has been implemented in a 0.18 μm CMOS process.

  17. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    ZhiHua Wang


    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. Especially, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions; the ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also given, and its hardware design has been implemented in a 0.18 μm CMOS process.

  18. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin


    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping most details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, the block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1 norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only noticeably improves the quality of the reconstructed image but also retains more details of the original image.
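The iterative shrinkage idea used for the L1 reconstruction can be illustrated with a plain iterative shrinkage-thresholding sketch; the measurement matrix, step size and regularisation weight below are illustrative toy choices, not the paper's actual ones:

```python
def ista(A, y, lam=0.1, step=0.1, iters=500):
    """Iterative shrinkage-thresholding for min ||y - A x||^2/2 + lam*||x||_1:
    a gradient step on the data term followed by soft thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient step: x <- x - step * A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [xj - step * gj for xj, gj in zip(x, g)]
        # soft threshold (the shrinkage step)
        t = step * lam
        x = [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]
    return x
```

On an underdetermined toy system with one consistent sparse solution, the iteration recovers the sparse component up to the small bias introduced by the l1 penalty.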

  19. Bit-plane-channelized hotelling observer for predicting task performance using lossy-compressed images (United States)

    Schmanske, Brian M.; Loew, Murray H.


    A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
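For intuition, the Hotelling figure of merit over channel responses is the quadratic form SNR² = Δv̄ᵀK⁻¹Δv̄, where Δv̄ is the signal-present/signal-absent mean difference and K the covariance. A minimal two-channel sketch (using a pooled population covariance, which is an assumption; the paper does not specify its estimator here):

```python
def hotelling_detectability(signal_resps, noise_resps):
    """Hotelling detectability SNR^2 = dv^T K^-1 dv for two-channel
    responses given as lists of (c1, c2) pairs; K is the pooled
    population covariance of the two classes."""
    def mean(rs):
        n = len(rs)
        return [sum(r[j] for r in rs) / n for j in (0, 1)]
    def cov(rs, m):
        n = len(rs)
        c = [[0.0, 0.0], [0.0, 0.0]]
        for r in rs:
            for i in (0, 1):
                for j in (0, 1):
                    c[i][j] += (r[i] - m[i]) * (r[j] - m[j]) / n
        return c
    ms, mn = mean(signal_resps), mean(noise_resps)
    cs, cn = cov(signal_resps, ms), cov(noise_resps, mn)
    k = [[(cs[i][j] + cn[i][j]) / 2 for j in (0, 1)] for i in (0, 1)]
    det = k[0][0] * k[1][1] - k[0][1] * k[1][0]
    kinv = [[k[1][1] / det, -k[0][1] / det],
            [-k[1][0] / det, k[0][0] / det]]
    dv = [ms[0] - mn[0], ms[1] - mn[1]]
    return sum(dv[i] * kinv[i][j] * dv[j] for i in (0, 1) for j in (0, 1))
```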

  20. Fast image coding approach with denoising based on fractal and DWT

    Institute of Scientific and Technical Information of China (English)

    刘波; 房斌; 罗棻; 张世勇


    A new image coding algorithm is presented that combines fractal coding and the discrete wavelet transform, drawing on the statistical character of image blocks, the distance between a range block and its best-matched domain block in baseline fractal coding, and the fact that the noise distribution is unknown. In the proposed algorithm, if the robust-regression objective between a range block and its best-matched domain block is less than a given threshold, the range block is compressed by fractal coding; otherwise it is compressed by the discrete wavelet transform. Simulation results show that the proposed algorithm greatly speeds up encoding and improves the quality of the reconstructed image. In particular, it shows good robustness against the outliers caused by salt-and-pepper noise.
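The range-domain distance used in baseline fractal coding is typically the residual of a least-squares affine (contrast/brightness) fit; a minimal sketch of that fit over flattened blocks (the domain search loop and block downsampling are omitted):

```python
def affine_match(range_block, domain_block):
    """Least-squares contrast s and brightness o so that s*D + o
    approximates R; returns (s, o, squared residual), the quantity
    minimised when searching for the best matched domain block."""
    n = len(range_block)
    md = sum(domain_block) / n
    mr = sum(range_block) / n
    var = sum((d - md) ** 2 for d in domain_block)
    s = 0.0 if var == 0 else sum(
        (d - md) * (r - mr) for d, r in zip(domain_block, range_block)) / var
    o = mr - s * md
    err = sum((s * d + o - r) ** 2
              for d, r in zip(domain_block, range_block))
    return s, o, err
```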

  1. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim;


    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compression...

  2. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu


    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, operating directly on the compressive sampling images, are developed. A mixture-of-Gaussians model is applied in the compressive image space to model the background image and detect the foreground. Each moving target in the compressive sampling domain is sparsely represented in a compressive feature dictionary spanned by target templates and noise templates. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  3. Research on application for integer wavelet transform for lossless compression of medical image (United States)

    Zhou, Zude; Li, Quan; Long, Quan


    This paper proposes an approach that uses the lifting scheme to construct an integer wavelet transform for lossless compression of images. Research on its application to medical images, a software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that this method can improve the compression ratio and resolution.
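The simplest lifting-based integer wavelet is the reversible Haar (S) transform; the sketch below is a generic illustration of why lifting gives exact losslessness with integer arithmetic, not the specific transform used in the paper:

```python
def haar_lift_forward(x):
    """Forward reversible integer Haar (S-transform) over an even-length
    list: each pair (a, b) maps to a floor-average and a difference."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]
    d = [b - a for a, b in zip(x[::2], x[1::2])]
    return s, d

def haar_lift_inverse(s, d):
    """Exact inverse: the integer lifting steps are undone in reverse,
    so no information is lost despite the floor division."""
    out = []
    for si, di in zip(s, d):
        a = si - (di >> 1)
        out.extend([a, a + di])
    return out
```

The round-trip is bit-exact for any integers, including negatives, which is the property lossless medical-image compression needs.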

  4. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q


    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  5. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas


    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at a high degree of reproducibility of their research.

  6. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.


    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  7. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform (United States)

    Musatenko, Yurij S.; Kurashov, Vitalij N.


    The paper presents an improved version of our method for compression of correlated image sets: Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and that this contribution decreases most quickly among all possible bases. We therefore lossily compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding, with substantially different loss depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm for KL basis construction for compression of correlated image ensembles, which enables our OICKL system to run on common hardware. We also present a procedure for determining the optimal compression-induced losses of the KL basis functions. It uses a modified EZW coder which produces the whole PSNR (bitrate) curve in a single compression pass.
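A KL basis of an image ensemble can be approximated without forming the full covariance matrix; the power-iteration sketch below recovers only the leading basis function and is a generic illustration, not the authors' fast low-memory construction:

```python
import math

def top_kl_basis(images, iters=100):
    """Leading Karhunen-Loeve basis function of a set of equal-size
    images (flattened as lists), found by power iteration; the matrix-
    vector product C v is accumulated image by image, so the n x n
    covariance matrix C is never built."""
    n = len(images[0])
    m = len(images)
    mean = [sum(img[j] for img in images) / m for j in range(n)]
    centered = [[img[j] - mean[j] for j in range(n)] for img in images]
    v = [1.0] * n
    for _ in range(iters):
        # w = C v accumulated as sum_i (x_i . v) x_i
        w = [0.0] * n
        for x in centered:
            c = sum(xj * vj for xj, vj in zip(x, v))
            for j in range(n):
                w[j] += c * x[j]
        norm = math.sqrt(sum(wj * wj for wj in w)) or 1.0
        v = [wj / norm for wj in w]
    return mean, v
```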

  8. Entangling Fractals

    CERN Document Server

    Astaneh, Amin Faraji


    We use the Heat Kernel method to calculate the entanglement entropy for a given entangling region on a fractal. The leading divergent term of the entropy is obtained as a function of the fractal dimension as well as the walk dimension. The power of the UV cut-off parameter is (generally) a fractional number which is a certain combination of these two indices; this exponent is known as the spectral dimension. We show that there is a novel log-periodic oscillatory behavior in the entropy which is rooted in the complex dimension of a fractal. We finally indicate that the holographic calculation in a certain hyper-scaling violating bulk geometry yields the same leading term for the entanglement entropy, if one identifies the effective dimension of the hyper-scaling violating theory with the spectral dimension of the fractal. We provide further support by comparing the behavior of the thermal entropy in terms of temperature in these two cases.

  9. Configuration entropy of fractal landscapes

    National Research Council Canada - National Science Library

    Rodríguez‐Iturbe, Ignacio; D'Odorico, Paolo; Rinaldo, Andrea


    The spatial arrangement of two‐dimensional images is found to be an effective way to characterize fractal landscapes, and the configurational entropy of these arrangements imposes demanding conditions on models attempting to represent these fields.

  10. MR Image Compression Based on Selection of Mother Wavelet and Lifting Based Wavelet

    Directory of Open Access Journals (Sweden)

    Sheikh Md. Rabiul Islam


    Full Text Available Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of compression schemes. In this paper we extend commonly used algorithms for image compression and compare their performance. For the compression technique, we link different wavelet techniques, using traditional mother wavelets and the lifting-based Cohen-Daubechies-Feauveau wavelet with low-pass filters of length 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess image compression quality. The index is used in place of the existing traditional Universal Image Quality Index (UIQI) "in one go"; it offers extra information about the distortion between an original image and a compressed image in comparison with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. This index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance, based on the proposed image quality indexes. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open source "BrainWeb: Simulated Brain Database (SBD)".
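The traditional UIQI that the proposed index extends combines loss of correlation, luminance distortion and contrast distortion in a single product, Q = 4·σxy·μx·μy / ((σx²+σy²)(μx²+μy²)). A minimal implementation over flattened pixel lists (the shape-distortion factor of the paper's new index is not included, and the sliding-window averaging of the full UIQI is omitted):

```python
def uiqi(x, y):
    """Universal Image Quality Index of two equal-length pixel lists.
    Q = 1 only for identical images; decorrelation, mean shift or
    contrast change all lower it. Undefined for constant images
    (zero variance and zero mean make the denominator vanish)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```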

  11. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images (United States)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory


    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at a high rate in a pushbroom mode (also called scan-based mode). This process generates fixed-length data to the mass memory, and data downlink is performed at a fixed rate too. Because of on-board memory limitations and high-data-rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncation of the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment of higher energy. In order to locally correct these areas, CNES has studied "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual Region of Interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.

  12. Optimal Compression of Floating-point Astronomical Images Without Significant Loss of Information

    CERN Document Server

    Pence, W D; Seaman, R


    We describe a compression method for floating-point astronomical images that gives compression ratios of 6-10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the incompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real...
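Subtractive dithering during quantization can be sketched as follows; regenerating the same pseudo-random offsets on restore keeps the quantization error zero-mean even over constant regions (which is what rescues mode/median statistics). The scale, seed handling and rounding rule here are generic illustrations, not the exact scheme of the paper:

```python
import random

def quantize_dither(pixels, scale, seed=0):
    """Scaled-integer quantization with a subtractive dither: a uniform
    offset in [-0.5, 0.5) is added before rounding."""
    rng = random.Random(seed)
    return [round(p / scale + rng.random() - 0.5) for p in pixels]

def restore_dither(q, scale, seed=0):
    """Restore: regenerate the identical dither stream (same seed) and
    subtract it, so per-pixel error is bounded by scale/2 and unbiased."""
    rng = random.Random(seed)
    return [(v - (rng.random() - 0.5)) * scale for v in q]
```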

  13. Design of vector quantizer for image compression using self-organizing feature map and surface fitting. (United States)

    Laha, Arijit; Pal, Nikhil R; Chanda, Bhabatosh


    We propose a new scheme for designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each codevector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and the difference-coded mean values of the blocks are used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates. Index Terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
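Codebook design for a vector quantizer can be sketched with plain k-means as a stand-in for the self-organizing feature map (the SOM neighbourhood update, surface fitting and Huffman stages of the paper are omitted):

```python
def kmeans_codebook(vectors, k, iters=20):
    """Toy codebook design: k-means over (mean-removed) block vectors.
    Each vector is assigned to its nearest codevector, then each
    codevector moves to the centroid of its assigned blocks."""
    codebook = [list(v) for v in vectors[:k]]  # deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[c])))
            groups[i].append(v)
        for c, g in enumerate(groups):
            if g:  # empty cells keep their previous codevector
                codebook[c] = [sum(col) / len(g) for col in zip(*g)]
    return codebook
```

The encoder would then transmit, per block, only the index of the nearest codevector.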

  14. Application of Fisher Score and mRMR Techniques for Feature Selection in Compressed Medical Images

    Directory of Open Access Journals (Sweden)

    Vamsidhar Enireddy


    Full Text Available With the large increase in digital medical images and the variety of medical imaging equipment available for diagnosis, medical professionals are increasingly relying on computer-aided techniques both for indexing these images and for retrieving similar images from large repositories. Developing systems that are computationally less intensive, without compromising on accuracy, from a high-dimensional feature space is always challenging. In this paper the retrieval of compressed medical images is investigated. Images are compressed using a visually lossless compression technique. Shape and texture features are extracted, and the best features are selected using the Fisher score and mRMR techniques. Using these selected features, an RNN with BPTT is utilized for classification of the compressed images.
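The Fisher score used for feature selection ranks each feature by between-class scatter over within-class scatter; a minimal single-feature sketch (the mRMR stage is not shown):

```python
def fisher_score(values, labels):
    """Fisher score of one feature: sum of n_c*(mu_c - mu)^2 over classes,
    divided by the total within-class scatter. Higher means the feature
    separates the classes better."""
    n = len(values)
    mu = sum(values) / n
    num = den = 0.0
    for c in set(labels):
        vc = [v for v, l in zip(values, labels) if l == c]
        mc = sum(vc) / len(vc)
        num += len(vc) * (mc - mu) ** 2
        den += sum((v - mc) ** 2 for v in vc)
    return num / den if den else float("inf")
```

Features would be ranked by this score and only the top-scoring ones fed to the classifier.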

  15. Improving the Performance of Backpropagation Neural Network Algorithm for Image Compression/Decompression System

    Directory of Open Access Journals (Sweden)

    Omaima N. A.


    Full Text Available Problem statement: The problem inherent to any digital image is the large amount of bandwidth required for transmission or storage. This has driven the research area of image compression to develop algorithms that compress images to lower data rates with better quality. Artificial neural networks are becoming attractive in image processing where high computational performance and parallel architectures are required. Approach: In this research, a three-layered Backpropagation Neural Network (BPNN) was designed for building an image compression/decompression system. The backpropagation (BP) algorithm was used for training the designed BPNN. Many techniques were used to speed up and improve this algorithm, using different BPNN architectures and different values of the learning rate and momentum variables. Results: Experiments were carried out, and the results obtained, such as Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR), are compared with the performance of BP for different BPNN architectures and learning parameters. The efficiency of the designed BPNN comes from reducing the chance of error occurring during compressed image transmission through an analog or digital channel. Conclusion: The performance of the designed BPNN image compression system can be increased by modifying the network itself, the learning parameters and the weights. Practically, we note that the BPNN can compress untrained images, but not with the same performance as for trained images.

  16. Rapid MR spectroscopic imaging of lactate using compressed sensing (United States)

    Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.


    Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. A MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of the k-space 'on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to the 1X (fully-sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in MATLAB, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully-sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
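A variable-density sampling mask of the kind described, which keeps the centre of k-space fully sampled and under-samples more aggressively toward the edges, can be sketched as follows (the polynomial density profile, its exponent and the seed handling are illustrative assumptions, not the sequence's actual profile):

```python
import random

def vd_mask(n, accel, seed=0, power=4.0):
    """Variable-density undersampling mask for n 1-D phase encodes:
    keep-probability decays with distance from the k-space centre,
    scaled so that roughly n/accel lines are kept overall."""
    rng = random.Random(seed)
    centre = (n - 1) / 2
    w = [(1 - abs(i - centre) / (centre + 1)) ** power for i in range(n)]
    scale = (n / accel) / sum(w)
    return [rng.random() < min(1.0, wi * scale) for wi in w]
```

Only the `True` phase-encode lines would be acquired; the rest are left to the CS reconstruction.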

  17. Contour fractal analysis of grains (United States)

    Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB


    Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.
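A standard tool in the fractal analysis referred to here is the box-counting dimension, estimated as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of size s that the object touches; a minimal sketch for 2-D point sets (grain contours would first be sampled into such points):

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting estimate of the fractal dimension of a set of 2-D
    points: count occupied grid boxes at each scale, then fit the slope
    of log N(s) versus log(1/s) by least squares."""
    logs = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        logs.append((math.log(1 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))
```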

  18. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Directory of Open Access Journals (Sweden)

    Christian Schou Oxvig


    Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own developed reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research.

  19. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren


    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  20. Method for low-light-level image compression based on wavelet transform (United States)

    Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa


    Low light level (LLL) image communication has received more and more attention in the night vision field along with the growing importance of image communication. LLL image compression is the key to LLL image wireless transmission. The LLL image, which differs from the common visible-light image, has its own special characteristics. For still image compression, we propose in this paper a wavelet-based compression algorithm suitable for LLL images. Because the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm. We encode the lowest-frequency subband data using DPCM (Differential Pulse Code Modulation), so all the information in the lowest-frequency subband is kept. Considering the characteristics of the HVS (Human Visual System) and of LLL images, we first detect the edge contours in the high-frequency subband images using a template and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are set to avoid redundant scanning and repeated encoding of significant wavelet coefficients in the above coding. The experimental results show that the decoded image quality is good and the encoding time is shorter than that of the original EZW algorithm.
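DPCM coding of the lowest-frequency subband, as described above, transmits each coefficient as a difference from its predecessor; a minimal 1-D sketch (real subbands are 2-D and the predictor may be more elaborate):

```python
def dpcm_encode(samples):
    """DPCM: emit the first sample, then successive differences.
    Since no quantizer is applied here, the chain is exactly invertible,
    preserving all information in the lowest-frequency subband."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(codes):
    """Inverse: running sum of the transmitted differences."""
    out, prev = [], 0
    for c in codes:
        prev += c
        out.append(prev)
    return out
```

The differences are typically small and peaked around zero, which is what makes them cheap to entropy-code.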