WorldWideScience

Sample records for fractal image compression

  1. Fast hybrid fractal image compression using an image feature and neural network

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2008-01-01

Since fractal image compression can maintain high-resolution reconstructed images at very high compression ratios, it has great potential to improve the efficiency of image storage and transmission. On the other hand, fractal image encoding is time consuming because of the best-matching search between range blocks and domain blocks, which greatly limits the algorithm's practical application. To solve this problem, two strategies are adopted in this paper to improve the fractal image encoding algorithm. Firstly, based on the definition of an image feature, a necessary condition for the best-matching search and the FFC algorithm are proposed; this noticeably reduces the search space by excluding most inappropriate domain blocks for each range block before the best-matching search. Secondly, on the basis of the FFC algorithm and in order to reduce the mapping error during the best-matching search, a special neural network is constructed to modify the mapping scheme for subblocks in which the pixel values fluctuate greatly (the FNFC algorithm). Experimental results show that the proposed algorithms obtain good reconstructed image quality and need much less time than the baseline encoding algorithm.
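The best-matching search that this abstract identifies as the bottleneck can be sketched as a brute-force baseline encoder (a minimal illustration, not the authors' FFC/FNFC code): for each range block, every downsampled domain block is tried with a least-squares contrast/brightness fit, and the lowest-error match is kept.

```python
import numpy as np

def affine_fit(dom, rng):
    """Least-squares contrast s and brightness o minimizing ||rng - (s*dom + o)||^2."""
    d, r = dom.ravel(), rng.ravel()
    var = d.var()
    s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).mean() / var
    o = r.mean() - s * d.mean()
    err = ((s * d + o - r) ** 2).sum()
    return s, o, err

def encode(img, rsize=4):
    """Brute-force fractal encoding: map each range block to its best domain block."""
    h, w = img.shape
    dsize = 2 * rsize
    # Domain pool: all 2x-larger blocks, averaged down to range-block size.
    domains = []
    for y in range(0, h - dsize + 1, rsize):
        for x in range(0, w - dsize + 1, rsize):
            d = img[y:y+dsize, x:x+dsize].astype(float)
            d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))  # 2x2 averaging
            domains.append(((y, x), d))
    code = []
    for y in range(0, h, rsize):
        for x in range(0, w, rsize):
            r = img[y:y+rsize, x:x+rsize].astype(float)
            best = min((affine_fit(d, r) + (pos,) for pos, d in domains),
                       key=lambda t: t[2])
            code.append(((y, x), best[3], best[0], best[1]))
    return code
```

The inner `min` over the whole domain pool is exactly the cost that feature-based pruning methods such as FFC try to avoid.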

  2. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...

  3. Spatial correlation genetic algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Wu, M.-S.; Teng, W.-C.; Jeng, J.-H.; Hsieh, J.-G.

    2006-01-01

Fractal image compression explores the self-similarity property of a natural image and utilizes the partitioned iterated function system (PIFS) to encode it. This technique is of great interest in both theory and application. However, it is time consuming in the encoding process, and this drawback renders it impractical for real-time applications. The time is mainly spent searching for the best-match block in a large domain pool. In this paper, a spatial correlation genetic algorithm (SC-GA) is proposed to speed up the encoder. The SC-GA method has two stages. The first stage makes use of spatial correlations in images, for both the domain pool and the range pool, to exploit local optima. The second stage operates on the whole image to explore more adequate similarities if the local optima are not satisfactory. With the aid of spatial correlation in images, the encoding time is 1.5 times faster than that of the traditional genetic algorithm method, while the quality of the retrieved image is almost the same. Moreover, about half of the matched blocks come from the correlated space, so fewer bits are required to represent the fractal transform and the compression ratio is therefore also improved.

  4. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first iterations of conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, this requires an extremely large number of iterations to converge. It is thus reasonable, for some applications, to slow down the iterative process in the first stages of decoding and then accelerate it afterwards (e.g., at some iteration as needed). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
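The iteration-dependent relaxation described above can be illustrated on a one-dimensional contractive map standing in for the fractal decoding operator (a toy sketch; the schedule values are illustrative, not the paper's):

```python
def decode(T, x0, betas):
    """Interpolation decoding: x_{n+1} = (1 - b)*x_n + b*T(x_n),
    with an iteration-dependent relaxation parameter b."""
    x = x0
    for b in betas:
        x = (1 - b) * x + b * T(x)
    return x

# Toy contractive "decoder" with fixed point x* = 6.
T = lambda x: 0.5 * x + 3.0
# Slow start (small b keeps early iterations gentle), then accelerate to b = 1.
betas = [0.1] * 5 + [0.5] * 5 + [1.0] * 10
x = decode(T, 0.0, betas)
```

With b = 1 the scheme reduces to ordinary fractal iteration; small b slows convergence toward the same fixed point, which is the progressive-decoding effect the paper exploits.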

  5. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which utilizes the relevant neighboring variable-size domain blocks; this leads to significant computation savings. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve image quality. Experimental results show that the proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full-search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, a performance comparison among the proposed algorithm and two other algorithms, the no-search algorithm and the quadtree-based algorithm, is also presented.

  6. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a genetic algorithm (GA). The proposed methodology reduces the coding-process time and intensive computation tasks. Parameters such as image quality, compression ratio, and coding time are analyzed. It is observed that the proposed method achieves excellent image quality with a reduction in storage space.

  7. Intelligent fuzzy approach for fast fractal image compression

    Science.gov (United States)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

Fractal image compression (FIC) is recognized as an NP-hard problem and suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, an imperialist competitive algorithm (ICA) is applied according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of reducing the number of MSE computations: the proposed algorithm is 463 times faster than the full-search algorithm, while the retrieved image quality does not change considerably.

  8. An efficient fractal image coding algorithm using unified feature and DCT

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2009-01-01

Fractal image compression is a promising technique for improving the efficiency of image storage and transmission with a high compression ratio; however, the huge time consumption of fractal image coding is a great obstacle to practical application. To improve fractal image coding, efficient algorithms using a special unified feature and a DCT coder are proposed in this paper. Firstly, based on a necessary condition for the best-matching search rule during fractal image coding, a fast algorithm using a special unified feature (UFC) is presented; it reduces the search space considerably by excluding most inappropriate matching subblocks before the best-matching search. Secondly, on the basis of the UFC algorithm and in order to improve the quality of the reconstructed image, a DCT coder is combined with it to construct a hybrid fractal image algorithm (DUFC). Experimental results show that the proposed algorithms obtain good reconstructed image quality and need much less time than the baseline fractal coding algorithm.

  9. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple; it is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024 CR (computed radiography) images and two 512 x 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, and 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 x 2048) monitor and the CT images on a Sony (1280 x 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to
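For context, the bit rates quoted above translate into compression ratios as follows (assuming 8-bit originals; CR and CT data often use higher bit depths, which would raise the ratios proportionally):

```python
def compression_ratio(orig_bpp, coded_bpp):
    """Compression ratio achieved when coding at coded_bpp bits per pixel."""
    return orig_bpp / coded_bpp

# Assuming 8-bit originals, the study's lowest CR rate of 0.2 bpp
# corresponds to 40:1 compression, and its highest, 1.5 bpp, to about 5.3:1.
cr_ratios = [compression_ratio(8, bpp) for bpp in (0.2, 0.4, 0.6, 0.9, 1.2, 1.5)]
```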

  10. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  11. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  12. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), using a quadratic gray-level function. Meanwhile, a quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, the enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.
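The quadratic gray-level function at the heart of this enhanced coder can be fitted per block by ordinary least squares (a sketch only; the paper's coefficient quantization and contractivity constraints are omitted):

```python
import numpy as np

def quadratic_fit(dom, rng):
    """Least-squares fit of rng ~ a*dom^2 + b*dom + c,
    the quadratic gray-level function replacing the usual linear s*dom + o."""
    d, r = dom.ravel().astype(float), rng.ravel().astype(float)
    A = np.column_stack([d ** 2, d, np.ones_like(d)])
    (a, b, c), *_ = np.linalg.lstsq(A, r, rcond=None)
    return a, b, c
```

The extra quadratic term gives the gray-level map more expressive power than the linear form, at the cost of one more coefficient to quantize and a stricter contractivity check.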

  13. Fractal image coding by an approximation of the collage error

    Science.gov (United States)

    Salih, Ismail; Smith, Stanley H.

    1998-12-01

In fractal image compression, an image is coded as a set of contractive transformations which, when iteratively applied to any initial image, are guaranteed to generate an approximation of the original image. In this paper we present a method for mapping similar regions within an image using an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
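The idea of approximating a range block by a linear combination of domain blocks can be sketched with an ordinary least-squares solve (an illustration of the general idea, not the authors' specific approximation scheme):

```python
import numpy as np

def collage_approx(range_block, domain_blocks):
    """Least-squares approximation of a range block by a linear
    combination of domain blocks plus a constant offset; returns the
    coefficients and the collage (residual) error."""
    r = range_block.ravel().astype(float)
    A = np.column_stack([d.ravel().astype(float) for d in domain_blocks]
                        + [np.ones_like(r)])
    coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)
    err = float(np.linalg.norm(A @ coeffs - r))
    return coeffs, err
```

When the range block truly lies in the span of the domain blocks, the collage error is (numerically) zero.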

  14. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

The major challenge with fractal image/video coding is that it requires a long encoding time; how to reduce the encoding time therefore remains a research problem in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the encoding process. The objective of the proposed work is to develop an approach to video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used to compute motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for coding, as it behaves like fractal coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that an image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quadtree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing new three-step search (NTSS), three-step search (TSS), and efficient three-step search (ETSS) block matching estimation algorithms. Its performance is evaluated on the basis of the mean absolute difference (MAD) and the average number of search points required per frame, with MAD used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and plain FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc.

  15. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

There have been many attempts to improve the encoding stage of FIC because it is time consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full-search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block; the results of the full-search algorithm and the proposed entropy-based algorithm are then compared to see which gives the best results (such as reduced encoding time with acceptable values of both compression quality parameters, C.R (compression ratio) and PSNR (image quality)). The experimental results show that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
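The entropy-based domain-pool pruning described above can be sketched as follows (an illustrative version; the paper's exact thresholds and block handling may differ):

```python
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy (bits) of a block's gray-level histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def filter_domains(range_block, domains, tol=0.5):
    """Keep only domain blocks whose entropy is within tol of the range block's,
    shrinking the search pool before the expensive best-match search."""
    hr = block_entropy(range_block)
    return [d for d in domains if abs(block_entropy(d) - hr) <= tol]
```

Blocks with very different entropy are unlikely to match well under an affine gray-level map, so discarding them up front cuts encoding time with little quality loss.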

  16. Fractal-Based Image Analysis In Radiological Applications

    Science.gov (United States)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.

  17. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    Science.gov (United States)

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
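The key fact this scheme rests on, that affine similarity between two blocks corresponds to the absolute Pearson correlation coefficient (APCC), is easy to check numerically (a sketch, not the authors' classification code):

```python
import numpy as np

def apcc(a, b):
    """Absolute Pearson correlation coefficient between two blocks."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    return abs(float(np.corrcoef(a, b)[0, 1]))

# Any affine image of a block, r = s*d + o, has |PCC| = 1 with it,
# which is why APCC can index affine (fractal) similarity.
d = np.arange(16.0)
```

Sorting domain blocks by their APCC against a fixed preset block, as the paper does, then lets a range block search only the neighborhood of its own APCC value instead of the whole pool.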

  18. A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2016-12-01

Fractal compression is a lossy compression technique in the field of gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding time, but improving the encoding time remains a challenge. This review presents an analysis of the most significant existing approaches in the fields of fractal-based gray/color image and video compression; of the different block matching motion estimation approaches for finding the motion vectors in a frame, based on inter-frame and intra-frame (i.e. individual frame) coding; and of automata theory based coding approaches for representing an image or sequence of images. Though other review papers on fractal coding exist, this paper differs in several respects. One can develop a new shape pattern for motion estimation and combine existing block matching motion estimation with automata coding to explore the fractal compression technique, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.

  19. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.; Radwan, Ahmed Gomaa; Abdel Haleem, Sherif H.; Barakat, Mohamed L.

    2014-01-01

    single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved

  20. An Efficient Fractal Video Sequences Codec with Multiviews

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2013-01-01

Multiview video consists of multiple views of the same scene. These require an enormous amount of data to achieve high image quality, which makes compressing multiview video indispensable. Therefore, data compression is a major issue for multiviews. In this paper, we explore an efficient fractal video codec to compress multiviews. The proposed scheme first compresses a view-dependent geometry of the base view using a fractal video encoder with a homogeneous-region condition. With the extended fractional-pel motion estimation algorithm and a fast disparity estimation algorithm, it then generates prediction images of the other views. The prediction image uses image-based rendering techniques based on the decoded video, and the residual signals are obtained from the prediction image and the original image. Finally, it encodes the residual signals with the fractal video encoder. The idea is also to exploit statistical dependencies from both temporal and interview reference pictures for motion-compensated prediction. Experimental results show that the proposed algorithm is consistently better than JMVC8.5, with a 62.25% bit rate decrease and a 0.37 dB PSNR increase based on the Bjontegaard metric, and the total encoding time (TET) of the proposed algorithm is reduced by 92%.
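PSNR, the quality metric quoted in results like the 0.37 dB gain above, is computed from the mean squared error between the original and reconstructed images (standard definition, not code from the paper):

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))
```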

  1. Pre-Service Teachers' Concept Images on Fractal Dimension

    Science.gov (United States)

    Karakus, Fatih

    2016-01-01

    The analysis of pre-service teachers' concept images can provide information about their mental schema of fractal dimension. There is limited research on students' understanding of fractal and fractal dimension. Therefore, this study aimed to investigate the pre-service teachers' understandings of fractal dimension based on concept image. The…

  2. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of subblocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image.
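The quadtree partitioning step used above can be sketched as a recursive split driven by a complexity measure (a simplified stand-in: plain block variance here, where the paper uses local fractal dimension):

```python
import numpy as np

def quadtree(img, y, x, size, min_size, thresh, out):
    """Recursively split a block into four quadrants while its
    variance exceeds thresh; collect leaf blocks as (y, x, size)."""
    block = img[y:y+size, x:x+size]
    if size <= min_size or block.var() <= thresh:
        out.append((y, x, size))
        return out
    h = size // 2
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree(img, y + dy, x + dx, h, min_size, thresh, out)
    return out
```

Uniform regions stay as large blocks (few codewords), while complex regions are partitioned finely, which is what lets the coder preserve diagnostic detail at a high compression ratio.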

  3. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova

    2000-12-01

In this paper, some results are presented from implementing digital watermarking methods in image coding based on fractal principles. The paper focuses on two possible approaches for embedding digital watermarks into the fractal code of images: embedding digital watermarks into the parameters for the position of similar blocks and into the coefficients of block similarity. Both algorithms were analyzed and verified on gray-scale static images.

  4. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of subblocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image.

  5. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.

  6. Fractal Dimension Of CT Images Of Normal Parotid Glands

    International Nuclear Information System (INIS)

    Lee, Sang Jin; Heo, Min Suk; You, Dong Soo

    1999-01-01

This study investigated age and sex differences in the fractal dimension of normal parotid glands in digitized CT images. Six groups, composed of 42 men and women in their 20's, 40's, and 60's and over, were selected; each group contained seven people of the same sex. The normal parotid CT images were digitized, and their fractal dimensions were calculated using the Scion Image PC program. The mean fractal dimension was 1.7292 (+/-0.0588) in males and 1.6329 (+/-0.0425) in females. The mean fractal dimension in males was 1.7617 for the young group, 1.7328 for the middle group, and 1.6933 for the old group; in females it was 1.6318, 1.6365, and 1.6303, respectively. There was no statistically significant difference in fractal dimension between the left and right parotid glands of the same subject (p>0.05). Fractal dimensions in males decreased with age (p<0.05). The fractal dimension of parotid glands in digitized CT images will be useful for evaluating age and sex differences.

  7. Fractal characterization of brain lesions in CT images

    International Nuclear Information System (INIS)

    Jauhari, Rajnish K.; Trivedi, Rashmi; Munshi, Prabhat; Sahni, Kamal

    2005-01-01

Fractal dimension (FD) is a parameter widely used for classification, analysis, and pattern recognition of images. In this work we explore the quantification of CT (computed tomography) lesions of the brain using fractal theory. Five brain lesions, which are portions of CT images of diseased brains, are used for the study. These lesions exhibit self-similarity over a chosen range of scales and are broadly characterized by their fractal dimensions.
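The box-counting estimate of fractal dimension used in studies like these can be sketched for a square binary image (a minimal version; clinical pipelines add gray-level handling, calibration, and scale selection):

```python
import numpy as np

def box_count_dim(mask, sizes=(1, 2, 4, 8)):
    """Box-counting dimension estimate for a square binary image:
    the slope of log N(s) versus log(1/s), where N(s) is the number
    of s-by-s boxes that touch the set."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = mask[:n - n % s, :n - n % s]
        boxes = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A filled square should come out near dimension 2 and a straight line near 1; lesion boundaries typically land between these, which is what makes FD a texture discriminator.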

  8. Determination of fish gender using fractal analysis of ultrasound images

    DEFF Research Database (Denmark)

    McEvoy, Fintan J.; Tomkiewicz, Jonna; Støttrup, Josianne

    2009-01-01

    The gender of cod Gadus morhua can be determined by considering the complexity in their gonadal ultrasonographic appearance. The fractal dimension (DB) can be used to describe this feature in images. B-mode gonadal ultrasound images in 32 cod, where gender was known, were collected. Fractal...... by subjective analysis alone. The mean (and standard deviation) of the fractal dimension DB for male fish was 1.554 (0.073) while for female fish it was 1.468 (0.061); the difference was statistically significant (P=0.001). The area under the ROC curve was 0.84 indicating the value of fractal analysis in gender...... result. Fractal analysis is useful for gender determination in cod. This or a similar form of analysis may have wide application in veterinary imaging as a tool for quantification of complexity in images...

  9. Password Authentication Based on Fractal Coding Scheme

    Directory of Open Access Journals (Sweden)

    Nadia M. G. Al-Saidi

    2012-01-01

    Full Text Available Password authentication is a mechanism used to authenticate user identity over an insecure communication channel. In this paper, a new method to improve the security of password authentication is proposed. It is based on the compression capability of fractal image coding to provide an authorized user secure access to the registration and login process. In the proposed scheme, a hashed password string is generated and encrypted to be captured together with the user identity using text-to-image mechanisms. The advantage of fractal image coding is that the compressed image data can be sent securely through a non-secured communication channel to the server. The client information is verified against the database system at the server to authenticate the legal user. The encrypted hashed password in the decoded fractal image is recognized using optical character recognition. The authentication process is performed after successful verification of the client identity by comparing the decrypted hashed password with the one stored in the database system. The system is analyzed and discussed from the attacker's viewpoint. A security comparison shows that the proposed scheme provides the essential security requirements, while its efficiency makes it easy to apply alone or in hybrid with other security methods. Computer simulation and statistical analysis are presented.

  10. FRACTAL IMAGE FEATURE VECTORS WITH APPLICATIONS IN FRACTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Hynek Lauschmann

    2011-05-01

    Full Text Available The morphology of fatigue fracture surface (caused by constant cycle loading is strictly related to crack growth rate. This relation may be expressed, among other methods, by means of fractal analysis. Fractal dimension as a single numerical value is not sufficient. Two types of fractal feature vectors are discussed: multifractal and multiparametric. For analysis of images, the box-counting method for 3D is applied with respect to the non-homogeneity of dimensions (two in space, one in brightness. Examples of application are shown: images of several fracture surfaces are analyzed and related to crack growth rate.

  11. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

    Full Text Available A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this results in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matches, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method yields superior performance over the other three methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which provides higher PSNR than JPEG while maintaining the same bpp.
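
    The gray-level transform in fractal coding replaces a block's brightness with an affine function fitted to it. As a hedged sketch of the simpler fitting-plane approach that this paper's fitting surface improves upon (not the authors' method), a least-squares plane can be fitted to a block like this:

```python
import numpy as np

def fit_plane(block):
    """Least-squares fit of z = a*x + b*y + c to a square pixel block.

    Returns the coefficients (a, b, c); subtracting the fitted plane
    from the block leaves the residual that block matching compares.
    """
    n = block.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(n * n)])
    coef, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return coef
```

    A fitting surface adds higher-order terms to the same design matrix, which is why it approximates the blocks more closely than a plane.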

  12. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Michallek, Florian; Dewey, Marc [Humboldt-Universitaet zu Berlin, Freie Universitaet Berlin, Charite - Universitaetsmedizin Berlin, Medical School, Department of Radiology, Berlin (Germany)

    2014-01-15

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  13. Fractal analysis in radiological and nuclear medicine perfusion imaging: a systematic review

    International Nuclear Information System (INIS)

    Michallek, Florian; Dewey, Marc

    2014-01-01

    To provide an overview of recent research in fractal analysis of tissue perfusion imaging, using standard radiological and nuclear medicine imaging techniques including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET) and single-photon emission computed tomography (SPECT) and to discuss implications for different fields of application. A systematic review of fractal analysis for tissue perfusion imaging was performed by searching the databases MEDLINE (via PubMed), EMBASE (via Ovid) and ISI Web of Science. Thirty-seven eligible studies were identified. Fractal analysis was performed on perfusion imaging of tumours, lung, myocardium, kidney, skeletal muscle and cerebral diseases. Clinically, different aspects of tumour perfusion and cerebral diseases were successfully evaluated including detection and classification. In physiological settings, it was shown that perfusion under different conditions and in various organs can be properly described using fractal analysis. Fractal analysis is a suitable method for quantifying heterogeneity from radiological and nuclear medicine perfusion images under a variety of conditions and in different organs. Further research is required to exploit physiologically proven fractal behaviour in the clinical setting. (orig.)

  14. Effect of exposure time and image resolution on fractal dimension

    International Nuclear Information System (INIS)

    An, Byung Mo; Heo, Min Suk; Lee, Seung Pyo; Lee, Sam Sun; Choi, Soon Chul; Park, Tae Won; Kim, Jong Dae

    2002-01-01

    To evaluate the effect of exposure time and image resolution on fractal dimension calculations and to determine the optimal range of these two variables. Thirty-one radiographs of the mandibular angle area of sixteen human dry mandibles were taken at different exposure times (0.01, 0.08, 0.16, 0.25, 0.40, 0.64, and 0.80 s). Each radiograph was digitized at 1200 dpi, 8 bit, 256 gray levels using a film scanner. We selected a region of interest (ROI) that corresponded to the same region in each radiograph, but the resolution of the ROI was degraded to 1000, 800, 600, 500, 400, 300, 200, and 100 dpi. The fractal dimension was calculated using the tile-counting method for each image, and the calculated values were then compared statistically. As the exposure time and the image resolution increased, the mean fractal dimension decreased, except when the exposure time was set at 0.01 seconds (alpha = 0.05). The exposure time and image resolution affected the fractal dimension through an interaction (p<0.001). When the exposure time was set to either 0.64 or 0.80 seconds, the resulting fractal dimensions were lower, irrespective of image resolution, than at shorter exposure times (alpha = 0.05). The optimal ranges for exposure time and resolution were determined to be 0.08-0.40 seconds and 400-1000 dpi, respectively. Adequate exposure time and image resolution are essential for acquiring the fractal dimension using the tile-counting method for evaluation of the mandible.

  15. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  16. Probability- and curve-based fractal reconstruction on 2D DEM terrain profile

    International Nuclear Information System (INIS)

    Lai, F.-J.; Huang, Y.M.

    2009-01-01

    Data compression and reconstruction play important roles in information science and engineering. Within this area, image compression and reconstruction, which mainly deal with reducing image data sets for storage or transmission and restoring them with the least loss, remain topics deserving a great deal of attention. In this paper we propose a new scheme, compared against the well-known Improved Douglas-Peucker (IDP) method, to extract characteristic or feature points of a two-dimensional digital elevation model (2D DEM) terrain profile to compress the data set. For reconstruction using fractal interpolation, we propose a probability-based method that speeds up the fractal interpolation execution to three or even nine times the regular rate. In addition, a curve-based method is proposed to determine the vertical scaling factor, which strongly affects the generation of the interpolated data points, thereby significantly improving the reconstruction performance. Finally, an evaluation is made to show the advantage of employing the proposed feature-point extraction method together with our novel fractal interpolation scheme.
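
    Fractal interpolation reconstructs a profile from its feature points with an iterated function system whose affine maps interpolate consecutive points; the vertical scaling factor d mentioned above controls the roughness. A minimal 1-D sketch, assuming a single shared scaling factor rather than the paper's curve-based choice:

```python
import numpy as np

def fif(points, d=0.3, iterations=4):
    """Affine fractal interpolation through the given (x, y) points.

    d is the vertical scaling factor shared by all maps (|d| < 1).
    Each map w_i sends the whole curve onto the segment between
    points i and i+1; iterating refines the attractor polyline.
    """
    pts = np.asarray(points, dtype=float)
    x0, y0 = pts[0]
    xN, yN = pts[-1]
    curve = pts.copy()
    for _ in range(iterations):
        pieces = []
        for (xa, ya), (xb, yb) in zip(pts[:-1], pts[1:]):
            a = (xb - xa) / (xN - x0)                    # x contraction
            e = xa - a * x0
            c = (yb - ya - d * (yN - y0)) / (xN - x0)    # shear term
            f = ya - c * x0 - d * y0
            pieces.append(np.column_stack([
                a * curve[:, 0] + e,
                c * curve[:, 0] + d * curve[:, 1] + f]))
        curve = np.vstack(pieces)
    return curve
```

    With d = 0 the attractor degenerates to the piecewise-linear interpolant; larger |d| adds self-affine detail between the feature points.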

  17. L-system fractals

    CERN Document Server

    Mishra, Jibitesh

    2007-01-01

    The book covers all the fundamental aspects of generating fractals through L-systems. It also provides insight into various research efforts in this area on generating fractals through the L-system approach and estimating dimensions, and it discusses various applications of L-system fractals. Key features: - Fractals generated from L-systems, including hybrid fractals - Dimension calculation for L-system fractals - Images and codes for L-system fractals - Research directions in the area of L-system fractals - Usage of various freely downloadable tools in this area
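
    The string-rewriting core of an L-system is tiny; a hedged sketch (illustrative, not taken from the book):

```python
def l_system(axiom, rules, generations):
    """Iteratively rewrite the axiom using the production rules.

    Symbols without a rule are copied unchanged; the resulting string
    is usually drawn with turtle graphics (F = forward, +/- = turn).
    """
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

    For example, the Koch rule F -> F+F--F+F (with 60-degree turns) roughly quadruples the string each generation, and the drawn curve has fractal dimension log 4 / log 3, about 1.26.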

  18. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, were used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
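
    The NMSE figure of merit used above can be computed directly; a minimal sketch under one common normalization (the dissertation's exact definition may differ):

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an original image and its
    reconstruction: sum of squared differences over the original energy."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
```

    The value is 0 for a perfect reconstruction and 1 when the reconstruction is all zeros.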

  19. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  20. Fractal Loop Heat Pipe Performance Comparisons of a Soda Lime Glass and Compressed Carbon Foam Wick

    Science.gov (United States)

    Myre, David; Silk, Eric A.

    2014-01-01

    This study compares the heat flux performance of a Loop Heat Pipe (LHP) wick structure fabricated from compressed carbon foam with that of a wick structure fabricated from sintered soda lime glass. Each wick was used in an LHP containing a fractal based evaporator. The Fractal Loop Heat Pipe (FLHP) was designed and manufactured by Mikros Manufacturing Inc. The compressed carbon foam wick structure was manufactured by ERG Aerospace Inc., and machined to specifications comparable to those of the initial soda lime glass wick structure. Machining of the compressed foam as well as performance testing was conducted at the United States Naval Academy. Performance testing with the sintered soda lime glass wick structures was conducted at NASA Goddard Space Flight Center. Heat input for both wick structures was supplied via cartridge heaters mounted in a copper block. The copper heater block was placed in contact with the FLHP evaporator, which had a circular cross-sectional area of 0.88 cm(sup 2). Twice distilled, deionized water was used as the working fluid in both sets of experiments. Thermal performance data were obtained for three different Condenser/Subcooler temperatures under degassed conditions. Both wicks demonstrated comparable heat flux performance, with a maximum of 75 W/cm(sup 2) observed for the soda lime glass wick and 70 W/cm(sup 2) for the compressed carbon foam wick.

  1. Study on fractal characteristics of remote sensing image in the typical volcanic uranium metallogenic areas

    International Nuclear Information System (INIS)

    Pan Wei; Ni Guoqiang; Li Hanbo

    2010-01-01

    Methods for computing the fractal dimension and multifractal spectrum of remote sensing images are briefly introduced. The fractal method is used to study the characteristics of remote sensing images in the Xiangshan and Yuhuashan volcanic uranium metallogenic areas in southern China. The results indicate that the Xiangshan basin, in which many volcanic uranium deposits occur, has a larger fractal dimension based on remote sensing image texture than the Yuhuashan basin, in which two uranium ore occurrences exist, and the multifractal spectrum of the Xiangshan basin leans obviously toward a smaller singularity index than that of the Yuhuashan basin. The relation of the fractal dimension and multifractal singularity of remote sensing images to uranium metallogeny is discussed. The fractal dimension and multifractal singularity index of remote sensing images may be used to predict volcanic uranium metallogenic areas. (authors)

  2. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. Compared with traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, and the fuzzy regions containing image edges are detected; the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and the image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  3. Infrared Image Segmentation by Combining Fractal Geometry with Wavelet Transformation

    Directory of Open Access Journals (Sweden)

    Xionggang Tu

    2014-11-01

    Full Text Available An infrared image is decomposed into three levels by the discrete stationary wavelet transform (DSWT). Noise is reduced by a Wiener filter in the high resolution levels in the DSWT domain. A nonlinear gray transformation is used to enhance details in the low resolution levels in the DSWT domain. The enhanced infrared image is obtained by the inverse DSWT. The enhanced infrared image is divided into many small blocks, and the fractal dimensions of all the blocks are computed. A region of interest (ROI) is extracted by combining all the blocks which have similar fractal dimensions, and the ROI is segmented by a global threshold method. Man-made objects are efficiently separated from the infrared image by the proposed method.

  4. Analysis of fractal dimensions of rat bones from film and digital images

    Science.gov (United States)

    Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Yamauchi, M.

    2001-01-01

    OBJECTIVES: (1) To compare the effect of two different intra-oral image receptors on estimates of fractal dimension; and (2) to determine the variations in fractal dimensions between the femur, tibia and humerus of the rat and between their proximal, middle and distal regions. METHODS: The left femur, tibia and humerus from 24 4-6-month-old Sprague-Dawley rats were radiographed using intra-oral film and a charge-coupled device (CCD). Films were digitized at a pixel density comparable to the CCD using a flat-bed scanner. Square regions of interest were selected from proximal, middle, and distal regions of each bone. Fractal dimensions were estimated from the slope of regression lines fitted to plots of log power against log spatial frequency. RESULTS: The fractal dimensions estimates from digitized films were significantly greater than those produced from the CCD (P=0.0008). Estimated fractal dimensions of three types of bone were not significantly different (P=0.0544); however, the three regions of bones were significantly different (P=0.0239). The fractal dimensions estimated from radiographs of the proximal and distal regions of the bones were lower than comparable estimates obtained from the middle region. CONCLUSIONS: Different types of image receptors significantly affect estimates of fractal dimension. There was no difference in the fractal dimensions of the different bones but the three regions differed significantly.
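
    The study above estimates fractal dimension from the slope of a regression line fitted to log power against log spatial frequency. A 1-D sketch of the same idea (the study applies it to 2-D regions of interest; the dimension formula quoted in the comment assumes a fractional-Brownian-motion-like profile):

```python
import numpy as np

def spectral_slope(profile):
    """Slope of log power versus log spatial frequency for a 1-D profile.

    For an fBm-like profile with P(f) ~ f**(-beta) the returned slope
    is -beta, and one common convention gives D = (5 - beta) / 2.
    """
    n = len(profile)
    power = np.abs(np.fft.rfft(profile - np.mean(profile))) ** 2
    freqs = np.arange(1, n // 2)              # skip the DC term
    slope, _ = np.polyfit(np.log(freqs), np.log(power[1:n // 2]), 1)
    return slope
```

    Rougher profiles have flatter spectra (slope closer to 0) and hence larger estimated dimensions, which is the behavior the bone-texture comparison relies on.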

  5. Measurement of heterogeneous distribution on technegas SPECT images by three-dimensional fractal analysis

    International Nuclear Information System (INIS)

    Nagao, Michinobu; Murase, Kenya

    2002-01-01

    This review article describes a method for quantifying heterogeneous distribution on Technegas (99mTc-carbon particle radioaerosol) SPECT images by three-dimensional fractal analysis (3D-FA). Technegas SPECT was performed to quantify the severity of pulmonary emphysema. We delineated the SPECT images using five cut-offs (15, 20, 25, 30 and 35% of the maximal voxel radioactivity), and measured the total number of voxels in the areas surrounded by the contours obtained at each cut-off level. We calculated fractal dimensions from the relationship between the total number of voxels and the cut-off levels transformed into natural logarithms. The fractal dimension derived from 3D-FA is a relative and objective measurement that can assess the heterogeneous distribution on Technegas SPECT images. The fractal dimension correlates strongly with pulmonary function in patients with emphysema and documents well the overall and regional severity of emphysema. (author)

  6. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
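
    Of the IQ metrics listed, PSNR is the simplest to state; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(original, decompressed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    diff = np.asarray(original, float) - np.asarray(decompressed, float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

    A specified IQ such as PSNR = 50 dB then becomes a threshold on this value when sweeping each codec's quality parameter.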

  7. Predicting beauty: fractal dimension and visual complexity in art.

    Science.gov (United States)

    Forsythe, A; Nadal, M; Sheehy, N; Cela-Conde, C J; Sawey, M

    2011-02-01

    Visual complexity has been known to be a significant predictor of preference for artistic works for some time. The first study reported here examines the extent to which perceived visual complexity in art can be successfully predicted using automated measures of complexity. Contrary to previous findings, the most successful predictor of visual complexity was GIF compression. The second study examined the extent to which fractal dimension could account for judgments of perceived beauty. The fractal dimension measure accounts for more of the variance in judgments of perceived beauty in visual art than measures of visual complexity alone, particularly for abstract and natural images. Results also suggest that when colour is removed from an artistic image, observers are unable to make meaningful judgments as to its beauty. ©2010 The British Psychological Society.
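
    The idea of using compressed file size as a complexity measure can be sketched with any general-purpose compressor; here zlib stands in for the GIF compression used in the study (an assumption for illustration, not the authors' tool):

```python
import zlib
import numpy as np

def compression_complexity(img):
    """Visual-complexity proxy: compressed size over raw size.

    Flat images compress to almost nothing (ratio near 0); noisy,
    detail-rich images are nearly incompressible (ratio near 1).
    """
    raw = np.asarray(img, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)
```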

  8. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking artifacts of the DCT except in areas of very high contrast.

  9. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, video teleconferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method, so the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer. The digitized images were compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  10. Texture segmentation of non-cooperative spacecrafts images based on wavelet and fractal dimension

    Science.gov (United States)

    Wu, Kanzhi; Yue, Xiaokui

    2011-06-01

    With the increase of on-orbit manipulations and space conflicts, missions such as tracking and capturing target spacecraft have arisen. Unlike with cooperative spacecraft, fixing beacons or any other markers on the targets is impossible. Because the shape and geometric features of a non-cooperative spacecraft are unknown, in order to localize the target and obtain its attitude, we need to segment the target image and recognize the target against the background. The data volume and errors in subsequent procedures such as feature extraction and matching can also be reduced. The multi-resolution analysis of wavelet theory reflects human recognition of images from low resolution to high resolution. In addition, the spacecraft is the only man-made object in the image compared to the natural background, so differences will certainly be observed between the fractal dimensions of the target and the background. Combining the wavelet transform and the fractal dimension, in this paper we propose a new segmentation algorithm for images that contain complicated backgrounds such as the universe and planet surfaces. First, a Daubechies wavelet basis is applied to decompose the image along both the x axis and the y axis, obtaining four sub-images. Then, the fractal dimensions of the four sub-images are calculated using different methods; after analyzing the results, we choose differential box counting on the low-resolution sub-image as the principle to segment the texture that has the greatest divergence between the different sub-images. This paper also presents experimental results obtained using the above algorithm. It is demonstrated that an accurate texture segmentation result can be obtained using the proposed technique.
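
    Differential box counting, the method chosen above for the low-resolution sub-image, extends box counting to gray levels by stacking boxes along the intensity axis. A hedged sketch with illustrative parameters, not the paper's exact settings:

```python
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting dimension of a square grayscale image.

    At each scale s the gray axis is divided into boxes of height
    s * G / n; each s x s block contributes the number of boxes needed
    to span its min..max intensity, and D is the slope of
    log N(s) versus log (n / s).
    """
    img = np.asarray(img, dtype=float)
    n = img.shape[0]
    g = 256.0                                  # number of gray levels
    log_inv_r, log_n_r = [], []
    for s in sizes:
        h = s * g / n                          # box height at this scale
        total = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                block = img[i:i + s, j:j + s]
                total += int(block.max() // h) - int(block.min() // h) + 1
        log_inv_r.append(np.log(n / s))
        log_n_r.append(np.log(total))
    slope, _ = np.polyfit(log_inv_r, log_n_r, 1)
    return slope
```

    A flat image yields a dimension of 2, while rough textures approach 3; thresholding this value per block is one way to separate the man-made target from the natural background.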

  11. Plant Identification Based on Leaf Midrib Cross-Section Images Using Fractal Descriptors.

    Directory of Open Access Journals (Sweden)

    Núbia Rosa da Silva

    Full Text Available The correct identification of plants is a common necessity not only for researchers but also for the lay public. Recently, computational methods have been employed to facilitate this task; however, there are few studies addressing the wide diversity of plants occurring in the world. This study proposes to analyse images obtained from cross-sections of the leaf midrib using fractal descriptors. These descriptors are obtained from the fractal dimension of the object computed over a range of scales. In this way, they provide rich information regarding the spatial distribution of the analysed structure and, as a consequence, they measure the multiscale morphology of the object of interest. In biology, such morphology is of great importance because it is related to evolutionary aspects and is successfully employed to characterize and discriminate among different biological structures. Here, the fractal descriptors are used to identify plant species based on images of their leaves. A large number of samples are examined, comprising 606 leaf samples of 50 species from the Brazilian flora. The results are compared to other imaging methods in the literature and demonstrate that fractal descriptors are precise and reliable in the taxonomic process of plant species identification.

  12. Influence of water-soaking time on the acoustic emission characteristics and spatial fractal dimensions of coal under uniaxial compression

    Directory of Open Access Journals (Sweden)

    Jia Zheqiang

    2017-01-01

    Full Text Available The water-soaking time affects the physical and mechanical properties of coals, and the temporal and spatial evolution of acoustic emissions reflects the fracture damage process of rock. This study conducted uniaxial compression acoustic emissions tests of coal samples with different water-soaking times to investigate the influence of water-soaking time on the acoustic emissions characteristics and spatial fractal dimensions during the deformation and failure process of coals. The results demonstrate that the acoustic emissions characteristics decrease with increases in the water-soaking time. The acoustic emissions spatial fractal dimension changes from a single dimensionality reduction model to a fluctuation dimensionality reduction model, and the stress level of the initial descending point of the fractal dimension increases. With increases in the water-soaking time, the destruction of coal transitions from continuous intense failure throughout the process to a lower release of energy concentrated near the peak strength.

  13. Characteristics of Crushing Energy and Fractal of Magnetite Ore under Uniaxial Compression

    Science.gov (United States)

    Gao, F.; Gan, D. Q.; Zhang, Y. B.

    2018-03-01

    The crushing mechanism of magnetite ore is a key theoretical problem for controlling energy dissipation and machine crushing quality in ore processing. Uniaxial crushing tests were carried out to study the deformation mechanism and the laws of energy evolution, on the basis of which the crushing mechanism of magnetite ore was explored. Compression deformation proceeds through two main stages, a compaction stage and a plasticity-and-damage stage; the main transitional forms from inner damage to fracture are plastic deformation and stick-slip. During crushing, the plasticity-and-damage stage is the key link in energy absorption, since the specimen approaches a saturated energy state near the peak stress. The characteristics of specimen deformation and energy dissipation together reflect the state of pre-existing defects inside the raw magnetite ore and the damage process during loading. The rapid release of elastic energy, together with the work done by the press machine, thoroughly breaks the raw magnetite ore after the peak stress. Magnetite ore fragments exhibit statistical self-similarity, with a size threshold for fractal behaviour, under uniaxial squeezing crushing. The larger the ratio of releasable elastic energy to dissipated energy, and the faster the energy change rate, the better the fractal properties and crushing quality of the magnetite ore under uniaxial crushing.
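The usual number-size fractal relation behind fragment analyses like the one above is N(>r) ∝ r^(-D). Under that assumption (the abstract does not give the authors' actual fitting procedure), D can be estimated from sieve or image data with a log-log least-squares fit:

```python
import numpy as np

def fragment_fractal_dimension(sizes):
    """Estimate the fractal dimension D of a fragment population from the
    number-size relation N(> r) ~ r**(-D), by a log-log least-squares fit.
    `sizes` is a 1-D array of characteristic fragment sizes."""
    sizes = np.sort(np.asarray(sizes, dtype=float))
    r = sizes[:-1]                              # thresholds (skip largest: N would be 0)
    n_gt = len(sizes) - 1 - np.arange(len(r))   # fragments strictly larger than r
    slope, _ = np.polyfit(np.log(r), np.log(n_gt), 1)
    return -slope
```

A perfect power-law population returns its exponent exactly; real fragment data are fractal only between size thresholds such as the one noted above, so in practice the fit range should be restricted accordingly.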

  14. Time Series Analysis OF SAR Image Fractal Maps: The Somma-Vesuvio Volcanic Complex Case Study

    Science.gov (United States)

    Pepe, Antonio; De Luca, Claudio; Di Martino, Gerardo; Iodice, Antonio; Manzo, Mariarosaria; Pepe, Susi; Riccio, Daniele; Ruello, Giuseppe; Sansosti, Eugenio; Zinno, Ivana

    2016-04-01

    The fractal dimension is a significant geophysical parameter describing natural surfaces, representing the distribution of roughness over different spatial scales; in the case of volcanic structures, it has been related to the specific nature of materials and to the effects of active geodynamic processes. In this work, we present an analysis of the temporal behavior of fractal dimension estimates generated from multi-pass SAR images of the Somma-Vesuvio volcanic complex (South Italy). To this aim, we consider a COSMO-SkyMed data set of 42 stripmap images acquired from ascending orbits between October 2009 and December 2012. Starting from these images, after proper co-registration, we generate a three-dimensional stack composed of the corresponding fractal maps, ordered according to the acquisition dates. The time series of the pixel-by-pixel estimated fractal dimension values show that, over invariant natural areas, the fractal dimension does not change significantly; over urban areas, by contrast, it correctly assumes values outside the fractality range of natural surfaces and shows strong fluctuations. As a final result of our analysis, we generate a fractal map that includes only the areas where the fractal dimension is considered reliable and stable (i.e., whose standard deviation computed over the time series is reasonably small). The resulting fractal dimension map is then used to identify areas that are homogeneous from a fractal viewpoint. Indeed, the analysis of this map reveals the presence of two distinctive landscape units corresponding to Mt. Vesuvio and the Gran Cono, and the comparison with the (simplified) geological map clearly shows the presence in these two areas of volcanic products of different ages. The presented fractal dimension map analysis demonstrates the ability to characterize the degree of evolution of the monitored volcanic edifice and can be profitably extended in the future to other volcanic systems with

  15. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image
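The ROC area used as the figure of merit above can be computed directly from rating data via the equivalent Mann-Whitney statistic. This minimal sketch does not reproduce the study's full per-session ROC curve fitting, only the area calculation:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve from rating-scale data, computed as the
    Mann-Whitney statistic: P(score_pos > score_neg) + 0.5 * P(tie)."""
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((sp > sn) + 0.5 * (sp == sn)))
```

Perfectly separated ratings give an area of 1.0; indistinguishable ratings give 0.5, the chance level against which reader performance on original versus compressed images is compared.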

  16. Fractal dimension and image statistics of anal intraepithelial neoplasia

    International Nuclear Information System (INIS)

    Ahammer, H.; Kroepfl, J.M.; Hackl, Ch.; Sedivy, R.

    2011-01-01

    Research Highlights: → Human papillomaviruses cause anal intraepithelial neoplasia (AIN). → Digital image processing was carried out to classify the grades of AIN quantitatively. → The fractal dimension as well as grey value statistics was calculated. → Higher grades of AIN yielded higher values of the fractal dimension. → An automatic detection system is feasible. - Abstract: It is well known that human papillomaviruses (HPV) induce a variety of tumorous lesions of the skin. HPV-subtypes also cause premalignant lesions which are termed anal intraepithelial neoplasia (AIN). The clinical classification of AIN is of growing interest in clinical practice, due to increasing HPV infection rates throughout human population. The common classification approach is based on subjective inspections of histological slices of anal tissues with all the drawbacks of depending on the status and individual variances of the trained pathologists. Therefore, a nonlinear quantitative classification method including the calculation of the fractal dimension and first order as well as second order image statistical parameters was developed. The absolute values of these quantitative parameters reflected the distinct grades of AIN very well. The quantitative approach has the potential to decrease classification errors significantly and it could be used as a widely applied screening technique.

  17. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
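As a concrete illustration of the sparsity-exploiting reconstruction idea, the following toy sketch recovers a sparse signal from underdetermined random measurements with ISTA (iterative soft thresholding). The measurement model, problem sizes and regularisation weight are illustrative choices, not taken from any system discussed at the meeting:

```python
import numpy as np

def ista(A, y, lam=0.01, steps=3000):
    """Iterative soft-thresholding (ISTA) for the l1-regularised least squares
    problem min_x 0.5*||A x - y||^2 + lam*||x||_1, a workhorse formulation
    behind many compressive-sensing reconstructions."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)                # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy experiment: recover a 5-sparse signal of length 200 from 60 measurements.
rng = np.random.default_rng(1)
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k) + 2.0
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true
x_hat = ista(A, y)
```

With m well below n, least squares alone is hopeless, but the l1 penalty drives all but the truly active coefficients to zero, which is the essence of the compressed-sensing argument summarized above.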

  18. Visual tool for estimating the fractal dimension of images

    Science.gov (United States)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

    This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter); the fractal dimension can thus be represented as a function of the pixel colour code. The program was used for the study of cracks in paintings, as an additional tool to help a critic decide whether an artistic work is original or not.
    Program summary:
    Program title: Fractal Analysis v01
    Catalogue identifier: AEEG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29 690
    No. of bytes in distributed program, including test data, etc.: 4 967 319
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30M
    Classification: 14
    Nature of problem: Estimating the fractal dimension of images.
    Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
    Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
    Running time: In a first approximation, the algorithm is linear.

  19. Fractal analyses of osseous healing using Tuned Aperture Computed Tomography images

    International Nuclear Information System (INIS)

    Nair, M.K.; Nair, U.P.; Seyedain, A.; Webber, R.L.; Piesco, N.P.; Agarwal, S.; Mooney, M.P.; Groendahl, H.G.

    2001-01-01

    The aim of this study was to evaluate osseous healing in mandibular defects using fractal analyses of conventional radiographs and tuned aperture computed tomography (TACT; OrthoTACT, Instrumentarium Imaging, Helsinki, Finland) images. Eighty test sites on the inferior margins of rabbit mandibles were subjected to lesion induction and treated with one of the following: no treatment (controls); osteoblasts only; polymer matrix only; or an osteoblast-polymer matrix (OPM) combination. Images were acquired using conventional radiography and TACT, including unprocessed TACT (TACT-U) and iteratively restored TACT (TACT-IR). Healing was followed over time, with images acquired at 3, 6, 9, and 12 weeks post-surgery. The fractal dimension (FD) was computed within regions of interest in the defects using the TACT workbench. Results were analyzed for effects produced by imaging modality, treatment modality, time after surgery and lesion location. Histomorphometric data were available to assess ground truth. Significant differences (p<0.0001) were noted based on imaging modality, with TACT-IR recording the highest mean fractal dimension (MFD), followed by TACT-U and conventional images, in that order. Sites treated with OPM recorded the highest MFDs among all treatment modalities (p<0.0001). The highest MFD over time was recorded at 3 weeks and differed significantly from that at 12 weeks (p<0.035). Correlation of FD with the histomorphometric data was high (r=0.79; p<0.001), and the FD computed on TACT-IR showed the highest correlation with the histomorphometric data, establishing that TACT is a more efficient and accurate imaging modality for quantifying osseous changes within healing bony defects. (orig.)

  20. Comparative Survey of Ultrasound Images Compression Methods Dedicated to a Tele-Echography Robotic System

    National Research Council Canada - National Science Library

    Delgorge, C

    2001-01-01

    .... For the purpose of this work, we selected seven compression methods : Fourier Transform, Discrete Cosine Transform, Wavelets, Quadtrees Transform, Fractals, Histogram Thresholding, and Run Length Coding...

  1. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
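One plausible reading of the splitting-and-remapping idea (the paper's exact mapping is not reproduced here, so this sketch is a labeled assumption) is to separate each pixel's gray level at a threshold and stretch the two parts independently, so each sub-image presents a narrower dynamic range to the downstream coder:

```python
import numpy as np

def split_remap(img, t):
    """Split gray levels at threshold t into two sub-images and linearly
    remap each onto [0, 1]. Hypothetical illustration, not the authors'
    exact mapping. Returns everything needed to invert the transform."""
    img = np.asarray(img, dtype=float)
    vmax = img.max()
    mask = img >= t
    lo = np.where(mask, 0.0, img) / max(t, 1)            # low gray-level part
    hi = np.where(mask, img - t, 0.0) / max(vmax - t, 1)  # high gray-level part
    return lo, hi, mask, t, vmax

def merge(lo, hi, mask, t, vmax):
    """Invert split_remap, recovering the original gray levels."""
    return np.where(mask, hi * max(vmax - t, 1) + t, lo * max(t, 1))
```

The round trip is lossless; any compression loss would come only from the coder applied to each remapped sub-image, which is the point of the decomposition.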

  2. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to a range of sizes with JPEG and Wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, Wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood-vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for Wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.

  3. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed; suitable data filing systems and precise data compression are therefore needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven modes of compression parameters. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using a film reader with a CCD sensor in MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image, in several regions of interest, by four radiologists. Compression and decompression of radiological images were performed promptly with the local filing system. Image quality was affected much more by the compression ratio than by the parameter mode itself; in other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 for plain radiographs and about 8:1 for contrast studies. Image compression by this technique should be acceptable for diagnostic radiology. (author)

  4. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. Compression factors achieved by JPEG, PNG and the developed DCM were compared. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  5. A Double-Minded Fractal

    Science.gov (United States)

    Simoson, Andrew J.

    2009-01-01

    This article presents a fun activity of generating a double-minded fractal image for a linear algebra class once the idea of rotation and scaling matrices are introduced. In particular the fractal flip-flops between two words, depending on the level at which the image is viewed. (Contains 5 figures.)

  6. Fractals everywhere

    CERN Document Server

    Barnsley, Michael F

    2012-01-01

    "Difficult concepts are introduced in a clear fashion with excellent diagrams and graphs." - Alan E. Wessel, Santa Clara University. "The style of writing is technically excellent, informative, and entertaining." - Robert McCarty. This new edition of a highly successful text constitutes one of the most influential books on fractal geometry. An exploration of the tools, methods, and theory of deterministic geometry, the treatment focuses on how fractal geometry can be used to model real objects in the physical world. Two sixteen-page full-color inserts contain fractal images, and a bonus CD of

  7. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, together with the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continues to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  8. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye, and in this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness; this DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image, by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications
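The viewing-condition formula described above is not reproduced in the abstract, so as a simpler stand-in the widely used IJG quality scaling of the JPEG Annex K luminance table illustrates how a single parameter reshapes an entire DCT quantization matrix:

```python
import numpy as np

# Standard JPEG luminance base quantization table (Annex K of the JPEG spec).
BASE_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scaled_quant_matrix(quality):
    """IJG-style quality scaling (quality in 1..100) of the base table:
    larger entries mean coarser quantization of that DCT frequency."""
    quality = max(1, min(100, quality))
    s = 5000 // quality if quality < 50 else 200 - 2 * quality
    q = (BASE_Q * s + 50) // 100
    return np.clip(q, 1, 255)
```

Quality 50 reproduces the base table exactly, and quality 100 collapses every entry to 1 (essentially lossless quantization); a perceptual model like the one described above would instead derive each entry from display and viewing parameters.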

  9. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases. Since color gives objects a pleasing, natural appearance, three composite techniques for color image compression are implemented to achieve high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.

  10. AFM imaging and fractal analysis of surface roughness of AlN epilayers on sapphire substrates

    Energy Technology Data Exchange (ETDEWEB)

    Dallaeva, Dinara, E-mail: dinara.dallaeva@yandex.ru [Brno University of Technology, Faculty of Electrical Engineering and Communication, Physics Department, Technická 8, 616 00 Brno (Czech Republic); Ţălu, Ştefan [Technical University of Cluj-Napoca, Faculty of Mechanical Engineering, Department of AET, Discipline of Descriptive Geometry and Engineering Graphics, 103-105 B-dul Muncii Street, Cluj-Napoca 400641, Cluj (Romania); Stach, Sebastian [University of Silesia, Faculty of Computer Science and Materials Science, Institute of Informatics, Department of Biomedical Computer Systems, ul. Będzińska 39, 41-205 Sosnowiec (Poland); Škarvada, Pavel; Tománek, Pavel; Grmela, Lubomír [Brno University of Technology, Faculty of Electrical Engineering and Communication, Physics Department, Technická 8, 616 00 Brno (Czech Republic)

    2014-09-01

    Graphical abstract: - Highlights: • We determined the complexity of 3D surface roughness of aluminum nitride layers. • We used atomic force microscopy and analyzed their fractal geometry. • We determined the fractal dimension of surface roughness of aluminum nitride layers. • We determined the dependence of layer morphology on substrate temperature. - Abstract: The paper deals with AFM imaging and characterization of 3D surface morphology of aluminum nitride (AlN) epilayers on sapphire substrates prepared by magnetron sputtering. Due to the effect of temperature changes on epilayer's surface during the fabrication, a surface morphology is studied by combination of atomic force microscopy (AFM) and fractal analysis methods. Both methods are useful tools that may assist manufacturers in developing and fabricating AlN thin films with optimal surface characteristics. Furthermore, they provide different yet complementary information to that offered by traditional surface statistical parameters. This combination is used for the first time for measurement on AlN epilayers on sapphire substrates, and provides the overall 3D morphology of the sample surfaces (by AFM imaging), and reveals fractal characteristics in the surface morphology (fractal analysis)

  11. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05, and 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  12. Determination of the fractal dimension surface of the fracture from SEM images with assistance of the computer image quantitative analysis system

    International Nuclear Information System (INIS)

    Wawszczak, J.

    1999-01-01

    This paper presents a procedure for quantitative image analysis for determining the fractal dimension from SEM images of fracture surfaces of 0H14N5CuNb steel. Quenched and tempered samples of the steel were investigated after impact tests (at room temperature and at -85 °C). The method can be useful for analysing the local fractal dimension of any (non-oriented) parts of the fracture surface with differing topography. (author)

  13. Fractal characteristics of an asphaltene deposited heterogeneous surface

    International Nuclear Information System (INIS)

    Amin, J. Sayyad; Ayatollahi, Sh.; Alamdari, A.

    2009-01-01

    Several methods have been employed in recent years to investigate homogeneous surface topography based on image analysis, such as AFM (atomic force microscopy) and SEM (scanning electron microscopy). Fractal analysis of the images provides the fractal dimension of the surface, which is used as one of the most common surface indices. Surface topography has generally been considered to be mono-fractal. On the other hand, precipitation of organic materials on a rough surface and its irregular growth result in morphology alteration and convert a homogeneous surface into a heterogeneous one. In this case a mono-fractal description does not completely describe the nature of the altered surface. This work aims to investigate the topography alteration of a glass surface as a result of asphaltene precipitation and its growth at various pressures, using a bi-fractal approach. The experimental results on the deposited surfaces clearly indicated two regions of micro- and macro-asperities, namely surface types I and II, respectively. The fractal plots were indicative of bi-fractal behavior, and for each surface type one fractal dimension was calculated. The topography information of the surfaces was obtained by two imaging techniques, AFM and SEM. Results of the bi-fractal analysis demonstrated that topography alteration in surface type II (macro-asperities) is more evident than that in surface type I (micro-asperities). Compared to surface type II, a better correlation was observed in surface type I between the fractal dimensions inferred from the AFM images (D_A) and those from the SEM images (D_S).

  14. Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image

    Science.gov (United States)

    Safitri, Diah Wahyu; Juniati, Dwi

    2017-08-01

    Diabetes Mellitus (DM) is a metabolic disorder in which the pancreas produces inadequate insulin, or in which the body resists insulin action, so that the blood glucose level is high. One of the most common complications of diabetes mellitus is diabetic retinopathy, which can lead to vision problems. Diabetic retinopathy can be recognized by abnormalities in the eye fundus, characterized by microaneurysms, hemorrhage, hard exudate, cotton wool spots, and venous changes. Diabetic retinopathy is graded according to the abnormalities present in the eye fundus: grade 1 if there is a microaneurysm only; grade 2 if there are a microaneurysm and a hemorrhage; and grade 3 if there are microaneurysm, hemorrhage, and neovascularization. This study proposes a method for processing eye fundus images to classify diabetic retinopathy using fractal analysis and K-Nearest Neighbor (KNN). The first phase was an image segmentation process using the green channel, CLAHE, morphological opening, a matched filter, masking, and morphological opening of the binary image. After the segmentation process, the fractal dimension was calculated using the box-counting method and the fractal dimension values were analysed to classify the diabetic retinopathy. Tests were carried out using k-fold cross-validation with k=5, and in each test 10 different values of the neighbourhood size K of KNN were tried. The accuracy of the method is 89.17% with K=3 or K=4, the best result among the K values tested. Based on these results, it can be concluded that the classification of diabetic retinopathy using fractal analysis and KNN has good performance.
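The KNN step on fractal-dimension features described above can be sketched in a few lines. The toy feature values below are made up for illustration and are not the study's measured dimensions:

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=3):
    """Plain k-nearest-neighbour majority vote on scalar fractal-dimension
    features, mirroring the FD + KNN pipeline described above."""
    d = np.abs(np.asarray(train_X, dtype=float) - x)   # distances to training FDs
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]   # labels of k closest samples
    labels, votes = np.unique(nearest, return_counts=True)
    return labels[np.argmax(votes)]

# Hypothetical training set: fractal dimensions labelled with grades 1-3.
train_X = [1.30, 1.32, 1.45, 1.47, 1.60, 1.62]
train_y = [1, 1, 2, 2, 3, 3]
```

In the actual study the feature would be the box-counting dimension of the segmented vasculature, and K would be tuned by the k-fold cross-validation described above.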

  15. Hybrid 3D Fractal Coding with Neighbourhood Vector Quantisation

    Directory of Open Access Journals (Sweden)

    Zhen Yao

    2004-12-01

    Full Text Available A hybrid 3D compression scheme which combines fractal coding with neighbourhood vector quantisation for video and volume data is reported. While fractal coding exploits the redundancy present in different scales, neighbourhood vector quantisation, as a generalisation of translational motion compensation, is a useful method for removing both intra- and inter-frame coherences. The hybrid coder outperforms most of the fractal coders published to date while the algorithm complexity is kept relatively low.

  16. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly, and the commonly used compression methods do not produce satisfactory results. Methods: In this paper, based on previously reported experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the main element of the human visual system (HVS) under study; the main design points of the HVS model are then presented. Building on the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization stages, and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  18. FRACTAL ANALYSIS OF TRABECULAR BONE: A STANDARDISED METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Ian Parkinson

    2011-05-01

    Full Text Available A standardised methodology for the fractal analysis of histological sections of trabecular bone has been established. A modified box-counting method has been developed for use on a PC-based image analyser (Quantimet 500MC, Leica Cambridge). The effect of image analyser settings, magnification, image orientation and threshold levels was determined. The range of scale over which trabecular bone is effectively fractal was also determined, and a method was formulated to objectively calculate more than one fractal dimension from the modified Richardson plot. The results show that magnification, image orientation and threshold settings have little effect on the estimate of fractal dimension. Trabecular bone has a lower limit below which it is not fractal (λ < 25 μm), and the upper limit is 4250 μm. There are three distinct fractal dimensions for trabecular bone (sectional fractals), with magnitudes greater than 1.0 and less than 2.0. It has been shown that trabecular bone is effectively fractal over a defined range of scale and that, within this range, there is more than one fractal dimension, each describing spatial structural entities. Fractal analysis is a model-independent method for describing a complex multifaceted structure, which can be adapted for the study of other biological systems, whether at the cell, tissue or organ level, and complements conventional histomorphometric and stereological techniques.

  19. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  20. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available Multiview video, one of the main types of three-dimensional (3D) video signal, is captured by a set of video cameras from various viewpoints and has attracted much interest recently, making data compression for multiview video a major issue. In this paper, a novel high-efficiency fractal multiview video codec is proposed. First, to compress the anchor viewpoint video, an intraframe algorithm based on the H.264/AVC intraprediction modes is proposed, together with a combining fractal and motion compensation (CFMC) algorithm in which range blocks are predicted from domain blocks in the previously decoded frame using translational motion with a grey-value transformation. A temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are then designed to compress the multiview video data. The proposed fractal multiview video codec can exploit temporal and spatial correlations adequately. Experimental results show that it obtains about 0.36 dB higher decoding quality and a 36.21% lower encoding bitrate compared with JMVC8.5, while saving 95.71% of the encoding time. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.

  1. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on adaptive arithmetic coding. Combined with an adaptive probability model and predictive coding, the algorithm increases the compression rate while ensuring the quality of the decoded image. Using an adaptive model for each encoded image block dynamically estimates the probability of that block, and the decoded image block can be recovered exactly from the code book information. The results show that the adopted adaptive arithmetic coding algorithm greatly improves the image compression rate and is an effective compression technology.

  2. Detecting abrupt dynamic change based on changes in the fractal properties of spatial images

    Science.gov (United States)

    Liu, Qunqun; He, Wenping; Gu, Bin; Jiang, Yundi

    2017-10-01

    Many abrupt climate change events often cannot be detected in a timely manner by conventional abrupt-detection methods until a few years after they have occurred. The reason for this lag is that abundant, long-term observational data are required for accurate abrupt change detection by these methods, especially for the detection of a regime shift, so they cannot help us understand and forecast the evolution of the climate system in a timely manner. Spatial images generated by a coupled spatiotemporal dynamical model contain more information about a dynamic system than a single time series, and we find that spatial images show fractal properties. These fractal properties can be quantitatively characterized by the Hurst exponent, which can be estimated by two-dimensional detrended fluctuation analysis (TD-DFA). On this basis, TD-DFA is used to detect an abrupt dynamic change in a coupled spatiotemporal model. The results show that the TD-DFA method can effectively detect abrupt parameter changes in the coupled model by monitoring changes in the fractal properties of spatial images. The present method provides a new way to achieve timely and efficient detection of abrupt dynamic changes.
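    A minimal sketch of the two-dimensional DFA idea follows (an illustration of the general technique, not the authors' TD-DFA code): partition the image into s × s windows, integrate each window into a cumulative-sum profile, remove a least-squares plane, and read the scaling exponent off the slope of log F(s) versus log s. In this particular profile construction an uncorrelated random field yields a slope near 1, and smoother, correlated surfaces give larger slopes.

```python
import numpy as np

def dfa2d(img, sizes=(8, 16, 32, 64)):
    """2D DFA sketch: in each s x s window, build the cumulative-sum
    profile, remove a least-squares plane, and average the residual
    variance; the exponent is the slope of log F(s) vs log s."""
    img = np.asarray(img, float) - np.mean(img)
    fluctuations = []
    for s in sizes:
        y, x = np.mgrid[0:s, 0:s]
        # Design matrix for the plane fit a*x + b*y + c.
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(s * s)])
        res = []
        H, W = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        for i in range(0, H, s):
            for j in range(0, W, s):
                # Local profile: double cumulative sum of the window.
                u = np.cumsum(np.cumsum(img[i:i+s, j:j+s], 0), 1).ravel()
                coef, *_ = np.linalg.lstsq(A, u, rcond=None)
                res.append(np.mean((u - A @ coef) ** 2))
        fluctuations.append(np.sqrt(np.mean(res)))
    slope, _ = np.polyfit(np.log(sizes), np.log(fluctuations), 1)
    return slope

# Uncorrelated field: slope should come out near 1 in this sketch.
rng = np.random.default_rng(0)
alpha = dfa2d(rng.standard_normal((256, 256)))
```

    Monitoring this exponent over a sliding time window of model snapshots is the essence of the detection scheme: an abrupt parameter change shows up as a jump in the estimated exponent.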

  3. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... Figure 2. Cross-sectional view of medical image (statistical representation). ... The Integer Wavelet Transform (IWT) is used to have lossless processing.

  4. Fractal Dimension of Fracture Surface in Rock Material after High Temperature

    Directory of Open Access Journals (Sweden)

    Z. Z. Zhang

    2015-01-01

    Full Text Available Experiments on granite specimens exposed to different high temperatures were conducted under uniaxial compression, and the fracture surfaces were observed by scanning electron microscope (SEM). The fractal dimensions of the fracture surfaces at increasing temperatures were calculated; the fractal dimension of the fracture surface lies between 1.44 and 1.63, and its value increases approximately exponentially with temperature. There is a quadratic polynomial relationship between rockburst tendency and the fractal dimension of the fracture surface, from which a fractal dimension threshold can be obtained. Below the threshold value, rockburst tendency correlates positively with fractal dimension; when the fractal dimension is greater than the threshold value, the correlation is inverse.

  5. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  6. Added soft tissue contrast using signal attenuation and the fractal dimension for optical coherence tomography images of porcine arterial tissue

    International Nuclear Information System (INIS)

    Flueraru, C; Mao, Y; Chang, S; Popescu, D P; Sowa, M G

    2010-01-01

    Optical coherence tomography (OCT) images of left-descending coronary tissues harvested from three porcine specimens were acquired with a home-built swept-source OCT setup. Although OCT is capable of acquiring high-resolution circumferential images of vessels, many distinct histological features of a vessel have comparable optical properties, leading to poor contrast in OCT images. Two classification methods were tested in this report for the purpose of enhancing contrast between soft-tissue components of porcine coronary vessels. One method analyzed the attenuation of the OCT signal as a function of light penetration into the tissue; we demonstrated that by analyzing the signal attenuation in this manner we were able to differentiate two media sub-layers with different orientations of the smooth muscle cells. The other classification method was fractal analysis, implemented as a box-counting (fractal dimension) image-processing code and used as a tool to differentiate and quantify variations in tissue texture at various locations in the OCT images. The calculated average fractal dimensions took different values in distinct regions of interest (ROIs) within the imaged coronary samples. Compared with the results obtained using the attenuation of the OCT signal, fractal analysis demonstrated better classification potential for distinguishing amongst the tissue ROIs.

  7. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered by noisy communication channels or hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change, since analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect, as well as recover from, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks; lossless compression reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio; LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
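    The LZW stage is easy to sketch. A textbook dictionary-based implementation (an illustration, not the paper's code) looks like this; it is exactly invertible, which is what makes it suitable for watermark payloads that cannot tolerate a single lost bit:

```python
def lzw_compress(data: bytes) -> list:
    """Textbook LZW: grow a dictionary of byte strings seen so far and
    emit one integer code per longest known prefix."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # new phrase gets next code
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Inverse LZW: rebuild the same dictionary from the code stream."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The only code that can be unknown is the "KwKwK" special case.
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

    On repetitive payloads the code stream is shorter than the input even before the codes are bit-packed, and decompression reproduces the payload bit for bit.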

  8. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  9. A concise introduction to image processing using C++

    CERN Document Server

    Wang, Meiqing

    2008-01-01

    Image recognition has become an increasingly dynamic field with new and emerging civil and military applications in security, exploration, and robotics. Written by experts in fractal-based image and video compression, A Concise Introduction to Image Processing using C++ strengthens your knowledge of fundamental principles in image acquisition, conservation, processing, and manipulation, allowing you to easily apply these techniques in real-world problems. The book presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noi

  10. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, before transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning-by-screening operation applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain, and is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.

  11. Zone specific fractal dimension of retinal images as predictor of stroke incidence.

    Science.gov (United States)

    Aliahmad, Behzad; Kumar, Dinesh Kant; Hao, Hao; Unnikrishnan, Premith; Che Azemin, Mohd Zulfaezal; Kawasaki, Ryo; Mitchell, Paul

    2014-01-01

    Fractal dimensions (FDs) are frequently used for summarizing the complexity of the retinal vasculature. However, previous techniques on this topic were not zone specific. A new methodology to measure the FD of a specific zone in retinal images has been developed and tested as a marker for stroke prediction. Higuchi's fractal dimension was measured in the circumferential direction (FDC) with respect to the optic disk (OD), in three concentric regions between the OD boundary and 1.5 OD diameters from its margin. The significance of its association with future stroke events was tested using the Blue Mountains Eye Study (BMES) database and compared against the spectrum fractal dimension (SFD) and the box-counting (BC) dimension. Kruskal-Wallis analysis revealed FDC as a better predictor of stroke (H = 5.80, P = 0.016, α = 0.05) compared with SFD (H = 0.51, P = 0.475, α = 0.05) and BC (H = 0.41, P = 0.520, α = 0.05), with an overall lower median value for the cases compared to the control group. This work has shown that there is a significant association between the zone-specific FDC of eye fundus images and future stroke events, while the difference is not significant when other FD methods are employed.
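    Higuchi's estimator operates on a one-dimensional series (here, intensities sampled along circles around the optic disk). A minimal sketch of the algorithm (not the study's implementation): build decimated sub-series x[m], x[m+k], x[m+2k], ..., average their normalized curve lengths L(k), and fit log L(k) ~ -D log k.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1D series (kmax must be much
    smaller than len(x) so every sub-series has enough points)."""
    x = np.asarray(x, float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            # Normalization factor from Higuchi (1988).
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(np.abs(np.diff(x[idx])).sum() * norm / k)
        lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(np.arange(1, kmax + 1)), np.log(lk), 1)
    return -slope

# A straight line has dimension 1; white noise approaches 2.
fd_line = higuchi_fd(np.linspace(0.0, 1.0, 1000))
fd_noise = higuchi_fd(np.random.default_rng(1).standard_normal(1000))
```

    A smooth, regular series sits near 1 and an erratic one near 2, which is why the measure summarizes vascular complexity along each circular scan.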

  12. Analysis of the fractal dimension of volcano geomorphology through Synthetic Aperture Radar (SAR) amplitude images acquired in C and X band.

    Science.gov (United States)

    Pepe, S.; Di Martino, G.; Iodice, A.; Manzo, M.; Pepe, A.; Riccio, D.; Ruello, G.; Sansosti, E.; Tizzani, P.; Zinno, I.

    2012-04-01

    In the last two decades several aspects of volcanic activity have been analyzed in terms of fractal parameters that effectively describe the geometry of natural objects. More specifically, this research has aimed at identifying (1) the power laws that govern magma fragmentation processes, (2) the energy of explosive eruptions, and (3) the distribution of the associated earthquakes. In this paper, the study of volcano morphology via satellite images is dealt with; in particular, we use the complete forward model developed by some of the authors (Di Martino et al., 2012) that links the stochastic characterization of amplitude Synthetic Aperture Radar (SAR) images to the fractal dimension of the imaged surfaces, modelled via fractional Brownian motion (fBm) processes. Based on the inversion of such a model, a SAR image post-processing has been implemented (Di Martino et al., 2010) that allows retrieving the fractal dimension of the observed surfaces, dictating the distribution of the roughness over different spatial scales. The fractal dimension of volcanic structures has been related to the specific nature of materials and to the effects of active geodynamic processes. Hence, the possibility of estimating the fractal dimension from a single amplitude-only SAR image is of fundamental importance for the characterization of volcano structures and can also be very helpful for monitoring and crisis-management activities in case of eruptions and similar natural hazards. The implemented SAR image processing performs the extraction of the point-by-point fractal dimension of the scene observed by the sensor, providing, as an output product, the map of the fractal dimension of the area of interest. In this work, such an analysis is performed on Cosmo-SkyMed, ERS-1/2 and ENVISAT images of active stratovolcanoes in different geodynamic contexts, such as Mt. Somma-Vesuvio, Mt. Etna, Vulcano and Stromboli in Southern Italy, Shinmoe

  13. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  14. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    Directory of Open Access Journals (Sweden)

    Carmen Alina Lupaşcu

    2015-08-01

    Full Text Available AIM: To investigate and quantify changes in the branching patterns of the retinal vascular network in diabetes using the fractal analysis method. METHODS: This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, was examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS: It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images, but higher than the corresponding values for moderate NPDR images; the lowest values were found for severe NPDR images. CONCLUSION: The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with

  15. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images

    International Nuclear Information System (INIS)

    N'Diaye, Mambaye; Degeratu, Cristinel; Bouler, Jean-Michel; Chappard, Daniel

    2013-01-01

    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski–Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Highlights: ► Interconnected porosity is important in the development of bone substitutes. ► Porosity was evaluated by 2D and 3D morphometry on microCT images. ► Euclidean and fractal descriptors measure interconnectivity on 2D microCT images. ► Lacunarity and succolarity were evaluated on a series of porous biomaterials

  16. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.
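    As a small illustration of the energy compaction these transforms provide (illustrative only, not from the surveyed paper), the sketch below builds the orthonormal 8 × 8 DCT-II matrix and transforms a smooth block; almost all of the signal energy collapses into a handful of low-frequency coefficients, which is what makes coarse quantization of the remainder cheap.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C @ v transforms a length-n vector,
    and C @ B @ C.T transforms an n x n block."""
    i = np.arange(n)
    C = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)   # DC row normalization
    C[1:] *= np.sqrt(2.0 / n)  # AC rows normalization
    return C

C = dct_matrix(8)
# A smooth gradient block, typical of natural-image content.
block = np.add.outer(np.arange(8.0), np.arange(8.0))
coeffs = C @ block @ C.T
# Orthonormality means total energy is preserved (Parseval),
# but it is redistributed into a few low-frequency coefficients.
total_energy = (coeffs ** 2).sum()
top4_energy = np.sort((coeffs ** 2).ravel())[-4:].sum()
```

    For this block the four largest coefficients carry over 99% of the energy; discarding or coarsely quantizing the other 60 coefficients is what delivers the compression.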

  17. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system, without the compression, has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system covers all pediatric inpatients and includes all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16-megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to begin in February 1988.

  18. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid wrong diagnoses caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder that compresses digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed image is further compressed with the run-length coder. Several optimization schemes are used in these coders to increase coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
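The two-stage structure (a lossy first pass plus a losslessly coded residual) can be sketched as follows. Uniform quantisation stands in for the paper's embedded wavelet coder and zlib for its run-length coder; only the lossy-plus-residual architecture itself comes from the abstract.

```python
import zlib

STEP = 8  # quantisation step of the crude lossy stage (stand-in for the wavelet coder)

def hybrid_encode(pixels):
    """Encode 8-bit pixels losslessly as a lossy approximation plus a residual."""
    q = [round(p / STEP) for p in pixels]                 # lossy stage: quantisation indices
    residual = [p - STEP * i for p, i in zip(pixels, q)]  # small values, highly compressible
    # pack indices then offset residuals (each fits in one byte) and entropy-code
    payload = bytes(q) + bytes(r + STEP // 2 for r in residual)
    return zlib.compress(payload)

def hybrid_decode(blob, n):
    raw = zlib.decompress(blob)
    q, res = raw[:n], raw[n:]
    return [STEP * i + (r - STEP // 2) for i, r in zip(q, res)]

pixels = [120, 121, 119, 122, 200, 201, 199, 198, 0, 255]
blob = hybrid_encode(pixels)
assert hybrid_decode(blob, len(pixels)) == pixels  # lossless overall, despite the lossy stage
```

The point of the design is that the residual has a tiny dynamic range, so the second-stage entropy coder compresses it far better than it would the raw pixels.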

  19. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact (lossless) LZW method and the approximate cosine-transform method. The results showed that the approximate method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)

  20. Investigation into How 8th Grade Students Define Fractals

    Science.gov (United States)

    Karakus, Fatih

    2015-01-01

    The analysis of 8th grade students' concept definitions and concept images can provide information about their mental schema of fractals. There is limited research on students' understanding and definitions of fractals. Therefore, this study aimed to investigate the elementary students' definitions of fractals based on concept image and concept…

  1. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  2. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, all transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing that achieves high-resolution transient imaging with a capture process of only a few seconds. A picosecond laser emits a series of equally spaced pulses while the synchronized SPAD camera's detection gate window is given a precise phase delay at each cycle. After capturing enough points, we can assemble the whole signal. By inserting a DMD into the system, we modulate all frames of data with binary random patterns so that a super-resolution transient/3D image can be reconstructed later. Because the low fill factor of the SPAD sensor makes the compressive sensing problem ill-conditioned, we designed and fabricated a diffractive microlens array. We also propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces both the number of required measurements and the reconstruction time: with an array, only small patches and a few measurements need to be reconstructed, whereas recovering a high-resolution image from a single sensor is much harder. In this thesis, we evaluate the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and that our algorithm works well even when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.
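The compressive-sensing reconstruction step can be illustrated with a toy sparse-recovery example. The Gaussian sensing matrix, the classic ISTA solver, and all dimensions below are stand-ins for the thesis's binary DMD patterns and Poisson-aware algorithm, chosen only to show the principle:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                                      # signal length, number of measurements
x_true = np.zeros(n)
x_true[[5, 17, 40, 59]] = [1.5, -2.0, 1.0, -1.5]   # sparse "scene"
A = rng.standard_normal((m, n)) / np.sqrt(m)       # sensing matrix (toy stand-in for DMD patterns)
y = A @ x_true                                     # compressive measurements (m < n)

# ISTA: iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    x = x - (A.T @ (A @ x - y)) / L                # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

# debias: least squares restricted to the detected support
support = np.flatnonzero(np.abs(x) > 0.25)
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
print(np.flatnonzero(np.abs(x_hat) > 1e-6))        # recovered support
```

With far fewer measurements than unknowns (32 vs 64), the sparsity prior is what makes the recovery possible; this is the same leverage the thesis applies per patch of the SPAD array.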

  3. Biometric feature extraction using local fractal auto-correlation

    International Nuclear Information System (INIS)

    Chen Xi; Zhang Jia-Shu

    2014-01-01

    Image texture feature extraction is a classical means for biometric recognition. To extract effective texture features for matching, we utilize local fractal auto-correlation to construct an effective image texture descriptor. Three main steps are involved in the proposed scheme: (i) using a two-dimensional Gabor filter to extract the texture features of biometric images; (ii) calculating the local fractal dimension of the Gabor features under different orientations and scales using a fractal auto-correlation algorithm; and (iii) concatenating the local fractal dimensions of the Gabor features under different orientations and scales into a single feature vector for matching. Experiments and analyses show that the proposed scheme is an efficient biometric feature extraction approach. (condensed matter: structural, mechanical, and thermal properties)
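Several records in this listing rest on estimating a fractal dimension from image data. The standard box-counting estimate can be sketched as follows, applied here to a synthetic Sierpinski point set (not to the paper's Gabor features, and the scales below are arbitrary choices):

```python
import math
import random

def box_count_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set by a
    log-log least-squares fit of box count versus box size."""
    logs = []
    for s in sizes:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    # slope of the fitted line log N = D * log(1/s) + c is the dimension D
    return sum((a - mx) * (b - my) for a, b in logs) / sum((a - mx) ** 2 for a, _ in logs)

# Sierpinski triangle via the chaos game; its theoretical dimension is log 3 / log 2 ~ 1.585.
random.seed(1)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
px, py = 0.1, 0.1
pts = []
for _ in range(60000):
    vx, vy = random.choice(verts)
    px, py = (px + vx) / 2, (py + vy) / 2
    pts.append((px, py))

d = box_count_dimension(pts, sizes=[1/4, 1/8, 1/16, 1/32, 1/64])
print(f"estimated dimension: {d:.2f}")
```

The estimate lands near the theoretical 1.585; on real texture images the same fit is applied to binarised or thresholded pixel sets rather than generated points.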

  4. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    Science.gov (United States)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to ‘complex’ visual stimuli. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.

  5. Thermal properties of bodies in fractal and cantorian physics

    International Nuclear Information System (INIS)

    Zmeskal, Oldrich; Buchnicek, Miroslav; Vala, Martin

    2005-01-01

    Fundamental laws describing heat diffusion in a fractal environment are discussed. It is shown that for three-dimensional space the heat radiation process occurs in structures within a certain range of fractal dimension D, while heat conduction and convection have the upper hand (generally in real gases). To describe the heat diffusion, a new law has been formulated. Its validity is more general than that of Planck's radiation law, which is based on the quantum heat diffusion theory. The energy density w = f(K, D), where K is the fractal measure and D is the fractal dimension, exhibits a typical peaked dependency in agreement with Planck's radiation law and with the experimental data for the absolutely black body in the energy interval kT m m kT m ∼ 1.5275. The agreement of the fractal model with experimental outcomes is documented for the spectral characteristics of the Sun. The properties of stellar objects (black holes, relict radiation, etc.) and the elementary particle fields and the interactions between them (quarks, leptons, mesons, baryons, bosons and their coupling constants) will be discussed with the help of the described mathematical apparatus in our further contributions. The general gas law for real gases has also been formulated, in a form more widely applicable than the commonly used laws (e.g. van der Waals, Berthelot, Kammerlingh-Onnes). The energy density, which in this case is represented by the gas pressure p = f(K, D), can in general take complex values and describes the behaviour of a real (cohesive) gas in the interval D ∈ (1, 3]. The gas behaves as an ideal one only for particular values of the fractal dimension (where the energy density is real-valued). Again, it is shown that above the critical temperature (kT > K h c) and for fractal dimension D_m > 2.0269 the results are comparable to the kinetic theory of the real (ideal) gas (van der Waals equation of state, compressibility factor, Boyle's temperature). For the critical temperature (K h c = kT_r) the compressibility

  6. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assume fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that are optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. One advantage of the compression is its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
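The controlled-lossy idea behind LERC, quantising so that every reconstructed value stays within a user-chosen maximum error, then entropy-coding the result, can be sketched as follows. This is a toy illustration of the error-bounded principle only, not the actual LERC block format:

```python
import struct
import zlib

def encode_max_error(values, max_error):
    """Quantise so each reconstructed value is within +/- max_error, then entropy-code."""
    step = 2.0 * max_error                       # bin width guaranteeing the error bound
    q = [round(v / step) for v in values]
    payload = b"".join(struct.pack("<i", n) for n in q)
    return zlib.compress(payload), step

def decode_max_error(blob, step, count):
    raw = zlib.decompress(blob)
    return [struct.unpack_from("<i", raw, 4 * i)[0] * step for i in range(count)]

# hypothetical elevation samples in metres
elev = [101.37, 101.41, 101.44, 102.02, 250.50, 250.52]
blob, step = encode_max_error(elev, max_error=0.05)
rec = decode_max_error(blob, step, len(elev))
assert all(abs(a - b) <= 0.05 + 1e-9 for a, b in zip(elev, rec))
```

Tightening `max_error` trades compressed size for fidelity, which is exactly the lossless-to-controlled-lossy dial the paper argues cloud imagery needs.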

  7. Discriminating between photorealistic computer graphics and natural images using fractal geometry

    Institute of Scientific and Technical Information of China (English)

    PAN Feng; CHEN JiongBin; HUANG JiWu

    2009-01-01

    Rendering technology in computer graphics (CG) is now capable of producing highly photorealistic images, giving rise to the problem of how to distinguish CG images from natural images. Several methods have been proposed to solve this problem. In this paper, we give a novel method from a new point of view: image perception. Although photorealistic CG images are very similar to natural images, they are surrealistic and smoother than natural images, which leads to a difference in perception. Part of the features are derived from the fractal dimension to capture the difference in color perception between CG images and natural images, and several generalized dimensions are used as the remaining features to capture the difference in coarseness. The effect of these features is verified by experiments. The average accuracy is over 91.2%.

  8. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop, Daniel A.

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  9. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  10. Fractals in several electrode materials

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chunyong, E-mail: zhangchy@njau.edu.cn [Department of Chemistry, College of Science, Nanjing Agricultural University, Nanjing 210095 (China); Suzhou Key Laboratory of Environment and Biosafety, Suzhou Academy of Southeast University, Dushuhu lake higher education town, Suzhou 215123 (China); Wu, Jingyu [Department of Chemistry, College of Science, Nanjing Agricultural University, Nanjing 210095 (China); Fu, Degang [Suzhou Key Laboratory of Environment and Biosafety, Suzhou Academy of Southeast University, Dushuhu lake higher education town, Suzhou 215123 (China); State Key Laboratory of Bioelectronics, Southeast University, Nanjing 210096 (China)

    2014-09-15

    Highlights: • Fractal geometry was employed to characterize three important electrode materials. • The surfaces of all studied electrodes were shown to be very rough. • The fractal dimensions of BDD and ACF were scale dependent. • The MMO film was more uniform than BDD and ACF in terms of fractal structure. - Abstract: In the present paper, the fractal properties of boron-doped diamond (BDD), mixed metal oxide (MMO) and activated carbon fiber (ACF) electrodes have been studied by SEM imaging at different scales. The three materials are self-similar, with mean fractal dimensions in the range of 2.6–2.8, confirming that they all exhibit very rough surfaces. Specifically, it is found that the MMO film is more uniform in terms of fractal structure than BDD and ACF. These intriguing characteristics make the electrodes ideal candidates for high-performance decontamination processes.

  11. Fractal and Morphological Characteristics of Single Marble Particle Crushing in Uniaxial Compression Tests

    Directory of Open Access Journals (Sweden)

    Yidong Wang

    2015-01-01

    Full Text Available Crushing of rock particles is a phenomenon commonly encountered in geotechnical engineering practice. It is, however, difficult to study the crushing of rock particles using classical theory, because the physical structure of the particles is complex and irregular. This paper aims at evaluating the fractal and morphological characteristics of single rock particles. A large number of crushing tests are conducted on single rock particles. The force-displacement curves and the particle size distributions (PSD) of the crushed particles are analysed based on these tests. Particle shape plays an important role in both the micro- and macroscale responses of a granular assembly. The PSDs of an assortment of rocks are analysed by fractal methods, and the fractal dimension is obtained. A theoretical formula for particle crushing strength is derived using the fractal model, and a simple method is proposed for predicting the probability of particle survival based on Weibull statistics. Based on a few physical assumptions, simple equations are derived for determining the particle crushing energy. The results of applying these equations are tested against actual experimental data and prove to be very consistent. Fractal theory is therefore applicable to the analysis of particle crushing.
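The Weibull survival statistics mentioned above take the standard form P_s(σ) = exp[-(σ/σ₀)^m], where σ₀ is the characteristic stress at which about 37% of particles survive and m is the Weibull modulus. A minimal sketch; the parameter values below are hypothetical, not the paper's fitted marble data:

```python
import math

def survival_probability(stress, sigma0, m):
    """Weibull survival probability for single-particle crushing:
    P_s = exp(-(stress / sigma0) ** m)."""
    return math.exp(-((stress / sigma0) ** m))

sigma0, m = 30.0, 2.5   # MPa and modulus: hypothetical illustration values
for s in (10.0, 30.0, 50.0):
    print(f"{s:5.1f} MPa -> P_s = {survival_probability(s, sigma0, m):.3f}")
```

At σ = σ₀ the survival probability is exactly exp(-1) ≈ 0.368 regardless of m; a larger modulus m means a sharper transition from "almost all survive" to "almost none survive" around σ₀.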

  12. FONT DISCRIMINATION USING FRACTAL DIMENSIONS

    Directory of Open Access Journals (Sweden)

    S. Mozaffari

    2014-09-01

    Full Text Available One of the related problems in OCR systems is the discrimination of fonts in machine-printed document images. Solving this task improves the performance of general OCR systems. The methods proposed in this paper are based on various fractal dimensions for font discrimination. First, some predefined fractal dimensions are combined with directional methods to enhance font differentiation. Then, a novel fractal dimension is introduced in this paper for the first time. Our feature extraction methods, which treat font recognition as texture identification, are independent of document content. Experimental results on different pages written in several font types show that fractal geometry can overcome the complexities of the font recognition problem.

  13. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient method is presented to obtain the fused image. The simulations demonstrate that the combined sparsifying transforms achieve better results, in terms of both subjective visual effect and objective evaluation indexes, than a single sparsifying transform for compressive image fusion.

  14. Fractal analysis of cervical intraepithelial neoplasia.

    Directory of Open Access Journals (Sweden)

    Markus Fabrizii

    Full Text Available INTRODUCTION: Cervical intraepithelial neoplasias (CIN) represent precursor lesions of cervical cancer. These neoplastic lesions are traditionally subdivided into three categories, CIN 1, CIN 2, and CIN 3, using microscopical criteria. The relation between the grade of cervical intraepithelial neoplasia (CIN) and its fractal dimension was investigated to establish a basis for an objective diagnosis using the proposed method. METHODS: Classical evaluation of the tissue samples was performed by an experienced gynecologic pathologist. Tissue samples were scanned and saved as digital images using an Aperio scanner and software. After image segmentation, the box-counting method as well as multifractal methods were applied to determine the relation between fractal dimension and grade of CIN. A total of 46 images were used to compare the pathologist's neoplasia grades with the predicted groups obtained by fractal methods. RESULTS: Significant or highly significant differences between all grades of CIN were found. The confusion matrix comparing the pathologist's grading with the groups predicted by fractal methods showed a match of 87.1%. Multifractal spectra were able to differentiate between normal epithelium and both low-grade and high-grade neoplasia. CONCLUSION: Fractal dimension can be considered an objective parameter for grading cervical intraepithelial neoplasia.

  15. Performance evaluation of breast image compression techniques

    International Nuclear Information System (INIS)

    Anastassopoulos, G.; Lymberopoulos, D.; Panayiotakis, G.; Bezerianos, A.

    1994-01-01

    Novel diagnosis-oriented teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, and to maximize the speed of data transmission. In this paper, an estimation is made of all the factors of the process that affect the presentation of breast images, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g., B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been carried out, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors)

  16. Contour fractal analysis of grains

    Science.gov (United States)

    Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB

    2017-06-01

    Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.

  17. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of compressed images neither at the pixel level nor at the perceptual level, but at the semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of the original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate savings compared to using PSNR or SSIM. Moreover, we perform a subjective test on text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction towards achieving higher compression ratios for specific semantic analysis tasks.

  18. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, although the medical community has until now sided with lossless compression, most applications suffer from the compression ratios being too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance (MR) image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors, using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  19. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the process into regularization and denoising steps instead of solving a minimization problem in the compressive reconstruction process. Simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.

  20. Enhancing PIV image and fractal descriptor for velocity and shear stresses propagation around a circular pier

    Directory of Open Access Journals (Sweden)

    Alireza Keshavarzi

    2017-07-01

    Full Text Available In this study, the fractal dimensions of the velocity fluctuations and of the Reynolds shear stress propagation for flow around a circular bridge pier are presented. In the study reported herein, the fractal dimensions of the velocity fluctuations (u′, v′, w′) and of the Reynolds shear stresses (u′v′ and u′w′) of flow around a bridge pier were computed using a Fractal Interpolation Function (FIF) algorithm. The velocity fluctuations of the flow along a horizontal plane above the bed were measured using an Acoustic Doppler Velocimeter (ADV) and Particle Image Velocimetry (PIV). PIV is a powerful technique which makes it possible to attain high-resolution spatial and temporal information on turbulent flow from instantaneous time snapshots. In this study, PIV was used to detect high-resolution fractal scaling around a bridge pier. The results showed that the fractal dimension of the flow fluctuated significantly in the longitudinal and transverse directions in the vicinity of the pier. It was also found that the fractal dimensions of the velocity fluctuations and shear stresses increased rapidly in the near wake of the pier, whereas they remained approximately unchanged far downstream of the pier. The highest value of the fractal dimension was found at a distance of one pier diameter behind the pier. Furthermore, the average fractal dimensions of the streamwise and transverse velocity fluctuations decreased from the centreline to the side wall of the flume. Finally, the results from the ADV measurements were consistent with those from PIV; the ADV is therefore able to detect the turbulent characteristics of flow around a circular bridge pier.

  1. Monitoring of dry sliding wear using fractal analysis

    NARCIS (Netherlands)

    Zhang, Jindang; Regtien, Paulus P.L.; Korsten, Maarten J.

    2005-01-01

    Reliable online monitoring of wear remains a challenge to tribology research as well as to the industry. This paper presents a new method for monitoring of dry sliding wear using digital imaging and fractal analysis. Fractal values, namely fractal dimension and intercept, computed from the power

  2. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and the losses introduced by image compression. The main idea of this approach is to extract the structural lines of a simplified image and then further compress the selected data

  3. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
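
    The two on-board degradation steps can be sketched as follows; this is a toy illustration with plain Python lists, in which the degradation factors and function names are our own assumptions rather than the paper's:

```python
def degrade(cube, spatial=2, bands_out=3):
    """Produce the two on-board products described above: a spatially
    degraded hyperspectral cube and a spectrally degraded multispectral
    image. `cube` is indexed [band][row][col]; all factors are illustrative."""
    nb, nr, nc = len(cube), len(cube[0]), len(cube[0][0])
    # 1) low-resolution hyperspectral: average spatial blocks of size spatial x spatial
    low = [[[sum(cube[b][r * spatial + i][c * spatial + j]
                 for i in range(spatial) for j in range(spatial)) / spatial ** 2
             for c in range(nc // spatial)]
            for r in range(nr // spatial)]
           for b in range(nb)]
    # 2) high-resolution multispectral: average contiguous groups of bands
    group = nb // bands_out
    multi = [[[sum(cube[g * group + b][r][c] for b in range(group)) / group
               for c in range(nc)]
              for r in range(nr)]
             for g in range(bands_out)]
    return low, multi
```

    In this toy setting a 6-band 4×4 cube (96 values) is reduced to 24 + 48 = 72 values; in practice the spatial and spectral factors would be chosen so that the two degraded products are far smaller than the original cube, and the heavy lifting happens in the ground-side fusion.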

  4. Performance evaluation of breast image compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Anastassopoulos, G; Lymberopoulos, D [Wire Communications Laboratory, Electrical Engineering Department, University of Patras, Greece (Greece); Panayiotakis, G; Bezerianos, A [Medical Physics Department, School of Medicine, University of Patras, Greece (Greece)

    1994-12-31

    Novel diagnosis-orienting teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, and to maximize the speed of data transmission. In this paper an estimation is made of all the factors of the process that affect the presentation of breast images, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors). 12 refs, 4 figs.

  5. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. If we compress images lossily, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which actually depend on the subsequent use we want to give them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification performed with these images

  6. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  7. Alzheimer's Disease Detection in Brain Magnetic Resonance Images Using Multiscale Fractal Analysis

    International Nuclear Information System (INIS)

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    We present a new automated system for the detection of brain magnetic resonance images (MRI) affected by Alzheimer's disease (AD). The MRI is analyzed by means of multiscale analysis (MSA) to obtain its fractals at six different scales. The extracted fractals are used as features to differentiate healthy brain MRI from those of AD by a support vector machine (SVM) classifier. The result of classifying 93 brain MRIs consisting of 51 images of healthy brains and 42 of brains affected by AD, using leave-one-out cross-validation method, yielded 99.18% ± 0.01 classification accuracy, 100% sensitivity, and 98.20% ± 0.02 specificity. These results and a processing time of 5.64 seconds indicate that the proposed approach may be an efficient diagnostic aid for radiologists in the screening for AD

  8. Fractal analysis for studying the evolution of forests

    International Nuclear Information System (INIS)

    Andronache, Ion C.; Ahammer, Helmut; Jelinek, Herbert F.; Peptenatu, Daniel; Ciobotaru, Ana-M.; Draghici, Cristian C.; Pintilii, Radu D.; Simion, Adrian G.

    2016-01-01

    Highlights: • Legal and illegal deforestation is investigated by fractal analysis. • A new fractal fragmentation index FFI is proposed. • Differences in shapes of forest areas indicate the type of deforestation. • Support of ecological management. - Abstract: Deforestation is an important phenomenon that may create major imbalances in ecosystems. In this study we propose a new mathematical analysis of the forest area dynamic, enabling qualitative as well as quantitative statements and results. Fractal dimensions of the area and the perimeter of a forest were determined using digital images. The difference between fractal dimensions of the area and the perimeter images turned out to be a crucial quantitative parameter. Accordingly, we propose a new fractal fragmentation index, FFI, which is based on this difference and which highlights the degree of compaction or non-compaction of the forest area in order to interpret geographic features. Particularly, this method was applied to forests, where large areas have been legally or illegally deforested. However, these methods can easily be used for other ecological or geographical investigations based on digital images, including deforestation of rainforests.
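
    The difference-of-dimensions idea behind the fragmentation index can be sketched with a box-counting estimator; note that the paper's exact definition of FFI may differ from the plain difference used here, and the function names are our own:

```python
import math

def box_count_dim(pixels, scales=(1, 2, 4, 8)):
    """Box-counting dimension of a set of (row, col) pixels: count occupied
    boxes at each scale, then fit the slope of log N(s) versus log(1/s)."""
    pts = []
    for s in scales:
        boxes = {(r // s, c // s) for r, c in pixels}
        pts.append((math.log(1.0 / s), math.log(len(boxes))))
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    return sum((x - mx) * (y - my) for x, y in pts) / \
           sum((x - mx) ** 2 for x, _ in pts)

def fragmentation_index(area_pixels, perimeter_pixels):
    """Difference of the two box-counting dimensions: a compact patch
    gives a larger gap than a highly fragmented one."""
    return box_count_dim(area_pixels) - box_count_dim(perimeter_pixels)
```

    For a solid 64×64 square the area dimension is exactly 2 and the border dimension is close to 1, so the index is near 1; a heavily fragmented patch drives the perimeter dimension up and the index toward 0.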

  9. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    International Nuclear Information System (INIS)

    Song, Ju Seop; Koh, Kwang Joon

    2000-01-01

    The aims of the present study are to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression and evaluate the transmission speed of original or each compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal surfaces and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from Digora system were exported in TIFF and the images from conventional intraoral film were scanned and digitalized in TIFF by Nikon SF-200 scanner(Nikon, Japan). And six compression factors were chosen and applied on the basis of the results from a pilot study. The total number of images to be assessed were 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth with 5-rank scale. Finally diagnosed as either sound or carious lesion by one expert oral pathologist. And sensitivity and specificity and kappa value for diagnostic agreement was calculated. Also the area (Az) values under the ROC curve were calculated and paired t-test and oneway ANOVA test was performed. Thereafter, transmission time of the image files of the each compression level were compared with that of the original image files. No significant difference was found between original and the corresponding images up to 7% (1:14) compression ratio for both the occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files are transmitted fast more than 10 times, maintained diagnostic information in image, compared with original image files. 1:14 compressed image file may be used instead of the original image and reduce storage needs and transmission time.

  10. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
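
    The subband quantization idea at the heart of wavelet/scalar quantization can be sketched as follows. This toy version uses a one-level Haar transform and a plain uniform quantizer; the actual WSQ standard prescribes a specific biorthogonal filter bank, a multi-level subband structure, dead-zone quantizers and Huffman coding, and all names here are our own:

```python
def haar2d(img):
    """One level of a 2D Haar transform on a square image (a stand-in for
    the DWT the standard actually uses): averages and differences along
    rows, then along columns."""
    n = len(img)
    tmp = []
    for row in img:
        avg = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(n // 2)]
        dif = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(n // 2)]
        tmp.append(avg + dif)
    out = [[0.0] * n for _ in range(n)]
    for c in range(n):
        col = [tmp[r][c] for r in range(n)]
        for i in range(n // 2):
            out[i][c] = (col[2 * i] + col[2 * i + 1]) / 2
            out[n // 2 + i][c] = (col[2 * i] - col[2 * i + 1]) / 2
    return out

def quantize(coeffs, step=4.0):
    """Uniform scalar quantization of the subband coefficients; most
    detail coefficients quantize to zero, which is where the compression
    comes from."""
    return [[round(v / step) for v in row] for row in coeffs]
```

    An entropy coder applied to the quantized coefficients (mostly zeros outside the low-pass subband) then yields the actual bit savings.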

  11. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands; then the wavelet-based algorithm is applied to each subspace; finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  12. Wavelet compression algorithm applied to abdominal ultrasound images

    International Nuclear Information System (INIS)

    Lin, Cheng-Hsun; Pan, Su-Feng; Lu, Chin-Yuan; Lee, Ming-Che

    2006-01-01

    We sought to investigate acceptable compression ratios for lossy wavelet compression of 640 x 480 x 8 abdominal ultrasound (US) images. We acquired 100 abdominal US images with normal and abnormal findings from the view station of a 932-bed teaching hospital. The US images were then compressed at quality factors (QFs) of 3, 10, 30, and 50, following the outcomes of a pilot study. This was equal to average compression ratios of 4.3:1, 8.5:1, 20:1 and 36.6:1, respectively. Four objective measurements were carried out to examine and compare the image degradation between original and compressed images. Receiver operating characteristic (ROC) analysis was also introduced for subjective assessment. Five experienced and qualified radiologists, as reviewers blinded to the corresponding pathological findings, analysed 400 randomly ordered paired images on two 17-inch thin film transistor/liquid crystal display (TFT/LCD) monitors. At ROC analysis, the average area under the curve (Az) for US abdominal images was 0.874 at the ratio of 36.6:1, at which the compressed image size was only 2.7% of the original. For the objective parameters, the higher the mean squared error (MSE) or root mean squared error (RMSE) values, the poorer the image quality, while higher signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR) values indicated better image quality. The average RMSE and PSNR at 36.6:1 for US were 4.84 ± 0.14 and 35.45 dB, respectively. This finding suggests that, on the basis of this patient sample, wavelet compression of abdominal US to a ratio of 36.6:1 did not adversely affect diagnostic performance or evaluation error in the radiologists' interpretation so as to risk affecting diagnosis
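
    For reference, the objective measures used above (MSE, RMSE, PSNR) can be computed in a few lines; a minimal sketch for 8-bit images, with a function name of our own:

```python
import math

def psnr_metrics(orig, comp, peak=255):
    """MSE, RMSE and PSNR (in dB) for two equally sized images,
    given as flattened pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, comp)) / len(orig)
    rmse = math.sqrt(mse)
    # PSNR is undefined for identical images; report infinity in that case
    psnr = float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)
    return mse, rmse, psnr
```

    With an 8-bit peak of 255, an RMSE of 1 grey level corresponds to a PSNR of about 48 dB, which puts the study's 35.45 dB at 36.6:1 in context.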

  13. Contributions in compression of 3D medical images and 2D natural images

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now sided with lossless compression, most applications suffer from the compression ratios, which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on a 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  14. Conference on Fractals and Related Fields III

    CERN Document Server

    Seuret, Stéphane

    2017-01-01

    This contributed volume provides readers with an overview of the most recent developments in the mathematical fields related to fractals, including both original research contributions and surveys from many of the leading experts on modern fractal theory and applications. It is an outgrowth of the Conference on Fractals and Related Fields III, which was held on September 19-25, 2015 on the île de Porquerolles, France. Chapters cover fields related to fractals such as harmonic analysis, multifractal analysis, geometric measure theory, ergodic theory and dynamical systems, probability theory, number theory, wavelets, potential theory, partial differential equations, fractal tilings, combinatorics, and signal and image processing. The book is aimed at pure and applied mathematicians in these areas, as well as other researchers interested in discovering the fractal domain.

  15. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    Science.gov (United States)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects the turbulence production and decay of the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with increasing numbers of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy-efficient mixing or convective heat transfer is a key process.

  16. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  17. Fractal Analysis of Elastographic Images for Automatic Detection of Diffuse Diseases of Salivary Glands: Preliminary Results

    Directory of Open Access Journals (Sweden)

    Alexandru Florin Badea

    2013-01-01

    The geometry of some medical images of tissues, obtained by elastography and ultrasonography, is characterized in terms of complexity parameters such as the fractal dimension (FD). It is well known that in any image there are very subtle details that are not easily detectable by the human eye. However, in many cases like medical imaging diagnosis, these details are very important since they might contain some hidden information about the possible existence of certain pathological lesions like tissue degeneration, inflammation, or tumors. Therefore, an automatic method of analysis could be an expedient tool for physicians to give a faultless diagnosis. The fractal analysis is of great importance in relation to a quantitative evaluation of “real-time” elastography, a procedure considered to be operator dependent in the current clinical practice. Mathematical analysis reveals significant discrepancies among normal and pathological image patterns. The main objective of our work is to demonstrate the clinical utility of this procedure on an ultrasound image corresponding to a submandibular diffuse pathology.

  18. Image compression with Iris-C

    Science.gov (United States)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCOCG colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.

  19. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now sided with lossless compression, most applications suffer from the compression ratios, which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on a 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  20. Depth of focus enhancement of a modified imaging quasi-fractal zone plate.

    Science.gov (United States)

    Zhang, Qinqin; Wang, Jingang; Wang, Mingwei; Bu, Jing; Zhu, Siwei; Gao, Bruce Z; Yuan, Xiaocong

    2012-10-01

    We propose a new parameter w for optimizing the foci distribution of conventional fractal zone plates (FZPs) to obtain a greater depth of focus (DOF) in imaging. Numerical simulations of the DOF distribution along the axial direction indicate that the DOF can be extended by a factor of 1.5 or more by a modified quasi-FZP. In experiments, we employ a simple object-lens-image-plane arrangement to capture images at various positions within the DOF of a conventional FZP and a quasi-FZP, respectively. Experimental results show that the parameter w improves the foci distribution of FZPs, in good agreement with theoretical predictions.

  1. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
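
    The combination of scan-line skipping and run-length coding can be sketched as follows (a fixed, non-adaptive run-length code for illustration; the flight system used an adaptive variant, and the function names and skip factor here are our own):

```python
def rle_encode(line):
    """Run-length code one scan line as (value, count) pairs."""
    out = []
    for v in line:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)   # extend the current run
        else:
            out.append((v, 1))              # start a new run
    return out

def compress_image(lines, skip=2):
    """Keep every `skip`-th scan line and run-length code each kept line."""
    return [rle_encode(line) for line in lines[::skip]]
```

    Skipping every other line already halves the data volume; the run-length stage then exploits the long uniform runs in imagery such as the boom pictures, which is how combinations like this reach large overall ratios.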

  2. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from encrypted-data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the positions of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simultaneously simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
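
    The two-direction measurement step can be sketched as follows. This is a toy illustration only: a logistic map stands in for the paper's hyper-chaotic system, reconstruction (the sparse-recovery half of compressive sensing) is omitted, and all function names and parameters are our own assumptions:

```python
import random

def matmul(A, B):
    """Plain dense matrix product on lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def measure2d(X, m, seed=0):
    """Compress an n x n image X to m x m (m < n) by measuring in both
    directions: Y = A . X . B^T with random measurement matrices A, B."""
    rng = random.Random(seed)
    n = len(X)
    A = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    B = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    Bt = [list(col) for col in zip(*B)]
    return matmul(matmul(A, X), Bt)

def cycle_shift(Y, key=0.37):
    """Re-encrypt by cyclically shifting each row; the shift amounts come
    from a logistic map standing in for the hyper-chaotic system."""
    out = []
    x = key
    for row in Y:
        x = 3.99 * x * (1 - x)             # logistic map iteration
        s = int(x * len(row)) % len(row)
        out.append(row[s:] + row[:s])
    return out
```

    The measurement shrinks an n×n image to m×m values (compression ratio (n/m)²), and the keyed shift scrambles the measurements without changing their magnitudes.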

  3. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For a PACS (picture archiving and communication system), large-capacity storage media and a fast data transfer network are necessary, and when the PACS is in operation these technology requirements become a large problem. We therefore need image data compression to obtain higher recording efficiency from the media and an improved transmission ratio. There are two kinds of data compression methods: reversible compression and irreversible compression. With reversible compression methods, the compressed-then-expanded image is exactly equal to the original image, and the data compression ratio is between about 1/2 and 1/3. On the other hand, with irreversible data compression the compressed-then-expanded image is a distorted image, and we can achieve a high compression ratio with this method. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; its data compression ratio is typically from 1/10 to 1/20. It is important to decide the compression ratio according to the purpose and modality of the image. We must select the compression ratio carefully, because the suitable ratio varies with the usage of the image for education, clinical diagnosis and reference. (author)

  4. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  5. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communication systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to assess the ability of a group of imaging professionals to determine whether an image has been compressed. For this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed at levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that had undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  6. Analysis and classification of commercial ham slice images using directional fractal dimension features.

    Science.gov (United States)

    Mendoza, Fernando; Valous, Nektarios A; Allen, Paul; Kenny, Tony A; Ward, Paddy; Sun, Da-Wen

    2009-02-01

    This paper presents a novel and non-destructive approach to the appearance characterization and classification of commercial pork, turkey and chicken ham slices. Ham slice images were modelled using directional fractal (DF(0°;45°;90°;135°)) dimensions, and a minimum distance classifier was adopted to perform the classification task. The roles of different colour spaces and of image resolution in DF analysis were also investigated. This approach was applied to 480 wafer-thin ham slices from four types of hams (120 slices per type): pork (cooked and smoked), turkey (smoked) and chicken (roasted). DF features were extracted from digitized intensity images in greyscale, and R, G, B, L(∗), a(∗), b(∗), H, S, and V colour components at three image resolution levels (100%, 50%, and 25%). Simulation results show that, in spite of the complexity and high variability in colour and texture appearance, modelling ham slice images with DF dimensions captures textural features that differentiate the four commercial ham types. Independent DF features yielded better discrimination than the average of the four directions. However, the DF dimensions are highly sensitive to colour channel, orientation and image resolution. The classification accuracy using six DF dimension features (a(90°)(∗),a(135°)(∗),H(0°),H(45°),S(0°),H(90°)) was 93.9% for training data and 82.2% for testing data.
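    The minimum distance classifier used above assigns each slice to the ham type whose mean feature vector is nearest. A minimal sketch over two-dimensional feature vectors (the labels and numbers here are invented for demonstration, not taken from the study):

```python
import numpy as np

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) for each class label."""
    labels = sorted(set(y))
    centroids = np.array([X[np.array(y) == c].mean(axis=0) for c in labels])
    return labels, centroids

def classify(x, labels, centroids):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    dists = np.linalg.norm(centroids - x, axis=1)
    return labels[int(np.argmin(dists))]
```

    With DF features in place of these toy vectors, training reduces to computing one centroid per ham type and testing to a single distance comparison per slice.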

  7. Focusing behavior of the fractal vector optical fields designed by fractal lattice growth model.

    Science.gov (United States)

    Gao, Xu-Zhen; Pan, Yue; Zhao, Meng-Dan; Zhang, Guan-Lin; Zhang, Yu; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-01-22

    We introduce a general fractal lattice growth model, significantly expanding the application scope of fractals in the realm of optics. This model can be applied to construct various kinds of fractal "lattices" and then to design a great diversity of fractal vector optical fields (F-VOFs) by combining them with various "bases". We also experimentally generate the F-VOFs and explore their universal focusing behaviors. Multiple focal spots can be flexibly engineered, and the optical tweezers experiment validates the simulated tight focusing fields, which means that this model allows the diversity of the focal patterns to flexibly trap and manipulate micrometer-sized particles. Furthermore, the recovery performance of the F-VOFs is also studied when the input fields and spatial frequency spectrum are obstructed, and the results confirm the robustness of the F-VOFs in both focusing and imaging processes, which is very useful in information transmission.

  8. Towards Video Quality Metrics Based on Colour Fractal Geometry

    Directory of Open Access Journals (Sweden)

    Richard Noël

    2010-01-01

    Full Text Available Vision is a complex process that integrates multiple aspects of an image: spatial frequencies, topology and colour. Unfortunately, until now these elements have been taken into consideration independently in the development of image and video quality metrics, so we propose an approach that blends them together. Our approach allows for the analysis of the complexity of colour images in the RGB colour space, based on a probabilistic algorithm for calculating the fractal dimension and lacunarity. Given that all the existing fractal approaches are defined only for grey-scale images, we extend them to the colour domain. We show how these two colour fractal features capture the multiple aspects that characterize the degradation of the video signal, based on the hypothesis that the quality degradation perceived by the user is directly proportional to the modification of the fractal complexity. We claim that the two colour fractal measures can objectively assess the quality of the video signal and can be used as metrics for the user-perceived video quality degradation. We validated them through experimental results obtained for an MPEG-4 video streaming application; finally, the results are compared against those given by widely accepted metrics and against subjective tests.

  9. A fractal derivative constitutive model for three stages in granite creep

    Directory of Open Access Journals (Sweden)

    R. Wang

    Full Text Available In this paper, by replacing the Newtonian dashpot with a fractal dashpot and considering the damage effect, a new constitutive model is proposed in terms of a time-fractal derivative to describe all creep regions of granite. The analytic solutions of the fractal derivative creep constitutive equation are derived via scaling transform. Conventional triaxial compression creep tests are performed on an MTS 815 rock mechanics test system to verify the efficiency of the new model. The granite specimens were taken from the Beishan site, the most promising area for China's high-level radioactive waste repository. It is shown that the proposed fractal model can characterize the creep behavior of granite, especially in the accelerating stage, which the classical models cannot predict. A parametric sensitivity analysis is also conducted to investigate the effects of model parameters on the creep strain of granite. Keywords: Beishan granite, Fractal derivative, Damage evolution, Scaling transformation

  10. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.

  11. FAST TRACK COMMUNICATION: Weyl law for fat fractals

    Science.gov (United States)

    Spina, María E.; García-Mata, Ignacio; Saraceno, Marcos

    2010-10-01

    It has been conjectured that for a class of piecewise linear maps the closure of the set of images of the discontinuity has the structure of a fat fractal, that is, a fractal with positive measure. An example of such maps is the sawtooth map in the elliptic regime. In this work we analyze this problem quantum mechanically in the semiclassical regime. We find that the fraction of states localized on the unstable set satisfies a modified fractal Weyl law, where the exponent is given by the exterior dimension of the fat fractal.

  12. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
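    The Mann-Whitney test at the core of BASICA's segmentation compares the intensity ranks of candidate foreground pixels against background pixels. A hand-rolled sketch of the U statistic such a test is built on (this is a generic rank-sum computation, not BASICA's fast algorithm):

```python
import numpy as np

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b, using midranks."""
    pooled = np.concatenate([a, b])
    order = pooled.argsort(kind="mergesort")
    ranks = np.empty(len(pooled))
    ranks[order] = np.arange(1, len(pooled) + 1)
    for v in np.unique(pooled):          # average ranks over ties (midranks)
        tie = pooled == v
        ranks[tie] = ranks[tie].mean()
    # U = rank sum of a minus its minimum possible rank sum
    return ranks[:len(a)].sum() - len(a) * (len(a) + 1) / 2.0
```

    A U near its maximum (len(a) * len(b)) indicates the candidate pixels are systematically brighter than the background, supporting a foreground call; production code would use a library routine such as scipy.stats.mannwhitneyu for the associated p-value.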

  13. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect of quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues because imaging parameters vary between sensors (geometrical correction, spectral correction, etc.). Utilizing a single-sensor image, fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. All of this proves that fractal analysis is an effective methodology for studying scaling in quantitative remote sensing.

  14. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    To determine the proper image compression method and ratio without image quality degradation in intraoral digital radiographic images, comparing the discrete cosine transform (DCT)-based JPEG with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). Digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, at compression ratios of 5:1, 9:1, 14:1, and 28:1. To evaluate the lesion detectability, receiver operating characteristic (ROC) analysis was performed by three oral and maxillofacial radiologists. To evaluate the image quality, all the compressed images were assessed subjectively using 5 grades, in comparison to the original uncompressed images. Compressed images up to a compression ratio of 14:1 in JPEG and 28:1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a compression ratio of 9:1 in JPEG and 14:1 in JPEG 2000 showed minute mean paired differences from the original images. The results showed that the clinically acceptable compression ratios were up to 9:1 for JPEG and 14:1 for JPEG 2000, and that the wavelet-based JPEG 2000 is a better compression method than DCT-based JPEG for intraoral digital radiographic images.

  15. The fractal dimension of cell membrane correlates with its capacitance: A new fractal single-shell model

    Science.gov (United States)

    Wang, Xujing; Becker, Frederick F.; Gascoyne, Peter R. C.

    2010-01-01

    The scale-invariant property of the cytoplasmic membrane of biological cells is examined by applying the Minkowski–Bouligand method to digitized scanning electron microscopy images of the cell surface. The membrane is found to exhibit fractal behavior, and the derived fractal dimension gives a good description of its morphological complexity. Furthermore, we found that this fractal dimension correlates well with the specific membrane dielectric capacitance derived from the electrorotation measurements. Based on these findings, we propose a new fractal single-shell model to describe the dielectrics of mammalian cells, and compare it with the conventional single-shell model (SSM). We found that while both models fit with experimental data well, the new model is able to eliminate the discrepancy between the measured dielectric property of cells and that predicted by the SSM. PMID:21198103

  16. The effects of voltage of x-ray tube on fractal dimension and anisotropy of diagnostic image

    International Nuclear Information System (INIS)

    Baik, Jee Seon; Lee, Sam Sun; Huh, Kyung Hoe; Yi, Won Jin; Heo, Min Suk; Choi, Soon Chul; Park, Kwan Soo

    2007-01-01

    The purpose of this study was to evaluate the effect of tube voltage (kV) on the fractal dimension of trabecular bone in digital radiographs. Sixteen bone cores were obtained from patients who had undergone partial resection of the tibia due to accidents. Each bone core, along with an aluminum step wedge, was radiographed on an occlusal film at 0.08 sec and at a constant film-focus distance (32 cm). All radiographs were acquired at 60, 75, and 90 kV. A rectangular ROI was drawn at the medial part, the distal part, and the bone defect area of each bone core image at each kV. The directional fractal dimension was measured using the Fourier transform spectrum, and the anisotropy was obtained from the directional fractal dimension. The values were compared by repeated measures ANOVA. The fractal dimensions increased with increasing kV (p<0.05). The anisotropy measurements did not show a statistically significant difference according to kV change. The fractal dimensions of the bone defect areas of the bone cores showed lower values than those of the non-defect areas. The anisotropy measurements of the bone defect areas were lower than those of the non-defect areas, but not significantly so. Fractal analysis can thus detect both a change in x-ray tube voltage and the presence of a bone defect, whereas the anisotropy of trabecular bone remains consistent despite changes in tube voltage or partial bone defects.

  17. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine (DICOM) to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data volumes in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  18. An investigation of fractal characteristics of mesoporous carbon electrodes with various pore structures

    International Nuclear Information System (INIS)

    Pyun, Su-Il; Rhee, Chang-Kyu

    2004-01-01

    Fractal characteristics of mesoporous carbon electrodes with various pore structures were investigated using the N2 gas adsorption method and transmission electron microscopy (TEM) image analysis. The mesoporous carbons with various pore structures were prepared by imprinting mesophase pitch, used as a carbonaceous precursor, with different colloidal silica particles. All imprinted mesoporous carbons were composed of two groups of pores, produced from the carbonisation of the mesophase pitch and from the silica imprinting. The overall surface fractal dimensions of the carbon specimens were determined from analyses of the N2 gas adsorption isotherms. In order to distinguish the surface fractal dimension of the carbonisation-induced pore surface from that of the silica-imprinted pore surface, the individual surface fractal dimensions were determined from analyses of the TEM images. From the comparison of the overall surface fractal dimension with the individual surface fractal dimensions, it was recognised that the overall surface fractal dimension is crucially influenced by the individual surface fractal dimension of the silica-imprinted pore surface. Moreover, from the fact that a silica-imprinted pore surface with a broad relative pore size distribution (PSD) gave a lower individual surface fractal dimension than a pore surface with a narrow relative PSD, it is concluded that as the silica-imprinted pores comprising the carbon specimen agglomerate, the individual surface fractal dimension of that pore surface decreases.

  19. Fractal cosmology

    International Nuclear Information System (INIS)

    Dickau, Jonathan J.

    2009-01-01

    The use of fractals and fractal-like forms to describe or model the universe has had a long and varied history, which begins long before the word fractal was actually coined. Since the introduction of mathematical rigor to the subject of fractals, by Mandelbrot and others, there have been numerous cosmological theories and analyses of astronomical observations which suggest that the universe exhibits fractality or is by nature fractal. In recent years, the term fractal cosmology has come into usage, as a description for those theories and methods of analysis whereby a fractal nature of the cosmos is shown.

  20. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low-power embedded platforms where image or video data are compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8 bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with an 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
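    The byte-plane mapping described above is simple to make concrete. A sketch of the lossless split of a 16 bit image into MSB and LSB planes and their reassembly (the 8 bit codec stage in between is omitted):

```python
import numpy as np

def split_bytes(img16):
    """Split a 16-bit image into most- and least-significant 8-bit byte planes."""
    img16 = img16.astype(np.uint16)
    msb = (img16 >> 8).astype(np.uint8)   # high byte: coarse intensity
    lsb = (img16 & 0xFF).astype(np.uint8) # low byte: fine detail
    return msb, lsb

def merge_bytes(msb, lsb):
    """Reassemble the 16-bit image from the two 8-bit planes."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```

    In the scheme the paper studies, each plane would then be fed to an 8 bit codec such as H.264/AVC; the split and merge themselves are exact, so all loss comes from the codecs.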

  1. Fractal characteristics of fracture morphology of steels irradiated with high-energy ions

    Energy Technology Data Exchange (ETDEWEB)

    Xian, Yongqiang; Liu, Juan [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); University of Chinese Academy of Science, Beijing 100049 (China); Zhang, Chonghong, E-mail: c.h.zhang@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China); Chen, Jiachao [Paul Scherrer Institute, Villigen PSI (Switzerland); Yang, Yitao; Zhang, Liqing; Song, Yin [Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China)

    2015-06-15

    Highlights: • Fractal dimensions of fracture surfaces of steels before and after irradiation were calculated. • Fractal dimension can effectively describe change of fracture surfaces induced by irradiation. • Correlation of change of fractal dimension with embrittlement of irradiated steels is discussed. - Abstract: A fractal analysis of fracture surfaces of steels (a ferritic/martensitic steel and an oxide-dispersion-strengthened ferritic steel) before and after the irradiation with high-energy ions is presented. Fracture surfaces were acquired from a tensile test and a small-ball punch test (SP). Digital images of the fracture surfaces obtained from scanning electron microscopy (SEM) were used to calculate the fractal dimension (FD) by using the pixel covering method. Boundary of binary image and fractal dimension were determined with a MATLAB program. The results indicate that fractal dimension can be an effective parameter to describe the characteristics of fracture surfaces before and after irradiation. The rougher the fracture surface, the larger the fractal dimension. Correlation of the change of fractal dimension with the embrittlement of the irradiated steels is discussed.

  2. Fractal analysis as a potential tool for surface morphology of thin films

    Science.gov (United States)

    Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.

    2017-12-01

    Fractal geometry, developed by Mandelbrot, has emerged as a potential tool for analyzing complex systems in diverse fields of science, social science, and technology. Self-similar objects having the same details at different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work attempts to use the fractal dimension for surface characterization by Atomic Force Microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique under different annealing temperatures, the effects of annealing temperature and surface roughness on fractal dimension are studied. The annealing temperature and surface roughness show a strong correlation with fractal dimension. From the set of regression equations, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop, and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427 as the annealing temperature changes from 30°C to 600°C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results than the power spectrum method.
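    The box-counting method referred to here covers the structure with boxes of side s, counts the occupied boxes N(s), and takes the fractal dimension as the slope of log N(s) versus log(1/s). A minimal sketch for a binary mask (thresholding the AFM image into such a mask is assumed and not shown):

```python
import numpy as np

def box_count(mask, size):
    """Count size-by-size boxes containing at least one foreground pixel."""
    h, w = mask.shape
    hb, wb = h // size, w // size
    trimmed = mask[:hb * size, :wb * size]
    blocks = trimmed.reshape(hb, size, wb, size)
    return int(blocks.any(axis=(1, 3)).sum())

def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Fit the slope of log N(s) versus log(1/s) over the given box sizes."""
    counts = [box_count(mask, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    For a filled plane region the slope comes out near 2 and for a smooth curve near 1, with rough fractal boundaries falling in between.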

  3. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference and drew up recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.) and targets (abdomen, etc.), and a combination of recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence; 51 studies were assigned to the highest level, 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons, data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits; only the modality-based scenarios achieve all benefits. (orig.)

  5. Diagnostic imaging of compression neuropathy

    International Nuclear Information System (INIS)

    Weishaupt, D.; Andreisek, G.

    2007-01-01

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [de

  6. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  7. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients. The image is usually reconstructed by taking the inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using the DCT to process the sampled data is presented which combines the two procedures, image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems, where both image reconstruction and compression are important issues. 11 refs., 3 figs
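    The DCT stage of such a scheme can be sketched by building the orthonormal DCT-II matrix explicitly and discarding the smallest coefficients. This is an illustrative reimplementation of transform coding, not the authors' combined reconstruction-compression algorithm:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)   # normalize the DC row
    return c

def compress_dct(img, keep=0.1):
    """2D DCT, zero all but the largest `keep` fraction of coefficients, invert."""
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    coef = C @ img @ R.T
    thr = np.quantile(np.abs(coef), 1.0 - keep)
    coef = np.where(np.abs(coef) >= thr, coef, 0.0)
    return C.T @ coef @ R
```

    Because the matrix is orthonormal, keeping every coefficient reproduces the image exactly; smooth images survive aggressive truncation with little visible error, which is what makes the DCT attractive for compressing reconstructed MR images.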

  8. The relationship between compression force, image quality and ...

    African Journals Online (AJOL)

    Theoretically, an increase in breast compression gives a reduction in thickness without changing the density, resulting in improved image quality and reduced radiation dose. Aim. This study investigates the relationship between compression force, phantom thickness, image quality and radiation dose. The existence of a ...

  9. Evaluation of compression ratio using JPEG 2000 on diagnostic images in dentistry

    International Nuclear Information System (INIS)

    Jung, Gi Hun; Han, Won Jeong; Yoo, Dong Soo; Kim, Eun Kyung; Choi, Soon Chul

    2005-01-01

    To find the compression ratios that do not degrade image quality or affect lesion detectability in diagnostic images used in dentistry when compressed with the JPEG 2000 algorithm. Sixty Digora periapical images, sixty panoramic computed radiographic (CR) images, sixty computed tomography (CT) images, and sixty magnetic resonance (MR) images were compressed with JPEG 2000 at ten ratios from 5:1 to 50:1. To evaluate lesion detectability, the images were graded on 5 levels (1: definitely absent; 2: probably absent; 3: equivocal; 4: probably present; 5: definitely present), and receiver operating characteristic analysis was performed using the original image as the gold standard. To evaluate image quality subjectively, the images were graded on 5 levels (1: definitely unacceptable; 2: probably unacceptable; 3: equivocal; 4: probably acceptable; 5: definitely acceptable), and a paired t-test was performed. In Digora, CR panoramic, and CT images, compressed images up to ratios of 15:1 showed nearly the same lesion detectability as the original images; in MR images, up to ratios of 25:1. In Digora and CR panoramic images, compressed images up to ratios of 5:1 showed little difference from the originals in the subjective assessment of image quality; in CT images, up to ratios of 10:1, and in MR images, up to ratios of 15:1. We consider compression ratios up to 5:1 in Digora and CR panoramic images, up to 10:1 in CT images, and up to 15:1 in MR images to be clinically applicable.

  10. Fractal Image Filters for Specialized Image Recognition Tasks

    Science.gov (United States)

    2010-02-11

    ...colorful than lines and circles drawn on papyrus: their depth and beauty are advertised to a broad audience via computer graphics representations of... colours were obtained with the aid of a fractal transformation from the attractor to the small picture of the yellow flower... as being the ω-limit set of...

  11. MEDICAL IMAGE COMPRESSION USING HYBRID CODER WITH FUZZY EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    K. Vidhya

    2011-02-01

    Full Text Available Medical imaging techniques produce prohibitive amounts of digitized clinical data. Compression of medical images is a must due to the large memory space required for transmission and storage. This paper presents an effective algorithm to compress and reconstruct medical images. The proposed algorithm first extracts edge information from medical images using a fuzzy edge detector. The images are decomposed using the Cohen-Daubechies-Feauveau (CDF) wavelet. The hybrid technique utilizes efficient wavelet-based compression algorithms such as JPEG2000 and Set Partitioning In Hierarchical Trees (SPIHT). The wavelet coefficients in the approximation subband are encoded using the Tier 1 part of JPEG2000. The wavelet coefficients in the detail subbands are encoded using SPIHT. Consistent-quality images are produced by this method at a lower bit rate compared to other standard compression algorithms. The two main approaches to assessing image quality are objective and subjective testing. Here the image quality is evaluated by objective quality measures, which correlate well with the perceived image quality for the proposed compression algorithm.
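    The CDF wavelets used by JPEG2000 are more elaborate, but the split into an approximation subband and detail subbands that the paper routes to two different coders is easiest to see with the Haar wavelet, the simplest member of the family. A minimal 1-D sketch (illustrative names, not from the paper; a 2-D transform applies the same step along rows and then columns):

```python
def haar1d(v):
    """One-level Haar analysis: pairwise averages (approximation) and differences (detail)."""
    a = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    d = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    return a, d

def ihaar1d(a, d):
    """Perfect-reconstruction synthesis for haar1d."""
    out = []
    for ai, di in zip(a, d):
        out.extend([ai + di, ai - di])
    return out

row = [8.0, 6.0, 7.0, 3.0]
approx, detail = haar1d(row)  # in the paper's scheme: approx -> JPEG2000 Tier 1, detail -> SPIHT
```

    The approximation subband carries the coarse image and tolerates little loss, while the detail subbands are sparse and compress aggressively, which is the rationale for using two different coders.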

  12. Image encryption based on fractal-structured phase mask in fractional Fourier transform domain

    Science.gov (United States)

    Zhao, Meng-Dan; Gao, Xu-Zhen; Pan, Yue; Zhang, Guan-Lin; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-04-01

    We present an optical encryption approach based on the combination of a fractal Fresnel lens (FFL) and the fractional Fourier transform (FrFT). Our encryption approach is in fact a four-fold encryption scheme, including the random phase encoding produced by the Gerchberg–Saxton algorithm, a FFL, and two FrFTs. A FFL is composed of a Sierpinski carpet fractal plate and a Fresnel zone plate. The security of our approach is enhanced by the enlarged key space, and the use of the FFL overcomes the optical-axis alignment problem of the optical system. The plaintext can be recovered well only with perfectly matched parameters of the FFL and the FrFT. We present an image encryption algorithm in which two original images can be obtained from one ciphertext by the FrFT with two different phase distribution keys, obtained by performing 100 iterations between the two plaintexts and the ciphertext, respectively. We test the sensitivity of our approach to various parameters such as the wavelength of light, the focal length of the FFL, and the fractional orders of the FrFT. Our approach can resist various attacks.

  13. Fractal characteristic in the wearing of cutting tool

    Science.gov (United States)

    Mei, Anhua; Wang, Jinghui

    1995-11-01

    This paper studies cutting tool wear with fractal geometry. The wear image of the flank has been collected by a machine vision system consisting of a CCD camera and a personal computer. After processing by means of edge-preserving smoothing, binarization and edge extraction, a clear boundary enclosing the worn area has been obtained. The fractal dimension of the worn surface is calculated by the methods called 'Slit Island' and 'Profile'. The experiments and calculations lead to the conclusion that the worn surface is enclosed by an irregular boundary curve with some fractal dimension and characteristics of self-similarity. Furthermore, the relation between the cutting velocity and the fractal dimension of the worn region is presented. This paper presents a series of methods for processing and analyzing the fractal information in flank wear, which can be applied to study the projective relation between the fractal structure and the wear state, and to establish a fractal model of cutting tool wear.

  14. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by the alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.

  15. Fractal analysis of en face tomographic images obtained with full field optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wanrong; Zhu, Yue [Department of Optical Engineering, Nanjing University of Science and Technology, Jiangsu (China)

    2017-03-15

    The quantitative modeling of the imaging signal of pathological and healthy areas is necessary to improve the specificity of diagnosis with tomographic en face images obtained with full field optical coherence tomography (FFOCT). In this work, we propose to use the depth-resolved change in the fractal parameter as a quantitative, specific biomarker of the stages of disease. The idea is based on the fact that tissue is a random medium, so only statistical parameters that characterize tissue structure are appropriate. We relate the imaging signal in FFOCT to the tissue structure in terms of the scattering function and the coherent transfer function of the system. The formula is then used to analyze the ratio of the Fourier transforms of the cancerous tissue to the normal tissue. We found that when the tissue changes from normal to cancerous, the ratio of the spectrum of the index inhomogeneities takes the form of an inverse power law, and the changes in the fractal parameter can be determined by estimating the slopes of the spectral ratio plotted on a log-log scale. Fresh normal and cancerous liver tissues were imaged to demonstrate the potential diagnostic value of the method at early stages, when there are no significant changes in tissue microstructure. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
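    The slope-estimation step the abstract describes is an ordinary least-squares fit in log-log coordinates: for a pure power law the fitted slope recovers the exponent exactly. A small sketch (the helper name is hypothetical, and the data below are synthetic, not from the paper):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# synthetic inverse power law y = x^(-1.7): the fit recovers the exponent
freqs = [1.0, 2.0, 4.0, 8.0, 16.0]
spectrum = [f ** -1.7 for f in freqs]
slope = loglog_slope(freqs, spectrum)
```

    On real spectra the points scatter around the line, so the fitted slope is an estimate rather than an exact recovery, but the procedure is the same.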

  16. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    Science.gov (United States)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended to work with four-dimensional objects stored in comma-separated-values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images.
    New version program summary
    Program title: Hyper-Fractal Analysis (Fractal Analysis v03)
    Catalogue identifier: AEEG_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 745761
    No. of bytes in distributed program, including test data, etc.: 12544491
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 100M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v2_0
    Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 4D images.
    Solution method: Optimized implementation of the 4D box-counting algorithm.
    Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1, 2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended in order to support 4D objects, stored in comma-separated-values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df=ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to
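    The box-counting algorithm the summary refers to generalizes directly from 2-D to 4-D; the 2-D case already shows the whole idea: cover the object with boxes of shrinking size s, count occupied boxes N(s), and take the slope of log N versus log(1/s). A self-contained sketch (my own minimal implementation, not the program's code), verified on the Sierpinski carpet, whose exact dimension is log 8 / log 3 ≈ 1.893:

```python
import math

def sierpinski_carpet(level):
    """Occupied pixels of a Sierpinski carpet on a 3**level grid."""
    size = 3 ** level
    pts = set()
    for y in range(size):
        for x in range(size):
            xx, yy, keep = x, y, True
            while xx or yy:
                if xx % 3 == 1 and yy % 3 == 1:  # central ninth removed at this scale
                    keep = False
                    break
                xx, yy = xx // 3, yy // 3
            if keep:
                pts.add((x, y))
    return pts, size

def box_dimension(points, size, base=3):
    """Box-counting dimension: slope of log N(s) versus log(1/s)."""
    xs, ys, s = [], [], 1
    while s < size:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes at scale s
        xs.append(math.log(size / s))
        ys.append(math.log(len(boxes)))
        s *= base
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

pts, size = sierpinski_carpet(3)   # 27 x 27 grid, 512 occupied pixels
dim = box_dimension(pts, size)     # close to log 8 / log 3
```

    The 4-D version of the summary replaces the pair (x, y) with a 4-tuple and integer-divides every coordinate by s; the counting and the fit are unchanged.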

  17. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES)

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

    Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images of size 256 × 256 pixels. As a result, we see a significant reduction of time in the encryption and decryption stages. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between the stereo images) is used. The resulting disparity vector and the residual image were compressed by the Discrete Cosine Transform (DCT), quantization and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good performance in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR) and processing time. The proposed partial encryption schemes are fast, secure and do not reduce the compression performance of the underlying compression methods.
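    The economy of partial encryption comes from ciphering only a prefix of the compressed stream: because compressed formats are highly interdependent, corrupting the head renders the rest undecodable. The sketch below only illustrates that structure, not the paper's pipeline: Python's standard library has no AES, so a toy SHA-256 counter keystream stands in for it (NOT cryptographically vetted), and zlib stands in for the DCT/arithmetic-coding stage.

```python
import hashlib
import zlib

def keystream(key, n):
    """Toy SHA-256 counter keystream (illustrative stand-in for AES-CTR; not for real use)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def partial_encrypt(data, key, fraction=0.25):
    """Compress, then encrypt only the leading fraction of the compressed bytes."""
    comp = zlib.compress(data)
    k = max(1, int(len(comp) * fraction))
    head = bytes(a ^ b for a, b in zip(comp[:k], keystream(key, k)))
    return head + comp[k:]

def partial_decrypt(blob, key, fraction=0.25):
    # XOR preserves length, so k is recomputed identically from the blob
    k = max(1, int(len(blob) * fraction))
    head = bytes(a ^ b for a, b in zip(blob[:k], keystream(key, k)))
    return zlib.decompress(head + blob[k:])

image_bytes = bytes(range(256)) * 64
cipher = partial_encrypt(image_bytes, b"secret-key")
plain = partial_decrypt(cipher, b"secret-key")
```

    Only the fraction of bytes actually XORed incurs cipher cost, which is where the reported speedup of partial schemes comes from.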

  18. A fractal derivative model for the characterization of anomalous diffusion in magnetic resonance imaging

    Science.gov (United States)

    Liang, Yingjie; Ye, Allen Q.; Chen, Wen; Gatto, Rodolfo G.; Colon-Perez, Luis; Mareci, Thomas H.; Magin, Richard L.

    2016-10-01

    Non-Gaussian (anomalous) diffusion is widespread in biological tissues, where its effects modulate chemical reactions and membrane transport. When viewed using magnetic resonance imaging (MRI), anomalous diffusion is characterized by a persistent or 'long tail' behavior in the decay of the diffusion signal. Recent MRI studies have used the fractional derivative to describe diffusion dynamics in normal and post-mortem tissue by connecting the order of the derivative with changes in tissue composition, structure and complexity. In this study we consider an alternative approach, introducing fractal time and space derivatives into Fick's second law of diffusion. This provides a more natural way to link sub-voxel tissue composition with the observed MRI diffusion signal decay following the application of a diffusion-sensitive pulse sequence. Unlike previous studies using fractional-order derivatives, here the fractal derivative order is directly connected to the Hausdorff fractal dimension of the diffusion trajectory. The result is a simpler, computationally faster, and more direct way to incorporate tissue complexity and microstructure into the diffusional dynamics. Furthermore, the results are readily expressed in terms of spectral entropy, which provides a quantitative measure of the overall complexity of the heterogeneous and multi-scale structure of biological tissues. As an example, we apply this new model to the characterization of diffusion in fixed samples of the mouse brain. These results are compared with those obtained using the mono-exponential, the stretched-exponential, the fractional-derivative, and the diffusion kurtosis models. Overall, we find that the order of the fractal time derivative, the diffusion coefficient, and the spectral entropy are potential biomarkers to differentiate between the microstructure of white and gray matter. In addition, we note that the fractal derivative model has practical advantages over the existing models from the

  19. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  20. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard, designed for a variety of applications, with support for both lossy and lossless modes. This paper provides an in-depth performance evaluation of JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation, with ten images of each organ tested. The performance of JPEG XR is compared with JPEG 2000 and JPEG-LS using mean square error, peak signal-to-noise ratio, mean absolute error and the structural similarity index. JPEG XR shows improvements of 20.73 dB and 5.98 dB over JPEG-LS and JPEG 2000 respectively for the various test images used in experimentation. (author)
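    Three of the four metrics used in the evaluation above have one-line definitions (SSIM is more involved and is omitted here). A minimal sketch over flat pixel sequences, with illustrative function names and made-up pixel values:

```python
import math

def mse(orig, recon):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def mae(orig, recon):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(orig, recon)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

original = [52, 55, 61, 59, 70]
decoded = [50, 56, 60, 61, 70]
quality_db = psnr(original, decoded)
```

    Note that for a truly lossless codec the reconstruction is bit-exact, so MSE is zero and PSNR is unbounded; dB comparisons like those in the abstract are meaningful for the lossy operating points.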

  1. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  2. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The architecture of the proposed coding scheme is hybrid: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can deliver promising compression performance while preserving the image quality level.
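    Quadtree spatial coding, the last-mentioned ingredient, recursively splits a square block into four quadrants until each node is (near-)uniform, so large flat regions collapse to single leaves while busy regions get subdivided. A minimal sketch, with hypothetical names and a tolerance parameter that is my own simplification of the paper's coding rule:

```python
def build_quadtree(img, x, y, size, tol=0):
    """Split a size x size block until the pixel range within a node is <= tol."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return ("leaf", sum(vals) / len(vals))  # a uniform region stores one mean value
    h = size // 2
    return ("node",
            build_quadtree(img, x,     y,     h, tol),
            build_quadtree(img, x + h, y,     h, tol),
            build_quadtree(img, x,     y + h, h, tol),
            build_quadtree(img, x + h, y + h, h, tol))

def count_leaves(node):
    return 1 if node[0] == "leaf" else sum(count_leaves(c) for c in node[1:])

flat = [[5] * 4 for _ in range(4)]   # uniform block: collapses to a single leaf
busy = [[9, 9, 0, 0],
        [9, 9, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]                # one bright quadrant: four uniform leaves
```

    The compression gain is the ratio of leaf count to pixel count; raising `tol` trades fidelity for fewer leaves, which is where quantization enters the hybrid scheme.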

  3. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
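    The automatic labels described above come from the joint entropy of a compressed/uncompressed image pair; for discrete pixel values this is just the Shannon entropy of the co-occurrence histogram. A small sketch of that computation (illustrative names and toy pixel data, not the Mastcam pipeline):

```python
import math
from collections import Counter

def joint_entropy(pixels_a, pixels_b):
    """Shannon entropy (bits) of the joint pixel-value distribution of two images."""
    pairs = Counter(zip(pixels_a, pixels_b))
    n = sum(pairs.values())
    return -sum((c / n) * math.log2(c / n) for c in pairs.values())

uncompressed = [0, 0, 128, 128, 255, 255]
lossless = list(uncompressed)           # identical copy
distorted = [0, 1, 120, 131, 250, 255]  # JPEG-like perturbation
h_same = joint_entropy(uncompressed, lossless)
h_diff = joint_entropy(uncompressed, distorted)
```

    When the two images are identical the joint entropy equals the entropy of either image alone; distortion spreads the joint histogram and raises it, which is what makes it usable as an automatic quality proxy.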

  4. Fractal and digital image processing to determine the degree of dispersion of carbon nanotubes

    International Nuclear Information System (INIS)

    Liang, Xiao-ning; Li, Wei

    2016-01-01

    The degree of dispersion is an important parameter to quantitatively study properties of carbon nanotube composites. Among the many methods for studying dispersion, scanning electron microscopy, transmission electron microscopy, and atomic force microscopy are the most commonly used, intuitive, and convincing methods. However, they have the disadvantage of not being quantitative. To overcome this disadvantage, the fractal theory and digital image processing method can be used to provide a quantitative analysis of the morphology and properties of carbon nanotube composites. In this paper, the dispersion degree of carbon nanotubes was investigated using two fractal methods, namely, the box-counting method and the differential box-counting method. On the basis of the results, we propose a new method for the quantitative characterization of the degree of dispersion of carbon nanotubes. This hierarchical grid method can be used as a supplementary method, and can be combined with the fractal calculation method. Thus, the accuracy and effectiveness of the quantitative characterization of the dispersion degree of carbon nanotubes can be improved. (The outer diameter of the carbon nanotubes is about 50 nm; the length of the carbon nanotubes is 10–20 μm.)

  5. Fractal and digital image processing to determine the degree of dispersion of carbon nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Xiao-ning, E-mail: xnliang0506@163.com; Li, Wei, E-mail: 1099006@mail.dhu.edu.cn, E-mail: liwei@dhu.edu.cn, E-mail: waiwentougao@outlook.com [Donghua University, College of Textiles (China)

    2016-05-15

    The degree of dispersion is an important parameter to quantitatively study properties of carbon nanotube composites. Among the many methods for studying dispersion, scanning electron microscopy, transmission electron microscopy, and atomic force microscopy are the most commonly used, intuitive, and convincing methods. However, they have the disadvantage of not being quantitative. To overcome this disadvantage, the fractal theory and digital image processing method can be used to provide a quantitative analysis of the morphology and properties of carbon nanotube composites. In this paper, the dispersion degree of carbon nanotubes was investigated using two fractal methods, namely, the box-counting method and the differential box-counting method. On the basis of the results, we propose a new method for the quantitative characterization of the degree of dispersion of carbon nanotubes. This hierarchical grid method can be used as a supplementary method, and can be combined with the fractal calculation method. Thus, the accuracy and effectiveness of the quantitative characterization of the dispersion degree of carbon nanotubes can be improved. (The outer diameter of the carbon nanotubes is about 50 nm; the length of the carbon nanotubes is 10–20 μm.)
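    The differential box-counting variant mentioned above extends plain box counting to grayscale images by treating intensity as a height and counting, per grid cell, how many gray-level boxes the local min-max range spans. A minimal sketch (my own simplified implementation, with the gray-level box height tied to the spatial box size as one common simple choice; the paper's exact parameterization may differ):

```python
import math

def dbc_dimension(img, base=2):
    """Differential box-counting dimension of a square grayscale image."""
    M = len(img)
    xs, ys, s = [], [], 1
    while s < M:
        h = s  # gray-level box height scaled with spatial box size (simplifying assumption)
        count = 0
        for by in range(0, M, s):
            for bx in range(0, M, s):
                block = [img[by + j][bx + i] for j in range(s) for i in range(s)]
                # number of gray-level boxes spanned by this block
                count += (max(block) // h) - (min(block) // h) + 1
        xs.append(math.log(M / s))
        ys.append(math.log(count))
        s *= base
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)

flat = [[7] * 8 for _ in range(8)]  # a constant image is a perfectly smooth surface
dim_flat = dbc_dimension(flat)      # smooth surface: dimension 2
```

    A flat image yields exactly 2 (a smooth surface), and rougher intensity surfaces push the estimate toward 3, which is what makes the measure usable as a dispersion statistic.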

  6. Heterogeneity of Glucose Metabolism in Esophageal Cancer Measured by Fractal Analysis of Fluorodeoxyglucose Positron Emission Tomography Image: Correlation between Metabolic Heterogeneity and Survival.

    Science.gov (United States)

    Tochigi, Toru; Shuto, Kiyohiko; Kono, Tsuguaki; Ohira, Gaku; Tohma, Takayuki; Gunji, Hisashi; Hayano, Koichi; Narushima, Kazuo; Fujishiro, Takeshi; Hanaoka, Toshiharu; Akutsu, Yasunori; Okazumi, Shinichi; Matsubara, Hisahiro

    2017-01-01

    Intratumoral heterogeneity is a well-recognized characteristic feature of cancer. The purpose of this study is to assess the heterogeneity of intratumoral glucose metabolism using fractal analysis, and to evaluate its prognostic value in patients with esophageal squamous cell carcinoma (ESCC). 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) studies of 79 patients who received curative surgery were evaluated. FDG-PET images were analyzed using fractal analysis software, where the differential box-counting method was employed to calculate the fractal dimension (FD) of the tumor lesion. The maximum standardized uptake value (SUVmax) and FD were compared with overall survival (OS). The median SUVmax and FD of ESCCs in this cohort were 13.8 and 1.95, respectively. In univariate analysis using Cox's proportional hazards model, T stage and FD showed significant associations with OS (p = 0.04 for T stage). Metabolic heterogeneity measured by fractal analysis can be a novel imaging biomarker for survival in patients with ESCC. © 2016 S. Karger AG, Basel.

  7. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Lo, S.C.; Huang, H.K.

    1986-01-01

    The full-frame bit-allocation algorithm for radiological image compression can achieve an acceptable compression ratio as high as 30:1. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transformed space with pixel depth kept accountable by a bit-allocation table. The cosine transform hardware design took an expandable modular approach based on the VME bus system with a maximum data transfer rate of 48 Mbytes/sec and a microprocessor (Motorola 68000 family). The modules are cascadable and microprogrammable to perform 1,024-point butterfly operations. A total of 18 stages would be required for transforming a 1,000 x 1,000 image. Multiplicative constants and addressing sequences are to be software loaded into the parameter buffers of each stage prior to streaming data through the processor stages. The compression rate for 1K x 1K images is expected to be faster than one image per sec

  8. Classification of radar echoes using fractal geometry

    International Nuclear Information System (INIS)

    Azzaz, Nafissa; Haddad, Boualem

    2017-01-01

    Highlights: • Two concepts of fractal geometry are implemented to classify two types of meteorological radar echoes. • A new approach, called the multi-scale fractal dimension, is used to discriminate between fixed echoes and rain echoes. • An automatic identification system for meteorological radar echoes is proposed using fractal geometry. - Abstract: This paper deals with the discrimination between precipitation echoes and ground echoes in meteorological radar images using fractal geometry, with the aim of improving the measurement of precipitation by weather radars. We considered three radar sites: Bordeaux (France), Dakar (Senegal) and Melbourne (USA). We show that the contourlet-based fractal dimension and the fractal lacunarity are pertinent for discriminating between ground and precipitation echoes. We also demonstrate that ground echoes have a multifractal structure, and that precipitation echoes are more homogeneous than ground echoes whatever the prevailing climate. We thereby developed an automatic radar echo classification system with a graphical interface. This interface, based on fractal geometry, makes identification of the radar echo type possible in real time. The system can be embedded in weather radars to improve precipitation estimates.
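    Lacunarity, the second discriminant named above, measures how "gappy" or clumped a pattern is: in the gliding-box formulation it is the ratio of the second moment of box masses to the squared first moment, equal to 1 for a perfectly homogeneous pattern and larger when mass is clustered. A minimal sketch on binary images (my own illustrative implementation, not the paper's):

```python
def lacunarity(img, r):
    """Gliding-box lacunarity of a binary image at box size r."""
    h, w = len(img), len(img[0])
    masses = [sum(img[y + j][x + i] for j in range(r) for i in range(r))
              for y in range(h - r + 1) for x in range(w - r + 1)]
    n = len(masses)
    m1 = sum(m for m in masses) / n          # mean box mass
    m2 = sum(m * m for m in masses) / n      # mean squared box mass
    return m2 / (m1 * m1)

homogeneous = [[1] * 4 for _ in range(4)]    # translation-invariant: lacunarity 1
clustered = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]                   # clumped mass: lacunarity > 1
lac_h = lacunarity(homogeneous, 2)
lac_c = lacunarity(clustered, 2)
```

    Ground clutter tends to be clumpier than rain fields at matched coverage, which is why lacunarity complements the fractal dimension as a classification feature.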

  9. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
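    The core move in CoGI is recoding a genomic sequence as a two-dimensional binary image before applying an image coder. The sketch below shows only that sequence-to-bitmap step with an illustrative 2-bit-per-base code; the actual CoGI encoding and its rectangular partition coder differ in detail and are not reproduced here:

```python
def genome_to_bitmap(seq, width):
    """Map a DNA string to rows of a binary image via a 2-bit-per-base code.

    The code (A=00, C=01, G=10, T=11) is an illustrative assumption,
    not the encoding specified by CoGI.
    """
    code = {"A": "00", "C": "01", "G": "10", "T": "11"}
    stream = "".join(code[b] for b in seq)
    rows = [stream[i:i + width] for i in range(0, len(stream), width)]
    if rows and len(rows[-1]) < width:
        rows[-1] = rows[-1].ljust(width, "0")  # zero-pad the last row to full width
    return rows

bitmap = genome_to_bitmap("ACGTACGTTT", 8)
```

    Once the sequence is a bitmap, repeated motifs become repeated two-dimensional patterns, which is what a rectangular partition coder (or any bi-level image compressor) can exploit.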

  10. MR imaging of medullary compression due to vertebral metastases

    International Nuclear Information System (INIS)

    Dooms, G.C.; Mathurin, P.; Maldague, B.; Cornelis, G.; Malghem, J.; Demeure, R.

    1987-01-01

    A prospective study was performed to assess the value of MR imaging for demonstrating medullary compression due to vertebral metastases in cancer patients clinically suspected of presenting with that complication. Twenty-five consecutive unselected patients were studied, and the MR imaging findings were confirmed by myelography, CT, and/or surgical and autopsy findings for each patient. The MR examinations were performed with a superconducting magnet (Philips Gyroscan S15) operating at 0.5 T. MR imaging demonstrated the metastases (single or multiple) mainly on T1-weighted images (TR = 0.45 sec and TE = 20 msec). Soft-tissue tumoral mass and/or deformity of a vertebral body secondary to metastasis, compressing the spinal cord, was equally well demonstrated on T1- and heavily T2-weighted images (TR = 1.65 sec and TE = 100 msec). In the sagittal plane, MR imaging demonstrated the exact level of the compression (one or multiple levels) and its full extent. In conclusion, MR is the first-choice imaging modality for studying cancer patients with clinically suspected medullary compression and obviates the need for more invasive procedures

  11. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking artifacts at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST's. The performance analysis criteria include time and space complexity and the quality of the decompressed image, the latter determined from rate-distortion data obtained on a database of realistic test images. The discussion also covers issues such as the robustness of the compressed format to channel noise. EBLAST has been shown to perform superiorly to JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  12. Fractal aspects of the flow and shear behaviour of free-flowable particle size fractions of pharmaceutical directly compressible excipient sorbitol.

    Science.gov (United States)

    Hurychová, Hana; Lebedová, Václava; Šklubalová, Zdenka; Dzámová, Pavlína; Svěrák, Tomáš; Stoniš, Jan

    The flowability of powder excipients is directly influenced by their size and shape, although the influence of granulometry on the flow and shear behaviour of particulate matter is not studied frequently. In this work, the influence of particle size on the mass flow rate through the orifice of a conical hopper, on cohesion, and on the flow function was studied for four free-flowable size fractions of sorbitol for direct compression in the range of 0.080-0.400 mm. The particles were granulometrically characterized using optical microscopy; a boundary fractal dimension of 1.066 was estimated for the regular sorbitol particles. In the particle size range studied, a non-linear relationship between the mean particle size and the mass flow rate Q10 (g/s) was detected, with a maximum at the 0.245 mm fraction. The superior flow properties of this fraction were verified with a Jenike shear tester, as it showed the highest value of the flow function and the lowest cohesion. The results of this work show the importance of the right choice of excipient particle size to achieve the best flow behaviour of a particulate material. Key words: flowability; size fraction; sorbitol for direct compaction; Jenike shear tester; fractal dimension.

  13. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping most details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for neutron radiation images. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent neutron radiation images sparsely. Then, a block-based CS technique is introduced and a high-performance CS scheme for neutron radiation images is proposed. The L1-norm minimization problem is solved with a two-step iterative shrinkage algorithm to reconstruct the neutron radiation image from random measurements. The experimental results demonstrate that the scheme not only markedly improves the quality of the reconstructed image but also retains more details of the original image.
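
    The reconstruction stage can be illustrated with a minimal stand-in: plain iterative shrinkage-thresholding (ISTA) on a synthetic sparse signal. This is a hedged sketch only; the paper's WDFB sparsifying transform and its specific two-step solver are not reproduced, and the problem sizes below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 128, 64, 5                 # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = A @ x                                      # compressed measurements

    def ista(A, y, lam=0.01, iters=2000):
        """Iterative shrinkage for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        xh = np.zeros(A.shape[1])
        for _ in range(iters):
            g = xh - (A.T @ (A @ xh - y)) / L                  # gradient step
            xh = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
        return xh

    x_hat = ista(A, y)
    rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)  # small for sparse x
    ```

    In the scheme described above, `x` would be the block's WDFB coefficient vector rather than the signal itself.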

  14. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P ... successful application of wavelets in image com- ... b) Soft threshold: In this case, all the coefficients x ..... [8] http://www.jpeg.org Official site of the Joint Photographic Experts Group.

  15. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
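
    The scramble-then-XOR stage can be sketched in miniature. This is a hedged toy version only: the LWT sparsification and the ghost-imaging measurement are omitted, and the key handling below is an invented simplification of the paper's key-group scheme.

    ```python
    import numpy as np

    def encrypt(img, key):
        rng = np.random.default_rng(key)
        perm = rng.permutation(img.size)                 # scrambling operation
        stream = rng.integers(0, 256, img.size, dtype=np.uint8)  # XOR keystream
        return (img.reshape(-1)[perm] ^ stream), perm, stream

    def decrypt(cipher, perm, stream, shape):
        scrambled = cipher ^ stream                      # undo the XOR
        plain = np.empty_like(scrambled)
        plain[perm] = scrambled                          # undo the permutation
        return plain.reshape(shape)

    img = np.arange(64, dtype=np.uint8).reshape(8, 8)    # toy plaintext image
    cipher, perm, stream = encrypt(img, key=42)
    restored = decrypt(cipher, perm, stream, img.shape)
    assert np.array_equal(restored, img)                 # exact recovery
    ```

    In the full scheme the ciphertext would additionally pass through the compressive ghost-imaging measurement before detection.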

  16. Bony change of apical lesion healing process using fractal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ji Min; Park, Hyok; Jeong, Ho Gul; Kim, Kee Deog; Park, Chang Seo [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2005-06-15

    To investigate, by fractal analysis, the change in the bone healing process after endodontic treatment of teeth with an apical lesion. Radiographic images of 35 teeth from 33 patients, taken at first diagnosis and at 6 months and 1 year after endodontic treatment, were selected. The radiographs were taken with the JUPITER computerized dental X-ray system. Fractal dimensions were calculated three times at each area with the Scion Image PC program; rectangular regions of interest (30 x 30) were selected at the apical lesion and at the normal apex of each image. The fractal dimension at the apical lesion at first diagnosis (L0) was 0.940 ± 0.361 and that of the normal area (N0) was 1.186 ± 0.727 (p<0.05). The fractal dimension at the apical lesion 6 months after endodontic treatment (L1) was 1.076 ± 0.069 and that of the normal area (N1) was 1.192 ± 0.055 (p<0.05). The fractal dimension at the apical lesion 1 year after endodontic treatment (L2) was 1.163 ± 0.074 and that of the normal area (N2) was 1.225 ± 0.079 (p<0.05). After endodontic treatment, the fractal dimensions at the apical lesions showed statistically significant differences over time, and the differences between the normal area and the apical lesion at first diagnosis, 6 months, and 1 year were also statistically significant, though they grew smaller with time. The prognosis after endodontic treatment of an apical lesion is estimated from bone regeneration in the apical region. Fractal analysis was attempted to overcome the limits of subjective reading; as a result, the change in bone during the healing process could be detected objectively and quantitatively.
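
    A box-counting estimator of the kind behind such fractal-dimension measurements can be sketched as follows. This is a hedged stand-in: the exact estimator used by the Scion Image software is not shown, and the solid 64 x 64 test region below is illustrative (a filled region has dimension 2).

    ```python
    import numpy as np

    def box_counting_dimension(img):
        """Estimate the fractal dimension of a square binary image."""
        sizes = [1, 2, 4, 8, 16]
        counts = []
        n = img.shape[0]
        for s in sizes:
            # count boxes of side s containing at least one foreground pixel
            boxed = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
            counts.append(boxed.sum())
        # dimension = negative slope of log(count) vs. log(box size)
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    solid = np.ones((64, 64), dtype=bool)
    d = box_counting_dimension(solid)   # a filled region gives D close to 2
    ```

    On a thresholded radiographic ROI, lower trabecular density yields fewer occupied boxes at fine scales and hence a lower estimated dimension, matching the lesion-versus-normal contrast reported above.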

  17. Medical image compression by using three-dimensional wavelet transformation

    International Nuclear Information System (INIS)

    Wang, J.; Huang, H.K.

    1996-01-01

    This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) images that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within two-dimensional (2-D) slices and a second filter bank in the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices: the pixel distances within a slice are normally less than 1 mm, while the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors apply various filter banks in the slice direction and compare the compression results. The results from 12 selected MR and CT image sets at various slice thicknesses show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set with a 1 mm slice distance. Compared with 2-D wavelet compression, the compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal-to-noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance.
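
    The separable idea can be sketched with one level of an orthonormal Haar transform applied along each axis of a volume. This is a hedged simplification: the paper pairs a richer in-slice filter bank with Haar in the slice direction, whereas the sketch uses Haar everywhere.

    ```python
    import numpy as np

    def haar1d(a, axis):
        """One level of the orthonormal Haar transform along one axis."""
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation coefficients
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

    def ihaar1d(c, axis):
        """Inverse of haar1d along the same axis."""
        c = np.moveaxis(c, axis, 0)
        h = c.shape[0] // 2
        lo, hi = c[:h], c[h:]
        out = np.empty_like(c)
        out[0::2] = (lo + hi) / np.sqrt(2)
        out[1::2] = (lo - hi) / np.sqrt(2)
        return np.moveaxis(out, 0, axis)

    vol = np.random.default_rng(0).standard_normal((8, 16, 16))  # slices, y, x
    coeffs = vol
    for ax in (1, 2, 0):       # 2-D transform within slices, then slice axis
        coeffs = haar1d(coeffs, ax)
    for ax in (0, 2, 1):       # invert
        coeffs = ihaar1d(coeffs, ax)
    assert np.allclose(coeffs, vol)   # the transform is perfectly invertible
    ```

    Because the transforms act on separate axes, the in-slice filter bank and the slice-direction filter bank can be chosen independently, which is exactly the degree of freedom the paper exploits.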

  18. Single exposure optically compressed imaging and visualization using random aperture coding

    Energy Technology Data Exchange (ETDEWEB)

    Stern, A [Electro Optical Unit, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Rivenson, Yair [Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Javidi, Bahram [Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157 (United States)], E-mail: stern@bgu.ac.il

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured, and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements a random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured, which can be decoded by a proper decoding algorithm.

  19. A JPEG backward-compatible HDR image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content are, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.

  20. Dictionary Approaches to Image Compression and Reconstruction

    Science.gov (United States)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete-time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation of the image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) to the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.

  1. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

    Differences in the file sizes of scanned whole slide images (WSI) deemed "visually lossless" were significant. If we set the Hamamatsu NanoZoomer .NDPI file size (using its default "jpeg80" quality) as 100%, the size of a "visually lossless" JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs at their default settings) yielded similar results. A further optimization of JPEG2000 was achieved by treating the empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000 sizes; the variation was due to the proportion of empty slide area in the scan. We anticipate that wavelet-based image compression methods such as JPEG2000 have a significant advantage in saving the storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, the absolute cost savings can be substantial.

  2. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image, and the spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other modalities. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data. (author)
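
    The prediction-plus-residual idea at the heart of such lossless codecs can be sketched in miniature. This is a hedged stand-in only: a trivial left-neighbour predictor replaces the paper's neural-network predictor, and zlib replaces its entropy encoder.

    ```python
    import numpy as np
    import zlib

    def encode(img):
        residual = img.copy()
        residual[:, 1:] = img[:, 1:] - img[:, :-1]   # modular (uint8) residual
        return zlib.compress(residual.tobytes())

    def decode(blob, shape):
        residual = np.frombuffer(zlib.decompress(blob), np.uint8).reshape(shape)
        img = np.zeros(shape, dtype=np.uint8)
        img[:, 0] = residual[:, 0]
        for j in range(1, shape[1]):                 # undo the prediction
            img[:, j] = img[:, j - 1] + residual[:, j]
        return img

    rng = np.random.default_rng(0)
    # smooth synthetic image: horizontally neighbouring pixels are correlated
    img = np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1).astype(np.uint8)
    blob = encode(img)
    assert np.array_equal(decode(blob, img.shape), img)   # lossless round trip
    assert len(blob) < len(zlib.compress(img.tobytes()))  # residuals pack tighter
    ```

    A better predictor concentrates the residuals around zero, which is what lets the entropy coder achieve the compression rates quoted above.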

  3. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed method achieves a higher recognition rate while requiring less recognition time in the compressed domain.

  4. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC-CCSDS-based algorithm has better compression performance than traditional compression approaches.

  5. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites, where resources such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC-CCSDS-based algorithm has better compression performance than traditional compression approaches.

  6. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge, and compression is a very useful and effective technique to reduce their size. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
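
    The split treatment of tissue and background can be illustrated with a minimal sketch. Assumptions are hedged: zlib stands in for the paper's lossless coder, and the "lossy" branch simply discards the empty background and refills it with a constant white level.

    ```python
    import numpy as np
    import zlib

    def compress_roi(img, mask):
        """Losslessly compress tissue pixels; drop the empty background."""
        data = zlib.compress(img[mask].tobytes())     # tissue pixels only
        mblob = zlib.compress(mask.tobytes())         # the ROI mask itself
        return data, mblob

    def decompress_roi(blobs, shape, fill=255):
        data, mblob = blobs
        mask = np.frombuffer(zlib.decompress(mblob), dtype=bool).reshape(shape)
        img = np.full(shape, fill, dtype=np.uint8)    # background: uniform white
        img[mask] = np.frombuffer(zlib.decompress(data), dtype=np.uint8)
        return img, mask

    rng = np.random.default_rng(0)
    img = np.full((64, 64), 255, dtype=np.uint8)      # mostly empty slide
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:30, 10:30] = True                         # tissue region
    img[mask] = rng.integers(0, 255, mask.sum(), dtype=np.uint8)

    restored, m = decompress_roi(compress_roi(img, mask), img.shape)
    assert np.array_equal(restored[mask], img[mask])  # tissue is bit-exact
    ```

    Because diagnosis relies only on the tissue region, the round trip is lossless exactly where it must be, while the background costs almost nothing to store.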

  7. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available To address image compression in wireless multimedia sensor networks (WMSNs) with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is developed: camera nodes capture images and send them to ordinary nodes, which compress them with an NMF algorithm; the cluster head node collects the compressed images from the ordinary nodes and transmits them to the base station, which performs image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves higher quality of the recovered images and lower total node energy consumption. It is beneficial for reducing the energy consumption burden and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
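
    The compression step can be sketched with rank-k NMF via the standard Lee-Seung multiplicative updates. This is a hedged sketch: the paper's exact NMF variant and node protocol are not shown, and the rank-4 synthetic "image block" below is illustrative.

    ```python
    import numpy as np

    def nmf(V, k, iters=500, seed=0):
        """Rank-k non-negative factorization V ~ W @ H (multiplicative updates)."""
        rng = np.random.default_rng(seed)
        W = rng.random((V.shape[0], k)) + 0.1
        H = rng.random((k, V.shape[1])) + 0.1
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # updates preserve
            W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # non-negativity
        return W, H

    rng = np.random.default_rng(1)
    V = rng.random((32, 4)) @ rng.random((4, 32))   # low-rank "image" block
    W, H = nmf(V, k=4)
    rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    stored = W.size + H.size     # the node transmits W and H, not V
    ```

    Here `stored` is 256 values against 1024 in the original block, so a node transmits a quarter of the data while the station rebuilds the block as `W @ H`.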

  8. High speed fluorescence imaging with compressed ultrafast photography

    Science.gov (United States)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, state-of-the-art fluorescence lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography allows us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.

  9. Applications of fractals in ecology.

    Science.gov (United States)

    Sugihara, G.; May, R.M.

    1990-03-01

    Fractal models describe the geometry of a wide variety of natural objects such as coastlines, island chains, coral reefs, satellite ocean-color images and patches of vegetation. Cast in the form of modified diffusion models, they can mimic natural and artificial landscapes having different types of complexity of shape. This article provides a brief introduction to fractals and reports on how they can be used by ecologists to answer a variety of basic questions about scale, measurement and hierarchy in ecological systems. Copyright © 1990. Published by Elsevier Ltd.

  10. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, achieving high-quality image compression that is robust to noise and corruption.

  11. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced in many ways, and one of them is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis: four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression, whereas the prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression thus gave dose levels similar to conventional compression and lower than prone compression, with image quality similar to conventional compression and judged to be better than in the prone position.

  12. COMPRESSING BIOMEDICAL IMAGE BY USING INTEGER WAVELET TRANSFORM AND PREDICTIVE ENCODER

    OpenAIRE

    Anushree Srivastava*, Narendra Kumar Chaurasia

    2016-01-01

    Image compression has become an important process in today's world of information exchange, helping the effective utilization of high-speed network resources. Medical image compression has an important role in the medical field because such images are kept for future reference of patients. Medical data are compressed in such a way that the diagnostic capabilities are not compromised and no medical information is lost. Medical imaging poses the great challenge of having compression algorithms that redu...

  13. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

    Full text: Compression algorithms have been mooted to minimize the storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on the image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. The process was repeated on the original bitmap using wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting, and were read blind, by consensus of two experienced nuclear medicine physicians, using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmaps for visual comparison of differences. The average compression ratio was 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images, and no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image, or the wavelet image compared with the original bitmap. Use of the high-compression algorithms described had no significant impact on the reporting reproducibility or subjective image quality of cardiac Sestamibi perfusion studies.

  14. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.
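
    The reference-band idea can be illustrated with a minimal sketch (not the 3D-MBLP predictor itself): each band is predicted from the previous one by a least-squares gain, and only the residual would go to the entropy coder. The correlated synthetic bands below are an assumption for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    base = rng.random((32, 32))
    # strongly correlated bands, as in real hyperspectral cubes
    bands = np.stack([base * g + rng.normal(0, 0.01, base.shape)
                      for g in (1.0, 0.9, 0.8)])

    residual_var, raw_var = [], []
    for i in range(1, bands.shape[0]):
        prev, cur = bands[i - 1].ravel(), bands[i].ravel()
        gain = (prev @ cur) / (prev @ prev)      # least-squares predictor
        residual = cur - gain * prev             # what would be entropy coded
        residual_var.append(residual.var())
        raw_var.append(cur.var())

    # the residuals carry far less energy than the bands themselves
    assert all(r < v / 10 for r, v in zip(residual_var, raw_var))
    ```

    Richer predictors such as 3D-MBLP generalize this by combining several reference bands and local spatial context, shrinking the residual entropy further.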

  15. Map of fluid flow in fractal porous medium into fractal continuum flow.

    Science.gov (United States)

    Balankin, Alexander S; Elizarraraz, Benjamin Espinoza

    2012-05-01

    This paper is devoted to fractal continuum hydrodynamics and its application to model fluid flows in fractally permeable reservoirs. Hydrodynamics of fractal continuum flow is developed on the basis of a self-consistent model of fractal continuum employing vector local fractional differential operators allied with the Hausdorff derivative. The generalized forms of Green-Gauss and Kelvin-Stokes theorems for fractional calculus are proved. The Hausdorff material derivative is defined and the form of Reynolds transport theorem for fractal continuum flow is obtained. The fundamental conservation laws for a fractal continuum flow are established. The Stokes law and the analog of Darcy's law for fractal continuum flow are suggested. The pressure-transient equation accounting for the fractal metric of fractal continuum flow is derived. The generalization of the pressure-transient equation accounting for the fractal topology of fractal continuum flow is proposed. The mapping of fluid flow in a fractally permeable medium into a fractal continuum flow is discussed. It is stated that the spectral dimension of the fractal continuum flow d(s) is equal to its mass fractal dimension D, even when the spectral dimension of the fractally porous or fissured medium is less than D. A comparison of the fractal continuum flow approach with other models of fluid flow in fractally permeable media and the experimental field data for reservoir tests is provided.
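
    The Hausdorff derivative on which such local fractional operators are commonly built can be written as (a standard textbook form, not necessarily the paper's exact notation):

$$\frac{df}{dx^{\alpha}} = \lim_{x' \to x} \frac{f(x') - f(x)}{x'^{\alpha} - x^{\alpha}} = \frac{1}{\alpha\, x^{\alpha-1}} \frac{df}{dx},$$

    where $\alpha$ is the fractal (Hausdorff) dimension associated with the coordinate $x$; for $\alpha = 1$ it reduces to the ordinary derivative.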

  16. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. In particular, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference (2000), we proposed to use a constructive One-Hidden-Layer Feedforward Neural Network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise for network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or another specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.
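
    The abstract does not define the three correlation-like similarity criteria; the classic normalized cross-correlation is one such measure and illustrates the idea (a minimal sketch, not the paper's criteria):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size images given as flat
    lists of intensities. Returns 1.0 for identical images, -1.0 for exactly
    anti-correlated ones, and values near 0 for unrelated ones."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)
```

    Such a score could be compared against a threshold to decide whether a trained network is likely to generalize well to a new frame.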

  17. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  18. Fractal Characteristics of Rock Fracture Surface under Triaxial Compression after High Temperature

    Directory of Open Access Journals (Sweden)

    X. L. Xu

    2016-01-01

    Full Text Available Scanning Electron Microscopy (SEM) tests on 30 pieces of fractured granite have been performed using an S250MK III SEM under triaxial compression at different temperatures (25~1000°C) and confining pressures (0~40 MPa). Research results show that (1) the change of the fractal dimension (FD) of rock fracture with temperature is closely related to confining pressure, and can be divided into two categories. In the first category, when confining pressure is in 0~30 MPa, FD fits a cubic polynomial curve with temperature, reaching the maximum at 600°C. In the second category, when confining pressure is in 30~40 MPa, FD fluctuates with temperature. (2) The FD of rock fracture also varies with confining pressure and is closely related to the temperature, and can be divided into three categories. In the first category, FD fluctuates with confining pressure at 25°C, 400°C, and 800°C. In the second category, it increases exponentially at 200°C and 1000°C. In the third category, it decreases exponentially at 600°C. (3) It is found that 600°C is the critical temperature and 30 MPa is the critical confining pressure of granite. The rock transitions from the brittle to the plastic phase when temperature exceeds 600°C and confining pressure exceeds 30 MPa.

  19. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage has been one of the topics which attract the research community to work on. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using the moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to have a clear assessment of the said technique. The proposed method has been tested on 35 images with varying resolution and has been compared with the existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR (which determines the quality of the image), and computational complexity. (author)

  20. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise suppression imaging technique will have great applications in remote sensing and security areas.
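
    As a rough illustration of the baseline being improved upon, traditional correlational ghost imaging reconstructs the object by correlating random illumination patterns with bucket-detector values; the optional threshold below mimics, in highly simplified form, the idea of discarding noise-dominated measurements. All names and parameters here are illustrative assumptions, not the authors' implementation:

```python
import random

def ghost_image(obj, n_patterns=4000, noise=0.0, threshold=None, seed=1):
    """Traditional correlational ghost imaging sketch.
    obj: list of 0/1 transmittance values.
    Returns the recovered image as cov(bucket, pattern) per pixel."""
    rng = random.Random(seed)
    npix = len(obj)
    patterns, buckets = [], []
    for _ in range(n_patterns):
        p = [rng.random() for _ in range(npix)]          # random speckle pattern
        b = sum(pi * oi for pi, oi in zip(p, obj))       # bucket (total) signal
        b += rng.gauss(0, noise)                         # additive detector noise
        if threshold is not None and b < threshold:
            continue  # "modified" flavour: drop noise-dominated buckets
        patterns.append(p)
        buckets.append(b)
    m = len(buckets)
    mb = sum(buckets) / m
    mp = [sum(p[i] for p in patterns) / m for i in range(npix)]
    return [sum(p[i] * b for p, b in zip(patterns, buckets)) / m - mp[i] * mb
            for i in range(npix)]
```

    Pixels where the object is transparent correlate positively with the bucket signal and therefore stand out in the reconstruction.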

  1. FRACTAL DIMENSION OF URBAN EXPANSION BASED ON REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    IACOB I. CIPRIAN

    2012-11-01

    Full Text Available Fractal Dimension of Urban Expansion Based on Remote Sensing Images: In the city of Cluj-Napoca the process of urbanization has accelerated over the years, and the involvement of local authorities reflects a relevant planning policy. A good urban planning framework should take into account the demands of society and also satisfy the natural conditions of the local environment. The expansion of anthropic areas can be approached by introducing 5D variables into the process (time as a sequence of stages; space with x, y, z; and the magnitude of the phenomena), which allows us to analyse and extract the roughness of the city shape. Thus, to improve the decision factor, we take a different approach in this paper, looking at geometry and scale composition. Using remote sensing (RS) and GIS techniques we extracted a sequence of built-up areas (from 1980 to 2012) and used the result as an input for modelling the spatial-temporal changes of urban expansion, with fractal theory used to analyse the geometric features. Taking time as a parameter, we can observe behaviour and changes in the urban landscape; this condition is known as self-organized: in the first stage the system was without any turbulence (before the anthropic factor), and over time it tends to approach chaotic behaviour (an entropy state) without causing a disequilibrium in the main system.

  2. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
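
    Steps (1) and (4) of the pipeline above, a blockwise DCT and delta coding of the DC components, can be sketched as follows (a minimal illustration; the high-frequency minimization, lookup table, and arithmetic coder are omitted):

```python
import math

def dct2(block):
    """2D DCT-II of an n x n block (list of lists), orthonormal scaling."""
    n = len(block)
    def c(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                    for i in range(n) for j in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

def delta_encode(dcs):
    """Differential coding of the per-block DC components (step 4)."""
    return [dcs[0]] + [b - a for a, b in zip(dcs, dcs[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

    For a constant 4x4 block of value 5 the DC coefficient is 20 (n times the value under this scaling) and every AC coefficient is zero, which is why DC components of neighbouring blocks are so amenable to delta coding.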

  3. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. 
The reconfigurability (by means of reprogramming) of

  4. Diffusion tensor imaging in spinal cord compression

    International Nuclear Information System (INIS)

    Wang, Wei; Qin, Wen; Hao, Nanxin; Wang, Yibin; Zong, Genlin

    2012-01-01

    Background Although diffusion tensor imaging has been successfully applied in brain research for decades, several main difficulties have hindered its extended utilization in spinal cord imaging. Purpose To assess the feasibility and clinical value of diffusion tensor imaging and tractography for evaluating chronic spinal cord compression. Material and Methods Single-shot spin-echo echo-planar DT sequences were scanned in 42 spinal cord compression patients and 49 healthy volunteers. The mean values of the apparent diffusion coefficient and fractional anisotropy were measured in regions of interest at the cervical and lower thoracic spinal cord. The patients were divided into two groups according to the high signal on T2WI (the SCC-HI group and the SCC-nHI group for with or without high signal). A one-way ANOVA was used. Diffusion tensor tractography was used to visualize the morphological features of normal and impaired white matter. Results There were no statistically significant differences in the apparent diffusion coefficient and fractional anisotropy values between the different spinal cord segments of the normal subjects. All of the patients in the SCC-HI group had increased apparent diffusion coefficient values and decreased fractional anisotropy values at the lesion level compared to the normal controls. However, there were no statistically significant diffusion index differences between the SCC-nHI group and the normal controls. In the diffusion tensor imaging maps, the normal spinal cord sections were depicted as fiber tracts that were color-encoded to a cephalocaudal orientation. The diffusion tensor images were compressed to different degrees in all of the patients. Conclusion Diffusion tensor imaging and tractography are promising methods for visualizing spinal cord tracts and can provide additional information in clinical studies in spinal cord compression.

  5. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    Science.gov (United States)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors creates a need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
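
    The triangular prism method mentioned above estimates a surface's fractal dimension from how its triangulated surface area changes with grid step size. A minimal sketch follows (not ICAMS code; the regression convention D = 2 - slope of log area versus log squared step is one common variant, stated here as an assumption):

```python
import math

def tri_area(p, q, r):
    """Area of a 3D triangle via the cross product."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def prism_surface_area(z, s):
    """Total triangular-prism surface area of elevation grid z at step s."""
    n = len(z) - 1
    total = 0.0
    for i in range(0, n, s):
        for j in range(0, n, s):
            a = (i, j, z[i][j])
            b = (i + s, j, z[i + s][j])
            c = (i + s, j + s, z[i + s][j + s])
            d = (i, j + s, z[i][j + s])
            # prism apex: cell centre at the mean corner elevation
            e = (i + s / 2, j + s / 2, (a[2] + b[2] + c[2] + d[2]) / 4)
            total += (tri_area(a, b, e) + tri_area(b, c, e)
                      + tri_area(c, d, e) + tri_area(d, a, e))
    return total

def triangular_prism_dimension(z, steps=(1, 2, 4, 8)):
    """D = 2 - slope of log(area) versus log(step^2)."""
    xs = [math.log(s * s) for s in steps]
    ys = [math.log(prism_surface_area(z, s)) for s in steps]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2 - slope
```

    For a perfectly flat surface the triangulated area is the same at every step, the slope is zero, and the estimate is exactly 2, as expected for a non-fractal plane; rougher surfaces gain area at finer steps and score between 2 and 3.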

  6. Near-Field Optical Microscopy of Fractal Structures

    DEFF Research Database (Denmark)

    Coello, Victor; Bozhevolnyi, Sergey I.

    1999-01-01

    Using a photon scanning tunnelling microscope combined with a shear-force feedback system, we acquire both topographical and near-field optical images (at the wavelengths of 633 and 594 nm) of silver colloid fractals. Near-field optical imaging is calibrated with a standing evanescent wave pattern...

  7. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system, which incorporates sensitivities of the HVS coded with SPIHT quantization, is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments were done on both monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed with our technique showed improvement in visual quality and the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance perceptually and quantitatively when compared to the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. Results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time when compared with JPEG2000 (Kakadu).

  8. Cellular automata codebooks applied to compact image compression

    Directory of Open Access Journals (Sweden)

    Radu DOGARU

    2006-12-01

    Full Text Available Emergent computation in semi-totalistic cellular automata (CA) is used to generate a set of basis functions (a codebook). Such codebooks are convenient for simple and circuit-efficient compression schemes based on binary vector quantization, applied to the bitplanes of any monochrome or color image. Encryption is also naturally included using these codebooks. Natural images would require less than 0.5 bits per pixel (bpp) while the quality of the reconstructed images is comparable with traditional compression schemes. The proposed scheme is attractive for low-power, sensor-integrated applications.

  9. Tumor cells diagnostic through fractal dimensions

    International Nuclear Information System (INIS)

    Timbo, Christiano dos Santos

    2004-01-01

    This method relies on the application of an algorithm for the quantitative and statistical differentiation of a sample of cells stricken by a certain kind of pathology and a sample of healthy cells. This differentiation is made by applying the principles of fractal dimension to digital images of the cells. The algorithm was developed using the concepts of Object-Oriented Programming, resulting in a simple code, divided into 5 distinct procedures, and a user-friendly interface. To obtain the fractal dimension of the images of the cells, the program processes the image, extracting its border, and uses it to characterize the complexity of the form of the cell in a quantitative way. In order to validate the code, a digitized image from an article by W. Bauer, developer of an analogous method, was used. The result showed a difference of 6% between the value obtained by Bauer and the value obtained by the algorithm developed in this work. (author)

  10. Fractal Bread.

    Science.gov (United States)

    Esbenshade, Donald H., Jr.

    1991-01-01

    Develops the idea of fractals through a laboratory activity that calculates the fractal dimension of ordinary white bread. Extends use of the fractal dimension to compare other complex structures, such as other breads and sponges. (MDH)
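
    The abstract does not say how the dimension is computed; box counting is the usual classroom method for such activities, and a minimal sketch for a set of image pixels is (an illustration, not the article's procedure):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension for a set of integer
    (x, y) points: the slope of log N(s) versus log(1/s), where N(s) is
    the number of s-by-s boxes the set touches."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1 / s))
        ys.append(math.log(len(boxes)))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    A straight line of pixels scores 1 and a filled square scores 2; a bread slice's porous cross-section falls in between, which is what the activity measures.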

  11. Fractal analysis of bone architecture at distal radius

    International Nuclear Information System (INIS)

    Tomomitsu, Tatsushi; Mimura, Hiroaki; Murase, Kenya; Sone, Teruki; Fukunaga, Masao

    2005-01-01

    Bone strength depends on bone quality (architecture, turnover, damage accumulation, and mineralization) as well as bone mass. In this study, human bone architecture was analyzed using fractal image analysis, and the clinical relevance of this method was evaluated. The subjects were 12 healthy female controls and 16 female patients suspected of having osteoporosis (age range, 22-70 years; mean age, 49.1 years). High-resolution CT images of the distal radius were acquired and analyzed using a peripheral quantitative computed tomography (pQCT) system. On the same day, bone mineral densities of the lumbar spine (L-BMD), proximal femur (F-BMD), and distal radius (R-BMD) were measured by dual-energy X-ray absorptiometry (DXA). We examined the correlation between the fractal dimension and six bone mass indices. Subjects diagnosed with osteopenia or osteoporosis were divided into two groups (with and without vertebral fracture), and we compared measured values between these two groups. The fractal dimension correlated most closely with L-BMD (r=0.744). The coefficient of correlation between the fractal dimension and L-BMD was very similar to the coefficient of correlation between L-BMD and F-BMD (r=0.783) and the coefficient of correlation between L-BMD and R-BMD (r=0.742). The fractal dimension was the only measured value that differed significantly between both the osteopenic and the osteoporotic subjects with and without vertebral fracture. The present results suggest that the fractal dimension of the distal radius can be reliably used as a bone strength index that reflects bone architecture as well as bone mass. (author)

  12. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    Science.gov (United States)

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  13. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on-board satellites and need to be transferred to the earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have been traditionally preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data-rate of the new-generation sensors is making the need for higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with a good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compressing hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  14. Dynamic CT perfusion image data compression for efficient parallel processing.

    Science.gov (United States)

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A

    2016-03-01

    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
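
    The paper's algorithm is GPU-oriented and keeps random read access into the compressed data; as a much simpler CPU-side illustration of exploiting temporal redundancy losslessly (without the random-access property), one can delta-code frames along time and deflate the residuals:

```python
import zlib

def compress_ctp(frames):
    """Lossless temporal compression sketch: store the first frame raw and
    each later frame as byte-wise modular differences from its predecessor,
    then deflate each stream. frames: list of equal-length bytes objects."""
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(bytes((c - p) % 256 for p, c in zip(prev, cur)))
    return [zlib.compress(d) for d in deltas]

def decompress_ctp(blobs):
    deltas = [zlib.decompress(b) for b in blobs]
    frames = [deltas[0]]
    for d in deltas[1:]:
        prev = frames[-1]
        frames.append(bytes((p + x) % 256 for p, x in zip(prev, d)))
    return frames
```

    Because perfusion changes slowly between consecutive time points, the delta streams are dominated by small values and compress far better than the raw frames, while reconstruction is bit-exact.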

  15. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this means 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme makes it possible to achieve compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly for the new entropy coder and the features that allow other applications than fingerprint image compression. Finally, we will compare the performances of the new encoder to those of the first encoder.
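
    A WSQ-style pipeline chains a wavelet transform, scalar quantization, and entropy coding; the first two stages can be illustrated with a one-level Haar transform and a uniform quantizer (a toy sketch, not the 9/7 filters or 64-subband structure of the actual standard):

```python
def haar1d(x):
    """One level of the Haar wavelet transform on an even-length signal:
    returns (averages, differences) of adjacent sample pairs."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    dif = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, dif

def quantize(coeffs, step):
    """Uniform scalar quantization; larger steps discard more detail."""
    return [round(c / step) for c in coeffs]

def dequantize(q, step):
    return [v * step for v in q]

def inv_haar1d(avg, dif):
    """Inverse transform: rebuild each adjacent pair from (avg, dif)."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out
```

    In a real codec the detail (difference) subbands receive coarser quantization steps, which is exactly where the bit-allocation question discussed in the abstract arises.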

  16. The Role of Resolution in the Estimation of Fractal Dimension Maps From SAR Data

    Directory of Open Access Journals (Sweden)

    Gerardo Di Martino

    2017-12-01

    Full Text Available This work is aimed at investigating the role of resolution in fractal dimension map estimation, analyzing the role of the different surface spatial scales involved in the considered estimation process. The study is performed using a data set of actual Cosmo/SkyMed Synthetic Aperture Radar (SAR images relevant to two different areas, the region of Bidi in Burkina Faso and the city of Naples in Italy, acquired in stripmap and enhanced spotlight modes. The behavior of fractal dimension maps in the presence of areas with distinctive characteristics from the viewpoint of land-cover and surface features is discussed. Significant differences among the estimated maps are obtained in the presence of fine textural details, which significantly affect the fractal dimension estimation for the higher resolution spotlight images. The obtained results show that if we are interested in obtaining a reliable estimate of the fractal dimension of the observed natural scene, stripmap images should be chosen in view of both economic and computational considerations. In turn, the combination of fractal dimension maps obtained from stripmap and spotlight images can be used to identify areas on the scene presenting non-fractal behavior (e.g., urban areas. Along this guideline, a simple example of stripmap-spotlight data fusion is also presented.

  17. Effect of CT digital image compression on detection of coronary artery calcification

    International Nuclear Information System (INIS)

    Zheng, L.M.; Sone, S.; Itani, Y.; Wang, Q.; Hanamura, K.; Asakura, K.; Li, F.; Yang, Z.G.; Wang, J.C.; Funasaka, T.

    2000-01-01

    Purpose: To test the effect of digital compression of CT images on the detection of small linear or spotted high-attenuation lesions such as coronary artery calcification (CAC). Material and methods: Fifty cases with and 50 without CAC were randomly selected from a population that had undergone spiral CT of the thorax for lung cancer screening. CT image data were compressed using JPEG (Joint Photographic Experts Group) or wavelet algorithms at ratios of 10:1, 20:1 or 40:1. Five radiologists reviewed the uncompressed and compressed images on a cathode-ray tube. Observer performance was evaluated with receiver operating characteristic analysis. Results: CT images compressed at a ratio as high as 20:1 were acceptable for primary diagnosis of CAC. There was no significant difference in detection accuracy for CAC between the JPEG and wavelet algorithms at compression ratios up to 20:1. CT images were more vulnerable to image blurring with wavelet compression at relatively low ratios, and 'blocking' artifacts occurred with JPEG compression at relatively high ratios. Conclusion: JPEG and wavelet algorithms allow compression of CT images without compromising their diagnostic value at ratios up to 20:1 in detecting small linear or spotted high-attenuation lesions such as CAC, and there was no difference between the two algorithms in diagnostic accuracy.

  18. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining the biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, resulting in the establishment of fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert fingerprints to binary images, remove noise, and locate the reference point. For binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier compares the fractal patterns against a small-scale database. A PSO algorithm is used to tune the optimal parameters and improve accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.
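    Katz's fractal dimension, mentioned above, is normally defined for a curve of connected points; a minimal sketch for a planar curve follows (the function name and the point-sequence input are illustrative assumptions, since the paper applies the estimator to 2D images):

```python
import numpy as np

def katz_fd(points):
    # Katz's estimator: D = log10(n) / (log10(n) + log10(d / L)),
    # where L is the total curve length, d the maximum distance from
    # the first point, and n = L / a with a the mean step length.
    points = np.asarray(points, dtype=float)
    steps = np.diff(points, axis=0)
    dists = np.hypot(steps[:, 0], steps[:, 1])
    L = dists.sum()                               # total length
    a = dists.mean()                              # mean step length
    d = np.hypot(*(points - points[0]).T).max()   # planar extent
    n = L / a
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight line yields a dimension of 1, as expected for a non-convoluted curve.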

  19. Fractal dimension analysis of malignant and benign endobronchial ultrasound nodes

    International Nuclear Information System (INIS)

    Fiz, José Antonio; Monte-Moreno, Enrique; Andreo, Felipe; Auteri, Santiago José; Sanz-Santos, José; Serra, Pere; Bonet, Gloria; Castellà, Eva; Manzano, Juan Ruiz

    2014-01-01

    Endobronchial ultrasonography (EBUS) has become a routine procedure for the diagnosis of hilar and mediastinal nodes. The authors assessed the relationship between the echographic appearance of mediastinal nodes, based on endobronchial ultrasound images, and the likelihood of malignancy. The images of twelve malignant and eleven benign nodes were evaluated. A preprocessing method was applied to improve the quality of the images and to enhance the details. The texture and morphology parameters analyzed were the image texture of the echographies and a fractal dimension that expresses the relationship between area and perimeter of the structures that appear in the image, and characterizes the convoluted inner structure of the hilar and mediastinal nodes. Processed images showed that the relationship between log perimeter and log area of hilar nodes was linear (i.e. perimeter vs. area follows a power law). Fractal dimension was lower in the malignant nodes than in the non-malignant nodes (1.47(0.09) vs. 1.53(0.10), mean(SD), Mann–Whitney U test p < 0.05). The fractal dimension of ultrasonographic images of mediastinal nodes obtained through endobronchial ultrasound differs between malignant and non-malignant nodes. This parameter could differentiate malignant and non-malignant mediastinal and hilar nodes.
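    The perimeter-area power law used above can be turned into a dimension estimate with a one-line fit (a sketch; the function name and the input lists of structure areas and perimeters are assumed for illustration):

```python
import numpy as np

def perimeter_area_dimension(areas, perimeters):
    # Fit log(perimeter) against log(area); under P ~ A**(D/2)
    # the slope of the fit is D/2.
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope
```

For smooth shapes such as squares, P grows as A**(1/2), giving D = 1; more convoluted boundaries push D toward 2.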

  20. Observer detection of image degradation caused by irreversible data compression processes

    Science.gov (United States)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error; a paired-film observer study was used for this purpose. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K x 5K) and 2K x 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were printed next to the original images in randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions under which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm at compression ratios of 10:1 (1.2 bits/pixel) or higher.

  1. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to purposefully remove some of its portions. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Texture synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to improve the compression ratio for image coding. Test results are compared with JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  2. Bony change of apical lesion healing process using fractal analysis

    International Nuclear Information System (INIS)

    Lee, Ji Min; Park, Hyok; Jeong, Ho Gul; Kim, Kee Deog; Park, Chang Seo

    2005-01-01

    To investigate the change of the bone healing process after endodontic treatment of teeth with an apical lesion by fractal analysis. Radiographic images of 35 teeth from 33 patients taken at first diagnosis, 6 months, and 1 year after endodontic treatment were selected. Radiographic images were taken by the JUPITER computerized Dental X-ray System. Fractal dimensions were calculated three times at each area by the Scion Image PC program. Rectangular regions of interest (30 x 30) were selected at the apical lesion and the normal apex of each image. The fractal dimension at the apical lesion at first diagnosis (L0) was 0.940 ± 0.361 and that of the normal area (N0) was 1.186 ± 0.727; at 6 months (L1) it was 1.076 ± 0.069 versus 1.192 ± 0.055 for the normal area (N1); at 1 year (L2) it was 1.163 ± 0.074 versus 1.225 ± 0.079 (N2) (p<0.05). After endodontic treatment, the fractal dimensions at the apical lesions showed statistically significant differences over time. There were also statistically significant differences between the normal area and the apical lesion at first diagnosis, 6 months, and 1 year, but the differences grew smaller over time. The prognosis after endodontic treatment of an apical lesion is estimated by bone regeneration in the apical region. Fractal analysis was attempted to overcome the limits of subjective reading, and as a result the change of the bone during the healing process could be detected objectively and quantitatively.
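    The fractal dimension of a region of interest like those above is commonly obtained by box counting; a minimal sketch on a 2D binary array follows (the function name is an assumption, and a real analysis would use the radiograph ROI rather than a synthetic image):

```python
import numpy as np

def box_counting_dimension(img):
    # Box-counting fractal dimension of a 2D binary array: count the
    # occupied boxes at dyadic box sizes, then fit a line to
    # log(count) versus log(1/size); the slope is the dimension.
    img = np.asarray(img, dtype=bool)
    n = 2 ** int(np.floor(np.log2(min(img.shape))))
    img = img[:n, :n]                 # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 1:
        view = img.reshape(n // size, size, n // size, size)
        occupied = int(view.any(axis=(1, 3)).sum())
        if occupied:
            sizes.append(size)
            counts.append(occupied)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A completely filled square gives a dimension of 2, the upper limit for a planar region.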

  3. Spatial and radiometric characterization of multi-spectrum satellite images through multi-fractal analysis

    Science.gov (United States)

    Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.

    2017-03-01

    Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images, obtained by high-resolution satellites, presently give a lot of information on these indexes, based on the data of several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four band images involved in these vegetation indexes, taken by the Ikonos-2 and Landsat-7 satellites over the same geographic location, have been considered to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). To this end, a multi-fractal analysis was applied to each of these bands and to the two derived indexes. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in the blue and green bands than in the red and near-infrared bands. The NDVI showed a higher sensitivity to radiometric resolution than the EVI. Both were equally affected by spatial resolution. Of the two factors, spatial resolution has the major impact on the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.

  4. Self-Similarity of Plasmon Edge Modes on Koch Fractal Antennas.

    Science.gov (United States)

    Bellido, Edson P; Bernasconi, Gabriel D; Rossouw, David; Butet, Jérémy; Martin, Olivier J F; Botton, Gianluigi A

    2017-11-28

    We investigate the plasmonic behavior of Koch snowflake fractal geometries and their possible application as broadband optical antennas. Lithographically defined planar silver Koch fractal antennas were fabricated and characterized with high spatial and spectral resolution using electron energy loss spectroscopy. The experimental data are supported by numerical calculations carried out with a surface integral equation method. Multiple surface plasmon edge modes supported by the fractal structures have been imaged and analyzed. Furthermore, by isolating and reproducing self-similar features in long silver strip antennas, the edge modes present in the Koch snowflake fractals are identified. We demonstrate that the fractal response can be obtained by the sum of basic self-similar segments called characteristic edge units. Interestingly, the plasmon edge modes follow a fractal-scaling rule that depends on these self-similar segments formed in the structure after a fractal iteration. As the size of a fractal structure is reduced, coupling of the modes in the characteristic edge units becomes relevant, and the symmetry of the fractal affects the formation of hybrid modes. This analysis can be utilized not only to understand the edge modes in other planar structures but also in the design and fabrication of fractal structures for nanophotonic applications.
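    For reference, the self-similarity the authors exploit is quantified by the similarity dimension of the Koch curve: each iteration replaces a segment with 4 copies scaled by 1/3 (a standard result, not a value from the paper):

```python
import math

# Similarity dimension D = log(N) / log(1/r), with N = 4 self-similar
# pieces at scale r = 1/3 per Koch iteration.
koch_dimension = math.log(4) / math.log(3)   # between 1 and 2
```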

  5. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    Science.gov (United States)

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies on wavelet transform and fractal modeling applied on mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It is shown that the use of fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is shown that it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we have attempted to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2: the original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is extracted. Following from that, by applying a maximum-level threshold on the fractal dimension matrix, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using a standard orthogonal wavelet transform with the db2 wavelet at three different resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is utilized for detection of breast cancer regions by applying an appropriate threshold. For detection of cancerous zones, our simulations indicate an accuracy of 90.9% for masses and 88.99% for microcalcification detection using the F1W2 method.
    For classification of detected microcalcifications into benign and malignant cases, eight features are identified and

  6. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
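    The compressed-sensing principle described above, recovering a sparse signal from few linear measurements, can be illustrated with orthogonal matching pursuit (a generic sparse-recovery sketch under a random Gaussian sensing matrix; it is not the paper's adaptive ghost-imaging algorithm):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column of A most
    # correlated with the residual, then re-fit by least squares.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)            # unit-norm sensing columns
x_true = np.zeros(256)
x_true[[10, 80, 200]] = [1.5, -2.0, 1.0]  # 3-sparse signal
x_hat = omp(A, A @ x_true, 3)             # recover from 64 measurements
```

With enough random measurements, the sparse vector is typically recovered exactly even though it lives in a 256-dimensional space.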

  7. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available Abstract - The storage and transmission of imagery have become more challenging tasks in the current scenario of multimedia applications. Hence, an efficient compression scheme is essential for imagery, one that reduces the requirements on storage media and transmission bandwidth. Besides good compression performance, compression techniques must also converge quickly in order to be applied to real-time applications. Various image compression algorithms exist, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed. The use of existing works is also highlighted, for developing novel techniques that address the challenging task of image storage and transmission in multimedia applications.

  8. Assessment of textural differentiations in forest resources in Romania using fractal analysis

    DEFF Research Database (Denmark)

    Andronache, Ion; Fensholt, Rasmus; Ahammer, Helmut

    2017-01-01

    regions in Romania affected by both deforestation and reforestation using a non-Euclidean method based on fractal analysis. We calculated four fractal dimensions of forest areas: the fractal box-counting dimension of the forest areas, the fractal box-counting dimension of the dilated forest areas......, the fractal dilation dimension and the box-counting dimension of the border of the dilated forest areas. Fractal analysis revealed morpho-structural and textural differentiations of forested, deforested and reforested areas in development regions with dominant mountain relief and high hills (more forested...... and compact organization) in comparison to the development regions dominated by plains or low hills (less forested, more fragmented with small and isolated clusters). Our analysis used fractal analysis, which has the advantage of analyzing the entire image rather than studying local information, thereby

  9. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression of a new efficient compression technique (the JIS system) using a neural network that the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. Assessment is done on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and loss. For comparison, the data are also JPEG lossless-compressed. The personal computer (PC) used is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times as efficient as the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the increasing volume of medical imaging data. (R.T.)

  10. Heterogeneity of cerebral blood flow: a fractal approach

    International Nuclear Information System (INIS)

    Kuikka, J.T.; Hartikainen, P.

    2000-01-01

    Aim: We demonstrate the heterogeneity of regional cerebral blood flow using a fractal approach and single-photon emission computed tomography (SPECT). Method: Tc-99m-labelled ethyl cysteinate dimer was injected intravenously in 10 healthy controls and in 10 patients with dementia of frontal lobe type. The head was imaged with a gamma camera and transaxial, sagittal and coronal slices were reconstructed. Two hundred fifty-six symmetrical regions of interest (ROIs) were drawn onto each hemisphere of functioning brain matter. Fractal analysis was used to examine the spatial heterogeneity of blood flow as a function of the number of ROIs. Results: Relative dispersion (= coefficient of variation of the regional flows) was fractal-like in healthy subjects and could be characterized by a fractal dimension of 1.17±0.05 (mean±SD) for the left hemisphere and 1.15±0.04 for the right hemisphere. A fractal dimension of 1.0 reflects completely homogeneous blood flow and 1.5 indicates a random blood flow distribution. Patients with dementia of frontal lobe type had a significantly lower fractal dimension of 1.04±0.03 than healthy controls. (orig.)
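    The relative-dispersion analysis described above follows the scaling law RD(m) ≈ RD(1)·m^(1−D), where m is the number of aggregated units; a minimal sketch follows (the function name is assumed, and it is applied here to a 1D flow profile rather than SPECT ROIs):

```python
import numpy as np

def dispersion_fd(signal):
    # Fractal dimension from relative dispersion versus aggregation
    # scale: RD(m) ~ RD(1) * m**(1 - D), so the log-log fit of RD
    # against m has slope 1 - D.
    signal = np.asarray(signal, dtype=float)
    scales, rds = [], []
    m = 1
    while len(signal) // m >= 8:
        agg = signal[: len(signal) // m * m].reshape(-1, m).mean(axis=1)
        scales.append(m)
        rds.append(agg.std() / agg.mean())
        m *= 2
    slope, _ = np.polyfit(np.log(scales), np.log(rds), 1)
    return 1.0 - slope
```

Uncorrelated flow values give a dimension near 1.5 (random distribution), matching the interpretation given in the abstract.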

  11. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
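    The role of the regression model above, predicting a band from already-decoded bands to form side information, can be sketched with ordinary least squares (the band contents and the single-predictor setup are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
band_prev = rng.random((32, 32))   # band already available at the decoder
band_cur = 0.8 * band_prev + 0.1   # assumed correlated current band

# Fit band_cur ≈ c0 + c1 * band_prev by least squares; the prediction
# serves as side information for distributed decoding.
X = np.column_stack([np.ones(band_prev.size), band_prev.ravel()])
coef, *_ = np.linalg.lstsq(X, band_cur.ravel(), rcond=None)
side_info = (X @ coef).reshape(32, 32)
```

The better the inter-band regression fits, the smaller the correction the decoder needs, which is the point of introducing it.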

  12. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform and the resulting spectrum is compressed by the method of spectrum cutting. The random matrix of the DFrRT is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. Then the compressed spectrum is encrypted by the DFrRT. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm is of high security and good compression performance.
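    The spectrum-cutting compression step can be sketched as keeping only a low-frequency corner of the image's 2D DCT (the function names and the square keep-region are illustrative assumptions; the encryption stages are omitted):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows indexed by frequency k,
    # columns by sample position j).
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def compress_spectrum(img, keep):
    # "Spectrum cutting": zero everything outside the keep x keep
    # low-frequency corner of the 2D DCT, then invert.
    n = img.shape[0]
    C = dct_matrix(n)
    spec = C @ img @ C.T
    cut = np.zeros_like(spec)
    cut[:keep, :keep] = spec[:keep, :keep]
    return C.T @ cut @ C
```

Keeping only the DC coefficient reproduces a constant image exactly, while keeping the full spectrum is a lossless round trip, since the DCT matrix is orthonormal.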

  13. Infrastructural Fractals

    DEFF Research Database (Denmark)

    Bruun Jensen, Casper

    2007-01-01

    . Instead, I outline a fractal approach to the study of space, society, and infrastructure. A fractal orientation requires a number of related conceptual reorientations. It has implications for thinking about scale and perspective, and (sociotechnical) relations, and for considering the role of the social...... and a fractal social theory....

  14. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate the perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  15. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan

    2013-01-01

    Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results, which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM, or perceived quality, for both types of compression provided that the retained energy after compression is the same.
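    The MSE/SSIM distinction discussed above can be made concrete with a single-window SSIM (a simplification of the usual locally-windowed SSIM; the constants follow the common choice C1 = (0.01L)^2, C2 = (0.03L)^2):

```python
import numpy as np

def mse(x, y):
    # Pointwise mean square error.
    return float(np.mean((x - y) ** 2))

def global_ssim(x, y, L=255.0):
    # SSIM computed over the whole image as a single window: the
    # product of a luminance term and a contrast/structure term.
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + C1) * (2 * cov + C2)
                 / ((mx**2 + my**2 + C1) * (vx + vy + C2)))
```

A uniform brightness shift inflates MSE while leaving the contrast/structure term of SSIM untouched, which is the kind of discrepancy between the two metrics the paper reports.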

  16. Fractal analysis of Xylella fastidiosa biofilm formation

    Science.gov (United States)

    Moreau, A. L. D.; Lorite, G. S.; Rodrigues, C. M.; Souza, A. A.; Cotta, M. A.

    2009-07-01

    We have investigated the growth process of Xylella fastidiosa biofilms inoculated on a glass. The size and the distance between biofilms were analyzed by optical images; a fractal analysis was carried out using scaling concepts and atomic force microscopy images. We observed that different biofilms show similar fractal characteristics, although morphological variations can be identified for different biofilm stages. Two types of structural patterns are suggested from the observed fractal dimensions Df. In the initial and final stages of biofilm formation, Df is 2.73±0.06 and 2.68±0.06, respectively, while in the maturation stage, Df=2.57±0.08. These values suggest that the biofilm growth can be understood as an Eden model in the former case, while diffusion-limited aggregation (DLA) seems to dominate the maturation stage. Changes in the correlation length parallel to the surface were also observed; these results were correlated with the biofilm matrix formation, which can hinder nutrient diffusion and thus create conditions to drive DLA growth.

  17. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be applied more easily to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow-bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.

  18. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, as well as the reduced set of measurements of the encrypted samples, serves as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  19. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size. VQ reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. It is demonstrated by experimental results that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  20. Evaluation of peri-implant bone using fractal analysis

    International Nuclear Information System (INIS)

    Jung, Yun Hoa

    2005-01-01

    The purpose of this study was to investigate whether the fractal dimension of successive panoramic radiographs of bone after implant placement is useful in characterizing structural change in alveolar bone. Twelve subjects with thirty-five implants were retrospectively followed up from one week to six months after implantation. Thirty-six panoramic radiographs from the twelve patients were classified into 1 week, 1-2 months and 3-6 months after implantation and digitized. Windows of bone apical and mesial or distal to the implant were defined as the periapical region of interest (ROI) and the interdental ROI, and the fractal dimension of each image was calculated. There was no statistically significant difference in fractal dimensions during the period up to 6 months after implantation. The fractal dimensions were higher for 13 and 15 mm than for 10 and 11.5 mm implant lengths at interdental ROIs 3-6 months after implantation (p<0.01). Longer fixtures showed a higher fractal dimension of bone around the implant. This investigation needs further exploration with larger numbers of implants over longer follow-up periods.

  1. Multi-dimensional medical images compressed and filtered with wavelets

    International Nuclear Information System (INIS)

    Boyen, H.; Reeth, F. van; Flerackers, E.

    2002-01-01

    Full text: Using the standard wavelet decomposition methods, multi-dimensional medical images can be compressed and filtered by repeating the wavelet algorithm on 1D signals in an extra loop per extra dimension. In the non-standard decomposition for multi-dimensional images, the areas that must be zero-filled in the case of band- or notch-filters are more complex than simple geometric areas such as rectangles or cubes. Adding an additional dimension, up to 4D (e.g. a 3D beating heart), increases the geometric complexity of those areas even more. The aim of our study was to calculate the boundaries of the resulting complex geometric areas, so that we can use the faster non-standard decomposition to compress and filter multi-dimensional medical images. Because many 3D medical images taken by PET or SPECT cameras have only a few layers in the Z dimension, and compressing images in a dimension with only a few voxels is usually not worthwhile, we provide a solution in which one can choose which dimensions will be compressed or filtered. With the proposal of non-standard decomposition on Daubechies' wavelets D2 to D20 by Steven Gollmer in 1992, 1D data can be compressed and filtered. Each additional level works only on the smoothed data, so the transformation time halves per extra level. Filtering is done by zero-filling a well-defined area after the wavelet transform and then performing the inverse transform. To be able to compress and filter up to 4D images with the faster non-standard wavelet decomposition method, we have investigated a new method for calculating the boundaries of the areas which must be zero-filled in the case of filtering. This is especially true for band- and notch-filtering. Contrary to the standard decomposition method, the areas are no longer rectangles in 2D or cubes in 3D or a row of cubes in 4D: they are rectangles expanded with a half-sized rectangle in the other direction for 2D, cubes expanded with half cubes in one and quarter cubes in the
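The "extra loop per extra dimension" idea, including the option of choosing which dimensions to transform, can be sketched with a Haar wavelet. This is a minimal illustration assuming even-length axes, not the authors' D2-D20 implementation:

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the 1-D Haar transform along the given axis:
    a smooth (average) half followed by a detail (difference) half."""
    a = np.moveaxis(a, axis, 0)
    even, odd = a[0::2], a[1::2]
    out = np.concatenate([(even + odd) / np.sqrt(2),
                          (even - odd) / np.sqrt(2)])
    return np.moveaxis(out, 0, axis)

def separable_transform(volume, dims):
    """Apply the 1-D Haar step along each selected dimension only, so a
    dimension with few voxels (e.g. few Z slices) can simply be skipped."""
    out = volume.astype(float)
    for axis in dims:
        out = haar_1d(out, axis)
    return out
```

Transforming a 4x4x2 volume only along the first two axes leaves the thin Z dimension untouched, which mirrors the record's selective-dimension design.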

  2. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
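The WSQ standard quantizes each DWT subband with a uniform scalar quantizer that has a widened dead zone around zero. The sketch below follows the general form of the WSQ bin equations with a bin-centering parameter C; the subband-adaptive choice of step sizes and the exact specification parameters are omitted:

```python
import numpy as np

def deadzone_quantize(x, Q, Z):
    """Uniform scalar quantizer with a dead zone of width Z around zero
    and step Q elsewhere (simplified WSQ-style bin structure)."""
    q = np.zeros(x.shape, dtype=int)
    pos = x > Z / 2
    neg = x < -Z / 2
    q[pos] = np.floor((x[pos] - Z / 2) / Q).astype(int) + 1
    q[neg] = np.ceil((x[neg] + Z / 2) / Q).astype(int) - 1
    return q

def deadzone_dequantize(q, Q, Z, C=0.44):
    """Reconstruct to a point a fraction C into each bin, as the WSQ
    decoder does with its bin-centering parameter."""
    x = np.zeros(q.shape, dtype=float)
    x[q > 0] = (q[q > 0] - C) * Q + Z / 2
    x[q < 0] = (q[q < 0] + C) * Q - Z / 2
    return x
```

Coefficients inside the dead zone map to zero, which is where most of the 20:1 compression comes from once the zeros are entropy coded.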

  3. Fractal analysis of SEM images and mercury intrusion porosimetry data for the microstructural characterization of microcrystalline cellulose-based pellets

    International Nuclear Information System (INIS)

    Gomez-Carracedo, A.; Alvarez-Lorenzo, C.; Coca, R.; Martinez-Pacheco, R.; Concheiro, A.; Gomez-Amoza, J.L.

    2009-01-01

    The microstructure of theophylline pellets prepared from microcrystalline cellulose, carbopol and dicalcium phosphate dihydrate, according to a mixture design, was characterized using textural analysis of gray-level scanning electron microscopy (SEM) images and thermodynamic analysis of the cumulative pore volume distribution obtained by mercury intrusion porosimetry. Surface roughness, evaluated in terms of gray-level non-uniformity and the fractal dimension of the pellet surface, depended on agglomeration phenomena during extrusion/spheronization. Pores at the surface, mainly 1-15 μm in diameter, determined both the mechanism and the rate of theophylline release, and a strong negative correlation between the fractal geometry and the b parameter of the Weibull function was found for pellets containing >60% carbopol. Theophylline mean dissolution time from these pellets was about two to four times greater. Textural analysis of SEM micrographs and fractal analysis of mercury intrusion data are complementary techniques that enable complete characterization of multiparticulate drug dosage forms.

  4. Subsurface Profile Mapping using 3-D Compressive Wave Imaging

    Directory of Open Access Journals (Sweden)

    Hazreek Z A M

    2017-01-01

    Full Text Available Geotechnical site investigation related to subsurface profile mapping is commonly performed to provide valuable data for the design and construction stages, conventionally based on drilling techniques. From past experience, drilling techniques, particularly the borehole method, suffer from limitations: they are expensive, time consuming and offer limited data coverage. Hence, this study performs subsurface profile mapping using 3-D compressive wave imaging in order to minimize those constraints of the conventional method. Field measurement and data analysis of compressive waves (p-wave, vp) were performed using a seismic refraction survey (ABEM Terraloc MK 8, a 7 kg sledgehammer and 24 vertical geophones) and OPTIM (SeisOpt@Picker & SeisOpt@2D) software, respectively. A 3-D compressive wave distribution of the studied subsurface was then obtained using the SURFER software. Based on the analyzed 3-D compressive wave image, it was found that the subsurface profile consists of three main layers representing top soil (vp = 376 – 600 m/s), weathered material (vp = 900 – 2600 m/s) and bedrock (vp > 3000 m/s). The thickness of each layer varied from 0 – 2 m (first layer), 2 – 20 m (second layer) and 20 m and over (third layer). Moreover, groundwater (vp = 1400 – 1600 m/s) was detected starting at 2.0 m depth from the ground surface. This study has demonstrated that geotechnical site investigation data related to subsurface profiling can be obtained using 3-D compressive wave imaging. Furthermore, 3-D compressive wave imaging is non-destructive in ground exploration and is thus economical, less time consuming, offers large data coverage and is sustainable for the environment.

  5. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square-error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet, whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, -0.14548269), produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast and the figure of merit, in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can relate the compression outcomes and feature-preservation characteristics to the wavelet used. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  6. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
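One common way to combine the two transforms, shown here purely as an illustration and not as the authors' exact pipeline, is to take a single DWT level and then apply an orthonormal DCT to the low-pass (LL) band, so that DCT coefficient truncation adds compression without further DWT levels:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = 2 * np.arange(n)[None, :] + 1
    D = np.cos(np.pi * k * m / (2 * n)) * np.sqrt(2.0 / n)
    D[0, :] = np.sqrt(1.0 / n)
    return D

def haar_ll(img):
    """LL band of one 2-D Haar level: average of each 2x2 block,
    scaled to preserve energy."""
    a = img.astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] +
            a[1::2, 0::2] + a[1::2, 1::2]) / 2.0

def hybrid_dwt_dct(img):
    """Hybrid sketch: one Haar DWT level, then a 2-D DCT on the LL band."""
    ll = haar_ll(img)
    D = dct_matrix(ll.shape[0])
    return D @ ll @ D.T
```

Both stages are orthonormal here, so signal energy is preserved and a flat image concentrates into a single DC coefficient, illustrating the energy compaction the hybrid scheme exploits.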

  7. Lossless compression of multispectral images using spectral information

    Science.gov (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so transmission and storage of these volumes of data require huge time and memory resources; that is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band from a number of bands along the image spectrum: each pixel is predicted using information provided by pixels in the previous bands at the same spatial position. As in JPEG-LS, the proposed coder represents the mapped residuals using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
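The core pipeline of band-to-band prediction followed by Golomb-Rice coding of mapped residuals can be sketched as follows. This is an illustrative simplification with a fixed Rice parameter k (k >= 1) and no context modeling or bias correction:

```python
import numpy as np

def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer: unary-coded quotient,
    a '0' terminator, then k binary remainder bits (assumes k >= 1)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def map_residual(d):
    """Map signed residuals to non-negative integers:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * d if d >= 0 else -2 * d - 1

def encode_band(prev_band, band, k=2):
    """Predict each pixel from the co-located pixel in the previous
    spectral band and Rice-code the mapped prediction residual."""
    bits = ''
    for p, x in zip(prev_band.ravel(), band.ravel()):
        bits += rice_encode(map_residual(int(x) - int(p)), k)
    return bits
```

Because spectrally adjacent bands are strongly correlated, the residuals cluster near zero, which is exactly the distribution Golomb-Rice codes handle with short codewords.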

  8. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data is up to an order of magnitude less than that in conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster in CS-MUSI data.
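A minimal version of the matched filter detector compared in the study can be written directly from the data cube; background mean and covariance are estimated from the cube itself (the CS-MUSI multiplexing is not modeled here):

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classical matched filter for spectral target detection: score each
    pixel spectrum x by (x-m)' C^-1 (t-m), normalised so that an exact
    target pixel scores 1, with m and C the background mean and covariance
    estimated from the cube."""
    pixels = cube.reshape(-1, cube.shape[-1])            # (N, bands)
    m = pixels.mean(axis=0)
    C = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    w = np.linalg.solve(C, target - m)                   # C^-1 (t - m)
    denom = (target - m) @ w
    return (pixels - m) @ w / denom
```

The same scoring applies whether the spectra come from a conventional cube or a reconstructed CS-MUSI cube, which is what makes the head-to-head comparison in the record possible.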

  9. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments were conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  10. Fractal vector optical fields.

    Science.gov (United States)

    Pan, Yue; Gao, Xu-Zhen; Cai, Meng-Qiang; Zhang, Guan-Lin; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian

    2016-07-15

    We introduce the concept of a fractal, which provides an alternative approach for flexibly engineering optical fields and their focal fields. We propose, design, and create a new family of optical fields: fractal vector optical fields, which build a bridge between fractals and vector optical fields. The fractal vector optical fields have polarization states exhibiting fractal geometry, and may also involve the phase and/or amplitude simultaneously. The results reveal that the focal fields exhibit self-similarity, and the hierarchy of the fractal has the "weeding" role. The fractal can be used to engineer the focal field.

  11. Image compression software for the SOHO LASCO and EIT experiments

    Science.gov (United States)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.

  12. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
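The NCD itself is straightforward to compute with a standard compressor such as zlib; the linearization step the study varies is shown here for two simple scan orders (row-major and column-major, illustrative rather than the paper's four techniques):

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance approximated with zlib:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def linearize(img: np.ndarray, order: str = 'row') -> bytes:
    """Two possible linearizations of a 2-D image: row-major scan,
    or column-major scan via the transpose."""
    a = img if order == 'row' else img.T
    return a.astype(np.uint8).tobytes()
```

Because the compressor sees a 1-D string, the scan order changes which pixels are adjacent, and hence the measured NCD, which is precisely the effect the study quantifies.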

  13. High-speed reconstruction of compressed images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  14. Fractals: Giant impurity nonlinearities in optics of fractal clusters

    International Nuclear Information System (INIS)

    Butenko, A.V.; Shalaev, V.M.; Stockman, M.I.

    1988-01-01

    A theory of the nonlinear optical properties of fractals is developed. Giant enhancement of optical susceptibilities is predicted for impurities bound to a fractal. This enhancement occurs if the exciting radiation frequency lies within the absorption band of the fractal. The giant optical nonlinearities are due to the existence of high local electric fields at the sites of the impurity locations. Such fields arise from the inhomogeneously broadened character of a fractal spectrum, i.e. partial conservation of the individuality of the fractal-forming particles (monomers). The field enhancement is proportional to the Q-factor of the resonance of a monomer. The effects of coherent anti-Stokes Raman scattering (CARS) and phase conjugation (PC) of light waves are enhanced to a much greater degree than the generation of higher harmonics. In general, a higher-order susceptibility is enhanced most strongly if the process includes "subtraction" of photons (at least one of the strong field frequencies enters the susceptibility with a minus sign). Conversely, the enhancement for highest-order harmonic generation (when all the photons are "accumulated") is minimal. The predicted phenomena carry information on the spectral properties of both the impurity molecules and the fractal. In particular, a narrow resonant structure (with the natural width), which is proper to an isolated monomer of a fractal, is predicted to be observed in the CARS spectra. (orig.)

  15. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with the state-of-the-art HDR image compression.
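As an example of the kind of tone-mapping algorithm the record discusses (a Reinhard-style global operator, not the method evaluated in the record), HDR luminance can be mapped to an 8-bit LDR image in a few lines:

```python
import numpy as np

def reinhard_tonemap(hdr, a=0.18, eps=1e-6):
    """Reinhard-style global tone mapping: scale by the key value
    (log-average luminance), then compress with L / (1 + L).
    Shown only as a representative tone-mapping operator."""
    lw = np.exp(np.mean(np.log(hdr + eps)))   # log-average luminance
    l = a * hdr / lw
    ld = l / (1.0 + l)
    return np.clip(ld * 255.0, 0, 255).astype(np.uint8)
```

The operator is monotone, so image structure survives, but the absolute dynamic range is lost, which is why a backward-compatible format must carry extra data to reconstruct the HDR original from the tone-mapped LDR base layer.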

  16. Inverted fractal analysis of TiO{sub x} thin layers grown by inverse pulsed laser deposition

    Energy Technology Data Exchange (ETDEWEB)

    Égerházi, L., E-mail: egerhazi.laszlo@gmail.com [University of Szeged, Faculty of Medicine, Department of Medical Physics and Informatics, Korányi fasor 9., H-6720 Szeged (Hungary); Smausz, T. [University of Szeged, Faculty of Science, Department of Optics and Quantum Electronics, Dóm tér 9., H-6720 Szeged (Hungary); Bari, F. [University of Szeged, Faculty of Medicine, Department of Medical Physics and Informatics, Korányi fasor 9., H-6720 Szeged (Hungary)

    2013-08-01

    Inverted fractal analysis (IFA), a method developed for fractal analysis of scanning electron microscopy images of cauliflower-like thin films is presented through the example of layers grown by inverse pulsed laser deposition (IPLD). IFA uses the integrated fractal analysis module (FracLac) of the image processing software ImageJ, and an objective thresholding routine that preserves the characteristic features of the images, independently of their brightness and contrast. IFA revealed f{sub D} = 1.83 ± 0.01 for TiO{sub x} layers grown at 5–50 Pa background pressures. For a series of images, this result was verified by evaluating the scaling of the number of still resolved features on the film, counted manually. The value of f{sub D} not only confirms the fractal structure of TiO{sub x} IPLD thin films, but also suggests that the aggregation of plasma species in the gas atmosphere may have only limited contribution to the deposition.

  17. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    Full Text Available The paper is focused on the statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods requires the smallest number of bytes of memory on a low-memory system. Correlation functions are used to assess the success rate of converting the input image to a binary image; the correlation function is one of the methods of the OCR algorithm used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power microcontrollers. Saving data-stream space is very important for such systems with limited memory, as is the time required for decoding the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.
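Run-length encoding is a typical candidate among compression methods for binary glyph images on low-memory systems; a generic sketch (the record's exact method set is not reproduced here):

```python
def rle_encode(bits):
    """Run-length encode a binary sequence as [value, length] pairs --
    a common low-memory choice for storing binary glyph images."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rle_decode(runs):
    """Expand [value, length] pairs back into the original bit sequence."""
    return [b for b, n in runs for _ in range(n)]
```

For glyph rows with long runs of background pixels the pair list is much shorter than the raw bitmap, and decoding is a single cheap loop, which matters as much as size on a microcontroller.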

  18. Magnetic resonance imaging of vascular compression in trigeminal neuralgia and hemifacial spasms

    International Nuclear Information System (INIS)

    Nagaseki, Yoshishige; Horikoshi, Tohru; Omata, Tomohiro; Sugita, Masao; Nukui, Hideaki; Sakamoto, Hajime; Kumagai, Hiroshi; Sasaki, Hideo; Tsuji, Reizou.

    1991-01-01

    We show how neurosurgical planning can benefit from the better visualization of the precise vascular compression of the nerve provided by the oblique-sagittal gradient-echo method (OS-GR image) using magnetic resonance imaging (MRI). The scans of 3 patients with trigeminal neuralgia (TN) and of 15 with hemifacial spasm (HFS) were analyzed for the presence and appearance of vascular compression of the nerves. Imaging sequences consisted of an OS-GR image (TR/TE: 200/20, 3-mm-thick slice) cut along each nerve shown in the axial view, scanned at the angle of 105 degrees between the dorsal line of the brain stem and the line corresponding to the pontomedullary junction. In the OS-GR images of the TN cases, vascular compression of the root entry zone (REZ) of the trigeminal nerve was well visualized as high-intensity lines in the 2 cases whose vessels were confirmed intraoperatively. In the other case, with atypical facial pain, vascular compression was confirmed at the rostral distal site on the fifth nerve, apart from the REZ. In the 15 cases of HFS, twelve OS-GR images (80%) demonstrated vascular compression at the REZ of the facial nerves from the caudoventral side. During surgery on these 12 cases, vascular compression corresponding to the findings of the OS-GR images was confirmed in 11 cases (the exception being the 1 case whose facial nerve was not compressed by any vessel). Among the 10 OS-GR images of the non-affected side, two false-positive findings were visualized. It is concluded that OS-GR images obtained by means of MRI may serve as a useful planning aid prior to microvascular decompression in cases of TN and HFS. (author)

  19. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    Science.gov (United States)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely used in the petrophysical study of porous rocks over several decades, and the determination of fractal dimensions has long been the focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The presented work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore size distribution data for eight sandstone samples are used to calculate the fractal dimensions and are simultaneously compared with prediction results from an analytical expression. In addition, the proposed fractal dimension method is tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results also confirm a self-similar fractal range in sandstone when smaller pores are excluded.
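Under a fractal capillary model, the cumulative pore count scales as N(>r) ∝ r^(-D), so a fractal dimension can be read off a log-log fit of the pore size distribution. This sketch shows that general relationship, not the authors' specific expression:

```python
import numpy as np

def fractal_dimension_from_psd(radii, counts):
    """Fit N(>r) ~ r^(-D): the pore-space fractal dimension is the
    negative slope of log N(>r) against log r."""
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return -slope
```

The same fit also exposes the self-similar range: data points for the smallest pores that fall off the straight line mark where the fractal scaling breaks down, consistent with the record's observation that small pores must be excluded.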

  20. Retinal vascular fractals predict long-term microvascular complications in type 1 diabetes mellitus

    DEFF Research Database (Denmark)

    Broe, Rebecca; Rasmussen, Malin L; Frydkjaer-Olsen, Ulrik

    2014-01-01

    AIMS/HYPOTHESIS: Fractal analysis of the retinal vasculature provides a global measure of the complexity and density of retinal vessels, summarised as a single variable: the fractal dimension. We investigated fractal dimensions as long-term predictors of microvasculopathy in type 1 diabetes. METHODS: We included 180 patients with type 1 diabetes in a 16-year follow-up study. In baseline retinal photographs (from 1995), all vessels in a zone 0.5-2.0 disc diameters from the disc margin were traced using Singapore Institute Vessel Assessment-Fractal image analysis software. Artefacts were removed... Retinal fractal analysis therefore is a potential tool for risk stratification in type 1 diabetes.

  1. Compression and Processing of Space Image Sequences of Northern Lights and Sprites

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Martins, Bo; Jensen, Ole Riis

    1999-01-01

    Compression of image sequences of auroral activity, such as northern lights and thunderstorms with sprites, is investigated.

  2. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    Science.gov (United States)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concept of internal visual noise, and tests this model in the context of image compression with an observer study.
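How a CSF shapes quantization is easy to sketch: quantization steps are scaled inversely with contrast sensitivity, so frequencies the eye sees poorly absorb more error. The classic Mannos-Sakrison CSF model is used below as a stand-in for the record's noise-adaptive CSF:

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Classic Mannos-Sakrison CSF model (f in cycles/degree);
    a stand-in for the noise-adaptive CSF described in the record."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_quant_steps(freqs, base_step=8.0):
    """Scale quantization step sizes inversely with contrast sensitivity:
    the most visible frequency keeps the base step, less visible
    frequencies get coarser steps (and thus fewer bits)."""
    s = csf_mannos_sakrison(freqs)
    return base_step * s.max() / np.maximum(s, 1e-6)
```

The model peaks near 8 cycles/degree, so both very low and very high spatial frequencies receive coarser quantization, which is the mechanism behind visually lossless bit-rate reduction.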

  3. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions where the side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  4. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    Science.gov (United States)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
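The evaluation quantities the record names, compression ratio for lossless coding and image quality preservation for lossy coding, can be computed as follows, with zlib standing in as a generic lossless coder and PSNR as the objective quality metric (an illustrative sketch, not the study's full metric suite):

```python
import zlib
import numpy as np

def lossless_ratio(data: np.ndarray) -> float:
    """Compression ratio achieved by zlib on raw 8-bit data
    (original size divided by compressed size)."""
    raw = data.astype(np.uint8).tobytes()
    return len(raw) / len(zlib.compress(raw, 9))

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak=255.0) -> float:
    """Peak signal-to-noise ratio between a reference image and its
    reconstruction after lossy compression."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Comparing these two numbers across codecs and across the interferogram versus A-scan domains is exactly the trade-off analysis the record describes.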

  5. Fractal design concepts for stretchable electronics.

    Science.gov (United States)

    Fan, Jonathan A; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J; Huang, Yonggang; Rogers, John A

    2014-01-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.

  6. Fractal design concepts for stretchable electronics

    Science.gov (United States)

    Fan, Jonathan A.; Yeo, Woon-Hong; Su, Yewang; Hattori, Yoshiaki; Lee, Woosik; Jung, Sung-Young; Zhang, Yihui; Liu, Zhuangjian; Cheng, Huanyu; Falgout, Leo; Bajema, Mike; Coleman, Todd; Gregoire, Dan; Larsen, Ryan J.; Huang, Yonggang; Rogers, John A.

    2014-02-01

    Stretchable electronics provide a foundation for applications that exceed the scope of conventional wafer and circuit board technologies due to their unique capacity to integrate with soft materials and curvilinear surfaces. The range of possibilities is predicated on the development of device architectures that simultaneously offer advanced electronic function and compliant mechanics. Here we report that thin films of hard electronic materials patterned in deterministic fractal motifs and bonded to elastomers enable unusual mechanics with important implications in stretchable device design. In particular, we demonstrate the utility of Peano, Greek cross, Vicsek and other fractal constructs to yield space-filling structures of electronic materials, including monocrystalline silicon, for electrophysiological sensors, precision monitors and actuators, and radio frequency antennas. These devices support conformal mounting on the skin and have unique properties such as invisibility under magnetic resonance imaging. The results suggest that fractal-based layouts represent important strategies for hard-soft materials integration.

  7. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt the low-level authentication. Applying Orthogonal Matching Pursuit CS reconstruction, the inverse Arnold transform, the inverse DWT, two-step phase-shifting wavefront reconstruction, and an inverse Fresnel transform yields a remarkable peak at the central location of the nonlinear correlation coefficient distribution of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication, in which both compressed signals are combined to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
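
    The Arnold transform mentioned above is a simple invertible permutation of pixel coordinates on a square image, often used as a lightweight scrambling step. A minimal pure-Python sketch (the CS and wavelet stages of the scheme are omitted; function names are illustrative):

```python
def arnold_scramble(img, iterations=1):
    """Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
        img = out
    return img

def arnold_unscramble(img, iterations=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod N, applied the same number of times."""
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(y - x) % n][(2 * x - y) % n] = img[y][x]
        img = out
    return img
```

    The map is periodic (the period depends on N), so a secret iteration count acts as a scrambling key: applying the inverse map the same number of times recovers the image exactly.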

  8. Helicalised fractals

    OpenAIRE

    Saw, Vee-Liem; Chew, Lock Yue

    2013-01-01

    We formulate the helicaliser, which replaces a given smooth curve by another curve that winds around it. In our analysis, we relate this formulation to the geometrical properties of the self-similar circular fractal (the discrete version of the curved helical fractal). Iterative applications of the helicaliser to a given curve yield a set of helicalisations, with the infinitely helicalised object being a fractal. We derive the Hausdorff dimension for the infinitely helicalised straight line ...

  9. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  10. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed ... H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...
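
    The MSB/LSB decomposition described above is a plain byte split; a minimal sketch of the mapping and its inverse (the 8-bit codec stage itself is omitted; function names are illustrative):

```python
def split_planes(pixels16):
    """Split 16-bit samples into most- and least-significant 8-bit planes."""
    msb = [(p >> 8) & 0xFF for p in pixels16]
    lsb = [p & 0xFF for p in pixels16]
    return msb, lsb

def merge_planes(msb, lsb):
    """Recombine the two 8-bit planes into the original 16-bit samples."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]
```

    The round trip is exact, so overall coding is lossless only if both planes are coded losslessly; any error introduced in the MSB plane is amplified by a factor of 256 in the reconstructed 16-bit values.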

  11. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the digitized satellite image domain, the need for high dimensions increases considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favour of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 combinations available. Because, for technological reasons, real-time performance is not always reached (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr
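
    The first stage of the scheme above is a wavelet transform. As a stand-in for the Mallat algorithm used in the paper, one level of the orthonormal Haar transform (an assumption for illustration, not necessarily the wavelet the authors chose) shows the decomposition/perfect-reconstruction structure:

```python
import math

def haar_forward(signal):
    """One level of the orthonormal 1-D Haar transform (even-length input)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse: perfect reconstruction from approximation + detail."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out
```

    Smooth regions produce near-zero detail coefficients, which is what makes the subsequent vector quantization and entropy coding stages effective.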

  12. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are first scrambled by the Arnold map. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)

  13. Signal and image multiresolution analysis

    CERN Document Server

    Ouahabi, Abdeljalil

    2012-01-01

    Multiresolution analysis using the wavelet transform has received considerable attention in recent years from researchers in various fields. It is a powerful tool for efficiently representing signals and images at multiple levels of detail with many inherent advantages, including compression, level-of-detail display, progressive transmission, level-of-detail editing, filtering, modeling, fractals and multifractals, etc. This book aims to provide a simple formalization and new clarity on multiresolution analysis, rendering accessible obscure techniques, and merging, unifying or completing

  14. Fractal differential equations and fractal-time dynamical systems

    Indian Academy of Sciences (India)

    like fractal subsets of the real line may be termed as fractal-time dynamical systems. Formulation ... involving scaling and memory effects. But most of ... begin by recalling the definition of the Riemann integral in ordinary calculus [33]. Let g: [a ...

  15. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save the transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate ( bits/pixel) with high image quality (larger than dB) for endoscopic images. In particular, it has low-complexity hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions; the ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of this compression algorithm is also given. Its hardware design has been implemented in m CMOS process.

  16. A new set of wavelet- and fractals-based features for Gleason grading of prostate cancer histopathology images

    Science.gov (United States)

    Mosquera Lopez, Clara; Agaian, Sos

    2013-02-01

    Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.

  17. Morphological Investigation and Fractal Properties of Realgar Nanoparticles

    Directory of Open Access Journals (Sweden)

    Amir Lashgari

    2015-01-01

    Full Text Available Some arsenic compounds can show extraordinary polymorphism. Realgar (As4S4) is among several minerals with various crystal forms and is one of the most important sources of arsenic for pharmaceutical use. Currently, realgar is used as an arsenic source in many industries, such as weaponry, publishing, textiles, cosmetics, and health products. In this paper, we report new methods for the purification, nanonization, and structural-morphological investigation of As4S4, using a planetary ball-mill process for nanonization of the compound. The product was characterized using X-ray powder diffraction analysis, Fourier transform infrared spectrometry, and field emission scanning electron microscope (FESEM) imaging. We investigated the morphological properties of the FESEM-imaged realgar nanoparticles by an image-processing technique that calculates fractal dimensions in MATLAB. We applied the Statistical Package for the Social Sciences (SPSS) software to the statistics extracted from the FESEM image and obtained the fractal dimension and a histogram plot for the FESEM image.

  18. Contributions to HEVC Prediction for Medical Image Compression

    OpenAIRE

    Guarda, André Filipe Rodrigues

    2016-01-01

    Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolutions, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compressi...

  19. Fractal Dimension Analysis of Texture Formation of Whey Protein-Based Foods

    Directory of Open Access Journals (Sweden)

    Robi Andoyo

    2018-01-01

    Full Text Available Whey protein in the form of isolate or concentrate is widely used in food industries due to its functionality to form gels under certain conditions and its nutritive value. Controlling or manipulating the formation of gel aggregates is often used to evaluate food texture. Many researchers have made use of fractal analysis, which provides quantitative data (i.e., the fractal dimension) for fundamentally and rationally analyzing and designing whey protein-based food texture. This quantitative analysis is also done to better understand how the texture of whey protein-based food is formed. Two methods for fractal analysis are discussed in this review: image analysis (microscopy) and rheology. These methods, however, have several limitations which greatly affect the accuracy of both the fractal dimension values and the types of aggregation obtained. This review therefore also discusses the problems encountered and ways to reduce the potential errors of each method.

  20. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
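
    The fixed-rate property described above means every sample costs the same number of bits, so channel errors cannot desynchronize the bitstream. As a simplified stand-in for the SVQ (which combines scalar and vector stages), a minimal uniform scalar quantizer sketch (function names are illustrative):

```python
def quantize(value, vmin, vmax, bits):
    """Index of the uniform cell containing value: a fixed-rate code of `bits` bits."""
    levels = 2 ** bits
    step = (vmax - vmin) / levels
    return min(int((value - vmin) / step), levels - 1)

def dequantize(index, vmin, vmax, bits):
    """Reconstruct at the cell midpoint; the error is at most step / 2."""
    step = (vmax - vmin) / 2 ** bits
    return vmin + (index + 0.5) * step
```

    Because each index always occupies exactly `bits` bits, a corrupted index damages only its own sample, unlike variable-length codes where one bit error can propagate through the rest of the stream.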

  1. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method to reconstruct iteratively both the coil sensitivities and MR image simultaneously based on their prior information. Parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLind Iterative Parallel, for blind iterative parallel imaging reconstruction using compressed sensing. The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces the sparseness constraint in the image as done in compressed sensing, but is different from compressed sensing in that the sensing matrix is unknown and additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement in Sparse BLind Iterative Parallel reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLind Iterative Parallel algorithm reduces the reconstruction errors when compared to the state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  2. CMOS Compressed Imaging by Random Convolution

    OpenAIRE

    Jacques, Laurent; Vandergheynst, Pierre; Bibet, Alexandre; Majidzadeh, Vahid; Schmid, Alexandre; Leblebici, Yusuf

    2009-01-01

    We present a CMOS imager with built-in capability to perform Compressed Sensing. The adopted sensing strategy is the random Convolution due to J. Romberg. It is achieved by a shift register set in a pseudo-random configuration. It acts as a convolutive filter on the imager focal plane, the current issued from each CMOS pixel undergoing a pseudo-random redirection controlled by each component of the filter sequence. A pseudo-random triggering of the ADC reading is finally applied to comp...

  3. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Chan, K.K.; Ishimitsu, Y.; Lo, S.C.; Huang, H.K.

    1987-01-01

    The full-frame bit allocation algorithm for radiological image compression developed in the authors' laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transform space, with pixel depth kept accountable by a bit allocation table. The design took an expandable modular approach based on the VME bus system, which has a maximum data transfer rate of 48 Mbytes per second and a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. The design allows for a maximum image size of 2K x 2K
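
    The transform stage above is a 2-D DCT over the full frame. A minimal orthonormal DCT-II sketch, built separably from 1-D transforms (direct O(N^2) evaluation for clarity, not the fast microcoded version run on the DSP hardware):

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a sequence."""
    n = len(x)
    coef = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        coef.append(c * sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                            for i in range(n)))
    return coef

def idct_1d(coef):
    """DCT-III: the exact inverse of the orthonormal DCT-II."""
    n = len(coef)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += c * coef[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def dct_2d(img):
    """Separable 2-D DCT: 1-D transform along rows, then along columns."""
    rows = [dct_1d(r) for r in img]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

    A constant image concentrates all energy in the DC coefficient; natural images concentrate most energy in a few low frequencies, which is what the bit allocation table then exploits.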

  4. A Novel Medical Image Watermarking in Three-dimensional Fourier Compressed Domain

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2015-09-01

    Full Text Available Digital watermarking is a research hotspot in the field of image security, protecting digital image copyright. In order to ensure medical image information security, a novel medical image digital watermarking algorithm in the three-dimensional Fourier compressed domain is proposed. The algorithm takes advantage of the characteristics of the three-dimensional Fourier compressed domain, the encryption features of a Legendre chaotic neural network, and the robustness of difference hashing, making it a robust zero-watermarking algorithm. On one hand, the original watermarking image is encrypted in order to enhance security, using a Legendre chaotic neural network. On the other hand, the zero-watermark is constructed by difference hashing in the three-dimensional Fourier compressed domain. The algorithm does not need to select a region of interest, and so avoids altering the medical image content. The specific implementation of the algorithm and the experimental results are given in the paper. The simulation results confirm that the algorithm possesses desirable robustness to common and geometric attacks.

  5. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    Science.gov (United States)

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
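
    The tile counting method above estimates fractal dimension as the slope of log(occupied tiles) versus log(1 / tile size). A minimal pure-Python sketch on a binary point set (tile sizes here are in pixels, whereas the study reports them in mm; function names are illustrative):

```python
import math

def tile_count(points, tile):
    """Number of tiles of side `tile` containing at least one foreground pixel."""
    return len({(x // tile, y // tile) for x, y in points})

def fractal_dimension(points, tiles=(1, 2, 4, 8)):
    """Least-squares slope of log(count) versus log(1 / tile size)."""
    xs = [math.log(1.0 / t) for t in tiles]
    ys = [math.log(tile_count(points, t)) for t in tiles]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

line = {(i, 0) for i in range(32)}                       # a straight line: dimension 1
square = {(i, j) for i in range(32) for j in range(32)}  # a filled square: dimension 2
print(round(fractal_dimension(line), 3), round(fractal_dimension(square), 3))
```

    The choice of the `tiles` range matters, which is exactly the study's point: estimates are only meaningful over a well-chosen window of tile sizes.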

  6. Electromagnetic fields in fractal continua

    Energy Technology Data Exchange (ETDEWEB)

    Balankin, Alexander S., E-mail: abalankin@ipn.mx [Grupo “Mecánica Fractal”, Instituto Politécnico Nacional, México D.F., 07738 Mexico (Mexico); Mena, Baltasar [Instituto de Ingeniería, Universidad Nacional Autónoma de México, México D.F. (Mexico); Patiño, Julián [Grupo “Mecánica Fractal”, Instituto Politécnico Nacional, México D.F., 07738 Mexico (Mexico); Morales, Daniel [Instituto Mexicano del Petróleo, México D.F., 07730 Mexico (Mexico)

    2013-04-01

    Fractal continuum electrodynamics is developed on the basis of a model of three-dimensional continuum Φ_D^3 ⊂ E^3 with a fractal metric. The generalized forms of Maxwell equations are derived employing the local fractional vector calculus related to the Hausdorff derivative. The difference between the fractal continuum electrodynamics based on the fractal metric of continua with Euclidean topology and the electrodynamics in fractional space F^α accounting for the fractal topology of continuum with the Euclidean metric is outlined. Some electromagnetic phenomena in fractal media associated with their fractal time and space metrics are discussed.

  7. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    Science.gov (United States)

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative to understanding cellular responses to mechanical stimuli. However, photobleaching is a constant obstacle to accurate and reliable live cell fluorescent imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior to conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on the morphology of in situ articular cartilage chondrocytes across repeat laser exposures. Additionally, we compared the effectiveness of three commonly used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction in subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeat scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different apparent volume changes. Therefore, different conclusions on cell volume changes may be drawn during tissue compression, depending on the image thresholding method used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes fewer photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and compression-induced fluorescence increase may lead to interpretations in

  8. Fractals for Geoengineering

    Science.gov (United States)

    Oleshko, Klaudia; de Jesús Correa López, María; Romero, Alejandro; Ramírez, Victor; Pérez, Olga

    2016-04-01

    The effectiveness of the fractal toolbox in capturing the scaling or fractal probability distribution, and simply the fractal statistics, of the main hydrocarbon reservoir attributes was highlighted by Mandelbrot (1995) and confirmed by several researchers (Zhao et al., 2015). Notwithstanding, after more than twenty years, the opinion is still common that fractals are not useful for petroleum engineers, and especially for Geoengineering (Corbett, 2012). In spite of this negative background, we have successfully applied fractal and multifractal techniques in our project entitled "Petroleum Reservoir as a Fractal Reactor" (2013 up to now). The distinguishing feature of a Fractal Reservoir is the irregular shapes and rough pore/solid distributions (Siler, 2007), observed across a broad range of scales (from SEM to seismic). At the beginning, we accomplished a detailed analysis of the Nelson and Kibler (2003) Catalog of Porosity and Permeability, created for core plugs of siliciclastic rocks (around ten thousand data points were compared). We enriched this Catalog with more than two thousand data points extracted from the last ten years of publications on PoroPerm (Corbett, 2012) in carbonate deposits, as well as with our own data from one of the PEMEX, Mexico, oil fields. Strong power law scaling behavior was documented for the major part of these data from geological deposits of contrasting genesis. Based on these results, and taking into account the basic principles and models of the Physics of Fractals introduced by Per Bak and Kan Chen (1989), we have developed new software (Muukíl Kaab), useful for processing multiscale geological and geophysical information and for integrating static geological and petrophysical reservoir models into dynamic ones. A new type of fractal numerical model with dynamical power law relations among the shapes and sizes of the mesh cells was designed and calibrated in the studied area. The statistically sound power law relations were established

  9. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  10. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
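The 25.8 dB to 30.8 dB figure quoted above is a peak signal-to-noise ratio; for reference, the standard PSNR definition can be computed as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Here `peak` is the maximum representable pixel value (255 for 8-bit images).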

  12. Evaluation of the distortions of the digital chest image caused by the data compression

    International Nuclear Information System (INIS)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi.

    1988-01-01

Image data compression methods using orthogonal transforms (discrete cosine, discrete Fourier, Hadamard, Haar, and Slant transforms) were analyzed. In terms of both error and speed of data conversion, the discrete cosine transform (DCT) is superior to the other methods. Block quantization by the DCT was applied to digital chest images, and the quality of compressed and reconstructed images was examined by score analysis and ROC curve analysis. Chest images with esophageal cancer and metastatic lung tumors were evaluated at 17 checkpoints (the tumor, vascular markings, the borders of the heart and ribs, the mediastinal structures, etc.). By the score analysis, compression ratios of 1/5 and 1/10 were satisfactory. ROC analysis was performed using normal chest images superimposed with artificial coin lesions; the ROC curve at the 1/5 compression ratio is almost the same as that of the original. In summary, image data compression using the DCT appears useful for clinical use, and a 1/5 compression ratio is tolerable. (author)
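The DCT block-quantization approach evaluated in this record can be illustrated with a toy zonal coder: transform each 8 × 8 block, keep only a top-left square of coefficients, and invert. This is a generic sketch assuming image dimensions divisible by the block size; the `keep` parameter and function names are ours, not the authors':

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k holds the k-th cosine basis vector."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def block_dct_compress(img, block=8, keep=4):
    """Per-block 2D DCT, zero all but the top-left keep x keep coefficients
    (crude zonal selection standing in for quantization), then invert."""
    C = dct_matrix(block)
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            b = img[i:i + block, j:j + block].astype(np.float64)
            coeff = C @ b @ C.T                     # forward 2D DCT
            coeff[keep:, :] = 0.0
            coeff[:, keep:] = 0.0
            out[i:i + block, j:j + block] = C.T @ coeff @ C   # inverse
    return out
```

Keeping a 4 × 4 corner retains 16 of 64 coefficients per block, roughly a 1/4 coefficient budget; the paper's 1/5 and 1/10 ratios come from proper quantization and entropy coding rather than plain truncation.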

  13. Evaluation of the distortions of the digital chest image caused by the data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi

    1988-08-01

Image data compression methods using orthogonal transforms (discrete cosine, discrete Fourier, Hadamard, Haar, and Slant transforms) were analyzed. In terms of both error and speed of data conversion, the discrete cosine transform (DCT) is superior to the other methods. Block quantization by the DCT was applied to digital chest images, and the quality of compressed and reconstructed images was examined by score analysis and ROC curve analysis. Chest images with esophageal cancer and metastatic lung tumors were evaluated at 17 checkpoints (the tumor, vascular markings, the borders of the heart and ribs, the mediastinal structures, etc.). By the score analysis, compression ratios of 1/5 and 1/10 were satisfactory. ROC analysis was performed using normal chest images superimposed with artificial coin lesions; the ROC curve at the 1/5 compression ratio is almost the same as that of the original. In summary, image data compression using the DCT appears useful for clinical use, and a 1/5 compression ratio is tolerable.

  14. Multiband CCD Image Compression for Space Camera with Large Field of View

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

Full Text Available A space multiband CCD camera compression encoder requires low complexity, high robustness, and high performance, because the captured image information is very precious and because the encoder usually operates on a satellite where resources such as power, memory, and processing capacity are limited. However, traditional compression approaches, such as JPEG2000, 3D transforms, and PCA, have high complexity. The Consultative Committee for Space Data Systems-Image Data Compression (CCSDS-IDC) algorithm decreases the average PSNR by 2 dB compared with JPEG2000. In this paper, we propose a low-complexity compression algorithm that deeply couples post-transform in the wavelet domain, compressive sensing, and distributed source coding. In our algorithm, we integrate these three low-complexity, high-performance approaches in a deeply coupled manner to remove spatial, spectral, and bit-level redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

  15. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
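The encode/decode split described above (cheap random measurements at the sensor, a learned linear projection at the decoder) can be sketched as follows. The Gaussian sensing matrix, random-walk training blocks, and ridge term are illustrative assumptions; only the MMSE-learned linear decoder mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                   # 8x8 blocks measured down to 16 values
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix (an assumption)

# Hypothetical training blocks: random walks stand in for smooth image content.
train = np.cumsum(rng.standard_normal((5000, n)), axis=1)
meas = train @ phi.T                            # their CS measurements

# Linear decoder learned by the MMSE criterion: W ~ Rxy Ryy^{-1},
# estimated from the training pairs (small ridge term for numerical safety).
W = (train.T @ meas) @ np.linalg.inv(meas.T @ meas + 1e-6 * np.eye(m))

x = np.cumsum(rng.standard_normal(n))           # an unseen block
x_hat = W @ (phi @ x)                           # decoding is a single matrix multiply
```

The appeal for low-power devices is that both sides reduce to one matrix multiply per block; all optimization happens offline when `W` is learned.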

  16. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

Full Text Available Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  17. Steady laminar flow of fractal fluids

    Energy Technology Data Exchange (ETDEWEB)

    Balankin, Alexander S., E-mail: abalankin@ipn.mx [Grupo Mecánica Fractal, ESIME, Instituto Politécnico Nacional, México D.F., 07738 (Mexico); Mena, Baltasar [Laboratorio de Ingeniería y Procesos Costeros, Instituto de Ingeniería, Universidad Nacional Autónoma de México, Sisal, Yucatán, 97355 (Mexico); Susarrey, Orlando; Samayoa, Didier [Grupo Mecánica Fractal, ESIME, Instituto Politécnico Nacional, México D.F., 07738 (Mexico)

    2017-02-12

    We study laminar flow of a fractal fluid in a cylindrical tube. A flow of the fractal fluid is mapped into a homogeneous flow in a fractional dimensional space with metric induced by the fractal topology. The equations of motion for an incompressible Stokes flow of the Newtonian fractal fluid are derived. It is found that the radial distribution for the velocity in a steady Poiseuille flow of a fractal fluid is governed by the fractal metric of the flow, whereas the pressure distribution along the flow direction depends on the fractal topology of flow, as well as on the fractal metric. The radial distribution of the fractal fluid velocity in a steady Couette flow between two concentric cylinders is also derived. - Highlights: • Equations of Stokes flow of Newtonian fractal fluid are derived. • Pressure distribution in the Newtonian fractal fluid is derived. • Velocity distribution in Poiseuille flow of fractal fluid is found. • Velocity distribution in a steady Couette flow is established.

  18. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal. These measurements are usually far fewer than the number of samples that define the signal. From this small number of measurements, the signal is then reconstructed by a non-linear procedure. Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways. In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  19. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
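The coset-code idea underlying DSC can be seen in a toy binary setting: if the decoder's side information Y differs from the source X in at most one bit per 7-bit block, the encoder need only send the 3-bit syndrome of X under the (7,4) Hamming code, and the decoder corrects Y toward X. This is a textbook illustration, not the paper's multilevel coset construction:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the binary
# representation of i+1, so the syndrome of a single-bit error at position i
# reads off i+1 directly.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def dsc_encode(x):
    """Transmit only the 3-bit syndrome of the 7-bit source block."""
    return H @ x % 2

def dsc_decode(syndrome, y):
    """Recover x from side information y, assuming x and y differ in
    at most one of the 7 positions."""
    diff = (H @ y + syndrome) % 2            # syndrome of the difference x XOR y
    pos = 4 * int(diff[0]) + 2 * int(diff[1]) + int(diff[2])
    x_hat = y.copy()
    if pos:                                   # nonzero syndrome -> flip that bit
        x_hat[pos - 1] ^= 1
    return x_hat
```

Seven source bits travel as three syndrome bits, which is the complexity shift the paper exploits: correlation between bands plays the role of the side information Y.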

  20. Mathematical diagnosis of pediatric echocardiograms with fractal dimension measures evaluated through intrinsic mathematical harmony

    International Nuclear Information System (INIS)

    Rodriguez V, Javier O; Prieto, Signed E; Ortiz, Liliana

    2010-01-01

Geometry allows the objective mathematical characterization of forms, and fractal geometry characterizes irregular objects. The forms of left ventricular dynamical states observed through echocardiography can be objectively evaluated through fractal dimension measures. Methods: The fractal dimension of three defined objects was measured using the box-counting method in 28 echocardiographic images, 16 from normal children (group A) and 12 from ill children (group B), in order to establish differences between health and illness by comparison with the fractal dimensions of 2 normality prototypes and 2 disease prototypes. Results: A new clinical diagnostic methodology was developed based on the intrinsic mathematical harmony (IMH) concept, and it was observed that the fractal dimensions of the defined objects for an abnormal echocardiogram show similarity up to the fourth significant figure, thus demonstrating the possibility of following the evolution from normality towards disease. According to the performed calculations, 68.75% of the cases in group A could be better evaluated with the developed diagnostic methodology, and the ill ones could be diagnosed more effectively. Conclusions: The pediatric echocardiography images can be objectively characterized with fractal dimension measurements, thus enabling the development of a clinical diagnostic methodology of echocardiography in children from the IMH concept.
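The box-counting method referenced in this and several other records estimates the fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that touch the object. A minimal sketch (the box sizes are an illustrative choice):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        # count boxes of side s containing at least one foreground pixel
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled square yields a dimension near 2 and a straight line near 1, which is a quick sanity check for any implementation.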

  1. DSP accelerator for the wavelet compression/decompression of high- resolution images

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.
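The wavelet stage of such a pipeline can be illustrated with a one-level 2D Haar analysis/synthesis pair. This is a simple stand-in: the fingerprint standard actually uses a biorthogonal filter bank plus scalar quantization and Huffman coding:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Invert one level of the 2D Haar transform."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh   # undo column step
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi  # undo row step
    return out
```

Zeroing small entries of the LH, HL and HH subbands before `ihaar2d` gives the lossy step; the transform itself is perfectly invertible.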

  2. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
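The AMBTC baseline that the proposed method measures its loss against can be sketched in a few lines (a generic illustration; the function names are ours): each block is reduced to two group means and a one-bit-per-pixel bitmap.

```python
import numpy as np

def ambtc_block(block):
    """AMBTC: encode a block as (low mean, high mean, bitmap)."""
    mean = block.mean()
    bitmap = block >= mean               # 1 bit per pixel
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return lo, hi, bitmap

def ambtc_decode(lo, hi, bitmap):
    """Rebuild the block from the two means and the bitmap."""
    return np.where(bitmap, hi, lo)
```

For an m × n block stored at b bits per pixel, the encoded size drops from m·n·b bits to m·n + 2b bits, which is why BTC variants are attractive when encoding speed and simplicity matter more than rate-distortion optimality.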

  3. THE FRACTAL MARKET HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    FELICIA RAMONA BIRAU

    2012-05-01

Full Text Available In this article, the concept of capital market is analysed using the Fractal Market Hypothesis, which is a modern, complex and unconventional alternative to classical finance methods. The Fractal Market Hypothesis is in sharp opposition to the Efficient Market Hypothesis and it explores the application of chaos theory and fractal geometry to finance. The Fractal Market Hypothesis is based on certain assumptions. Thus, it is emphasized that investors do not react immediately to the information they receive and, of course, the manner in which they interpret that information may differ. Also, the Fractal Market Hypothesis refers to the way that liquidity and investment horizons influence the behaviour of financial investors.

  4. Use of digital image analysis combined with fractal theory to determine particle morphology and surface texture of quartz sands

    Directory of Open Access Journals (Sweden)

    Georgia S. Araujo

    2017-12-01

Full Text Available The particle morphology and surface texture play a major role in influencing mechanical and hydraulic behaviors of sandy soils. This paper presents the use of digital image analysis combined with fractal theory as a tool to quantify the particle morphology and surface texture of two types of quartz sands widely used in the region of Vitória, Espírito Santo, southeast of Brazil. The two investigated sands are sampled from different locations. The purpose of this paper is to present a simple, straightforward, reliable and reproducible methodology that can identify representative sandy soil texture parameters. The test results of the soil samples of the two sands separated by sieving into six size fractions are presented and discussed. The main advantages of the adopted methodology are its simplicity, reliability of the results, and relatively low cost. The results show that sands from the coastal spit (BS) have a greater degree of roundness and a smoother surface texture than river sands (RS). The values obtained in the test are statistically analyzed, and again it is confirmed that the BS sand has a slightly greater degree of sphericity than that of the RS sand. Moreover, the RS sand with rough surface texture has larger specific surface area values than the similar BS sand, which agree with the obtained roughness fractal dimensions. The consistent experimental results demonstrate that image analysis combined with fractal theory is an accurate and efficient method to quantify the differences in particle morphology and surface texture of quartz sands.

  5. Alternatives to the discrete cosine transform for irreversible tomographic image compression

    International Nuclear Information System (INIS)

    Villasenor, J.D.

    1993-01-01

Full-frame irreversible compression of medical images is currently being performed using the discrete cosine transform (DCT). Although the DCT is the optimum fast transform for video compression applications, the authors show here that it is out-performed by the discrete Fourier transform (DFT) and discrete Hartley transform (DHT) for images obtained using positron emission tomography (PET) and magnetic resonance imaging (MRI), and possibly for certain types of digitized radiographs. The difference occurs because PET and MRI images are characterized by a roughly circular region D of non-zero intensity bounded by a region R in which the image intensity is essentially zero. Clipping R to its minimum extent can reduce the number of low-intensity pixels, but the practical requirement that images be stored on a rectangular grid means that a significant region of zero intensity must remain an integral part of the image to be compressed. With this constraint imposed, the DCT loses its advantage over the DFT because neither transform introduces significant artificial discontinuities. The DFT and DHT have the further important advantage of requiring less computation time than the DCT.

  6. Assessment of Textural Differentiations in Forest Resources in Romania Using Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Ion Andronache

    2017-02-01

Full Text Available Deforestation and forest degradation have several negative effects on the environment, including a loss of species habitats, disturbance of the water cycle and a reduced ability to retain CO2, with consequences for global warming. We investigated the evolution of forest resources from development regions in Romania affected by both deforestation and reforestation using a non-Euclidean method based on fractal analysis. We calculated four fractal dimensions of forest areas: the fractal box-counting dimension of the forest areas, the fractal box-counting dimension of the dilated forest areas, the fractal dilation dimension and the box-counting dimension of the border of the dilated forest areas. Fractal analysis revealed morpho-structural and textural differentiations between forested, deforested and reforested areas in development regions with dominant mountain relief and high hills (more forested, with compact organization) and development regions dominated by plains or low hills (less forested, more fragmented, with small and isolated clusters). Our approach used fractal analysis, which has the advantage of analyzing the entire image rather than local information, thereby enabling quantification of the uniformity, fragmentation, heterogeneity and homogeneity of forests.

  7. Accelerated Air-coupled Ultrasound Imaging of Wood Using Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yiming Fang

    2015-12-01

Full Text Available Air-coupled ultrasound has shown excellent sensitivity and specificity for the nondestructive imaging of wood-based material. However, it is time-consuming, due to the high scanning density imposed by the Nyquist law. This study investigated the feasibility of applying compressed sensing techniques to air-coupled ultrasound imaging, aiming to reduce the number of scanning lines and thereby accelerate the imaging. First, an undersampled scanning strategy specified by a random binary matrix was proposed to address the limitation of the compressed sensing framework. The undersampled scanning can be easily implemented, with only minor modification required for the existing imaging system. Then, the discrete cosine transform was selected experimentally as the representation basis. Finally, the orthogonal matching pursuit algorithm was utilized to reconstruct the wood images. Experiments on three real air-coupled ultrasound images indicated the potential of the present method to accelerate air-coupled ultrasound imaging of wood. The same quality of ACU images can be obtained with the scanning time cut in half.
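The reconstruction chain described above (random undersampling, a DCT representation basis, orthogonal matching pursuit) can be sketched on a 1D signal; the sizes and sparsity below are illustrative assumptions, not the paper's scan parameters:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A,
    re-solving a least-squares fit on the chosen support each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Orthonormal DCT-II basis (columns are cosine atoms), the representation
# basis chosen in the abstract.
n = 128
i = np.arange(n)
D = np.cos(np.pi * (2 * i[:, None] + 1) * i[None, :] / (2 * n)) * np.sqrt(2.0 / n)
D[:, 0] /= np.sqrt(2.0)

# A signal that is 3-sparse in the DCT basis, observed on 64 random "scan lines".
coeffs = np.zeros(n)
coeffs[[3, 17, 40]] = [5.0, -3.0, 2.0]
signal = D @ coeffs
rows = np.random.default_rng(1).choice(n, size=64, replace=False)
recovered = D @ omp(D[rows], signal[rows], k=3)   # reconstruct from half the samples
```

The same machinery extends to 2D by vectorizing image patches; halving `rows` relative to `n` mirrors the paper's "scanning time cut in half" setting.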

  8. THE FRACTAL MARKET HYPOTHESIS

    OpenAIRE

    FELICIA RAMONA BIRAU

    2012-01-01

    In this article, the concept of capital market is analysed using Fractal Market Hypothesis which is a modern, complex and unconventional alternative to classical finance methods. Fractal Market Hypothesis is in sharp opposition to Efficient Market Hypothesis and it explores the application of chaos theory and fractal geometry to finance. Fractal Market Hypothesis is based on certain assumption. Thus, it is emphasized that investors did not react immediately to the information they receive and...

  9. MR imaging of spinal factors and compression of the spinal cord in cervical myelopathy

    International Nuclear Information System (INIS)

    Kokubun, Shoichi; Ozawa, Hiroshi; Sakurai, Minoru; Ishii, Sukenobu; Tani, Shotaro; Sato, Tetsuaki.

    1992-01-01

Magnetic resonance (MR) images of 109 surgical patients with cervical spondylotic myelopathy were retrospectively reviewed to examine whether MR imaging could replace conventional radiological procedures in determining spinal factors and spinal cord compression in this disease. MR imaging was useful in determining spondylotic herniation, the continuous type of ossification of the posterior longitudinal ligament, and calcification of the yellow ligament, probably replacing CT myelography, discography, and CT discography. When total defect of the subarachnoid space on T2-weighted images and block on myelograms were compared in determining spinal cord compression, the spinal cord was affected more extensively by 1.3 intervertebral distances (IVD) on T2-weighted images. When indentation of one third or more of the anteroposterior diameter of the spinal cord was used as the criterion for spinal cord compression, the difference in affected extension between myelography and MR imaging was 0.2 IVD on T1-weighted images and 0.6 IVD on T2-weighted images. However, when block was seen in 3 or more IVD on myelograms, the range of spinal cord compression tended to be larger on T1-weighted images. For a small range of spinal cord compression, T1-weighted imaging seems to be helpful in determining the range of decompression. When using T2-weighted imaging, the range of decompression becomes large, frequently including posterior decompression. (N.K.)

  10. Do happy faces really modulate liking for Jackson Pollock art and statistical fractal noise images?

    Directory of Open Access Journals (Sweden)

    Mundloch Katrin

    2017-01-01

Full Text Available Flexas et al. (2013) demonstrated that happy faces increase preference for abstract art if seen in short succession. We could not replicate their findings. In our first experiment, we tested whether the valence, saliency or arousal of facial primes can modulate liking of Jackson Pollock art crops. In the second experiment, the emphasis was on testing another type of abstract visual stimuli possessing similar low-level image features: statistical fractal noise images. Pollock crops were rated significantly higher when primed with happy faces in contrast to neutral faces, but not differently from the no-prime condition. The findings of our study suggest that affective priming with happy faces may be stimulus-specific and may have inadvertent effects on other abstract visual material.

  11. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  12. Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium

    Directory of Open Access Journals (Sweden)

    Konsti Juho

    2012-03-01

Full Text Available Abstract Background Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down to either 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 was observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 was observed between losslessly compressed images and
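The percentage agreement and kappa statistics used for validation here can be computed as follows (a generic implementation of observed agreement and Cohen's kappa, not the authors' analysis scripts):

```python
def agreement_and_kappa(labels_a, labels_b):
    """Observed percentage agreement and Cohen's kappa for two raters."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    # chance agreement from the raters' marginal label frequencies
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    kappa = 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```

Kappa discounts the agreement expected by chance, which is why a 98% raw agreement can coexist with a kappa of 0.96 rather than 0.98.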

  13. Optimization of compressive 4D-spatio-spectral snapshot imaging

    Science.gov (United States)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. These elemental images, carrying one-dimensional spectral information and different perspectives, are then captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube as a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only one single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  14. A study of complexity of oral mucosa using fractal geometry

    Directory of Open Access Journals (Sweden)

    S R Shenoi

    2017-01-01

Full Text Available Background: The oral mucosa lining the oral cavity is composed of epithelium supported by connective tissue. The shape of the epithelial-connective tissue interface has traditionally been used to describe physiological and pathological changes in the oral mucosa. Aim: The aim is to evaluate the morphometric complexity in normal, dysplastic, well-differentiated, and moderately differentiated squamous cell carcinoma (SCC) of the oral mucosa using fractal geometry. Materials and Methods: A total of 80 periodic acid–Schiff stained histological images of four groups: normal mucosa, dysplasia, well-differentiated SCC, and moderately differentiated SCC were verified by the gold standard. These images were then subjected to fractal analysis. Statistical Analysis: ANOVA with the post hoc Bonferroni test was applied. Results: Fractal dimension (FD) increases as the complexity increases from normal to dysplasia and then to SCC. Normal buccal mucosa was found to be significantly different from dysplasia and the two grades of SCC (P < 0.05). ANOVA of fractal scores of the four morphometrically different groups of buccal mucosa was significantly different, with F(3, 76) = 23.720 and P < 0.01. However, the FD of dysplasia was not significantly different from well-differentiated and moderately differentiated SCC (P = 1.000 and P = 0.382, respectively). Conclusion: This study establishes FD as a newer tool in differentiating normal tissue from dysplastic and neoplastic tissue. Fractal geometry is useful in the study of both physiological and pathological changes in the oral mucosa. A new grading system based on FD may emerge as an adjuvant aid in cancer diagnosis.

  15. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...

  16. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    Science.gov (United States)

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observer image quality qualitative assessment and to compare with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using a readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 >0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  17. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the high-resolution coded mask requirements and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase coded masks are utilized to obtain the compressive sensing images. Corresponding moving-target detection and tracking algorithms that operate directly on the compressive samples are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and detect the foreground. Each motion target in the compressive sampling domain is sparsely represented over a compressive feature dictionary spanned by target templates and noise templates. An l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to detect spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; the random Gaussian and Toeplitz phase masks, however, achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  18. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation on the image after all operations in the digital image processing chain, just before compressing the given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the parameters of the transformations the image will undergo at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode; however, it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide a more-than-twofold increase in average CR compared to SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
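    The quantization-step adaptation described in this record can be sketched as follows. The table below is the standard JPEG luminance quantization table; the scaling rule and clamping bounds are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Standard JPEG luminance quantization table (JPEG spec, Annex K).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scaled_table(scale):
    """Scale the default table: a larger scaling factor gives coarser
    quantization steps, hence a higher compression ratio."""
    return np.clip(np.round(Q50 * scale), 1, 255).astype(int)

# A noisier image tolerates a larger scaling factor (coarser steps) before
# distortions become visible relative to the noise floor:
coarse = scaled_table(2.0)   # e.g. for a noisy image
fine = scaled_table(0.5)     # e.g. for a clean image
```

A real adaptive encoder would choose the scaling factor from the blindly estimated noise variance; here the two example factors are hard-coded for illustration.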

  19. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Although axial compression has been used since the very dawn of 3D PET reconstruction, there are still no extensive studies of the impact of axial compression, and of its degree of modeling during reconstruction, on the end-point reconstructed image quality. In this work, the impact of axial compression on image quality is evaluated by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained contrast values similar to those of the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit a significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  20. Quantitative assessment of the influence of anatomic noise on the detection of subtle lung nodule in digital chest radiography using fractal-feature distance

    International Nuclear Information System (INIS)

    Imai, Kuniharu; Ikeda, Mitsuru; Enchi, Yukihiro; Niimi, Takanaga

    2008-01-01

    Purpose: To confirm whether or not the influence of anatomic noise on the detection of nodules in digital chest radiography can be evaluated by the fractal-feature distance. Materials and methods: We used the square images with and without a simulated nodule which were generated in our previous observer performance study; the simulated nodule was located on the upper margin of a rib, the inside of a rib, the lower margin of a rib, or the central region between two adjoining ribs. For the square chest images, fractal analysis was conducted using the virtual volume method. The fractal-feature distances between the considered and the reference images were calculated using the pseudo-fractal dimension and complexity, and the square images without the simulated nodule were employed as the reference images. We compared the fractal-feature distances with the observer's confidence level regarding the presence of a nodule in plain chest radiograph. Results: For all square chest images, the relationships between the length of the square boxes and the mean of the virtual volumes were linear on a log-log scale. For all types of the simulated nodules, the fractal-feature distance was the highest for the simulated nodules located on the central region between two adjoining ribs and was the lowest for those located in the inside of a rib. The fractal-feature distance showed a linear relation to an observer's confidence level. Conclusion: The fractal-feature distance would be useful for evaluating the influence of anatomic noise on the detection of nodules in digital chest radiography

  1. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  2. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports on a new information-preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data, obtained with the above process, are decomposed into bit planes. 5. The bit-state transitions in each bit plane are encoded by run-length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
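    Steps 2 and 5 of the algorithm (differential encoding along a matrix line, and run-length coding of bit-state transitions) can be sketched as follows; the custom difference code of step 3 and the bit-plane decomposition of step 4 are omitted for brevity.

```python
import numpy as np

def difference_encode(row):
    """Step 2: encode each pixel as the difference from its left neighbor;
    the first pixel is stored as-is."""
    row = np.asarray(row, dtype=int)
    out = np.empty_like(row)
    out[0] = row[0]
    out[1:] = row[1:] - row[:-1]
    return out

def run_length_encode(bits):
    """Step 5: encode a bit plane's state transitions as (value, run) pairs."""
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

row = [100, 101, 101, 99, 99, 99]
diffs = difference_encode(row).tolist()       # [100, 1, 0, -2, 0, 0]
runs = run_length_encode([0, 0, 1, 1, 1, 0])  # [(0, 2), (1, 3), (0, 1)]
```

Because neighboring CT pixels are highly correlated, the difference values cluster near zero, which is what makes the subsequent bit-plane run-length coding effective.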

  3. Fractal Metrology for biogeosystems analysis

    Directory of Open Access Journals (Sweden)

    V. Torres-Argüelles

    2010-11-01

    Full Text Available The solid-pore distribution pattern plays an important role in soil functioning, being related to the main physical, chemical, and biological multiscale and multitemporal processes of this complex system. In the present research, we studied the aggregation process as self-organizing and operating near a critical point. The structural pattern is extracted from digital images of three soils (Chernozem, Solonetz and "Chocolate" Clay and compared in terms of the roughness of the gray-intensity distribution, quantified by several measurement techniques. Special attention was paid to the uncertainty of each technique, measured in terms of standard deviation. Some of the applied methods are classical in the fractal context (box-counting, rescaled-range and wavelet analyses, etc. while the others have been recently developed by our group. The combination of these techniques, coming from fractal geometry, metrology, informatics, probability theory and statistics, is termed in this paper Fractal Metrology (FM. We show the usefulness of FM for complex systems analysis through a case study of the soil's physical and chemical degradation, applying the selected toolbox to describe and compare the structural attributes of three porous media with contrasting structure but similar clay mineralogy dominated by montmorillonites.

  4. Virtual endoscopic images by 3D FASE cisternography for neurovascular compression

    International Nuclear Information System (INIS)

    Ishimori, Takashi; Nakano, Satoru; Kagawa, Masahiro

    2003-01-01

    Three-dimensional fast asymmetric spin echo (3D FASE) cisternography provides high spatial resolution and excellent contrast as a water image acquisition technique. It is also useful for the evaluation of various anatomical regions. This study investigated the usefulness and limitations of virtual endoscopic images obtained by 3D FASE MR cisternography in the preoperative evaluation of patients with neurovascular compression. The study included 12 patients with neurovascular compression: 10 with hemifacial spasm and two with trigeminal neuralgia. The diagnosis was surgically confirmed in all patients. The virtual endoscopic images obtained were judged to be of acceptable quality for interpretation in all cases. The areas of compression identified in preoperative diagnosis with virtual endoscopic images showed good agreement with those observed from surgery, except in one case in which the common trunk of the anterior inferior cerebellar artery and posterior inferior cerebellar artery (AICA-PICA) bifurcated near the root exit zone of the facial nerve. The veins are displayed in some cases but not in others. The main advantage of generating virtual endoscopic images is that such images can be used for surgical simulation, allowing the neurosurgeon to perform surgical procedures with greater confidence. (author)

  5. Fractal description of fractures

    International Nuclear Information System (INIS)

    Lung, C.W.

    1991-06-01

    Recent studies on the fractal description of fractures are reviewed, and some problems on this subject are discussed. It appears promising to use the fractal dimension as a parameter for quantitative fractography and to apply fractal structures to the development of high-toughness materials. (author). 28 refs, 7 figs

  6. Establishing physical criteria to stop the losing compression of digital medical imaging

    International Nuclear Information System (INIS)

    Perez Diaz, M

    2008-01-01

    Full text: A key difficulty in storing and/or transmitting digital medical images obtained from modern technologies is the size in bytes they occupy. One way to address this is the application of compression algorithms (codecs), with or without losses. Lossy codecs in particular allow significant reductions in image size, but if they are not applied on solid scientific criteria, useful diagnostic information can be lost. This talk describes and assesses the quality of images obtained after applying current compression codecs, based on the analysis of physical parameters such as spatial resolution, random noise, contrast, and the image-generation devices. It addresses an open problem for medical physics and image processing, directed toward establishing objective criteria for stopping lossy compression, based on traditional univariate and bivariate metrics, such as the mean square error introduced at each compression rate, peak signal-to-noise ratio, and contrast-to-noise ratio, as well as more modern metrics, such as the Structural Similarity Index, distance measures, singular value decomposition of the image matrix, and correlation and spectral measures. It also reviews physical approaches for predicting image quality using mathematical observers such as the Hotelling observer and the channelized Hotelling observer with Gabor functions or Laguerre-Gauss polynomials. Finally, the correlation of these objective methods with subjective assessments of image quality made from ROC analysis based on diagnostic performance curves is analyzed. (author)

  7. Fractals and foods.

    Science.gov (United States)

    Peleg, M

    1993-01-01

    Fractal geometry and related concepts have had only a very minor impact on food research. The very few reported food applications deal mainly with the characterization of the contours of agglomerated instant coffee particles, the surface morphology of treated starch particles, the microstructure of casein gels viewed as a product of diffusion-limited aggregation, and the jagged mechanical signatures of crunchy dry foods. Fractal geometry describes objects having morphological features that are scale invariant. A demonstration of the self-similarity of fractal objects can be found in the familiar morphology of cauliflower and broccoli, both foods. Processes regulated by nonlinear dynamics can exhibit a chaotic behavior that has fractal characteristics. Examples are mixing of viscous fluids, turbulence, crystallization, agglomeration, diffusion, and possibly food spoilage.

  8. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    Science.gov (United States)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) approaches and can be integrated as an intellectual property (IP) core for part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
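    The essence of the FL algorithm, adaptive linear prediction with sign-algorithm weight updates, can be illustrated with a minimal toy sketch. The sign-sign update rule, integer step size, and third-order filter below are illustrative assumptions; the actual FL predictor structure and entropy coder differ.

```python
import numpy as np

def sign_lms_residuals(samples, order=3, mu=1):
    """Adaptive linear prediction; weights adapt by a sign-sign update.
    The residuals (prediction errors) are what a compressor entropy-codes."""
    w = np.zeros(order)          # adaptive filter weights
    history = np.zeros(order)    # most recent samples, newest first
    residuals = []
    for x in samples:
        pred = int(np.dot(w, history))
        e = x - pred             # residual to be encoded
        residuals.append(e)
        w += mu * np.sign(e) * np.sign(history)  # sign-sign adaptation step
        history = np.roll(history, 1)
        history[0] = x
    return residuals

# On constant data the predictor locks on and the residuals drop to zero:
print(sign_lms_residuals([5, 5, 5, 5, 5]))  # [5, 5, 0, 0, 0]
```

Small residuals compress well under entropy coding, which is why a good adaptive predictor directly improves the compression ratio.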

  9. Flames in fractal grid generated turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Goh, K H H; Hampp, F; Lindstedt, R P [Department of Mechanical Engineering, Imperial College, London SW7 2AZ (United Kingdom); Geipel, P, E-mail: p.lindstedt@imperial.ac.uk [Siemens Industrial Turbomachinery AB, SE-612 83 Finspong (Sweden)

    2013-12-15

    Twin premixed turbulent opposed jet flames were stabilized for lean mixtures of air with methane and propane in fractal grid generated turbulence. A density segregation method was applied alongside particle image velocimetry to obtain velocity and scalar statistics. It is shown that the current fractal grids increase the turbulence levels by around a factor of 2. Proper orthogonal decomposition (POD) was applied to show that the fractal grids produce slightly larger turbulent structures that decay at a slower rate as compared to conventional perforated plates. Conditional POD (CPOD) was also implemented using the density segregation technique and the results show that CPOD is essential to segregate the relative structures and turbulent kinetic energy distributions in each stream. The Kolmogorov length scales were also estimated providing values of approximately 0.1 and 0.5 mm in the reactants and products, respectively. Resolved profiles of flame surface density indicate that a thin flame assumption leading to bimodal statistics is not perfectly valid under the current conditions and it is expected that the data obtained will be of significant value to the development of computational methods that can provide information on the conditional structure of turbulence. It is concluded that the increase in the turbulent Reynolds number is without any negative impact on other parameters and that fractal grids provide a route towards removing the classical problem of a relatively low ratio of turbulent to bulk strain associated with the opposed jet configuration. (paper)

  10. Inter frame motion estimation and its application to image sequence compression: an introduction

    International Nuclear Information System (INIS)

    Cremy, C.

    1996-01-01

    With the constant development of new communication technologies, such as digital TV and teleconferencing, and the development of image analysis applications, there is a growing volume of data to manage. Compression techniques are required for the transmission and storage of these data: dealing with original images would require expensive high-bandwidth communication devices and huge storage media. Image sequence compression can be achieved by means of interframe estimation, which consists of identifying redundant information in zones where there is little motion between two frames. This paper is an introduction to some motion estimation techniques, such as gradient techniques, pel-recursive methods, and block matching, and to their application to image sequence compression. (Author) 17 refs
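    Block matching, one of the techniques introduced in this record, can be sketched as an exhaustive search that minimizes the sum of absolute differences (SAD) over a small displacement window. The block and search-window sizes below are illustrative assumptions.

```python
import numpy as np

def block_match(ref, cur, top, left, block=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) such that
    the block of `cur` at (top, left) best matches the reference frame,
    using the sum of absolute differences (SAD) as the match criterion."""
    target = cur[top:top+block, left:left+block].astype(int)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y+block, x:x+block].astype(int)
            sad = np.abs(target - cand).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# A frame shifted by two rows and one column is recovered exactly:
ref = np.arange(32 * 32).reshape(32, 32) % 251
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
mv, sad = block_match(ref, cur, top=8, left=8)
print(mv, sad)  # (-2, -1) 0
```

An encoder then transmits the motion vector plus the (small) prediction residual instead of the raw block, which is the source of the interframe compression gain.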

  11. Differential diagnosis of benign and malignant vertebral compression fractures with MR imaging

    International Nuclear Information System (INIS)

    Staebler, A.; Krimmel, K.; Seiderer, M.; Gaertner, C.; Fritsch, S.; Raum, W.

    1992-01-01

    42 patients with known malignancy and vertebral compressions underwent MRI. Sagittal T1-weighted spin-echo (SE) images pre- and post-Gd-DTPA, out-of-phase long-TR gradient-echo (GE) images, and short-TI inversion recovery (STIR) images were obtained at 1.0 T. In 39 of 42 cases a correct differentiation between osteoporotic and tumorous vertebral compression fractures was possible by quantification and correlation of SE and GE signal intensities. Gd-DTPA did not improve differential diagnosis, since both tumour infiltration and bone marrow oedema in acute compression fractures showed comparable enhancement. STIR sequences were most sensitive for pathology but unspecific, owing to comparable amounts of water in tumour tissue and bone marrow oedema. Susceptibility-induced signal reduction in GE images and morphologic criteria proved most reliable for the differentiation of benign and tumour-related fractures. (orig./GDG) [de

  12. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    Science.gov (United States)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios, from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVMs) and neural networks (NNs) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  13. Fractal Analysis of Mobile Social Networks

    International Nuclear Information System (INIS)

    Zheng Wei; Pan Qian; Sun Chen; Deng Yu-Fan; Zhao Xiao-Kang; Kang Zhao

    2016-01-01

    Fractality and self-similarity of complex networks have attracted much attention in recent years. The fractal dimension is a useful measure of the fractal property of networks. However, the fractal features of mobile social networks (MSNs) are inadequately investigated. In this work, a box-covering method based on the ratio of excluded mass to closeness centrality is presented to investigate the fractal features of MSNs. Using this method, we find that some MSNs are fractal at different time intervals. Our simulation results indicate that the proposed method is suitable for analyzing the fractal properties of MSNs. (paper)

  14. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. An open problem in GI systems is that a long acquisition time is required to reconstruct images with good visibility and signal-to-noise ratios (SNRs). In this paper, we propose a new scheme to obtain good performance with a shorter reconstruction time, which we call correspondence normalized ghost imaging based on compressive sensing (CCNGI). In this scheme, we enhance the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and we reduce the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the quality of the images is improved and the reconstruction time is reduced using the CCNGI scheme. For the two-grayscale "double-slit" image, the mean square errors (MSEs) obtained by the GI and normalized GI (NGI) schemes with 5000 measurements are 0.237 and 0.164, respectively, while it is 0.021 for the CCNGI scheme with 2500 measurements. For the eight-grayscale "lena" object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098 using the GI and NGI schemes, respectively, while the value rises to 16.198 using the CCNGI scheme. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale "double-slit" object. The quality of the reconstructed images using CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
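    The normalization step can be sketched with a toy correlation-GI simulation. This illustrates only the normalization against reference-intensity fluctuations; the CS and time-correspondence stages of CCNGI are omitted, and the object and pattern parameters are illustrative assumptions.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Traditional GI: correlate the bucket signal with the reference
    patterns (a sample covariance estimate per pixel)."""
    patterns = np.asarray(patterns, float)
    bucket = np.asarray(bucket, float)
    return (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)

def normalized_ghost_image(patterns, bucket):
    """NGI: divide each bucket value by the total reference intensity,
    suppressing noise from laser power fluctuations."""
    ref_intensity = np.asarray(patterns, float).sum(axis=(1, 2))
    return ghost_image(patterns, np.asarray(bucket, float) / ref_intensity)

# Simulated acquisition: random speckle patterns and a double-slit object.
rng = np.random.default_rng(1)
obj = np.zeros((8, 8))
obj[:, 2] = 1
obj[:, 5] = 1
patterns = rng.random((4000, 8, 8))
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel detector signal
recon = normalized_ghost_image(patterns, bucket)
```

With a fluctuating source, each pattern would carry a random overall intensity factor; dividing the bucket signal by the measured reference intensity cancels that factor before the correlation is formed.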

  15. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present the information in a succinct frame. Initially, the DCT structure is used for Image compression, which has lesser complexity and area efficient. Similarly, 2D DCT also has provided reasonable data compression, but implementation concern, it calls more multipliers and adders thus its lead to acquire more area and high power consumption. To contain an account of all, this paper has been dealt with VLSI architecture for image compression using Rom free DA based DCT (Discrete Cosine Transform structure. This technique provides high-throughput and most suitable for real-time implementation. In order to achieve this image matrix is subdivided into odd and even terms then the multiplication functions are removed by shift and add approach. Kogge_Stone_Adder techniques are proposed for obtaining a bit-wise image quality which determines the new trade-off levels as compared to the previous techniques. Overall the proposed architecture produces reduced memory, low power consumption and high throughput. MATLAB is used as a funding tool for receiving an input pixel and obtaining output image. Verilog HDL is used for implementing the design, Model Sim for simulation, Quatres II is used to synthesize and obtain details about power and area.

  16. Fractal dust grains in plasma

    International Nuclear Information System (INIS)

    Huang, F.; Peng, R. D.; Liu, Y. H.; Chen, Z. Y.; Ye, M. F.; Wang, L.

    2012-01-01

    Fractal dust grains of different shapes are observed in a radially confined magnetized radio frequency plasma. The fractal dimensions of the dust structures in two-dimensional (2D) horizontal dust layers are calculated, and their evolution in the dust growth process is investigated. It is found that as the dust grains grow the fractal dimension of the dust structure decreases. In addition, the fractal dimension of the center region is larger than that of the entire region in the 2D dust layer. In the initial growth stage, the small dust particulates at a high number density in a 2D layer tend to fill space as a normal surface with fractal dimension D = 2. The mechanism of the formation of fractal dust grains is discussed.

  17. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext, while its phase serves as the decryption key. Therefore, compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
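    The error-reduction iteration at the heart of such schemes can be sketched for a single Fourier-transform link between two planes; the cascaded diffractive system of the paper is more elaborate, so this is only a conceptual illustration with assumed magnitudes.

```python
import numpy as np

def error_reduction(target_mag, source_mag, iters=200, seed=0):
    """Error-reduction (Gerchberg-Saxton style) phase retrieval: alternate
    between the two planes, each time keeping the current phase but
    replacing the magnitude with the known one."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, source_mag.shape)
    field = source_mag * np.exp(1j * phase)
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = target_mag * np.exp(1j * np.angle(far))      # enforce output magnitude
        field = np.fft.ifft2(far)
        field = source_mag * np.exp(1j * np.angle(field))  # enforce input magnitude
    return np.angle(field)   # retrieved phase (the decryption-key analogue)
```

The ciphertext analogue is the output-plane magnitude, and the retrieved input-plane phase plays the role of the key; the iteration's error is non-increasing, so more iterations can only tighten the fit to the measured magnitudes.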

  18. Assessment of disintegrant efficacy with fractal dimensions from real-time MRI.

    Science.gov (United States)

    Quodbach, Julian; Moussavi, Amir; Tammer, Roland; Frahm, Jens; Kleinebudde, Peter

    2014-11-20

An efficient disintegrant is capable of breaking a tablet up into the smallest possible particles in the shortest time. Until now, comparative data on the efficacy of different disintegrants have been based on dissolution studies or the disintegration time. Extending these approaches, this study introduces a method that defines the evolution of the fractal dimension of a tablet as a surrogate parameter for the available surface area. The fractal dimension is a measure of the tortuosity of a line, in this case the upper surface of a disintegrating tablet. High-resolution real-time MRI was used to record videos of disintegrating tablets. The acquired video images were processed to depict the upper surface of the tablets, and a box-counting algorithm was used to estimate the fractal dimensions. The influence of six different disintegrants, of different relative tablet densities, and of increasing disintegrant concentration was investigated to evaluate the performance of the novel method. Changing relative densities hardly affect the progression of fractal dimensions, whereas an increase in disintegrant concentration causes higher fractal dimensions during disintegration, which are also reached more quickly. Different disintegrants display only minor differences in the maximal fractal dimension, yet the kinetics with which the maximum is reached allow a differentiation and classification of disintegrants. Copyright © 2014 Elsevier B.V. All rights reserved.
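
    A box-counting estimate of the kind used here can be sketched on a binary image as follows: count the occupied boxes at several box sizes and fit the slope of log N(s) against log(1/s). This is a generic implementation, not the authors' MRI processing pipeline.

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image.
    N(s) = number of s-by-s boxes containing at least one foreground pixel;
    the dimension is the slope of log N(s) versus log(1/s)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s  # crop so the image tiles exactly into s x s boxes
        boxes = mask[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)), np.log(counts), 1)
    return float(slope)
```

    A straight line yields a dimension near 1 and a filled region near 2; a rough disintegrating surface falls in between.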

  19. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of adjacent lines. An image reconstruction technique that incorporates dynamic CS into the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility, and the results demonstrate contrast and resolution improvements. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  20. Fractality and growth of He bubbles in metals

    Science.gov (United States)

    Kajita, Shin; Ito, Atsushi M.; Ohno, Noriyasu

    2017-08-01

Pinholes are formed on the surfaces of metals exposed to helium plasmas, and they are regarded as the initial stage of the growth of fuzzy nanostructures. In this study, the number density of the pinholes is investigated in detail from scanning electron microscope (SEM) micrographs of tungsten and tantalum exposed to helium plasmas. A power-law relation was identified between the number density and the size of the pinholes. From the slope and the region where the power law is satisfied, the fractal dimension D and s_min, which characterize the SEM images, are deduced. Parametric dependences and the material dependence of D and s_min are revealed. To explain the fractality, simple Monte Carlo simulations including random walks of He atoms and absorption on bubbles were introduced. It is shown that the initial position of the random walk is one of the key factors in reproducing the fractality. The results indicate that new nucleations of bubbles are necessary to reproduce the number-density distribution of bubbles.

  1. Multi-scale simulations of field ion microscopy images—Image compression with and without the tip shank

    International Nuclear Information System (INIS)

    NiewieczerzaŁ, Daniel; Oleksy, CzesŁaw; Szczepkowicz, Andrzej

    2012-01-01

Multi-scale simulations of field ion microscopy images of faceted and hemispherical samples are performed using a 3D model. It is shown that faceted crystals have compressed images even in cases with no shank. The presence of the shank increases the compression of images of faceted crystals quantitatively in the same way as for hemispherical samples. This proves that the shank does not significantly influence the local, relative variations of the magnification caused by the atomic-scale structure of the sample. -- Highlights: ► Multi-scale simulations of field ion microscopy images. ► Faceted and hemispherical samples with and without shank. ► Shank causes overall compression, but does not influence local magnification effects. ► Image compression linearly increases with the shank angle. ► Shank changes compression of image of faceted tip in the same way as for smooth sample.

  2. Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging

    Directory of Open Access Journals (Sweden)

    T. Knopp

    2015-01-01

Full Text Available Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, a matrix compression technique has recently been proposed that makes use of a basis transformation to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
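
    The row-local thresholding idea can be illustrated with a short sketch that keeps a fixed fraction of the largest-magnitude coefficients in each row of an already transformed matrix. This is a simplified stand-in for the method described, not its actual implementation.

```python
import numpy as np

def threshold_rows(A, keep_fraction=0.1):
    """Row-local thresholding: in every row, keep only the `keep_fraction`
    largest-magnitude coefficients and zero the rest. A global threshold
    would compare all entries to a single cutoff and could wipe out entire
    low-energy rows; the per-row rule preserves each row's dominant terms."""
    B = np.zeros_like(A)
    k = max(1, int(A.shape[1] * keep_fraction))
    for i, row in enumerate(A):
        idx = np.argsort(np.abs(row))[-k:]  # indices of the k largest entries
        B[i, idx] = row[idx]
    return B
```

    The thresholded matrix would then be stored in a sparse format such as compressed row storage, so reconstruction touches only the retained entries.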

  3. Discovery of cosmic fractals

    CERN Document Server

    Baryshev, Yuri

    2002-01-01

    This is the first book to present the fascinating new results on the largest fractal structures in the universe. It guides the reader, in a simple way, to the frontiers of astronomy, explaining how fractals appear in cosmic physics, from our solar system to the megafractals in deep space. It also offers a personal view of the history of the idea of self-similarity and of cosmological principles, from Plato's ideal architecture of the heavens to Mandelbrot's fractals in the modern physical cosmos. In addition, this invaluable book presents the great fractal debate in astronomy (after Luciano Pi

  4. Fractal zeta functions and fractal drums higher-dimensional theory of complex dimensions

    CERN Document Server

    Lapidus, Michel L; Žubrinić, Darko

    2017-01-01

    This monograph gives a state-of-the-art and accessible treatment of a new general higher-dimensional theory of complex dimensions, valid for arbitrary bounded subsets of Euclidean spaces, as well as for their natural generalization, relative fractal drums. It provides a significant extension of the existing theory of zeta functions for fractal strings to fractal sets and arbitrary bounded sets in Euclidean spaces of any dimension. Two new classes of fractal zeta functions are introduced, namely, the distance and tube zeta functions of bounded sets, and their key properties are investigated. The theory is developed step-by-step at a slow pace, and every step is well motivated by numerous examples, historical remarks and comments, relating the objects under investigation to other concepts. Special emphasis is placed on the study of complex dimensions of bounded sets and their connections with the notions of Minkowski content and Minkowski measurability, as well as on fractal tube formulas. It is shown for the f...

  5. Using Fractal And Morphological Criteria For Automatic Classification Of Lung Diseases

    Science.gov (United States)

    Vehel, Jacques Levy

    1989-11-01

Medical images are difficult to analyze by means of classical image processing tools because they are very complex and irregular. Such shapes are obtained, for instance, in nuclear medicine with the spatial distribution of activity for organs such as the lungs, liver, and heart. We have tried to apply two different theories to these signals: - Fractal geometry deals with the analysis of complex irregular shapes which cannot be well described by classical Euclidean geometry. - Integral geometry treats sets globally and allows one to introduce robust measures. We have computed three parameters on three kinds of lung SPECT images (normal, pulmonary embolism and chronic disease): - The commonly used fractal dimension (FD), which gives a measurement of the irregularity of the 3D shape. - The generalized lacunarity dimension (GLD), defined as the variance of the ratio of the local activity to the mean activity, which is only sensitive to the distribution and size of gaps in the surface. - The Favard length, which gives an approximation of the surface of a 3D shape. The results show that each slice of the lung, considered as a 3D surface, is fractal and that the fractal dimension is the same for each slice and for the three kinds of lungs; as for the lacunarity and Favard length, they are clearly different for normal lungs, pulmonary embolisms and chronic diseases. These results indicate that automatic classification of lung SPECT images can be achieved, and that a quantitative measurement of the evolution of the disease could be made.
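
    The lacunarity idea, the variance of local activity relative to its mean, is commonly estimated with a gliding-box computation. The sketch below is a generic version of that estimator, not the authors' exact GLD.

```python
import numpy as np

def lacunarity(img, r):
    """Gliding-box lacunarity at box size r: second moment of the box mass
    divided by the squared first moment, i.e. 1 + var/mean**2. A perfectly
    homogeneous image therefore has lacunarity 1 at every scale, and gappy
    images score higher."""
    n = img.shape[0]
    masses = np.asarray(
        [img[i:i + r, j:j + r].sum()
         for i in range(n - r + 1) for j in range(n - r + 1)],
        dtype=float,
    )
    return float(np.mean(masses ** 2) / np.mean(masses) ** 2)
```

    Evaluating lacunarity over several box sizes r gives a scale-dependent gap profile that can separate images sharing the same fractal dimension.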

  6. Fractal geometry and number theory complex dimensions of fractal strings and zeros of zeta functions

    CERN Document Server

    Lapidus, Michael L

    1999-01-01

A fractal drum is a bounded open subset of R^m with a fractal boundary. A difficult problem is to describe the relationship between the shape (geometry) of the drum and its sound (its spectrum). In this book, we restrict ourselves to the one-dimensional case of fractal strings, and their higher dimensional analogues, fractal sprays. We develop a theory of complex dimensions of a fractal string, and we study how these complex dimensions relate the geometry with the spectrum of the fractal string. We refer the reader to [Berrl-2, Lapl-4, LapPol-3, LapMal-2, HeLapl-2] and the references therein for further physical and mathematical motivations of this work. (Also see, in particular, Sections 7.1, 10.3 and 10.4, along with Appendix B.) In Chapter 1, we introduce the basic object of our research, fractal strings (see [Lapl-3, LapPol-3, LapMal-2, HeLapl-2]). A 'standard fractal string' is a bounded open subset of the real line. Such a set is a disjoint union of open intervals, the lengths of which ...

  7. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Directory of Open Access Journals (Sweden)

    Christian Schou Oxvig

    2014-10-01

Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at a high degree of reproducibility of their research.

  8. Quantum Fractal Eigenstates

    OpenAIRE

    Casati, Giulio; Maspero, Giulio; Shepelyansky, Dima L.

    1997-01-01

We study quantum chaos in open dynamical systems and show that it is characterized by quantum fractal eigenstates located on the underlying classical strange repeller. The states with the longest lifetimes typically reveal a scar structure on the classical fractal set.

  9. Electromagnetism on anisotropic fractal media

    Science.gov (United States)

    Ostoja-Starzewski, Martin

    2013-04-01

Basic equations of electromagnetic fields in anisotropic fractal media are obtained using a dimensional regularization approach. First, a formulation based on product measures is shown to satisfy the four basic identities of the vector calculus. This allows a generalization of the Green-Gauss and Stokes theorems as well as the charge conservation equation on anisotropic fractals. Then, pursuing the conceptual approach, we derive the Faraday and Ampère laws for such fractal media, which, along with two auxiliary null-divergence conditions, effectively give the modified Maxwell equations. Proceeding on a separate track, we employ a variational principle for electromagnetic fields, appropriately adapted to fractal media, so as to independently derive the same forms of these two laws. It is next found that the parabolic (for a conducting medium) and the hyperbolic (for a dielectric medium) equations involve modified gradient operators, while the Poynting vector has the same form as in the non-fractal case. Finally, Maxwell's electromagnetic stress tensor is reformulated for fractal systems. In all the cases, the derived equations for fractal media depend explicitly on fractal dimensions in three different directions and reduce to conventional forms for continuous media with Euclidean geometries upon setting each of these dimensions equal to unity.

  10. Inkjet-Printed Ultra Wide Band Fractal Antennas

    KAUST Repository

    Maza, Armando Rodriguez

    2012-05-01

In this work, paper-based inkjet-printed ultra-wideband (UWB) fractal antennas are presented. Three new designs are described: a combined UWB fractal monopole based on the fourth-order Koch snowflake fractal, which utilizes a Sierpinski gasket fractal for ink reduction; a Cantor-based fractal antenna, which achieves a larger bandwidth than a previously published UWB Cantor fractal monopole antenna; and a 3D loop fractal antenna, which attains miniaturization, impedance matching and multiband characteristics. It is shown that fractals prove to be a successful method of reducing fabrication cost in inkjet-printed antennas while retaining or enhancing printed antenna performance.

  11. Hydrophobicity classification of polymeric materials based on fractal dimension

    Directory of Open Access Journals (Sweden)

    Daniel Thomazini

    2008-12-01

Full Text Available This study proposes a new method for obtaining the hydrophobicity classification (HC) of high-voltage polymer insulators. In this method, the HC was analyzed by fractal dimension (fd), and its processing time was evaluated with a view to application in mobile devices. Texture images were created by spraying solutions produced from mixtures of isopropyl alcohol and distilled water in proportions ranging from 0 to 100% volume of alcohol (%AIA). Based on these solutions, the contact angles of the drops were measured and the textures were used as patterns for fractal dimension calculations.

  12. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    Energy Technology Data Exchange (ETDEWEB)

Aushev, A A; Barinov, S P; Vasin, M G; Drozdov, Yu M; Ignat'ev, Yu V; Izgorodin, V M; Kovshov, D K; Lakhtikov, A E; Lukovkina, D D; Markelov, V V; Morovov, A P; Shishlov, V V [Russian Federal Nuclear Center 'All-Russian Research Institute of Experimental Physics', Sarov, Nizhnii Novgorod region (Russian Federation)]

    2015-06-30

We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path lengths in the foam skeleton. We describe the procedure for deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with the atomic structure (straggling) and hardware effects. Fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and the size distribution of the number of foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path-length distribution of alpha-particles within the range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on the propagation of laser and X-ray radiation in specific porous samples. (laser plasma)

  13. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme accelerates data mining such as principal component analysis, since the factorisation can be performed on the compressed data representation, decreasing the factorisation time of a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between bone and tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.
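
    One common way to make the factorisation cheap, in the spirit described above, is to compress the spectral dimension with a random projection before performing PCA. The sketch below is an illustrative stand-in for the paper's basis-approximation scheme, with assumed function and parameter names.

```python
import numpy as np

def compressed_pca(X, k, n_components=2, seed=0):
    """PCA on a randomly compressed representation: sketch the d-dimensional
    spectra (rows of X) down to k < d dimensions with a Gaussian random
    projection, then factorise the much smaller matrix via SVD."""
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)  # sketching matrix
    Z = X @ P                                              # compressed data
    Zc = Z - Z.mean(axis=0)                                # centre before PCA
    _, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    scores = Zc @ Vt[:n_components].T                      # component scores
    return scores, s[:n_components]
```

    Because the SVD runs on an n-by-k matrix instead of n-by-d, the cost drops roughly by a factor of d/k while the dominant components are approximately preserved.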

  14. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and a limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and C from JasPer. JPEG2000 provides a practical set of features not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). Performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  15. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image-pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and an indexing rule for the polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images were analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition, the conversion description and the conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by a tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small-animal PET scanner, a compression ratio of 12.6:1 of the system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot-rod phantom was imaged and reconstructed on these two grids with reasonable time cost. The resolution of the reconstructed images was 1.35 mm for both grids. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)

  16. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed

    2013-04-01

Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on the Haar and modified Haar transforms is presented. The standard Haar wavelet transformation with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd-based numbers, i.e. N=3 and N=5, to generate many solution sets, which are tested using the energy function or a numerical method to get the optimum one. The Haar transform is simple, efficient in memory usage due to its high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and modified Haar techniques in attaining an efficient compression ratio (CR) and a higher peak signal-to-noise ratio (PSNR), and the resulting images are much smoother compared to standard JPEG, especially at high CR. A comparison between the standard JPEG, Haar, and modified Haar techniques is given finally, which confirms the superior capability of the modified Haar technique over the others.
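
    A single level of the standard N=2 Haar filter bank amounts to averages and differences along each axis, producing the four subbands LL, LH, HL and HH. A minimal sketch (using 1/2 normalisation, one of several common conventions):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: average/difference along rows,
    then along columns, yielding LL (approximation) and LH/HL/HH (details)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse: the Haar transform is perfectly reversible."""
    a = np.empty((ll.shape[0] * 2, ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0], a.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img
```

    Compression then comes from quantising or discarding small detail coefficients in LH, HL and HH, which are near zero in smooth regions.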

  17. PROMISE: parallel-imaging and compressed-sensing reconstruction of multicontrast imaging using SharablE information.

    Science.gov (United States)

    Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun

    2015-02-01

A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. The multicontrast scheme requires long scanning times. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans, but several problems remain unsolved in existing methods. The goal of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. It is proposed in this study to use manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance the following reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists in the presence of interscan motion. Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.

  18. Compressive Sampling for Non-Imaging Remote Classification

    Science.gov (United States)

    2013-10-22

…spectro-polarization imager, a compressive coherence imager to resolve objects through turbulence… The relay lens for UV-CASSI, which focuses the aperture code onto the monochrome detector… below in Fig. 3, with a silicon UV-sensitive detector on the left, and a UV…

  19. Random walk through fractal environments

    OpenAIRE

    Isliker, H.; Vlahos, L.

    2002-01-01

We analyze random walks through fractal environments embedded in 3-dimensional, permeable space. Particles travel freely and are scattered off into random directions when they hit the fractal. The statistical distribution of the flight increments (i.e. of the displacements between two consecutive hittings) is analytically derived from a common, practical definition of fractal dimension, and it turns out to approximate a power law quite well in the case where the dimension D of the fractal is ...
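
    Flight increments following a power law such as the one derived here can be generated by inverse-transform sampling. The exponent name mu and cutoff l_min below are generic, not the paper's notation.

```python
import numpy as np

def powerlaw_flights(n, mu=2.5, l_min=1.0, seed=0):
    """Draw n flight lengths from p(l) ~ l**(-mu) for l >= l_min by
    inverting the CDF F(l) = 1 - (l/l_min)**(1 - mu), which gives
    l = l_min * (1 - u)**(-1/(mu - 1)) for uniform u in [0, 1)."""
    u = np.random.default_rng(seed).uniform(size=n)
    return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))
```

    Chaining such increments with random directions yields a Lévy-flight-like walk whose step statistics mimic scattering off a fractal of the corresponding dimension.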

  20. A compression algorithm for medical images and a display with the decoding function

    International Nuclear Information System (INIS)

    Gotoh, Toshiyuki; Nakagawa, Yukihiro; Shiohara, Morito; Yoshida, Masumi

    1990-01-01

This paper describes an efficient image compression method for medical images and a high-speed display with a decoding function. In our method, an input image is divided into blocks, and either discrete cosine transform (DCT) coding or block truncation coding (BTC) is adaptively applied to each block to improve image quality. The display we developed receives the compressed data from the host computer and reconstructs images of good quality at high speed, using four decoding microprocessors on which our algorithm is implemented in a pipeline. Experiments verified that our method and display are effective. (author)
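
    The BTC half of such an adaptive scheme is classically a two-level quantiser whose output levels are chosen to preserve each block's mean and standard deviation, plus a one-bit-per-pixel mask. A generic sketch of that classic construction, not the authors' implementation:

```python
import numpy as np

def btc_block(block):
    """Block truncation coding of one block: quantise to two levels a, b
    chosen so the reconstructed block keeps the original mean and standard
    deviation exactly. q = number of pixels at or above the mean."""
    m, s = block.mean(), block.std()
    mask = block >= m
    q = int(mask.sum())
    n = block.size
    if q in (0, n):                      # flat block: one level suffices
        return np.full_like(block, m, dtype=float)
    a = m - s * np.sqrt(q / (n - q))     # low output level
    b = m + s * np.sqrt((n - q) / q)     # high output level
    return np.where(mask, b, a)
```

    An adaptive encoder like the one described would apply DCT coding instead on blocks where this two-level approximation is too crude.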

  1. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2014-01-01

    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment and thereby accelerating the acquisition of AFM images. Magni also pr...... as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research....

  2. Fractals via iterated functions and multifunctions

    International Nuclear Information System (INIS)

    Singh, S.L.; Prasad, Bhagwati; Kumar, Ashish

    2009-01-01

    Fractals have wide applications in biology, computer graphics, quantum physics and several other areas of applied sciences (see, for instance [Daya Sagar BS, Rangarajan Govindan, Veneziano Daniele. Preface - fractals in geophysics. Chaos, Solitons and Fractals 2004;19:237-39; El Naschie MS. Young double-split experiment Heisenberg uncertainty principles and cantorian space-time. Chaos, Solitons and Fractals 1994;4(3):403-09; El Naschie MS. Quantum measurement, information, diffusion and cantorian geodesics. In: El Naschie MS, Rossler OE, Prigogine I, editors. Quantum mechanics, diffusion and Chaotic fractals. Oxford: Elsevier Science Ltd; 1995. p. 191-205; El Naschie MS. Iterated function systems, information and the two-slit experiment of quantum mechanics. In: El Naschie MS, Rossler OE, Prigogine I, editors. Quantum mechanics, diffusion and Chaotic fractals. Oxford: Elsevier Science Ltd; 1995. p. 185-9; El Naschie MS, Rossler OE, Prigogine I. Forward. In: El Naschie MS, Rossler OE, Prigogine I, editors. Quantum mechanics, diffusion and Chaotic fractals. Oxford: Elsevier Science Ltd; 1995; El Naschie MS. A review of E-infinity theory and the mass spectrum of high energy particle physics. Chaos, Solitons and Fractals 2004;19:209-36; El Naschie MS. Fractal black holes and information. Chaos, Solitons and Fractals 2006;29:23-35; El Naschie MS. Superstring theory: what it cannot do but E-infinity could. Chaos, Solitons and Fractals 2006;29:65-8). Especially, the study of iterated functions has been found very useful in the theory of black holes, two-slit experiment in quantum mechanics (cf. El Naschie, as mentioned above). The intent of this paper is to give a brief account of recent developments of fractals arising from IFS. We also discuss iterated multifunctions.
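
    The attractor of an iterated function system can be rendered with the chaos game, applying one randomly chosen contraction map per step. The sketch below uses the familiar three-map Sierpinski triangle IFS as a concrete example.

```python
import numpy as np

def chaos_game(n_points=10000, seed=0):
    """Chaos game for the Sierpinski-triangle IFS: repeatedly move the
    current point halfway towards a randomly chosen triangle vertex. The
    orbit settles onto the attractor of the three contraction maps."""
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    rng = np.random.default_rng(seed)
    p = np.array([0.25, 0.25])           # any starting point inside the hull
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        p = (p + verts[rng.integers(3)]) / 2.0
        pts[i] = p
    return pts
```

    Replacing the three maps with any other contractive family generates the attractor of that IFS by the same procedure.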

  3. Clinical evaluation of the JPEG2000 compression rate of CT and MR images for long term archiving in PACS

    International Nuclear Information System (INIS)

    Cha, Soon Joo; Kim, Sung Hwan; Kim, Yong Hoon

    2006-01-01

We wanted to evaluate an acceptable compression rate of JPEG2000 for long-term archiving of CT and MR images in PACS. Nine CT images and 9 MR images that had small or minimal lesions were randomly selected from the PACS at our institute. All the images were compressed at rates of 5:1, 10:1, 20:1, 40:1 and 80:1 with the JPEG2000 compression protocol. Pairs of original and compressed images were compared by 9 radiologists working independently. We designed a JPEG2000 viewing program that displays two images on one monitor for easy and quick evaluation. All the observers performed the comparison study twice, on 5-megapixel grayscale LCD monitors and on 2-megapixel color LCD monitors. The PSNR (Peak Signal to Noise Ratio) values were calculated for quantitative comparison. On MR and CT, all the 5:1 compressed images showed no difference from the original images for all 9 observers, and only one observer could detect an image difference in one CT image at 10:1 compression, and only on the 5-megapixel monitor. At a 20:1 compression rate, clinically significant image deterioration was found in 50% of the images on the 5-megapixel monitor and in 30% of the images on the 2-megapixel monitor. PSNR values larger than 44 dB were calculated for all the compressed images. The clinically acceptable image compression rate for long-term archiving with the JPEG2000 compression protocol is 10:1 for MR and CT; if applied to PACS, this would reduce the cost and responsibility of the system.
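
    PSNR values such as the 44 dB figure quoted here are computed from the mean squared error between the original and compressed images. A minimal version for 8-bit data:

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE), where MSE
    is the mean squared pixel error. Identical images give infinite PSNR."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(peak ** 2 / mse))
```

    For 8-bit images, an average error of one grey level everywhere corresponds to about 48 dB, which puts the study's 44 dB floor just below that level of fidelity.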

  4. Fractal Electrochemical Microsupercapacitors

    KAUST Repository

    Hota, Mrinal Kanti

    2017-08-17

    The first successful fabrication of microsupercapacitors (μ-SCs) using fractal electrode designs is reported. Using sputtered anhydrous RuO₂ thin-film electrodes as prototypes, μ-SCs are fabricated using Hilbert, Peano, and Moore fractal designs, and their performance is compared to conventional interdigital electrode structures. Microsupercapacitor performance, including energy density and areal and volumetric capacitances, changes with fractal electrode geometry. Specifically, the μ-SCs based on the Moore design show a 32% enhancement in energy density compared to conventional interdigital structures, when compared at the same power density and using the same thin-film RuO₂ electrodes. The energy density of the Moore design is 23.2 mWh cm⁻³ at a volumetric power density of 769 mW cm⁻³. In contrast, the interdigital design shows an energy density of only 17.5 mWh cm⁻³ at the same power density. We show that active electrode surface area alone cannot explain the increase in capacitance and energy density. We propose that the increase in electrical lines of force, due to edging effects in the fractal electrodes, also contributes to the higher capacitance. This study shows that electrode fractal design is a viable strategy for improving the performance of integrated μ-SCs that use thin-film electrodes at no extra processing or fabrication cost.

  5. Fractal Electrochemical Microsupercapacitors

    KAUST Repository

    Hota, Mrinal Kanti; Jiang, Qiu; Mashraei, Yousof; Salama, Khaled N.; Alshareef, Husam N.

    2017-01-01

    The first successful fabrication of microsupercapacitors (μ-SCs) using fractal electrode designs is reported. Using sputtered anhydrous RuO₂ thin-film electrodes as prototypes, μ-SCs are fabricated using Hilbert, Peano, and Moore fractal designs, and their performance is compared to conventional interdigital electrode structures. Microsupercapacitor performance, including energy density and areal and volumetric capacitances, changes with fractal electrode geometry. Specifically, the μ-SCs based on the Moore design show a 32% enhancement in energy density compared to conventional interdigital structures, when compared at the same power density and using the same thin-film RuO₂ electrodes. The energy density of the Moore design is 23.2 mWh cm⁻³ at a volumetric power density of 769 mW cm⁻³. In contrast, the interdigital design shows an energy density of only 17.5 mWh cm⁻³ at the same power density. We show that active electrode surface area alone cannot explain the increase in capacitance and energy density. We propose that the increase in electrical lines of force, due to edging effects in the fractal electrodes, also contributes to the higher capacitance. This study shows that electrode fractal design is a viable strategy for improving the performance of integrated μ-SCs that use thin-film electrodes at no extra processing or fabrication cost.

  6. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim

    2011-01-01

    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...

  7. Multiview Depth-Image Compression Using an Extended H.264 Encoder

    NARCIS (Netherlands)

    Morvan, Y.; Farin, D.S.; With, de P.H.N.; Blanc-Talon, J.; Philips, W.

    2007-01-01

    This paper presents a predictive-coding algorithm for the compression of multiple depth-sequences obtained from a multi-camera acquisition setup. The proposed depth-prediction algorithm works by synthesizing a virtual depth-image that matches the depth-image (of the predicted camera). To generate

  8. Fractal analysis reveals reduced complexity of retinal vessels in CADASIL.

    Directory of Open Access Journals (Sweden)

    Michele Cavallari

    2011-04-01

    The Cerebral Autosomal Dominant Arteriopathy with Subcortical Infarcts and Leukoencephalopathy (CADASIL) affects mainly small cerebral arteries and leads to disability and dementia. The relationship between clinical expression of the disease and progression of the microvessel pathology is, however, uncertain, as we lack tools for imaging brain vessels in vivo. Ophthalmoscopy is regarded as a window into the cerebral microcirculation. In this study we carried out an ophthalmoscopic examination in subjects with CADASIL. Specifically, we performed fractal analysis of digital retinal photographs. Data are expressed as mean fractal dimension (mean-D), a parameter that reflects the complexity of the retinal vessel branching. Ten subjects with a genetically confirmed diagnosis of CADASIL and 10 sex- and age-matched control subjects were enrolled. Fractal analysis of retinal digital images was performed by means of a computer-based program, and the data expressed as mean-D. Brain MRI lesion volume in FLAIR and T1-weighted images was assessed using MIPAV software. A paired t-test was used to disclose differences in mean-D between the CADASIL and control groups. Spearman rank analysis was performed to evaluate potential associations between mean-D values and both disease duration and disease severity, the latter expressed as brain MRI lesion volumes, in the subjects with CADASIL. The results showed that the mean-D value of the patients (1.42±0.05; mean±SD) was lower than that of the controls (1.50±0.04; p = 0.002). Mean-D correlated with neither disease duration nor MRI lesion volumes of the subjects with CADASIL. The findings suggest that fractal analysis is a sensitive tool to assess changes of retinal vessel branching, likely reflecting early brain microvessel alterations, in CADASIL patients.

  9. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is increasingly used in the secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general purpose compression tools have been proposed, without satisfactory results. This paper proposes a novel technique using a modified BWT (Burrows-Wheeler Transform) for compressing XML files in a lossless fashion. The experimental results show that the proposed technique outperforms both general purpose and XML-specific compressors.
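
    The paper's modification of the BWT is not specified in this record, but the classical transform it builds on is easy to sketch: sort all rotations of the input (terminated by a unique sentinel) and read off the last column, which groups similar contexts together and makes the text far more compressible:

```python
def bwt(s):
    """Classical Burrows-Wheeler transform: sort all rotations of s
    (which must end with a unique sentinel such as '$') and return the
    last column of the sorted rotation matrix."""
    assert s.endswith("$")
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana$"))  # -> annb$aa
```

    The transform is invertible, and its output ("annb$aa" here) clusters repeated characters, which is why BWT-based compressors follow it with move-to-front and entropy coding.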

  10. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used in imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by sampling below the Shannon rate, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has a further unique application: it enables real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
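
    The block-measurement idea can be sketched generically: the image is split into non-overlapping blocks and one shared measurement matrix is applied to each block, so every block can be reconstructed (and displayed) on its own. The block division and matrix below are illustrative stand-ins, not the authors' specific method:

```python
import random

def split_blocks(image, b):
    """Split a 2-D image (list of rows) into non-overlapping b x b blocks,
    each flattened to a length b*b vector (row-major)."""
    h, w = len(image), len(image[0])
    blocks = []
    for i in range(0, h, b):
        for j in range(0, w, b):
            blocks.append([image[i + di][j + dj]
                           for di in range(b) for dj in range(b)])
    return blocks

def measure(blocks, m, seed=0):
    """Apply one shared random +/-1 measurement matrix Phi (m x n) to
    every block: y_k = Phi x_k.  Because blocks are measured
    independently, each can be reconstructed as soon as its measurements
    arrive, enabling progressive, real-time display."""
    n = len(blocks[0])
    rng = random.Random(seed)
    phi = [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]
    return [[sum(p * x for p, x in zip(row, blk)) for row in phi]
            for blk in blocks]

image = [[float(i + j) for j in range(8)] for i in range(8)]
ys = measure(split_blocks(image, 4), m=6)   # 4 blocks, 6 measurements each
```

    Here each 16-pixel block is compressed to 6 measurements; a sparse-recovery solver (omitted) would reconstruct each block from its own `y_k`.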

  11. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    This paper presents a one-pass block classification algorithm for efficient coding of compound images, which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating text information, which needs higher spatial resolution than pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly compared with standard JPEG and JPEG-2000, while keeping competitive compression ratios.
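
    The DCT-energy criterion can be sketched as follows: take the 2-D DCT of each 4x4 block and sum the squared AC coefficients; text/graphics blocks with sharp transitions score far higher than smooth picture/background blocks. The naive DCT and the threshold are illustrative, not the paper's exact classifier:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an n x n block (orthonormal scaling)."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[i][j]
                    * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                    for i in range(n) for j in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def ac_energy(block):
    """Sum of squared AC coefficients (everything except the DC term)."""
    n = len(block)
    coeffs = dct2(block)
    return sum(coeffs[u][v] ** 2
               for u in range(n) for v in range(n) if (u, v) != (0, 0))

flat = [[128.0] * 4 for _ in range(4)]               # background-like block
text = [[0.0, 255.0, 0.0, 255.0] for _ in range(4)]  # text-like stripes
```

    A flat block has essentially zero AC energy while the striped block's is large, so a single threshold on `ac_energy` separates the two classes in one pass.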

  12. REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    Heung K. Lee

    1996-06-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with KITSAT-1 images as well as LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the peak signal to noise ratio (PSNR) and classification capability.
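
    PSNR, the distortion measure used here and in several of the records above, is straightforward to compute from the mean squared error. A minimal sketch for 8-bit images given as flat pixel lists:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    given as flat pixel lists; higher means less distortion."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * math.log10(peak * peak / mse)

# A uniform error of 1 grey level (MSE = 1) gives about 48.13 dB for 8-bit data.
print(round(psnr([10, 20, 30, 40], [11, 21, 31, 41]), 2))  # -> 48.13
```

    Values above roughly 40 dB are usually considered visually transparent for 8-bit imagery, which is consistent with the 44 dB threshold reported in the JPEG2000 archiving study above.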

  13. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach to perform both peak-picking spectra and denoising m/z-images simultaneously, whereas the state of the art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with the numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  14. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
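
    The least square-based prediction step can be illustrated at toy scale. The sketch below fits a two-coefficient causal predictor (up and left neighbours only; the paper's region-adaptive predictors use a larger context) by solving the 2x2 normal equations directly:

```python
def fit_ls_predictor(image):
    """Fit coefficients (a, b) predicting each pixel from its 'up' and
    'left' causal neighbours by solving the 2x2 normal equations."""
    suu = sul = sll = suy = sly = 0.0
    for i in range(1, len(image)):
        for j in range(1, len(image[0])):
            u, l, y = image[i - 1][j], image[i][j - 1], image[i][j]
            suu += u * u; sul += u * l; sll += l * l
            suy += u * y; sly += l * y
    det = suu * sll - sul * sul        # Gram determinant (Cramer's rule)
    a = (suy * sll - sly * sul) / det
    b = (sly * suu - suy * sul) / det
    return a, b

# On the ramp v(i, j) = i + 2j the exact predictor is v = 2*up - left.
ramp = [[i + 2 * j for j in range(4)] for i in range(4)]
a, b = fit_ls_predictor(ramp)
```

    Encoding then stores only the prediction residuals y - (a*up + b*left), which are small (here exactly zero) wherever the local structure matches the fitted model.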

  15. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under

  16. Fractals and chaos

    CERN Document Server

    Earnshow, R; Jones, H

    1991-01-01

    This volume is based upon the presentations made at an international conference in London on the subject of 'Fractals and Chaos'. The objective of the conference was to bring together some of the leading practitioners and exponents in the overlapping fields of fractal geometry and chaos theory, with a view to exploring some of the relationships between the two domains. Based on this initial conference and subsequent exchanges between the editors and the authors, revised and updated papers were produced. These papers are contained in the present volume. We thank all those who contributed to this effort by way of planning and organisation, and also all those who helped in the production of this volume. In particular, we wish to express our appreciation to Gerhard Rossbach, Computer Science Editor, Craig Van Dyck, Production Director, and Nancy A. Rogers, who did the typesetting. A. J. Crilly R. A. Earnshaw H. Jones 1 March 1990 Introduction Fractals and Chaos The word 'fractal' was coined by Benoit Mandelbrot i...

  17. Single-photon compressive imaging with some performance benefits over raster scanning

    International Nuclear Information System (INIS)

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Guang-Jie; Zhao, Qing

    2014-01-01

    A single-photon imaging system based on compressed sensing has been developed to image objects under ultra-low illumination. With this system, we have successfully realized imaging at the single-photon level with a single-pixel avalanche photodiode without point-by-point raster scanning. From analysis of the signal-to-noise ratio in the measurement we find that our system has much higher sensitivity than conventional ones based on point-by-point raster scanning, while the measurement time is also reduced. - Highlights: • We design a single photon imaging system with compressed sensing. • A single point avalanche photodiode is used without raster scanning. • The Poisson shot noise in the measurement is analyzed. • The sensitivity of our system is proved to be higher than that of raster scanning

  18. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in their form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in their statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately twice as large. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which is suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
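
    The one-dimensional midpoint displacement construction compared above can be sketched in a few lines: each pass inserts midpoints perturbed by noise whose amplitude shrinks by a fixed factor per level (the endpoints, roughness factor, and seed below are illustrative choices, not the study's parameters):

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=0):
    """1-D fractal profile by recursive midpoint displacement: each pass
    inserts midpoints perturbed by noise whose amplitude shrinks by
    `roughness` per level (smaller roughness -> smoother profile)."""
    rng = random.Random(seed)
    profile = [0.0, 0.0]                 # endpoint heights
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for left, right in zip(profile, profile[1:]):
            mid = (left + right) / 2.0 + rng.uniform(-amplitude, amplitude)
            refined.extend([left, mid])
        refined.append(profile[-1])
        profile = refined
        amplitude *= roughness
    return profile

heights = midpoint_displacement(5)       # 2**5 + 1 = 33 samples
```

    Tying `roughness` to the Hurst exponent (roughness = 2**(-H)) is the standard way to target a prescribed fractal dimension with this method.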

  19. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
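
    The decoding step described above, inpainting with the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored edge values as Dirichlet data. A minimal Jacobi-iteration sketch on a toy 5x5 grid (an illustration of the principle, not the authors' solver):

```python
def diffuse_inpaint(known, h, w, iterations=500):
    """Homogeneous-diffusion inpainting: pixels in `known` (a dict mapping
    (i, j) -> value) stay fixed; every other pixel relaxes towards the
    average of its 4 neighbours, i.e. the steady state of the Laplace
    equation with Dirichlet data at the known pixels."""
    grid = [[known.get((i, j), 0.0) for j in range(w)] for i in range(h)]
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                if (i, j) not in known:
                    nxt[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                        + grid[i][j - 1] + grid[i][j + 1])
        grid = nxt
    return grid

# Boundary values of a linear ramp v = j/4; the interior should relax to it.
known = {(i, j): j / 4.0 for i in range(5) for j in range(5)
         if i in (0, 4) or j in (0, 4)}
result = diffuse_inpaint(known, 5, 5)
```

    Because a linear ramp is harmonic, the interior converges exactly to j/4 here; in the compression setting the known pixels are the encoded edge neighbourhoods and the smooth regions in between are filled in the same way.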

  20. Microstructure and fractal characteristics of the solid-liquid interface forming during directional solidification of Inconel 718

    Directory of Open Access Journals (Sweden)

    WANG Ling

    2007-08-01

    The solidification microstructure and fractal characteristics of the solid-liquid interfaces of Inconel 718, under different cooling rates during directional solidification, were investigated by using SEM. Results showed that 5 μm/s was the cellular-dendrite transition rate. The primary dendrite arm spacing (PDAS) was measured by Image Tool, and it decreased as the cooling rate increased. The fractal dimension of the interfaces was calculated, and it changes from 1.204310 to 1.517265 as the withdrawal rate ranges from 10 to 100 μm/s. The physical significance of the fractal dimension was analyzed by using fractal theory. It was found that the fractal dimension of the dendrites can be used to describe the solidification microstructure and parameters at low cooling rates, but both the fractal dimension and the dendrite arm spacing are needed in order to describe the evolution of the solidification microstructure completely.

  1. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder

    Science.gov (United States)

    Squarcina, Letizia; De Luca, Alberto; Bellani, Marcella; Brambilla, Paolo; Turkheimer, Federico E.; Bertoldo, Alessandra

    2015-02-01

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders.
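
    The box-counting estimate underlying these fractal dimension measurements can be sketched as follows; a Sierpinski-triangle pixel set serves as the test case because its dimension, log 3 / log 2 ≈ 1.585, is known exactly (a generic illustration, not the authors' modified grey-level algorithm):

```python
import math

def box_counting_dimension(points, k):
    """Estimate the fractal dimension of a pixel set inside a 2**k x 2**k
    grid: count occupied boxes at dyadic scales s = 1, 2, 4, ... and fit
    the slope of log N(s) against log(1/s) by least squares."""
    logs, logn = [], []
    for m in range(k):
        s = 2 ** m
        boxes = {(i // s, j // s) for i, j in points}
        logs.append(math.log(1.0 / s))
        logn.append(math.log(len(boxes)))
    n = len(logs)
    mx, my = sum(logs) / n, sum(logn) / n
    return (sum((x - mx) * (y - my) for x, y in zip(logs, logn))
            / sum((x - mx) ** 2 for x in logs))

# Sierpinski-triangle pixels: (i, j) is kept when i & j == 0.
k = 7
sierpinski = [(i, j) for i in range(2 ** k) for j in range(2 ** k) if i & j == 0]
d = box_counting_dimension(sierpinski, k)   # close to log 3 / log 2 ~ 1.585
```

    The authors' variant skips the binary segmentation and weights the count by grey levels, but the scale-versus-count regression is the same.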

  2. Fractal analysis of MRI data for the characterization of patients with schizophrenia and bipolar disorder

    International Nuclear Information System (INIS)

    Squarcina, Letizia; Bellani, Marcella; De Luca, Alberto; Bertoldo, Alessandra; Brambilla, Paolo; Turkheimer, Federico E

    2015-01-01

    Fractal geometry can be used to analyze shape and patterns in brain images. With this study we use fractals to analyze T1 data of patients affected by schizophrenia or bipolar disorder, with the aim of distinguishing between healthy and pathological brains using the complexity of brain structure, in particular of grey matter, as a marker of disease. 39 healthy volunteers, 25 subjects affected by schizophrenia and 11 patients affected by bipolar disorder underwent an MRI session. We evaluated fractal dimension of the brain cortex and its substructures, calculated with an algorithm based on the box-count algorithm. We modified this algorithm, with the aim of avoiding the segmentation processing step and using all the information stored in the image grey levels. Moreover, to increase sensitivity to local structural changes, we computed a value of fractal dimension for each slice of the brain or of the particular structure. To have reference values in comparing healthy subjects with patients, we built a template by averaging fractal dimension values of the healthy volunteers data. Standard deviation was evaluated and used to create a confidence interval. We also performed a slice by slice t-test to assess the difference at slice level between the three groups. Consistent average fractal dimension values were found across all the structures in healthy controls, while in the pathological groups we found consistent differences, indicating a change in brain and structures complexity induced by these disorders. (paper)

  3. Automatic generation of aesthetic patterns on fractal tilings by means of dynamical systems

    International Nuclear Information System (INIS)

    Chung, K.W.; Ma, H.M.

    2005-01-01

    A fractal tiling or f-tiling is a tiling which possesses self-similarity and the boundary of which is a fractal. In this paper, we investigate the classification of fractal tilings with kite-shaped and dart-shaped prototiles from which three new f-tilings are found. Invariant mappings are constructed for the creation of aesthetic patterns on such tilings. A modified convergence time scheme is described, which reflects the rate of convergence of various orbits and at the same time, enhances the artistic appeal of a generated image. A scheme based on the frequency of visit at a pixel is used to generate chaotic attractors

  4. Quasi-periodic fractal patterns in geomagnetic reversals, geological activity, and astronomical events

    International Nuclear Information System (INIS)

    Puetz, Stephen J.; Borchardt, Glenn

    2015-01-01

    Highlights: • Spectral analysis indicates similar harmonics in astronomical and geological events. • Quasi-periodic cycles occur in tripling patterns of 30.44, 91.33, 274, 822, and 2466 myr. • Similar astro- and geo-phases suggest that the cycles develop from a common source. - Abstract: The cause of geomagnetic reversals remains a geological mystery. With the availability of improved paleomagnetic databases in the past three years, a reexamination of possible periodicity in the geomagnetic reversal rate seems warranted. Previous reports of cyclicity in the reversal rate, along with the recent discovery of harmonic cycles in a variety of natural events, sparked our interest in reevaluating possible patterns in the reversal rate. Here, we focus on geomagnetic periodicity, but also analyze paleointensity, zircon formation, star formation, quasar formation, supernova, and gamma ray burst records to determine if patterns that occur in other types of data have similar periodicity. If so, then the degree of synchronization will indicate likely causal relationships with geomagnetic reversals. To achieve that goal, newly available time-series records from these disciplines were tested for cyclicity by using spectral analysis and time-lagged cross-correlation techniques. The results showed evidence of period-tripled cycles of 30.44, 91.33, 274, 822, and 2466 million years, corresponding to the periodicity from a new Universal Cycle model. Based on the results, a fractal model of the universe is hypothesized in which sub-electron fractal matter acts as a dynamic medium for large-scale waves that cause the cycles in astronomical and geological processes. According to this hypothesis, the medium of sub-electron fractal matter periodically compresses and decompresses according to the standard laws for mechanical waves. Consequently, the compressions contribute to high-pressure environments and vice versa for the decompressions, which are hypothesized to cause the

  5. Towards a physics on fractals: Differential vector calculus in three-dimensional continuum with fractal metric

    Science.gov (United States)

    Balankin, Alexander S.; Bory-Reyes, Juan; Shapiro, Michael

    2016-02-01

    One way to deal with physical problems on nowhere differentiable fractals is the mapping of these problems into the corresponding problems for continuum with a proper fractal metric. On this way different definitions of the fractal metric were suggested to account for the essential fractal features. In this work we develop the metric differential vector calculus in a three-dimensional continuum with a non-Euclidean metric. The metric differential forms and Laplacian are introduced, fundamental identities for metric differential operators are established and integral theorems are proved by employing the metric version of the quaternionic analysis for the Moisil-Teodoresco operator, which has been introduced and partially developed in this paper. The relations between the metric and conventional operators are revealed. It should be emphasized that the metric vector calculus developed in this work provides a comprehensive mathematical formalism for the continuum with any suitable definition of fractal metric. This offers a novel tool to study physics on fractals.

  6. SU-D-BRA-04: Fractal Dimension Analysis of Edge-Detected Rectal Cancer CTs for Outcome Prediction

    International Nuclear Information System (INIS)

    Zhong, H; Wang, J; Hu, W; Shen, L; Wan, J; Zhou, Z; Zhang, Z

    2015-01-01

    Purpose: To extract fractal dimension features from edge-detected rectal cancer CTs, and to examine the predictive power of fractal dimensions for the outcomes of primary rectal cancer patients. Methods: Ninety-seven rectal cancer patients treated with neo-adjuvant chemoradiation were enrolled in this study. CT images were obtained before chemoradiotherapy. The primary lesions of the rectal cancer were delineated by experienced radiation oncologists. These images were extracted and filtered by six different Laplacian of Gaussian (LoG) filters with different filter values (0.5–3.0: from fine to coarse) to achieve primary lesions in different anatomical scales. Edges of the original images were found at zero-crossings of the filtered images. Three different fractal dimensions (box-counting dimension, Minkowski dimension, mass dimension) were calculated upon the image slice with the largest cross-section of the primary lesion. The significance of these fractal dimensions for survival, recurrence and metastasis was examined by Student’s t-test. Results: For a follow-up time of two years, 18 of 97 patients had experienced recurrence, 24 had metastasis, and 18 were dead. Minkowski dimensions under large filter values (2.0, 2.5, 3.0) were significantly larger (p=0.014, 0.006, 0.015) in patients with recurrence than in those without. For metastasis, only box-counting dimensions under a single filter value (2.5) showed differences (p=0.016) between patients with and without metastasis. For overall survival, box-counting dimensions (filter values = 0.5, 1.0, 1.5), Minkowski dimensions (filter values = 0.5, 1.5, 2.0, 2.5) and mass dimensions (filter values = 1.5, 2.0) were all significant (p<0.05). Conclusion: It is feasible to extract shape information by edge detection and fractal dimension analysis in neo-adjuvant rectal cancer patients. This information can be used for prognosis prediction
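
    The zero-crossing edge localization used above can be illustrated in one dimension: smooth the signal, take the second derivative (the 1-D analogue of the Laplacian-of-Gaussian filtering applied to the CT slices), and report where it changes sign. The kernels and the step signal are illustrative stand-ins:

```python
def zero_crossings(signal):
    """Edge localization by zero crossings of a smoothed second
    derivative: smooth with the kernel [1, 2, 1]/4, differentiate twice
    with [1, -2, 1], and report indices where the result changes sign."""
    smoothed = [(signal[i - 1] + 2 * signal[i] + signal[i + 1]) / 4.0
                for i in range(1, len(signal) - 1)]
    lap = [smoothed[i - 1] - 2 * smoothed[i] + smoothed[i + 1]
           for i in range(1, len(smoothed) - 1)]
    # Index i in `lap` corresponds to sample i + 2 of the original signal.
    return [i for i in range(len(lap) - 1) if lap[i] * lap[i + 1] < 0]

edges = zero_crossings([0, 0, 0, 0, 100, 100, 100, 100])
```

    For the step signal, the only sign change sits between samples 3 and 4, exactly at the step; varying the smoothing width, as the six LoG filter values do above, trades edge localization for noise robustness.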

  7. Order-fractal transitions in abstract paintings

    Energy Technology Data Exchange (ETDEWEB)

    Calleja, E.M. de la, E-mail: elsama79@gmail.com [Instituto de Física, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, 91501-970, Porto Alegre, RS (Brazil); Cervantes, F. [Department of Applied Physics, CINVESTAV-IPN, Carr. Antigua a Progreso km.6, Cordemex, C.P.97310, Mérida, Yucatán (Mexico); Calleja, J. de la [Department of Informatics, Universidad Politécnica de Puebla, 72640 (Mexico)

    2016-08-15

    In this study, we determined the degree of order for 22 Jackson Pollock paintings using the Hausdorff–Besicovitch fractal dimension. Based on the maximum value of each multi-fractal spectrum, the artworks were classified according to the year in which they were painted. It has been reported that Pollock’s paintings are fractal and that this feature was more evident in his later works. However, our results show that the fractal dimension of these paintings ranges among values close to two. We characterize this behavior as a fractal-order transition. Based on the study of disorder-order transition in physical systems, we interpreted the fractal-order transition via the dark paint strokes in Pollock’s paintings as structured lines that follow a power law measured by the fractal dimension. We determined self-similarity in specific paintings, thereby demonstrating an important dependence on the scale of observations. We also characterized the fractal spectrum for the painting entitled Teri’s Find. We obtained similar spectra for Teri’s Find and Number 5, thereby suggesting that the fractal dimension cannot be rejected completely as a quantitative parameter for authenticating these artworks. -- Highlights: •We determined the degree of order in Jackson Pollock paintings using the Hausdorff–Besicovitch dimension. •We detected a fractal-order transition from Pollock’s paintings between 1947 and 1951. •We suggest that Jackson Pollock could have painted Teri’s Find.

  8. Efficient burst image compression using H.265/HEVC

    Science.gov (United States)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that these kinds of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  9. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)

  10. Entrainment to a real time fractal visual stimulus modulates fractal gait dynamics.

    Science.gov (United States)

    Rhea, Christopher K; Kiefer, Adam W; D'Andrea, Susan E; Warren, William H; Aaron, Roy K

    2014-08-01

    Fractal patterns characterize healthy biological systems and are considered to reflect the ability of the system to adapt to varying environmental conditions. Previous research has shown that fractal patterns in gait are altered following natural aging or disease, and this has potential negative consequences for gait adaptability that can lead to increased risk of injury. However, the flexibility of a healthy neurological system to exhibit different fractal patterns in gait has yet to be explored, and this is a necessary step toward understanding human locomotor control. Fifteen participants walked for 15 min on a treadmill, either in the absence of a visual stimulus or while they attempted to couple the timing of their gait with a visual metronome that exhibited a persistent fractal pattern (contained long-range correlations) or a random pattern (contained no long-range correlations). The stride-to-stride intervals of the participants were recorded via analog foot pressure switches and submitted to detrended fluctuation analysis (DFA) to determine if the fractal patterns during the visual metronome conditions differed from the baseline (no metronome) condition. DFA α in the baseline condition was 0.77±0.09. The fractal patterns in the stride-to-stride intervals were significantly altered when walking to the fractal metronome (DFA α=0.87±0.06) and to the random metronome (DFA α=0.61±0.10) (both p<.05 when compared to the baseline condition), indicating that a global change in gait dynamics was observed. A variety of strategies were identified at the local level with a cross-correlation analysis, indicating that local behavior did not account for the consistent global changes. Collectively, the results show that gait dynamics can be shifted in a prescribed manner using a visual stimulus and the shift appears to be a global phenomenon. Copyright © 2014 Elsevier B.V. All rights reserved.
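The detrended fluctuation analysis applied to the stride-interval series can be sketched as follows. This is a generic textbook DFA, not the authors' code, and the white-noise input is a synthetic check: α near 0.5 is expected for uncorrelated data, versus the persistent values around 0.77-0.87 reported above for gait.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: return the scaling exponent alpha.

    Integrates the mean-centered series, splits it into windows of each
    scale, removes a linear trend per window, and fits log F(n) vs log n.
    """
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
white = rng.standard_normal(2048)          # uncorrelated noise
print(round(dfa_alpha(white), 2))
```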

  11. Diffusion-Weighted Imaging for Predicting New Compression Fractures Following Percutaneous Vertebroplasty

    International Nuclear Information System (INIS)

    Sugimoto, T.

    2008-01-01

    Background: Percutaneous vertebroplasty (PVP) is a technique that structurally stabilizes a fractured vertebral body. However, some patients return to the hospital due to recurrent back pain following PVP, and such pain is sometimes caused by new compression fractures. Purpose: To investigate whether the apparent diffusion coefficient (ADC) of adjacent vertebral bodies as assessed by diffusion-weighted imaging before PVP could predict the onset of new compression fractures following PVP. Material and Methods: 25 patients with osteoporotic compression fractures who underwent PVP were enrolled in this study. ADC was measured for 49 vertebral bodies immediately above and below each vertebral body injected with bone cement before and after PVP. By measuring ADC for each adjacent vertebral body, ADC was compared between vertebral bodies with a new compression fracture within 1 month and those without new compression fractures. In addition, the mean ADC of adjacent vertebral bodies per patient was calculated. Results: Mean preoperative ADC for the six adjacent vertebral bodies with new compression fractures was 0.55×10⁻³ mm²/s (range 0.36–1.01×10⁻³ mm²/s), and for the 43 adjacent vertebral bodies without new compression fractures 0.20×10⁻³ mm²/s (range 0–0.98×10⁻³ mm²/s) (P …). The mean ADC per patient for the six patients with new compression fractures was …×10⁻³ mm²/s (range 0.21–1.01×10⁻³ mm²/s), and that for the 19 patients without new compression fractures 0.17×10⁻³ mm²/s (range 0.01–0.43×10⁻³ mm²/s) (P<0.001). Conclusion: The ADC of adjacent vertebral bodies as assessed by diffusion-weighted imaging before PVP might be one of the predictors for new compression fractures following PVP

  12. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly such that the burden of decompressing computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the over-merging problem by making use of the rate distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.

  13. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  14. Positron annihilation near fractal surfaces

    International Nuclear Information System (INIS)

    Lung, C.W.; Deng, K.M.; Xiong, L.Y.

    1991-07-01

    A model for positron annihilation in the sub-surface region near a fractal surface is proposed. It is found that the power law relationship between the mean positron implantation depth and incident positron energy can be used to measure the fractal dimension of the fractal surface in materials. (author). 10 refs, 2 figs
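The power-law relationship depth ∝ E^n between mean implantation depth and incident positron energy reduces, on a log-log scale, to a straight-line fit whose slope is the exponent. A small illustration with purely synthetic numbers (the prefactor 40 and exponent 1.6 are made up for the demo):

```python
import numpy as np

# Hypothetical illustration: recover the exponent n of a power law
# depth = A * E**n from (energy, depth) data by a log-log fit, as one
# would to relate the implantation-depth scaling to a fractal dimension.
def power_law_exponent(energy, depth):
    n, _ = np.polyfit(np.log(energy), np.log(depth), 1)
    return n

E = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # keV, synthetic
z = 40.0 * E ** 1.6                        # nm, exact power law for the demo
print(round(power_law_exponent(E, z), 3))  # → 1.6
```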

  15. Development of a compressive sampling hyperspectral imager prototype

    Science.gov (United States)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the chance to sample signals at a lower rate than the traditional sampling theory. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the other hand, the main disadvantage of CS is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2-D detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".
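As a toy illustration of the off-line reconstruction burden described above, the sketch below recovers a sparse signal from far fewer random measurements than its length using orthogonal matching pursuit; this is a generic CS solver on synthetic data, not the prototype's actual processing chain:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from underdetermined measurements y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 64, 4                  # 64 measurements of a length-128 signal
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

In the noiseless sparse setting the recovery is essentially exact, which is why the heavy lifting in a CS instrument sits entirely in this post-acquisition solve.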

  16. On the Use of Normalized Compression Distances for Image Similarity Detection

    Directory of Open Access Journals (Sweden)

    Dinu Coltuc

    2018-01-01

    Full Text Available This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions. Feature vectors for simple transforms (circular translations on horizontal, vertical, diagonal directions and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subject to some common image processing. While the direct computation of NCD fails to detect image similarity even in the case of simple median and moving average filtering in 3 × 3 windows, for certain transforms and compressors, the proposed approach appears to provide robustness at similarity detection against smoothing, lossy compression, contrast enhancement, noise addition and some robustness against geometrical transforms (scaling, cropping and rotation).
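The NCD itself has a compact definition: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(s) is the compressed length of s. A minimal sketch using zlib as the compressor (the paper evaluates several standard compressors; zlib here is just a convenient stand-in):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance with zlib as the compressor C."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcdefgh" * 200
b2 = b"zyxwvuts" * 200
# Joint compression of identical data reuses everything, so the
# self-distance is far smaller than the distance to unrelated data.
print(ncd(a, a) < ncd(a, b2))
```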

  17. Statistical and Fractal Processing of Phase Images of Human Biological Fluids

    Directory of Open Access Journals (Sweden)

    MARCHUK, Y. I.

    2010-11-01

    Full Text Available Performed in this work are complex statistical and fractal analyses of phase properties inherent to birefringence networks of liquid crystals consisting of optically-thin layers prepared from human bile. Within the framework of a statistical approach, the authors have investigated values and ranges for changes of statistical moments of the 1-st to 4-th orders that characterize coordinate distributions for phase shifts between orthogonal components of amplitudes inherent to laser radiation transformed by human bile with various pathologies. Using the Gramm-Charlie method, ascertained are correlation criteria for differentiation of phase maps describing pathologically changed liquid-crystal networks. In the framework of the fractal approach, determined are dimensionalities of self-similar coordinate phase distributions as well as features of transformation of logarithmic dependences for power spectra of these distributions for various types of human pathologies.

  18. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, together with corrosion (erosion) and expansion (dilation) operations, is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in the GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  19. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with a compression of image data in applications in astronomy. Astronomical images have typical specific properties — high grayscale bit depth, size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRB (gamma-ray bursts). This paper discusses an approach based on an analysis of statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of used prediction coefficients. Finally, the comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC) based on adaptive median regression.

  20. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential for extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repairs, upgrades and refueling of spacecraft, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction is the basis of relative-pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method obtains the fractal-dimension distribution of the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to restrain the noise. After this, we detect the continuous edge using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target, but also keeps the inner details. Meanwhile, edge extraction is performed only in the moving area, which reduces computation greatly. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving the problems of relative pose for spacecraft.
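The Differential Box-Counting approach named above treats the gray level as a height and counts stacked boxes per spatial block across scales. A generic sketch (assumes a square image with values in [0, 255]; not the authors' implementation):

```python
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting (DBC) estimate of the fractal dimension
    of a gray-level image (values in [0, 255], square, power-of-two side)."""
    M = img.shape[0]
    counts = []
    for s in sizes:
        h = max(1.0, s * 256.0 / M)        # box height in gray levels
        g = img[: (M // s) * s, : (M // s) * s]
        blocks = g.reshape(M // s, s, M // s, s)
        gmax = blocks.max(axis=(1, 3))     # per-block max gray level
        gmin = blocks.min(axis=(1, 3))     # per-block min gray level
        nr = np.floor(gmax / h) - np.floor(gmin / h) + 1
        counts.append(nr.sum())
    d, _ = np.polyfit(np.log(M / np.array(sizes)), np.log(counts), 1)
    return d

# Smooth gradient: a nearly planar surface should give a dimension near 2.
x = np.arange(64)
img = np.tile(x * 4, (64, 1)).astype(float)
print(round(dbc_dimension(img), 1))        # → 2.0
```

Rougher, noise-like surfaces push the estimate above 2 toward 3, which is how DBC separates noise from structure before edge detection.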

  1. A Complex Story: Universal Preference vs. Individual Differences Shaping Aesthetic Response to Fractals Patterns

    Science.gov (United States)

    Street, Nichola; Forsythe, Alexandra M.; Reilly, Ronan; Taylor, Richard; Helmy, Mai S.

    2016-01-01

    Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that the fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore if each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from the pair of images) and the second a mid-range FD model (likelihood of selecting an image within mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in fitting with current findings the mid-range models show greater consistency in preference not mediated by gender, age or culture. This article supports the established theory that the mid-range fractal patterns appear to be a universal construct underlying preference but also highlights the fragility of

  2. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    Science.gov (United States)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it; while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged; and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potentiality of accessing complementary information from scientific studies of laser-driven shock compression.

  3. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially-continuous rows of differing, but adjacent, spectral wavelength. If the frame sample-rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. Data partitions examined extend from individually operating on a single hyperspectral frame to operating on a data cube comprising the two spatial axes and the spectral axis. Compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.
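The two figures of merit used here are simple to state: root-mean-squared error over all samples, and the RMS of the per-band correlation-coefficient error (1 − r). A sketch, with the spectral bands assumed along the first axis of the cube:

```python
import numpy as np

def rmse(orig, recon):
    """Root-mean-squared error between original and reconstructed data."""
    return np.sqrt(np.mean((orig - recon) ** 2))

def rms_correlation_error(orig, recon):
    """RMS of (1 - r) across spectral bands, where r is the per-band
    Pearson correlation (bands assumed along axis 0)."""
    errs = [1.0 - np.corrcoef(o.ravel(), r.ravel())[0, 1]
            for o, r in zip(orig, recon)]
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
cube = rng.random((3, 8, 8))                  # toy (band, row, col) cube
print(round(rmse(cube, cube + 3.0), 6))       # → 3.0 (constant offset)
print(rms_correlation_error(cube, cube + 3.0) < 1e-6)
```

The constant-offset example shows why both metrics are reported: a pure bias produces a large RMSE but leaves the per-band correlation untouched.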

  4. Correlation of optical properties with the fractal microstructure of black molybdenum coatings

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, Enrique; Gonzalez, Federico [Area de Energia, Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico, D.F. 09340 (Mexico); Rodriguez, Eduardo [Area de Computacion y Sistemas, Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico, D.F. 09340 (Mexico); Alvarez-Ramirez, Jose, E-mail: jjar@xanum.uam.mx [Area de Energia, Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Apartado Postal 55-534, Mexico, D.F. 09340 (Mexico)

    2010-01-01

    Coating is commonly used for improving the optical properties of surfaces for solar collector applications. The coating morphology depends on the deposition conditions, and this determines the final optical characteristics. Coating morphologies are irregular and of fractal nature, so a suitable approach for their characterization should use methods borrowed from fractal analysis. The aim of this work is to study the fractal characteristics of black molybdenum coatings on copper and to relate the fractal parameters to the optical properties. To this end, coating surfaces were prepared via immersion in a solution of ammonium paramolybdate for different deposition periods. The fractal analysis was carried out for SEM and AFM images of the coating surface and the fractal properties were obtained with a recently developed high-dimensional extension of the well-known detrended fluctuation analysis (DFA). The most salient parameters drawn from the application of the DFA are the Hurst index, which is related to the roughness of the coating surface, and the multifractality index, which is related to the non-linearity features of the coating morphology. The results showed that optical properties, including absorptance and emittance, are decreasing functions of the Hurst and multifractality indices. This suggests that coating surfaces with high absorptance and emittance values are associated with complex coating morphologies conformed within a non-linear structure.

  5. Encounters with chaos and fractals

    CERN Document Server

    Gulick, Denny

    2012-01-01

    Periodic Points Iterates of Functions Fixed Points Periodic Points Families of Functions The Quadratic Family Bifurcations Period-3 Points The Schwarzian Derivative One-Dimensional Chaos Chaos Transitivity and Strong Chaos Conjugacy Cantor Sets Two-Dimensional Chaos Review of Matrices Dynamics of Linear Functions Nonlinear Maps The Hénon Map The Horseshoe Map Systems of Differential Equations Review of Systems of Differential Equations Almost Linearity The Pendulum The Lorenz System Introduction to Fractals Self-Similarity The Sierpiński Gasket and Other "Monsters" Space-Filling Curves Similarity and Capacity Dimensions Lyapunov Dimension Calculating Fractal Dimensions of Objects Creating Fractals Sets Metric Spaces The Hausdorff Metric Contractions and Affine Functions Iterated Function Systems Algorithms for Drawing Fractals Complex Fractals: Julia Sets and the Mandelbrot Set Complex Numbers and Functions Julia Sets The Mandelbrot Set Computer Programs Answers to Selected Exercises References Index.

  6. Real-time Image Generation for Compressive Light Field Displays

    International Nuclear Information System (INIS)

    Wetzstein, G; Lanman, D; Hirsch, M; Raskar, R

    2013-01-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  7. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    International Nuclear Information System (INIS)

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-01-01

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. 
The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
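The modeling step this record describes (a logistic model on CR and section thickness, trained on half the irreversibly compressed images and scored by ROC AUC on the other half) can be sketched as follows. This is an illustrative sketch on synthetic data, not the study's data; the thickness values and coefficients below are hypothetical, and scikit-learn stands in for whatever software the authors used.

```python
# Sketch: predict whether a compressed CT image is distinguishable from its
# original, from compression ratio (CR) and section thickness, with a
# 156/156 train/test split and ROC AUC scoring as in the record above.
# All data here are synthetic; coefficients are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 312
cr = rng.uniform(4, 10, n)               # compression ratio 4:1 .. 10:1
thickness = rng.choice([0.625, 5.0], n)  # section thickness in mm (hypothetical)

# Synthetic ground truth: higher CR makes distortion easier to spot
logit = 0.9 * (cr - 7) + 0.4 * (thickness - 2.8) + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

X = np.column_stack([cr, thickness])
train, test = slice(0, 156), slice(156, 312)
model = LogisticRegression().fit(X[train], y[train])
auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
print(f"test AUC = {auc:.2f}")
```

With real reader responses in place of the synthetic labels, the same AUC computation would reproduce the record's headline figures.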

  8. Fractal Structures For Fixed Mems Capacitors

    KAUST Repository

    Elshurafa, Amro M.

    2014-08-28

    An embodiment of a fractal fixed capacitor comprises a capacitor body in a microelectromechanical system (MEMS) structure. The capacitor body has a first plate with a fractal shape separated by a horizontal distance from a second plate with a fractal shape. The first plate and the second plate are within the same plane. Such a fractal fixed capacitor further comprises a substrate above which the capacitor body is positioned.

  9. Enhanced Graphene Photodetector with Fractal Metasurface

    DEFF Research Database (Denmark)

    Fan, Jieran; Wang, Di; DeVault, Clayton

    2016-01-01

We designed and fabricated a broadband, polarization-independent photodetector by integrating graphene with a fractal Cayley tree metasurface. Our measurements show an almost uniform, tenfold enhancement in photocurrent generation due to the fractal metasurface structure.

  10. Fractal Structures For Fixed Mems Capacitors

    KAUST Repository

    Elshurafa, Amro M.; Radwan, Ahmed Gomaa Ahmed; Emira, Ahmed A.; Salama, Khaled N.

    2014-01-01

    An embodiment of a fractal fixed capacitor comprises a capacitor body in a microelectromechanical system (MEMS) structure. The capacitor body has a first plate with a fractal shape separated by a horizontal distance from a second plate with a fractal shape. The first plate and the second plate are within the same plane. Such a fractal fixed capacitor further comprises a substrate above which the capacitor body is positioned.

  11. Psicodiagnóstico fractal

    OpenAIRE

    Moghilevsky, Débora Estela

    2011-01-01

Over the closing years of the twentieth century, the theory of complexity was developed. This model relates the hard sciences, such as mathematics, chaos theory, quantum physics and fractal geometry, with the so-called pseudosciences. Within this context, we can define Fractal Psychology as the science that studies psychic aspects as dynamically fractal.

  12. A Fractal Perspective on Scale in Geography

    Directory of Open Access Journals (Sweden)

    Bin Jiang

    2016-06-01

Scale is a fundamental concept that has attracted persistent attention in geography literature over the past several decades. However, it creates enormous confusion and frustration, particularly in the context of geographic information science, because of scale-related issues such as image resolution and the modifiable areal unit problem (MAUP). This paper argues that the confusion and frustration arise from traditional Euclidean geometric thinking, in which locations, directions, and sizes are considered absolute, and it is now time to revise this conventional thinking. Hence, we review fractal geometry, together with its underlying way of thinking, and compare it to Euclidean geometry. Under the paradigm of Euclidean geometry, everything is measurable, no matter how big or small. However, most geographic features, due to their fractal nature, are essentially unmeasurable or their sizes depend on scale. For example, the length of a coastline, the area of a lake, and the slope of a topographic surface are all scale-dependent. Seen from the perspective of fractal geometry, many scale issues, such as the MAUP, are inevitable. They appear unsolvable, but can be dealt with. To effectively deal with scale-related issues, we present topological and scaling analyses illustrated by street-related concepts such as natural streets, street blocks, and natural cities. We further contend that one of the two spatial properties, spatial heterogeneity, is de facto the fractal nature of geographic features, and it should be considered the first effect among the two, because it is global and universal across all scales, which should receive more attention from practitioners of geography.
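The record's claim that a coastline's length is scale-dependent has a standard numeric illustration: on a Koch curve, each iteration shrinks the measuring "ruler" by 3 and multiplies the measured length by 4/3, so no scale-free "true length" exists. A minimal sketch:

```python
# Measured length of a Koch curve diverges as the ruler shrinks:
# at iteration k the ruler is (1/3)**k and the measured length is (4/3)**k.
import math

for k in range(6):
    ruler = (1 / 3) ** k
    length = (4 / 3) ** k
    print(f"ruler = {ruler:.5f}   measured length = {length:.3f}")

# The similarity dimension follows from N = 4 self-similar pieces at
# scale ratio 1/3: D = log 4 / log 3, between 1 (a line) and 2 (a plane).
D = math.log(4) / math.log(3)
print(f"similarity dimension D = {D:.4f}")
```

The same divergence-with-ruler-size argument is what makes coastline length, lake area, and slope scale-dependent in the paper's sense.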

  13. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    International Nuclear Information System (INIS)

    Kim, Tae Jung; Lee, Kyoung Ho; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon; Hahn, Seokyung; Lee, Kyung Won

    2009-01-01

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.
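The "exact tests for paired proportions" this record mentions are typically exact McNemar tests: each image yields a paired yes/no response under two conditions, and only the discordant pairs carry information. A hedged sketch with made-up counts (the record does not publish its 2x2 tables):

```python
# Exact McNemar test for paired proportions: under H0 the discordant
# pairs split Binomial(n, 0.5). Counts below are hypothetical.
from scipy.stats import binom

def exact_mcnemar(b, c):
    """Two-sided exact p-value from discordant-pair counts:
    b = pairs positive only under condition A,
    c = pairs positive only under condition B."""
    n = b + c
    k = min(b, c)
    p = 2 * binom.cdf(k, n, 0.5)
    return min(p, 1.0)

# Hypothetical: 12 images flipped one way, 2 the other
print(f"p = {exact_mcnemar(12, 2):.4f}")
```

With the readers' actual per-image responses, the same function would reproduce the record's paired comparisons between compression levels and between regions.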

  14. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Jung [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)], E-mail: kholee@snubhrad.snu.ac.kr; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Hahn, Seokyung [Medical Research Collaborating Center, Seoul National University Hospital, 28 Yongon-dong, Chongno-gu, Seoul 110-744 (Korea, Republic of); Seoul National University College of Medicine (Korea, Republic of); Lee, Kyung Won [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)

    2009-03-15

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.

  15. Design of LTCC Based Fractal Antenna

    KAUST Repository

    AdbulGhaffar, Farhan

    2010-09-01

The thesis presents a Sierpinski carpet fractal antenna array designed at 24 GHz for automotive radar applications. Miniaturized, high-performance and low-cost antennas are required for this application. To meet these specifications, a fractal array has been designed for the first time on a Low Temperature Co-fired Ceramic (LTCC) based substrate. LTCC provides a suitable platform for the development of these antennas due to its properties of vertical stack-up and embedded passives. The complete antenna concept involves integration of this fractal antenna array with a Fresnel lens antenna, providing a total gain of 15 dB, which is appropriate for medium-range radar applications. The thesis also presents a comparison between the designed fractal antenna and a conventional patch antenna, outlining the advantages of the fractal antenna over the latter. The fractal antenna has a bandwidth of 1.8 GHz, which is 7.5% of the centre frequency (24 GHz), compared to 1.9% for the conventional patch antenna. Furthermore, the fractal design exhibits a size reduction of 53% compared to the patch antenna. Finally, a sensitivity analysis is carried out for the fractal antenna design, demonstrating the robustness of the proposed design against typical LTCC fabrication tolerances.

  16. 2-D Fractal Carpet Antenna Design and Performance

    Science.gov (United States)

    Barton, C. C.; Tebbens, S. F.; Ewing, J. J.; Peterman, D. J.; Rizki, M. M.

    2017-12-01

A 2-D fractal carpet antenna uses a fractal (self-similar) pattern to increase its perimeter by iteration and can receive or transmit electromagnetic radiation within its perimeter-bounded surface area. 2-D fractals are shapes that, at their mathematical limit (infinite iterations), have an infinite perimeter bounding a finite surface area. The fractal dimension describes the degree of space filling, and lacunarity quantifies the size and spatial distribution of open space bounded by a fractal shape. A key aspect of fractal antennas lies in iteration (repetition) of a fractal pattern over a range of length scales. Iteration produces fractal antennas that are very compact, wideband and multiband. As the number of iterations increases, the antenna operates at higher and higher frequencies. Manifestly different from traditional antenna designs, a fractal antenna can operate at multiple frequencies simultaneously. We have created a MATLAB code to generate deterministic and stochastic modes of Sierpinski carpet fractal antennas with a range of fractal dimensions between 1 and 2. Variation in fractal dimension, stochasticity, number of iterations, and lacunarity has been computationally tested using COMSOL Multiphysics software to determine their effects on antenna performance.
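The deterministic Sierpinski carpet the record's MATLAB code generates can be sketched in a few lines (this is not the authors' code): start from a filled cell and, at each iteration, tile it 3x3 with the centre block removed. After k iterations the 3^k x 3^k grid holds 8^k occupied cells, consistent with fractal dimension log 8 / log 3 ≈ 1.8928.

```python
# Deterministic Sierpinski carpet as a binary grid:
# each iteration replaces the grid with a 3x3 tiling whose centre is empty.
import numpy as np

def sierpinski_carpet(k):
    grid = np.ones((1, 1), dtype=np.uint8)
    for _ in range(k):
        z = np.zeros_like(grid)
        grid = np.block([[grid, grid, grid],
                         [grid, z,    grid],
                         [grid, grid, grid]])
    return grid

carpet = sierpinski_carpet(3)   # 27 x 27 grid
print(carpet.sum())             # occupied cells: 8**3 = 512
```

Stochastic variants (randomizing which of the nine blocks is emptied) and other removal counts would move the dimension between 1 and 2, as the record describes.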

  17. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and the proposed scheme therefore has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
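The encoder front end described above can be sketched roughly: filter the image with a local random binary (+-1) kernel instead of a low-pass filter, then polyphase-downsample. The kernel size, normalization and boundary handling below are assumptions of this sketch, not the paper's exact design.

```python
# Sketch of random-binary-kernel pre-filtering followed by polyphase
# downsampling by 2 in each direction. The output stays a conventional
# image of local random measurements, so a standard codec could follow.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (64, 64)).astype(float)

kernel = rng.choice([-1.0, 1.0], size=(4, 4)) / 4.0  # local random binary kernel
measured = convolve2d(image, kernel, mode="same", boundary="symm")
downsampled = measured[::2, ::2]                     # polyphase phase (0, 0)
print(downsampled.shape)
```

A second kernel applied to the same image would yield another description, which is the multiple-description property the abstract points out.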

  18. Performance evaluation of objective quality metrics for HDR image compression

    Science.gov (United States)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

Due to the much larger luminance and contrast range of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values, but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim to provide a better understanding of the limits and the potential of this approach by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
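The "simpler approach" the record evaluates, PSNR computed on perceptually encoded luminance rather than raw HDR values, can be sketched as below. A log encoding stands in for a proper perceptual-uniformity (PU) transfer function, and the peak definition is an assumption of this sketch, not the paper's metric.

```python
# PSNR on perceptually encoded HDR luminance: encode both images
# (here with a log2 stand-in for a PU curve), then apply ordinary PSNR.
import numpy as np

def psnr_perceptual(ref, dist):
    enc = lambda L: np.log2(np.maximum(L, 1e-4))  # crude perceptual encoding
    e_ref, e_dist = enc(ref), enc(dist)
    mse = np.mean((e_ref - e_dist) ** 2)
    peak = e_ref.max() - e_ref.min()              # dynamic range of encoded ref
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
hdr = rng.uniform(0.1, 4000.0, (32, 32))              # synthetic luminance, cd/m^2
compressed = hdr * rng.normal(1.0, 0.01, hdr.shape)   # mild multiplicative error
print(f"{psnr_perceptual(hdr, compressed):.1f} dB")
```

The point of the encoding step is that equal MSE in the encoded domain corresponds roughly to equal visibility, which raw-luminance PSNR cannot claim over a 0.1 to 4000 cd/m^2 range.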

  19. 2-D Fractal Wire Antenna Design and Performance

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Peterman, D. J.; Ewing, J. J.; Abbott, C. S.; Rizki, M. M.

    2017-12-01

    A 2-D fractal wire antenna uses a fractal (self-similar) pattern to increase its length by iteration and can receive or transmit electromagnetic radiation. 2-D fractals are shapes that, at their mathematical limit (of infinite iterations) have an infinite length. The fractal dimension describes the degree of space filling. A fundamental property of fractal antennas lies in iteration (repetition) of a fractal pattern over a range of length scales. Iteration produces fractal antennas that can be very compact, wideband and multiband. As the number of iterations increases, the antenna tends to have additional frequencies that minimize far field return loss. This differs from traditional antenna designs in that a single fractal antenna can operate well at multiple frequencies. We have created a MATLAB code to generate deterministic and stochastic modes of fractal wire antennas with a range of fractal dimensions between 1 and 2. Variation in fractal dimension, stochasticity, and number of iterations have been computationally tested using COMSOL Multiphysics software to determine their effect on antenna performance.

  20. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  1. Neutron scattering from fractals

    DEFF Research Database (Denmark)

    Kjems, Jørgen; Freltoft, T.; Richter, D.

    1986-01-01

    The scattering formalism for fractal structures is presented. Volume fractals are exemplified by silica particle clusters formed either from colloidal suspensions or by flame hydrolysis. The determination of the fractional dimensionality through scattering experiments is reviewed, and recent small...

  2. A Novel 1D Hybrid Chaotic Map-Based Image Compression and Encryption Using Compressed Sensing and Fibonacci-Lucas Transform

    Directory of Open Access Journals (Sweden)

    Tongfeng Zhang

    2016-01-01

A one-dimensional (1D) hybrid chaotic system is constructed from three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits better uniform distribution in some parametric ranges compared with existing 1D chaotic maps. Meanwhile, combining compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed that exploits the advantages of the 1D hybrid chaotic map. The whole encryption procedure includes compression by CS, scrambling with the FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map due to its excellent uniform distribution. To enhance security and complexity, the transform kernel of the FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which allows the scheme to resist chosen-plaintext attack (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise and cropping attacks.
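One ingredient above, generating the Bernoulli (+-1) measurement matrix from a chaotic map keyed by its initial condition, can be sketched as follows. A plain logistic map stands in for the paper's hybrid map, and the seed, burn-in and threshold are assumptions of this sketch; the FLT scrambling and diffusion stages are not reproduced.

```python
# Chaos-keyed Bernoulli measurement matrix for compressive sensing:
# iterate a chaotic map, threshold its orbit to +-1, reshape to m x n.
import numpy as np

def chaotic_bernoulli_matrix(m, n, x0=0.3571, burn_in=100):
    x = x0
    vals = []
    for i in range(m * n + burn_in):
        x = 4.0 * x * (1.0 - x)          # logistic map in its chaotic regime
        if i >= burn_in:
            vals.append(1.0 if x > 0.5 else -1.0)
    return np.array(vals).reshape(m, n) / np.sqrt(m)  # column normalization

Phi = chaotic_bernoulli_matrix(16, 64)   # measure 64 samples down to 16
print(Phi.shape)
```

Because the matrix is regenerated from the key (x0) at the decoder, it never needs to be transmitted, which is what makes chaos-based measurement matrices attractive for joint compression and encryption.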

  3. Fractal analysis of sulphidic mineral

    Directory of Open Access Journals (Sweden)

    Miklúšová Viera

    2002-03-01

In this paper, the application of fractal theory to the characterization of fragmented surfaces, as well as mass-size distributions, is discussed. The investigated mineral, chalcopyrite of Slovak provenance, is characterized after particle size reduction processes: crushing and grinding. The problem of how different size reduction methods influence the surface irregularities of the obtained particles is solved. Mandelbrot (1983), introducing fractal geometry, offered a new way of characterizing surface irregularities by the fractal dimension. The determination of the surface fractal dimension DS consists in measuring the specific surface by the BET method in several fractions into which the comminuted chalcopyrite is sieved. This investigation shows that the specific surfaces of individual fractions were higher for the crushed sample than for the short-term (3 min) ground sample. The surface fractal dimension can give information about the adsorption sites accessible to molecules of nitrogen, and accordingly, the value of the fractal dimension is higher for the crushed sample. The effect of comminution processes on the mass distribution of particles crushed and ground in air as well as in polar liquids is also discussed. The estimation of the fractal dimensions of the particle mass distribution is done on the assumption that the particle size distribution is described by the power law (1). The value of the fractal dimension for the mass distribution in the crushed sample is lower than in the sample ground in air, because it is influenced by the energy required for comminution. The sample of chalcopyrite was ground (10 min) in ethanol and i-butanol, which, according to Ikazaki (1991), are characterized by the parameter µ/V, where µ is the dipole moment and V is the molecular volume. The values of µ/V for the used polar liquids are of the same order. That is why the expressive differences in particle size distributions as well as in the values of
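The power-law mass-distribution estimate the record alludes to can be sketched under a common assumption: if the cumulative mass of particles finer than size r follows M(<r) ∝ r^(3 - D), the fractal dimension D comes from the slope of log M against log r. The exponent convention and the sieve sizes below are assumptions of this sketch, not taken from the paper.

```python
# Fit a power-law mass-size distribution M(<r) ~ r**(3 - D) and recover D
# from the log-log slope. Data are synthetic, generated with D = 2.5.
import numpy as np

r = np.logspace(-2, 0, 10)    # sieve sizes (mm, hypothetical)
D_true = 2.5
mass = r ** (3 - D_true)      # ideal cumulative mass fractions

slope = np.polyfit(np.log(r), np.log(mass), 1)[0]
D_est = 3 - slope
print(round(D_est, 3))
```

With real sieve data the fit residuals would indicate how well the power-law assumption (equation (1) in the record) actually holds.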

  4. Does an increase in compression force really improve visual image quality in mammography? – An initial investigation

    International Nuclear Information System (INIS)

    Mercer, C.E.; Hogg, P.; Cassidy, S.; Denton, E.R.E.

    2013-01-01

Objective: Literature speculates that visual image quality (IQ) and compression force levels may be directly related. This small study investigates whether a relationship exists between compression force levels and visual IQ. Method: To investigate how visual IQ varies with different levels of compression force, 39 clients who had received markedly different amounts of compression force at each of their three sequential screens were selected from a 6-year screening period. Images from the three screening episodes for all women were scored visually using three different IQ scales. Results: Correlation coefficients between the three IQ scales were positive and high (0.82, 0.9 and 0.85). For each scale, the IQ scores did not vary significantly, even though different compression levels had been applied. Kappa, IQ scale 1: 0.92, 0.89, 0.89. ANOVA, IQ scale 2: p = 0.98, p = 0.55, p = 0.56. ICC, IQ scale 3: 0.97, 0.93, 0.91. Conclusion: For the 39 clients there was no difference in visual IQ when different amounts of compression were applied. We believe that further work should be conducted on compression force and image quality, as ‘higher levels’ of compression force may not be justified in the attainment of suitable visual image quality.

  5. Variability of the fractal dimension of the left coronary tree in-patient with disease arterial severe occlusive

    International Nuclear Information System (INIS)

    Rodriguez, Javier; Alvarez, Luisa F; Marino, Martha E and others

    2004-01-01

Fractal geometry is a branch of mathematics that allows the measurement of irregularity in natural objects. The appropriate measures for characterizing the forms of the human body are fractal dimensions. The coronary ramification is a fractal object, which enables the diagnosis of occlusive arterial disease by the measurement of an arterial segment obtained by coronary angiography, without measuring the impact of the obstruction on the whole ramification. The fractal dimension evaluates the irregularity of the whole coronary ramification. The right anterior oblique (RAO) projection of the left coronary ramification (LCR) obtained through arteriography was evaluated with fractal dimensions, using the box-counting method. Images of the ramification between systole and diastole were measured in 14 patients, 7 of them without occlusive arterial disease (group 1) and 7 with severe occlusive arterial disease (group 2). Patients without occlusive arterial disease showed greater variability in the sequence of fractal dimensions, evaluated as the net difference, this difference generally being other than zero.
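The box-counting method this record names can be sketched generically: cover a binary image with grids of box size s, count the boxes N(s) that contain any foreground pixel, and estimate the dimension as the slope of log N versus log(1/s). This is a sketch on synthetic shapes, not on angiograms, and the box sizes are arbitrary choices.

```python
# Box-counting fractal dimension of a binary image.
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    counts = []
    for s in sizes:
        h, w = img.shape
        # Trim so the grid tiles exactly, then count boxes with any pixel set
        view = img[: h - h % s, : w - w % s]
        boxes = view.reshape(view.shape[0] // s, s, view.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]

# Sanity checks: a filled square has dimension 2, a diagonal line dimension 1
print(round(box_counting_dimension(np.ones((64, 64), dtype=bool)), 2))
print(round(box_counting_dimension(np.eye(64, dtype=bool)), 2))
```

Applied to a segmented coronary tree, the same slope would land between 1 and 2, which is the quantity the study compares between systole and diastole.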

  6. Fractal analysis of phasic laser images of the myocardium for the purpose of diagnostics of acute coronary insufficiency

    Science.gov (United States)

    Wanchuliak, O. Y.; Bachinskyi, V. T.

    2011-09-01

In this work, on the basis of the Mueller-matrix description of optical anisotropy, the possibility of monitoring time changes of myocardium tissue birefringence has been considered. An optical model of the polycrystalline networks of the myocardium is suggested. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient and autocorrelation function excess) and fractal parameters (dispersion of the logarithmic dependencies of power spectra) are presented. These parameters characterize the distributions of Mueller matrix elements in the points of laser images of myocardium histological sections. Criteria for differentiating the causes of death are determined.

  7. Radiologic assessment of bone healing after orthognathic surgery using fractal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Kwang Soo; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul; Park, Tae Won [College of Dentistry, Seoul National University, Seoul (Korea, Republic of); Jeon, In Seong [Department of Dentistry, Inje University Sanggyepaik Hospital, Seoul (Korea, Republic of); Kim, Jong Dae [Division of Information and Communication Engineering, Hallym university, Chuncheon (Korea, Republic of)

    2002-12-15

To evaluate the radiographic change of operation sites after orthognathic surgery using digital image processing and fractal analysis. A series of panoramic radiographs of thirty-five randomly selected patients who had undergone mandibular orthognathic surgery (bilateral sagittal split ramus osteotomy) without clinical complications in osseous healing was taken. The panoramic radiographs of each selected patient were taken at pre-operation (stage 0), 1 or 2 days after operation (stage 1), 1 month after operation (stage 2), 6 months after operation (stage 3), and 12 months after operation (stage 4). The radiographs were digitized at 600 dpi, 8 bit, and 256 gray levels. The region of interest, centered on the bony gap area of the operation site, was selected and the fractal dimension was calculated using the tile-counting method. The mean values and standard deviations of the fractal dimension for each stage were calculated, and the differences among stages 0, 1, 2, 3, and 4 were evaluated through repeated-measures ANOVA and paired t-tests. The mean values and standard deviations of the fractal dimensions obtained from stages 0, 1, 2, 3, and 4 were 1.658 ± 0.048, 1.580 ± 0.050, 1.607 ± 0.046, 1.624 ± 0.049, and 1.641 ± 0.061, respectively. The fractal dimensions from stage 1 to stage 4 showed a tendency to increase (p<0.05). The tendency of the fractal dimension to increase with healing time may be a useful means of evaluating post-operative bony healing of the osteotomy site.

  8. Radiologic assessment of bone healing after orthognathic surgery using fractal analysis

    International Nuclear Information System (INIS)

    Park, Kwang Soo; Heo, Min Suk; Lee, Sam Sun; Choi, Soon Chul; Park, Tae Won; Jeon, In Seong; Kim, Jong Dae

    2002-01-01

To evaluate the radiographic change of operation sites after orthognathic surgery using digital image processing and fractal analysis. A series of panoramic radiographs of thirty-five randomly selected patients who had undergone mandibular orthognathic surgery (bilateral sagittal split ramus osteotomy) without clinical complications in osseous healing was taken. The panoramic radiographs of each selected patient were taken at pre-operation (stage 0), 1 or 2 days after operation (stage 1), 1 month after operation (stage 2), 6 months after operation (stage 3), and 12 months after operation (stage 4). The radiographs were digitized at 600 dpi, 8 bit, and 256 gray levels. The region of interest, centered on the bony gap area of the operation site, was selected and the fractal dimension was calculated using the tile-counting method. The mean values and standard deviations of the fractal dimension for each stage were calculated, and the differences among stages 0, 1, 2, 3, and 4 were evaluated through repeated-measures ANOVA and paired t-tests. The mean values and standard deviations of the fractal dimensions obtained from stages 0, 1, 2, 3, and 4 were 1.658 ± 0.048, 1.580 ± 0.050, 1.607 ± 0.046, 1.624 ± 0.049, and 1.641 ± 0.061, respectively. The fractal dimensions from stage 1 to stage 4 showed a tendency to increase (p<0.05). The tendency of the fractal dimension to increase with healing time may be a useful means of evaluating post-operative bony healing of the osteotomy site.

  9. Improved MR imaging evaluation of chondromalacia patellae with use of a vise for cartilage compression

    International Nuclear Information System (INIS)

    Koenig, H.; Dinkelaker, F.; Wolf, K.J.

    1990-01-01

This paper reports on earlier and more precise evaluation of chondromalacia patellae by means of MR imaging performed with a specially constructed vise for compression of the retropatellar cartilage. Two volunteers and 18 patients were examined 1-4 weeks before arthroscopy and cartilage biopsy. Imaging parameters included spin-echo (SE) (1,600/22 + 110 msec) and fast low-angle shot (FLASH) (30/12 msec, 10 degrees and 30 degrees excitation angles) sequences, 4-mm section thickness, and sagittal and axial views. For cartilage compression, we used a wooden vise. FLASH imaging was done without and with compression of the retropatellar cartilage. Cartilage thickness and signal intensities were measured.

  10. Bilipschitz embedding of homogeneous fractals

    OpenAIRE

    Lü, Fan; Lou, Man-Li; Wen, Zhi-Ying; Xi, Li-Feng

    2014-01-01

    In this paper, we introduce a class of fractals named homogeneous sets based on some measure versions of homogeneity, uniform perfectness and doubling. This fractal class includes all Ahlfors-David regular sets, but most of them are irregular in the sense that they may have different Hausdorff dimensions and packing dimensions. Using Moran sets as main tool, we study the dimensions, bilipschitz embedding and quasi-Lipschitz equivalence of homogeneous fractals.

  11. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the watermark into a still image via modification of the frequency coefficients of the discrete Hadamard transform. The choice of frequency coefficients for embedding the digital watermark is based on the existence of a sharp change in their values after modification at maximum JPEG compression. The choice of pixel blocks for embedding is based on the value of their entropy. The new algorithm was analyzed for resistance to image compression, noising, filtering, resizing, color change and histogram equalization. The Elham algorithm, which possesses good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. The imperceptibility of the distortions embedded in them was defined on the basis of the peak signal-to-noise ratio, which should be no lower than 43 dB for the introduced distortions to remain imperceptible. The robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed stability. The computing experiment comprises: watermark embedding into each test image by the new algorithm and the Elham algorithm; introducing distortions to the object of protection; and extraction of the embedded information with its subsequent comparison with the original. Parameters of the algorithms were chosen so as to provide approximately the same level of distortion introduced into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of numerical experiment have shown that the
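The embedding idea the record describes, nudging a frequency coefficient of the Hadamard transform, and its imperceptibility criterion (PSNR no lower than 43 dB) can be sketched as below. This is not the paper's algorithm: the block size, the chosen coefficient (3, 4), and the embedding step of 4.0 are all hypothetical, and the entropy-based block selection is omitted.

```python
# Sketch: embed one watermark bit in an 8x8 block by shifting one
# orthonormal Walsh-Hadamard coefficient, then check the PSNR criterion.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8) / np.sqrt(8)              # orthonormal 8x8 Hadamard basis
rng = np.random.default_rng(3)
block = rng.integers(0, 256, (8, 8)).astype(float)
bit = 1

coeffs = H @ block @ H.T
coeffs[3, 4] += 4.0 if bit else -4.0      # hypothetical coefficient and step
marked = H.T @ coeffs @ H

mse = np.mean((block - marked) ** 2)      # = 4.0**2 / 64, transform is orthonormal
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"{psnr:.1f} dB")                   # 54.2 dB, above the 43 dB threshold
```

Extraction would re-transform the (possibly JPEG-compressed) block and read the sign shift of the same coefficient; the Pearson correlation between embedded and extracted bit sequences is then the record's robustness measure.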

  12. Non-US data compression and coding research. FASAC Technical Assessment Report

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  13. Recognition of fractal graphs

    NARCIS (Netherlands)

    Perepelitsa, VA; Sergienko, [No Value; Kochkarov, AM

    1999-01-01

    Definitions of prefractal and fractal graphs are introduced, and they are used to formulate mathematical models in different fields of knowledge. The topicality of fractal-graph recognition from the point of view of a fundamental improvement in the efficiency of solving algorithmic problems

  14. Electromagnetic backscattering from one-dimensional drifting fractal sea surface II: Electromagnetic backscattering model

    International Nuclear Information System (INIS)

    Xie Tao; Zhao Shang-Zhuo; Fang He; Yu Wen-Jin; He Yi-Jun; Perrie, William

    2016-01-01

    Sea surface current has a significant influence on electromagnetic (EM) backscattering signals and may constitute a dominant synthetic aperture radar (SAR) imaging mechanism. An effective EM backscattering model for a one-dimensional drifting fractal sea surface is presented in this paper. This model is used to simulate EM backscattering signals from the drifting sea surface. Numerical results show that ocean currents have a significant influence on EM backscattering signals from the sea surface. The normalized radar cross section (NRCS) discrepancies between the model for a coupled wave-current fractal sea surface and the model for an uncoupled fractal sea surface increase with the increase of incidence angle, as well as with increasing ocean currents. Ocean currents that are parallel to the direction of the wave can weaken the EM backscattering signal intensity, while the EM backscattering signal is intensified by ocean currents propagating oppositely to the wave direction. The model presented in this paper can be used to study the SAR imaging mechanism for a drifting sea surface. (paper)

  15. Videos and images from 25 years of teaching compressible flow

    Science.gov (United States)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  16. Random walk through fractal environments

    International Nuclear Information System (INIS)

    Isliker, H.; Vlahos, L.

    2003-01-01

    We analyze random walks through fractal environments embedded in three-dimensional, permeable space. Particles travel freely and are scattered off into random directions when they hit the fractal. The statistical distribution of the flight increments (i.e., of the displacements between two consecutive hittings) is analytically derived from a common, practical definition of fractal dimension, and it turns out to approximate quite well a power law in the case where the dimension D_F of the fractal is less than 2; there is, though, always a finite rate of unaffected escape. Random walks through fractal sets with D_F ≤ 2 can thus be considered as defective Levy walks. The distribution of jump increments for D_F > 2 decays exponentially. The diffusive behavior of the random walk is analyzed in the frame of continuous-time random walk, which we generalize to include the case of defective distributions of walk increments. It is shown that the particles undergo anomalous, enhanced diffusion for D_F < 2, whereas the diffusion for D_F > 2 is normal for large times, enhanced though for small and intermediate times. In particular, it follows that fractals generated by a particular class of self-organized criticality models give rise to enhanced diffusion. The analytical results are illustrated by Monte Carlo simulations
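The power-law flight increments described in this record can be illustrated generically (this is not the paper's derivation): step lengths with density p(ℓ) ∝ ℓ^(−μ) for ℓ ≥ ℓ_min are drawn by inverse-transform sampling, and the exponent is recovered from the sample; the values of μ, ℓ_min and the sample size are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_steps(mu, n, lmin=1.0):
    # Inverse-transform sampling of flight lengths with density
    # p(l) ~ l**(-mu) for l >= lmin (a Pareto-type step distribution).
    u = rng.random(n)
    return lmin * (1.0 - u) ** (-1.0 / (mu - 1.0))

steps = powerlaw_steps(mu=2.5, n=200_000)

# Maximum-likelihood (Hill-type) estimate of the exponent recovers mu,
# since E[log(l/lmin)] = 1/(mu - 1) for this density.
mu_hat = 1.0 + 1.0 / np.log(steps).mean()
```

Step distributions with a power-law tail like this are exactly what distinguishes Lévy-type walks from ordinary Brownian steps.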

  17. Rheological and fractal characteristics of unconditioned and conditioned water treatment residuals.

    Science.gov (United States)

    Dong, Y J; Wang, Y L; Feng, J

    2011-07-01

    The rheological and fractal characteristics of raw (unconditioned) and conditioned water treatment residuals (WTRs) were investigated in this study. Variations in the morphology, size, and image fractal dimensions of the flocs/aggregates in these WTR systems with increasing polymer doses were analyzed. The results showed that when the raw WTRs were conditioned with the polymer CZ8688, the optimum polymer dosage was 24 kg/ton dry sludge. The average diameter of the irregularly shaped flocs/aggregates in the WTR suspensions increased from 42.54 μm to several hundred micrometers with increasing polymer doses. Furthermore, the aggregates in the conditioned WTR system displayed boundary/surface and mass fractals. At the optimum polymer dosage, the aggregates formed had a volumetric average diameter of about 820.7 μm, with a one-dimensional fractal dimension of 1.01 and a mass fractal dimension of 2.74 on the basis of the image analysis. Rheological tests indicated that the conditioned WTRs at the optimum polymer dosage showed stronger shear-thinning behavior than the raw WTRs. Variations in the limiting viscosity (η(∞)) of the conditioned WTRs with sludge content could be described by a linear equation, which differs from the empirical exponential relationship often observed for most municipal sludge. With increasing temperature, the η(∞) of the raw WTRs decreased more rapidly than that of the conditioned WTRs. Good fits of the lg η(∞) ∼ T relationship with the Arrhenius equation indicate that the WTRs had a much higher activation energy for viscosity, about 17.86-26.91 J/mol, than that of anaerobic granular sludge (2.51 J/mol) (Mu and Yu, 2006). In addition, the Bingham plastic model adequately described the rheological behavior of the conditioned WTRs, whereas the rheology of the raw WTRs fit the Herschel-Bulkley model well only at certain sludge contents. Considering the good power-law relationships between the

  18. A fractal image analysis methodology for heat damage inspection in carbon fiber reinforced composites

    Science.gov (United States)

    Haridas, Aswin; Crivoi, Alexandru; Prabhathan, P.; Chan, Kelvin; Murukeshan, V. M.

    2017-06-01

    The use of carbon fiber-reinforced polymer (CFRP) composite materials in the aerospace industry has greatly improved the load-carrying properties and the design flexibility of aircraft structures. A high strength-to-weight ratio, low thermal conductivity, and a low thermal expansion coefficient give them an edge for applications demanding stringent loading conditions. Specifically, this paper focuses on the behavior of CFRP composites under stringent thermal loads. The properties of composites are largely affected by external thermal loads, especially when the loads are beyond the glass transition temperature, Tg, of the composite. Beyond this, the composites are subject to prominent changes in mechanical and thermal properties, which may further lead to material decomposition. Furthermore, since thermal damage formation is chaotic, a strict dimension cannot be associated with the formed damage. In this context, this paper focuses on comparing multiple speckle image analysis algorithms to effectively characterize the thermal damage formed on the CFRP specimen. This would provide a fast method for quantifying the extent of heat damage in carbon composites, thus reducing the required inspection time. The image analysis methods used for the comparison include fractal dimensional analysis of the formed speckle pattern and analysis of the number and size of the various connected elements in the binary image.

  19. Chaos and fractals. Applications to nuclear engineering; Caos y fractales. Aplicaciones en ingenieria nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Clausse, A; Delmastro, D F

    1991-12-31

    This work presents a description of the research lines carried out by the authors on chaos and fractal theories, oriented to the nuclear field. Of special importance are the possibilities that open up in the area of nuclear safety, where information derived from chaos and fractal techniques may help in the development of better criteria and more reliable designs. (Author).

  20. Fractal analysis of fractures and microstructures in rocks

    International Nuclear Information System (INIS)

    Merceron, T.; Nakashima, S.; Velde, B.; Badri, A.

    1991-01-01

    Fractal geometry was used to characterize the distribution of fracture fields in rocks, which represent the main pathways for material migration such as groundwater flow. Fractal investigations of fracture distribution were performed on granite along the Auriat and Shikoku boreholes. Fractal dimensions range between 0.3 and 0.5 according to the different sets of fracture planes selected for the analyses. Shear, tensional and compressional modes exhibit different fractal values, while the composite fracture patterns are also fractal but with a different, median, fractal value. These observations indicate that the fractal method can be used to distinguish fracture types of different origins in a complex system. Fractal results for the Shikoku borehole also correlate with geophysical parameters recorded along drill-holes, such as resistivity and possibly permeability. These results represent the first steps of the fractal investigation along drill-holes. Future studies will be conducted to verify relationships between fractal dimensions and permeability by using available geophysical data. Microstructures and microcracks were analysed in the Inada granite. Microcrack patterns are fractal, but fractal dimension values vary according to both mineral type and the orientation of measurement within the mineral. Microcracks in quartz are characterized by a more irregular distribution (average D = 0.40) than those in feldspars (D = 0.50), suggesting a different mode of rupture. The highest values of D are reported along the main cleavage planes for feldspars or the C axis for quartz. Further fractal investigations of microstructure in granite will be used to characterize the potential pathways for fluid migration and diffusion in the rock matrix. (author)

  1. Comparison of surface fractal dimensions of chromizing coating and P110 steel for corrosion resistance estimation

    International Nuclear Information System (INIS)

    Lin, Naiming; Guo, Junwen; Xie, Faqin; Zou, Jiaojuan; Tian, Wei; Yao, Xiaofei; Zhang, Hongyan; Tang, Bin

    2014-01-01

    Highlights: • Continuous chromizing coating was synthesized on P110 steel by pack cementation. • The chromizing coating showed better corrosion resistance. • Comparison of surface fractal dimensions can estimate corrosion resistance. - Abstract: In the field of corrosion research, mass gain/loss, electrochemical tests and comparison of the surface elemental distributions, phase constitutions and surface morphologies before and after corrosion are extensively applied to investigate the corrosion behavior or estimate the corrosion resistance of materials operated in various environments. Most of the above methods are problem-oriented, complex and time-consuming. From an object-oriented point of view, however, the corroded surfaces of materials often have a self-similar character: a fractal property, which can be employed to analyze damaged surfaces efficiently. The present work describes a strategy of comparing surface fractal dimensions for corrosion resistance estimation: a chromizing coating was synthesized on a P110 steel surface via pack cementation to improve its performance. Scanning electron microscopy (SEM) was used to investigate the surface morphologies of the original and corroded samples. Surface fractal dimensions of the examined samples were calculated with the box-counting algorithm from binary images derived from the SEM images of the surface morphologies. The results showed that both the surface morphology and the surface fractal dimension of P110 steel varied greatly before and after the corrosion test, while those of the chromizing coating changed only slightly. The chromizing coating showed better corrosion resistance than P110 steel. Comparison of the surface fractal dimensions of original and corroded samples enables rapid and accurate estimation of corrosion resistance
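The box-counting calculation of a fractal dimension from a binary image, as used in this record, can be sketched generically; this is a common variant (a box counts as occupied if it contains any foreground pixel), not necessarily the authors' exact implementation.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16, 32)):
    # Cover the image with boxes of side s, count the boxes N(s) that
    # contain any foreground pixel, and fit the slope of
    # log N(s) versus log(1/s); the slope estimates the fractal dimension.
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                          np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-D, a single pixel row is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

Rougher (more corroded) surfaces fill space more completely, so their binary images yield a higher box-counting dimension, which is what makes the before/after comparison informative.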

  2. Comparison of surface fractal dimensions of chromizing coating and P110 steel for corrosion resistance estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Naiming, E-mail: lnmlz33@163.com [Research Institute of Surface Engineering, Taiyuan University of Technology, Taiyuan 030024 (China); Guo, Junwen [Research Institute of Surface Engineering, Taiyuan University of Technology, Taiyuan 030024 (China); Xie, Faqin [School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072 (China); Zou, Jiaojuan; Tian, Wei [Research Institute of Surface Engineering, Taiyuan University of Technology, Taiyuan 030024 (China); Yao, Xiaofei [School of Materials and Chemical Engineering, Xi’an Technological University, Xi’an 710032 (China); Zhang, Hongyan; Tang, Bin [Research Institute of Surface Engineering, Taiyuan University of Technology, Taiyuan 030024 (China)

    2014-08-30

    Highlights: • Continuous chromizing coating was synthesized on P110 steel by pack cementation. • The chromizing coating showed better corrosion resistance. • Comparison of surface fractal dimensions can estimate corrosion resistance. - Abstract: In the field of corrosion research, mass gain/loss, electrochemical tests and comparison of the surface elemental distributions, phase constitutions and surface morphologies before and after corrosion are extensively applied to investigate the corrosion behavior or estimate the corrosion resistance of materials operated in various environments. Most of the above methods are problem-oriented, complex and time-consuming. From an object-oriented point of view, however, the corroded surfaces of materials often have a self-similar character: a fractal property, which can be employed to analyze damaged surfaces efficiently. The present work describes a strategy of comparing surface fractal dimensions for corrosion resistance estimation: a chromizing coating was synthesized on a P110 steel surface via pack cementation to improve its performance. Scanning electron microscopy (SEM) was used to investigate the surface morphologies of the original and corroded samples. Surface fractal dimensions of the examined samples were calculated with the box-counting algorithm from binary images derived from the SEM images of the surface morphologies. The results showed that both the surface morphology and the surface fractal dimension of P110 steel varied greatly before and after the corrosion test, while those of the chromizing coating changed only slightly. The chromizing coating showed better corrosion resistance than P110 steel. Comparison of the surface fractal dimensions of original and corroded samples enables rapid and accurate estimation of corrosion resistance.

  3. Subband directional vector quantization in radiological image compression

    Science.gov (United States)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  4. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth local LAN, downloading of the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size for transmission. This paper describes a method of compression that maintains diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.
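The bandwidth problem this record describes is easy to quantify; the back-of-envelope sketch below simply reuses the figures cited above (a study of about 1 GB pulled over a roughly 1 Mbit/s remote link).

```python
# Rough transfer-time estimate: bytes * 8 bits/byte divided by link speed.
study_bytes = 1 * 1024**3          # ~1 GB study, the upper end cited above
link_bits_per_second = 1_000_000   # ~1 Mbit/s remote link
transfer_seconds = study_bytes * 8 / link_bits_per_second
transfer_hours = transfer_seconds / 3600.0
```

An uncompressed 1 GB study would take well over two hours on such a link, which is why compression that preserves diagnostic quality is attractive for remote viewing.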

  5. Fractal structures and fractal functions as disease indicators

    Science.gov (United States)

    Escos, J.M; Alados, C.L.; Emlen, J.M.

    1995-01-01

    Developmental instability is an early indicator of stress, and has been used to monitor the impacts of human disturbance on natural ecosystems. Here we investigate the use of different measures of developmental instability on two species, green peppers (Capsicum annuum), a plant, and Spanish ibex (Capra pyrenaica), an animal. For green peppers we compared the variance in allometric relationship between control plants, and a treatment group infected with the tomato spotted wilt virus. The results show that infected plants have a greater variance about the allometric regression line than the control plants. We also observed a reduction in complexity of branch structure in green pepper with a viral infection. Box-counting fractal dimension of branch architecture declined under stress infection. We also tested the reduction in complexity of behavioral patterns under stress situations in Spanish ibex (Capra pyrenaica). Fractal dimension of head-lift frequency distribution measures predator detection efficiency. This dimension decreased under stressful conditions, such as advanced pregnancy and parasitic infection. Feeding distribution activities reflect food searching efficiency. Power spectral analysis proves to be the most powerful tool for characterizing fractal behavior, revealing a reduction in complexity of time distribution activity under parasitic infection.
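Power-spectral characterization of a time series, of the kind the authors apply to activity records, can be sketched generically: fit the slope of log-power versus log-frequency over a low-frequency band. The synthetic signals and the band limit below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def spectral_slope(x, fmax=0.1):
    # Least-squares slope of log power vs log frequency over 0 < f < fmax,
    # where power-law scaling is cleanest. A slope near 0 indicates a
    # complex ("white") signal; a steep negative slope a smoother one.
    f = np.fft.rfftfreq(len(x))
    p = np.abs(np.fft.rfft(x)) ** 2
    keep = (f > 0) & (f < fmax)
    return np.polyfit(np.log(f[keep]), np.log(p[keep]), 1)[0]

white = rng.standard_normal(4096)   # uncorrelated activity: slope near 0
walk = np.cumsum(white)             # integrated (smoother) signal: near -2
```

A loss of behavioral complexity under stress would show up as a steepening of this spectral slope, which is the sense in which power spectral analysis characterizes fractal behavior.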

  6. Effects of JPEG data compression on magnetic resonance imaging evaluation of small vessels ischemic lesions of the brain

    International Nuclear Information System (INIS)

    Kuriki, Paulo Eduardo de Aguiar; Abdala, Nitamar; Nogueira, Roberto Gomes; Carrete Junior, Henrique; Szejnfeld, Jacob

    2006-01-01

    Objective: to establish the maximum achievable JPEG compression ratio that does not affect quantitative and qualitative magnetic resonance imaging analysis of ischemic lesions in small vessels of the brain. Material and method: fifteen DICOM images were converted to JPEG with compression ratios of 1:10 to 1:60 and were assessed together with the original images by three neuroradiologists. The number, morphology and signal intensity of the lesions were analyzed. Results: lesions were properly identified up to a 1:30 ratio. More lesions were identified at a 1:10 ratio than in the original images. Morphology and edges were properly evaluated up to a 1:40 ratio. Compression did not affect signal. Conclusion: small lesions (< 2 mm) were identified, and at all compression ratios the JPEG algorithm generated image noise that misled observers into identifying more lesions in the JPEG images than in the DICOM images, thus generating false-positive results. (author)

  7. Fractal geometry mathematical foundations and applications

    CERN Document Server

    Falconer, Kenneth

    2013-01-01

    The seminal text on fractal geometry for students and researchers: extensively revised and updated with new material, notes and references that reflect recent directions. Interest in fractal geometry continues to grow rapidly, both as a subject that is fascinating in its own right and as a concept that is central to many areas of mathematics, science and scientific research. Since its initial publication in 1990, Fractal Geometry: Mathematical Foundations and Applications has become a seminal text on the mathematics of fractals. The book introduces and develops the general theory and applications

  8. Fractal nature of hydrocarbon deposits. 2. Spatial distribution

    International Nuclear Information System (INIS)

    Barton, C.C.; Schutter, T.A; Herring, P.R.; Thomas, W.J.; Scholz, C.H.

    1991-01-01

    Hydrocarbons are unevenly distributed within reservoirs and are found in patches whose size distribution is fractal over a wide range of scales. The spatial distribution of the patches is also fractal, and this can be used to constrain the design of drilling strategies, themselves defined by a fractal dimension. Fractal distributions are scale independent and are characterized by a power-law scaling exponent termed the fractal dimension. The authors have performed fractal analyses on the spatial distribution of producing and showing wells combined, and of dry wells, in 1,600-mi² portions of the Denver and Powder River basins that were nearly completely drilled on quarter-mile square-grid spacings. They have limited their analyses to wells drilled to single stratigraphic intervals so that the map pattern revealed by drilling is representative of the spatial patchiness of hydrocarbons at depth. The fractal dimensions for the spatial patchiness of hydrocarbons in the two basins are 1.5 and 1.4, respectively. The fractal dimension for the pattern of all wells drilled is 1.8 for both basins, which suggests a drilling strategy with a fractal dimension significantly higher than the dimensions of 1.5 and 1.4 sufficient to efficiently and economically explore these reservoirs. In fact, the fractal analysis reveals that the drilling strategy used in these basins approaches a fractal dimension of 2.0, which is equivalent to random drilling with no geologic input. Knowledge of the fractal dimension of a reservoir prior to drilling would provide a basis for selecting, and a criterion for halting, a drilling strategy for exploration whose fractal dimension closely matches the spatial fractal dimension of the reservoir; such a strategy should prove more efficient and economical than current practice

  9. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for the electric potential, and an equation for fractal streams of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.

  10. Independent transmission of sign language interpreter in DVB: assessment of image compression

    Science.gov (United States)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to the deaf that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with the deaf, which examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.

  11. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  12. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    Science.gov (United States)

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.
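The redundancy that half-Fourier methods exploit is the Hermitian symmetry of the spectrum of a real-valued image, F(−k) = conj(F(k)). The sketch below reconstructs a real image from just over half of its k-space rows under an idealized zero-phase (real-image) assumption; real MR data is not exactly real-valued, which is why the phase estimation and phase constraints discussed in this record are needed.

```python
import numpy as np

def hermitian_fill(kspace_top, shape):
    # Complete the 2-D spectrum of a real-valued image from its top rows
    # using Hermitian symmetry: F[-u, -v] = conj(F[u, v]).
    n, m = shape
    half = n // 2 + 1                      # rows 0 .. n//2 are supplied
    full = np.zeros(shape, dtype=complex)
    full[:half] = kspace_top
    for u in range(half, n):
        for v in range(m):
            full[u, v] = np.conj(full[(-u) % n, (-v) % m])
    return full

rng = np.random.default_rng(2)
img = rng.random((32, 32))                 # real image => Hermitian spectrum
k_top = np.fft.fft2(img)[:17]              # keep just over half of the rows
recon = np.fft.ifft2(hermitian_fill(k_top, img.shape)).real
```

Under the zero-phase idealization the reconstruction is exact, which shows why roughly half of k-space suffices in principle.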

  13. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....

  14. Inkjet-Printed Ultra Wide Band Fractal Antennas

    KAUST Repository

    Maza, Armando Rodriguez

    2012-01-01

    reduction, a Cantor-based fractal antenna which achieves a larger bandwidth compared to a previously published UWB Cantor fractal monopole antenna, and a 3D loop fractal antenna which attains miniaturization, impedance matching and multiband characteristics

  15. An analytical look at the effects of compression on medical images

    OpenAIRE

    Persons, Kenneth; Palisson, Patrice; Manduca, Armando; Erickson, Bradley J.; Savcenko, Vladimir

    1997-01-01

    This article will take an analytical look at how lossy Joint Photographic Experts Group (JPEG) and wavelet image compression techniques affect medical image content. It begins with a brief explanation of how the JPEG and wavelet algorithms work, and describes in general terms what effect they can have on image quality (removal of noise, blurring, and artifacts). It then focuses more specifically on medical image diagnostic content and explains why subtle pathologies that may be difficult for...

  16. Categorization of new fractal carpets

    International Nuclear Information System (INIS)

    Rani, Mamta; Goel, Saurabh

    2009-01-01

    Sierpinski carpet is one of the most beautiful fractals from the historic gallery of classical fractals. Carpet designing is not only a fascinating activity in computer graphics, but it also has real applications in the carpet industry. One may find delightful illusory carpets designed here, which are useful in the actual design of carpets. In this paper, we attempt to systematize their generation and put them into categories. Each successive category leads to a more generalized form of the fractal carpet.
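    The classical Sierpinski generating rule that these categories generalize can be sketched in a few lines. The Kronecker-product construction and the recursion depth below are illustrative choices, not taken from the paper:

    ```python
    import numpy as np

    def sierpinski_carpet(level):
        """Binary Sierpinski carpet of side 3**level.

        Start from a single filled cell and repeatedly replace every filled
        cell by a 3x3 block whose centre is removed -- the classical
        generating rule that the paper's categories generalize.
        """
        carpet = np.ones((1, 1), dtype=np.uint8)
        centre_removed = np.ones((3, 3), dtype=np.uint8)
        centre_removed[1, 1] = 0
        for _ in range(level):
            carpet = np.kron(carpet, centre_removed)
        return carpet

    c = sierpinski_carpet(3)
    print(c.shape, int(c.sum()))  # -> (27, 27) 512, i.e. 8**3 filled cells
    ```

    The 8-of-9 survival rate per step gives the carpet its fractal dimension log 8 / log 3 ≈ 1.893.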

  17. On the Lipschitz condition in the fractal calculus

    International Nuclear Information System (INIS)

    Golmankhaneh, Alireza K.; Tunc, Cemil

    2017-01-01

    In this paper, existence and uniqueness theorems are proved for linear and non-linear fractal differential equations. The fractal Lipschitz condition is given on the F^α-calculus, which applies to functions that are non-differentiable in the sense of standard calculus. Moreover, the metric spaces associated with fractal sets and with functions having fractal support are defined in order to build fractal Cauchy sequences. Furthermore, the Picard iterative process in the F^α-calculus, which has an important role in the numerical and approximate solution of fractal differential equations, is explored. We clarify the results using illustrative examples.

  18. On Scientific Data and Image Compression Based on Adaptive Higher-Order FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Andrš, David

    2009-01-01

    Roč. 1, č. 1 (2009), s. 56-68 ISSN 2070-0733 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords : data compression * image compression * adaptive hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://www.global-sci.org/aamm

  19. Integrated quantitative fractal polarimetric analysis of monolayer lung cancer cells

    Science.gov (United States)

    Shrestha, Suman; Zhang, Lin; Quang, Tri; Farrahi, Tannaz; Narayan, Chaya; Deshpande, Aditi; Na, Ying; Blinzler, Adam; Ma, Junyu; Liu, Bo; Giakos, George C.

    2014-05-01

    Digital diagnostic pathology has become one of the most valuable and convenient technological advancements of recent years. It allows us to acquire, store and analyze pathological information from images of histological and immunohistochemical glass slides, which are scanned to create digital slides. In this study, efficient fractal, wavelet-based polarimetric techniques for histological analysis of monolayer lung cancer cells will be introduced and different monolayer cancer lines will be studied. The outcome of this study indicates that applying fractal, wavelet polarimetric principles to the analysis of squamous carcinoma and adenocarcinoma cancer cell lines may prove extremely useful in discriminating between healthy and lung cancer cells as well as in differentiating among different lung cancer cells.

  20. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    Science.gov (United States)

    2013-04-01


  1. Fractal dimension of turbulent black holes

    Science.gov (United States)

    Westernacher-Schneider, John Ryan

    2017-11-01

    We present measurements of the fractal dimension of a turbulent asymptotically anti-de Sitter black brane reconstructed from simulated boundary fluid data at the perfect fluid order using the fluid-gravity duality. We argue that a boundary fluid energy spectrum scaling as E(k) ~ k^{-2} is a more natural setting for the fluid-gravity duality than the Kraichnan-Kolmogorov scaling of E(k) ~ k^{-5/3}, but we obtain fractal dimensions D for spatial sections of the horizon H ∩ Σ in both cases: D = 2.584(1) and D = 2.645(4), respectively. These results are consistent with the upper bound of D = 3, thereby resolving the tension with the recent claim in Adams et al. [Phys. Rev. Lett. 112, 151602 (2014), 10.1103/PhysRevLett.112.151602] that D = 3 + 1/3. We offer a critical examination of the calculation which led to their result, and show that their proposed definition of the fractal dimension performs poorly as a fractal dimension estimator on one-dimensional curves with known fractal dimension. Finally, we describe how to define and in principle calculate the fractal dimension of spatial sections of the horizon H ∩ Σ in a covariant manner, and we speculate on assigning a "bootstrapped" value of fractal dimension to the entire horizon H when it is in a statistically quasisteady turbulent state.

  2. Data compression of scanned halftone images

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Kim S.

    1994-01-01

    with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution...... scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding was modified and optimized...

  3. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    Science.gov (United States)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In the single-pixel imaging scheme, N = r · c samples must be taken for an r × c pixel image. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires; it is therefore a good candidate to solve the slow data acquisition problem in Terahertz (THz) single-pixel imaging. However, changing the mask for each measurement is a challenging problem, since there are no commercial Spatial Light Modulators (SLM) for the THz band yet; circular masks are therefore suggested, so that shifting by one or two columns is enough to change the mask between measurements. Within the framework of this study, the CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy, since it allows the image to be reconstructed from fewer samples.
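    The shifted-mask measurement scheme can be sketched numerically. In the toy model below, each measurement row is a cyclic shift of a single ±1 base row (the paper derives the base row from a Hadamard-based cyclic-S matrix; a pseudorandom row is used here purely for illustration), and a greedy Orthogonal Matching Pursuit solver stands in for the TVAL3 algorithm that the paper actually uses:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 64, 48, 3   # 64-pixel "image", 48 measurements, 3-sparse

    # Stand-in for the sliding-strip mask: every measurement row is a
    # cyclic shift of one +/-1 base row (illustrative, not the paper's
    # actual cyclic-S construction).
    base = rng.choice([-1.0, 1.0], size=N)
    A = np.stack([np.roll(base, s) for s in range(M)])   # M x N matrix

    # A K-sparse flattened image and its compressive measurements.
    x = np.zeros(N)
    x[[3, 20, 45]] = [5.0, -4.0, 3.0]
    y = A @ x

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily pick the column most
        correlated with the residual, then re-fit on the support."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(A, y, K)
    ```

    Even with M < N measurements, the sparse image is recovered because the shifted rows are sufficiently incoherent with the sparsity basis.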

  4. Fractals as objects with nontrivial structures at all scales

    International Nuclear Information System (INIS)

    Lacan, Francis; Tresser, Charles

    2015-01-01

    Toward the middle of 2001, the authors started arguing that fractals are important when discussing the operational resilience of information systems and related computer science issues such as artificial intelligence. But in order to argue along these lines it turned out to be indispensable to define fractals so as to let one recognize as fractals some sets that are very far from being self-similar in the (usual) metric sense. This paper is devoted to defining (in a loose sense at least) fractals in ways that allow, for instance, all the Cantor sets to be fractals and that permit one to recognize fractality (the property of being fractal) in the context of the information technology issues that we had tried to comprehend. Starting from the meta-definition of a fractal as an "object with non-trivial structure at all scales" that we had long used, we ended up taking these words seriously. Accordingly, we define fractals in manners that depend both on the structures that the fractals are endowed with and on the chosen sets of structure-compatible maps, i.e., we approach fractals in a category-dependent manner. We expect that this new approach to fractals will contribute to the understanding of more of the fractals that appear in the exact and other sciences than can be handled presently

  5. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  6. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.

  7. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to the traditional wireless sensor network, which operates on one-dimensional data such as temperature and pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. Because of the wireless nature of the application, the energy budget in these networks is limited to the batteries. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSN.
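    The abstract does not name the six methods compared. As a minimal illustrative baseline (not necessarily one of the six), run-length encoding shows why bi-level images compress well: they are dominated by long uniform runs of identical pixels:

    ```python
    def rle_encode(bits):
        """Run-length encode a binary sequence as [value, run] pairs --
        the simplest possible bi-level scheme; practical candidates for
        WVSN nodes are more elaborate context-based coders."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    def rle_decode(runs):
        """Invert rle_encode by expanding each [value, run] pair."""
        return [b for b, n in runs for _ in range(n)]

    row = [0] * 12 + [1] * 3 + [0] * 9   # one scan line of a bi-level image
    print(rle_encode(row))  # -> [[0, 12], [1, 3], [0, 9]]
    ```

    Three runs describe 24 pixels losslessly, which is the kind of trade-off between compression ratio and per-pixel computation that the paper's comparison quantifies for real coders.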

  8. Fractal Structures For Mems Variable Capacitors

    KAUST Repository

    Elshurafa, Amro M.

    2014-08-28

    In accordance with the present disclosure, one embodiment of a fractal variable capacitor comprises a capacitor body in a microelectromechanical system (MEMS) structure, wherein the capacitor body has an upper first metal plate with a fractal shape separated by a vertical distance from a lower first metal plate with a complementary fractal shape; and a substrate above which the capacitor body is suspended.

  9. Fractal THz metamaterials

    DEFF Research Database (Denmark)

    Malureanu, Radu; Jepsen, Peter Uhd; Xiao, S.

    2010-01-01

    applications. THz radiation can be employed for various purposes, among them the study of vibrations in biological molecules, motion of electrons in semiconductors and propagation of acoustic shock waves in crystals. We propose here a new THz fractal MTM design that shows very high transmission in the desired...... frequency range as well as a clear differentiation between one polarisation and another. Based on theoretical predictions we fabricated and measured a fractal based THz metamaterial that shows more than 60% field transmission at around 1THz for TE polarized light while the TM waves have almost 80% field...... transmission peak at 0.6THz. One of the main characteristics of this design is its tunability by design: by simply changing the length of the fractal elements one can choose the operating frequency window. The modelling, fabrication and characterisation results will be presented in this paper. Due to the long...

  10. Still Image Compression Algorithm Based on Directional Filter Banks

    OpenAIRE

    Chunling Yang; Duanwu Cao; Li Ma

    2010-01-01

    Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by the coefficient quantization in transform domain, is the biggest drawback of image compression algorithms in HWD domain. In this paper, by researching on the relationship between directional decomposition and ringing artifact, an improved decomposition ...

  11. Categorization of fractal plants

    International Nuclear Information System (INIS)

    Chandra, Munesh; Rani, Mamta

    2009-01-01

    Fractals in nature are always a result of some growth process. The language of fractals which has been created specifically for the description of natural growth process is called L-systems. Recently, superior iterations (essentially, investigated by Mann [Mann WR. Mean value methods in iteration. Proc Am Math Soc 1953;4:506-10 [MR0054846 (14,988f)

  12. Fractal and multifractal analysis of LiF thin film surface

    International Nuclear Information System (INIS)

    Yadav, R.P.; Dwivedi, S.; Mittal, A.K.; Kumar, M.; Pandey, A.C.

    2012-01-01

    Highlights: ► Fractal and multifractal analysis of surface morphologies of the LiF thin films. ► Complexity and roughness of the LiF thin films increases as thickness increases. ► LiF thin films are multifractal in nature. ► Strength of the multifractality increases with thickness of the film. - Abstract: Fractal and multifractal analysis is performed on the atomic force microscopy (AFM) images of the surface morphologies of the LiF thin films of thickness 10 nm, 20 nm, and 40 nm, respectively. Autocorrelation function, height–height correlation function, and two-dimensional multifractal detrended fluctuation analysis (MFDFA) are used for characterizing the surface. It is found that the interface width, average roughness, lateral correlation length, and fractal dimension of the LiF thin film increase with the thickness of the film, whereas the roughness exponent decreases with thickness. Thus, the complexity and roughness of the LiF thin films increases as thickness increases. It is also demonstrated that the LiF thin films are multifractal in nature. Strength of the multifractality increases with thickness of the film.
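    The height–height correlation analysis used above can be illustrated for a 1D profile (a real analysis would run on the 2D AFM height maps; the function name and the sanity-check profile below are hypothetical):

    ```python
    import numpy as np

    def height_height(h, rs):
        """H(r) = <(h(x+r) - h(x))^2>, averaged over x, for a 1D profile.
        For a self-affine surface H(r) ~ r^(2*alpha) at small r, where
        alpha is the roughness exponent discussed in the abstract."""
        return np.array([np.mean((h[r:] - h[:-r]) ** 2) for r in rs])

    # Sanity check on a profile with known scaling: h(x) = x gives
    # H(r) = r^2, i.e. roughness exponent alpha = 1.
    x = np.linspace(0.0, 1.0, 1001)
    H = height_height(x, [1, 2, 4, 8])
    alpha = 0.5 * np.polyfit(np.log([1, 2, 4, 8]), np.log(H), 1)[0]
    print(round(alpha, 2))  # -> 1.0
    ```

    On measured films, the crossover of H(r) to a plateau gives the lateral correlation length, and a decreasing fitted alpha with thickness matches the trend reported in the abstract.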

  13. Morphometric relations of fractal-skeletal based channel network model

    Directory of Open Access Journals (Sweden)

    B. S. Daya Sagar

    1998-01-01

    Full Text Available A fractal-skeletal based channel network (F-SCN) model is proposed. Four regular-sided initiator basins are transformed into second-order fractal basins by following a specific generating mechanism with a non-random rule. The morphological skeletons, hereafter referred to as channel networks, are extracted from these fractal basins. The morphometric and fractal relationships of these F-SCNs are shown. The fractal dimensions of these fractal basins, channel networks, and main channel lengths (computed through the box-counting method) are compared with those of estimated length–area measures. Certain morphometric order ratios showing fractal relations are also highlighted.

  14. Fractal Analysis of Rock Joint Profiles

    Science.gov (United States)

    Audy, Ondřej; Ficker, Tomáš

    2017-10-01

    Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
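    The plain box-counting procedure that such modifications start from can be sketched as follows. The normalization to the unit square and the scale choices are illustrative assumptions; for strongly self-affine profiles this unmodified version is precisely the kind of estimator the paper argues must be adapted:

    ```python
    import numpy as np

    def box_count_dimension(x, y, scales):
        """Plain box-counting estimate of the fractal dimension of a
        sampled curve. The curve is normalised to the unit square first;
        the paper's modified methods additionally handle the separate
        axis scalings of self-affine profiles, which this sketch omits."""
        x = (x - x.min()) / (np.ptp(x) or 1.0)
        y = (y - y.min()) / (np.ptp(y) or 1.0)
        counts = []
        for eps in scales:
            # distinct grid cells of side eps visited by the samples
            cells = set(zip((x / eps).astype(int), (y / eps).astype(int)))
            counts.append(len(cells))
        # slope of log N(eps) against log(1/eps) estimates the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
        return slope

    t = np.linspace(0.0, 1.0, 20000)
    line_dim = box_count_dimension(t, 0.5 * t, [1/8, 1/16, 1/32, 1/64])
    print(round(line_dim, 2))  # close to the true dimension 1 of a line
    ```

    Even on this trivially smooth curve the finite scale range biases the estimate slightly below 1, hinting at why naive box counting on rough, self-affine joint profiles can return anomalously low dimensions.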

  15. A random walk through fractal dimensions

    CERN Document Server

    Kaye, Brian H

    2008-01-01

    Fractal geometry is revolutionizing the descriptive mathematics of applied materials systems. Rather than presenting a mathematical treatise, Brian Kaye demonstrates the power of fractal geometry in describing materials ranging from Swiss cheese to pyrolytic graphite. Written from a practical point of view, the author assiduously avoids the use of equations while introducing the reader to numerous interesting and challenging problems in subject areas ranging from geography to fine particle science. The second edition of this successful book provides up-to-date literature coverage of the use of fractal geometry in all areas of science. From reviews of the first edition: ''...no stone is left unturned in the quest for applications of fractal geometry to fine particle problems.... This book should provide hours of enjoyable reading to those wishing to become acquainted with the ideas of fractal geometry as applied to practical materials problems.'' MRS Bulletin

  16. Effects of fractal pore on coal devolatilization

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yongli; He, Rong [Tsinghua Univ., Beijing (China). Dept. of Thermal Engineering; Wang, Xiaoliang; Cao, Liyong [Dongfang Electric Corporation, Chengdu (China). Centre New Energy Inst.

    2013-07-01

    Coal devolatilization is investigated numerically using a drop tube furnace and a coal pyrolysis model (Fragmentation and Diffusion Model). The fractal characteristics of coal and char pores are investigated. Gas diffusion and secondary reactions in fractal pores are considered in the numerical simulations of coal devolatilization, and the results show that the fractal dimension first increases and later decreases as coal conversion increases during devolatilization. The mechanisms by which fractal pores affect coal devolatilization are analyzed.

  17. Fractal dimension and turbulence in Giant HII Regions

    International Nuclear Information System (INIS)

    Caicedo-Ortiz, H E; Santiago-Cortes, E; López-Bonilla, J; Castañeda, H O (ESFM, Instituto Politécnico Nacional, Edif. 9, 1er piso, CP 07738, México D.F (Mexico))

    2015-01-01

    We have measured the fractal dimensions of the Giant HII Regions Hubble X and Hubble V in NGC6822 using images obtained with the Hubble's Wide Field Planetary Camera 2 (WFPC2). These measures are associated with the turbulence observed in these regions, which is quantified through the velocity dispersion of emission lines in the visible. Our results suggest low turbulence behaviour

  18. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in a nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality in the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.
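    The quadtree decomposition underlying the QDM can be sketched as follows. This is only the splitting step, using an intensity-range threshold as an assumed splitting criterion; the paper's QDM additionally assigns a distortion value to each leaf, which this sketch does not reproduce:

    ```python
    import numpy as np

    def quadtree_leaves(img, thresh, x=0, y=0, size=None):
        """Recursively split a square (power-of-two) image into four
        quadrants until each block's intensity range falls below
        `thresh`; return the leaves as (x, y, size) tuples."""
        if size is None:
            size = img.shape[0]
        block = img[y:y + size, x:x + size]
        if size == 1 or block.max() - block.min() <= thresh:
            return [(x, y, size)]
        h = size // 2
        leaves = []
        for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
            leaves += quadtree_leaves(img, thresh, x + dx, y + dy, h)
        return leaves

    flat = np.zeros((8, 8))                  # uniform image: a single leaf
    print(len(quadtree_leaves(flat, 1.0)))   # -> 1
    busy = flat.copy()
    busy[0, 0] = 10.0                        # one detail forces splitting
    print(len(quadtree_leaves(busy, 1.0)))   # -> 10
    ```

    Small leaves mark regions carrying more information per pixel, which is exactly where a nonuniform coder should spend more bits.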

  19. A System for Compressive Spectral and Polarization Imaging at Short Wave Infrared (SWIR) Wavelengths

    Science.gov (United States)

    2017-10-18

    Recoverable fragments of the report: a budget line for a UV-VIS-IR 60 mm Apo Macro lens (Jenoptik Inc., $5,817.36); a cited paper, "VIS/NIR Compressive Spectral Imager," Proceedings of the IEEE International Conference on Image Processing (ICIP '15), Quebec City, Canada, September 2015; and a statement that the imaging system will lead to a wide-band VIS-NIR-SWIR compressive spectral and polarimetric capability.

  20. Closed contour fractal dimension estimation by the Fourier transform

    International Nuclear Information System (INIS)

    Florindo, J.B.; Bruno, O.M.

    2011-01-01

    Highlights: → A novel fractal dimension concept, based on the Fourier spectrum, is proposed. → Computationally simple; computation time is smaller than for conventional fractal methods. → Results are closer to the Hausdorff-Besicovitch dimension than those of conventional methods. → The method is more accurate and more robust to geometric operations and noise addition. - Abstract: This work proposes a novel technique for the numerical calculation of the fractal dimension of fractal objects that can be represented as a closed contour. The proposed method maps the fractal contour onto a complex signal and calculates its fractal dimension using the Fourier transform. The Fourier power spectrum is obtained and an exponential relation is verified between power and frequency. From the parameter (exponent) of this relation, the fractal dimension is obtained. The method is compared with other classical fractal dimension estimation methods in the literature, e.g., Bouligand-Minkowski, box-counting and classical Fourier. The comparison is carried out by calculating the fractal dimension of fractal contours whose dimensions are known analytically. The results show the high precision and robustness of the proposed technique.
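    The spectral-exponent fit at the core of such methods can be sketched on a synthetic 1D signal with a known power-law spectrum. The mapping D = (5 - β)/2 used below is the standard relation for self-affine profiles, taken as an illustrative assumption; the paper derives its own power-frequency relation for closed contours:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, beta = 4096, 2.0   # target spectral exponent: P(f) ~ f**(-beta)

    # Synthesize a 1D "contour signal" with a known power-law spectrum:
    # amplitudes f**(-beta/2), random phases, then back to the spatial domain.
    freqs = np.fft.rfftfreq(N, d=1.0)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (-beta / 2.0)
    phases = np.exp(2j * np.pi * rng.random(freqs.size))
    signal = np.fft.irfft(amps * phases, n=N)

    # Recover the exponent from the periodogram with a log-log fit
    # (DC and Nyquist bins are excluded from the fit).
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs > 0) & (freqs < 0.5)
    beta_hat = -np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)[0]

    # Standard self-affine-profile relation between spectral exponent
    # and fractal dimension (illustrative, not the paper's contour formula).
    D = (5.0 - beta_hat) / 2.0
    print(round(beta_hat, 2), round(D, 2))  # -> 2.0 1.5
    ```

    A single FFT plus a linear fit is why such spectral estimators run faster than box-counting-style methods, which must rasterize the contour at many scales.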