WorldWideScience

Sample records for fingerprint image compression

  1. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
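
    The quantization step described above can be illustrated with a short sketch. This is a minimal illustration of dead-zone uniform scalar quantization applied to one wavelet subband, not the FBI-specified encoder; the bin width Q and dead-zone factor Z are hypothetical parameters that a real codec would choose per subband.

    ```python
    import numpy as np

    def deadzone_quantize(coeffs, Q, Z=1.2):
        """Dead-zone uniform scalar quantization of one wavelet subband (illustrative)."""
        c = np.asarray(coeffs, dtype=float)
        half_dz = Z * Q / 2.0                      # half-width of the dead zone around zero
        idx = np.zeros(c.shape, dtype=int)
        pos, neg = c > half_dz, c < -half_dz
        idx[pos] = np.floor((c[pos] - half_dz) / Q).astype(int) + 1
        idx[neg] = -(np.floor((-c[neg] - half_dz) / Q).astype(int) + 1)
        return idx                                 # integer symbols handed to the entropy coder

    def deadzone_dequantize(idx, Q, Z=1.2):
        """Reconstruct each coefficient at the midpoint of its quantization bin."""
        idx = np.asarray(idx, dtype=float)
        rec = np.zeros_like(idx)
        nz = idx != 0
        rec[nz] = np.sign(idx[nz]) * (Z * Q / 2.0 + (np.abs(idx[nz]) - 0.5) * Q)
        return rec
    ```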

  2. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  3. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  4. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to some 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take about 3 hours. Hence compression is needed, and it must be as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that is better grounded in theory. We then describe some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
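
    The storage and transmission figures quoted in this abstract follow from simple arithmetic; the sketch below merely reproduces them, treating a card as roughly 10 MB and the 9600 baud link as 9600 bit/s (both taken from the abstract's round numbers).

    ```python
    # Back-of-the-envelope check of the figures quoted above (all values approximate).
    cards = 200_000_000              # fingerprint cards held by the FBI
    card_bytes = 10 * 1024 ** 2      # ~10 MB per digitized card (assumed here as 10 MiB)

    archive_tb = cards * card_bytes / 1024 ** 4
    print(f"uncompressed archive ~ {archive_tb:.0f} TB")          # ~1900 TB, i.e. ~2000 TB

    bits_per_second = 9600           # the quoted 9600 baud link, treated as 9600 bit/s
    hours = card_bytes * 8 / bits_per_second / 3600
    print(f"one card over the link ~ {hours:.1f} hours")          # ~2.4 h, ~3 h with overhead

    wsq_ratio = 20                   # WSQ archival-quality compression ratio
    print(f"after 20:1 WSQ ~ {card_bytes / wsq_ratio / 1024 ** 2:.1f} MB per card")
    ```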

  5. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. [Los Alamos National Lab., NM (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
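
    The symmetric boundary handling mentioned above can be sketched as follows: each finite-length row is symmetrically extended before filtering so that no artificial discontinuity appears at the image borders. This is a generic whole-sample symmetric extension with an assumed 5-tap filter, not the exact extension variant mandated by the standard.

    ```python
    import numpy as np

    def filter_with_symmetric_extension(row, taps):
        """Filter a finite-length signal after whole-sample symmetric extension."""
        pad = len(taps) // 2
        extended = np.pad(row, pad, mode="reflect")   # ... x2 x1 | x0 x1 x2 ... at each border
        filtered = np.convolve(extended, taps, mode="same")
        return filtered[pad:len(extended) - pad]      # keep only the original support

    # Toy example with an assumed symmetric 5-tap lowpass filter.
    row = np.array([10.0, 12.0, 15.0, 14.0, 9.0, 7.0])
    taps = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    print(filter_with_symmetric_extension(row, taps))
    ```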

  6. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. (Los Alamos National Lab., NM (United States)); Hopper, T. (Federal Bureau of Investigation, Washington, DC (United States))

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  7. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  8. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(log N log ℓ) and O(…)…
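
    For readers unfamiliar with Karp-Rabin fingerprints, the sketch below shows the underlying polynomial hash on a plain (uncompressed) string; the base and modulus are arbitrary choices. The paper's contribution, answering such queries directly on a grammar-compressed string, is not attempted here.

    ```python
    # Karp-Rabin fingerprints of a plain string, for illustration only.
    # phi(S[i..j]) = sum_t S[t] * B^(j-t) mod P; prefix fingerprints give O(1) substring queries.
    P = (1 << 61) - 1   # large prime modulus (assumed choice)
    B = 256             # base (assumed choice)

    def prefix_fingerprints(s: str):
        pref, powers = [0], [1]
        for ch in s:
            pref.append((pref[-1] * B + ord(ch)) % P)
            powers.append((powers[-1] * B) % P)
        return pref, powers

    def substring_fingerprint(pref, powers, i, j):
        """Fingerprint of S[i..j] (inclusive, 0-indexed)."""
        return (pref[j + 1] - pref[i] * powers[j - i + 1]) % P

    s = "grammar compressed strings"
    pref, powers = prefix_fingerprints(s)
    assert substring_fingerprint(pref, powers, 0, 6) == prefix_fingerprints("grammar")[0][-1]
    ```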

  9. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time…

  10. Low-Speed Fingerprint Image Capture System User's Guide, June 1, 1993

    Energy Technology Data Exchange (ETDEWEB)

    Whitus, B.R.; Goddard, J.S.; Jatko, W.B.; Manges, W.W.; Treece, D.A.

    1993-06-01

    The Low-Speed Fingerprint Image Capture System (LS-FICS) uses a Sun workstation controlling a Lenzar ElectroOptics Opacity 1000 imaging system to digitize fingerprint card images to support the Federal Bureau of Investigation's (FBI's) Automated Fingerprint Identification System (AFIS) program. The system also supports the operations performed by the Oak Ridge National Laboratory- (ORNL-) developed Image Transmission Network (ITN) prototype card scanning system. The input to the system is a single FBI fingerprint card of the agreed-upon standard format and a user-specified identification number. The output is a file formatted to be compatible with the National Institute of Standards and Technology (NIST) draft standard for fingerprint data exchange dated June 10, 1992. These NIST-compatible files contain the required print and text images. The LS-FICS is designed to provide the FBI with the capability of scanning fingerprint cards into a digital format. The FBI will replicate the system to generate a database of test images. The Host Workstation contains the image data paths and the compression algorithm. A local area network interface, disk storage, and tape drive are used for image storage and retrieval, and the Lenzar Opacity 1000 scanner is used to acquire the image. The scanner is capable of resolving 500 pixels/in. in both x and y directions. The print images are maintained in full 8-bit gray scale and compressed with an FBI-approved wavelet-based compression algorithm. The text fields are downsampled to 250 pixels/in. and 2-bit gray scale. The text images are then compressed using a lossless Huffman coding scheme. The text fields retrieved from the output files are easily interpreted when displayed on the screen. Detailed procedures are provided for system calibration and operation. Software tools are provided to verify proper system operation.

  11. BETTER FINGERPRINT IMAGE COMPRESSION AT LOWER BIT-RATES: AN APPROACH USING MULTIWAVELETS WITH OPTIMISED PREFILTER COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    N R Rema

    2017-08-01

    In this paper, a multiwavelet based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimised prefilter coefficients is proposed. While wavelet based progressive compression techniques give a blurred image at lower bit rates due to lack of high frequency information, multiwavelets can be used efficiently to represent high frequency information. The SA4 (Symmetric Antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to 1/4th compared to SPIHT with a wavelet. This reduction in nodes leads to an improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients. In this work a genetic algorithm (GA) is used for optimizing the prefilter coefficients. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates compared to existing techniques in the literature. An overall average improvement of 4.23 dB and 2.52 dB for bit rates between 0.01 and 1 has been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3, respectively. The quality of the reconstructed image is better even at higher compression ratios like 80:1 and 100:1. The level of decomposition required for a multiwavelet is lower than that required for a wavelet.
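
    The PSNR figures reported above are the usual peak signal-to-noise ratio for 8-bit images; a minimal computation (assuming two 8-bit grayscale arrays of equal size) looks like this:

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized 8-bit grayscale images."""
        err = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
        mse = np.mean(err ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```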

  12. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    Science.gov (United States)

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables the implementation of any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
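
    A schematic of the phase-unwinding idea described above (a sketch under assumed array layouts, not the authors' implementation): each reconstructed time frame is multiplied by the conjugate unit phase of the transceiver map, after which the time frames can be compressed with an SVD.

    ```python
    import numpy as np

    def unwind_phase(frames, transceiver_phase):
        """Remove the spatial transceiver phase from every time frame.

        frames            : complex array, shape (T, Nx, Ny) -- reconstructed time frames (assumed layout)
        transceiver_phase : real array, shape (Nx, Ny) -- phase extracted from the combined images
        """
        return frames * np.exp(-1j * transceiver_phase)[None, ...]

    def compress_time_frames(frames, rank):
        """SVD-based compression along the time dimension after phase unwinding."""
        T = frames.shape[0]
        X = frames.reshape(T, -1)                     # time x space matrix
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        return U[:, :rank].conj().T @ X               # `rank` compressed time frames
    ```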

  13. Distortion Estimation in Compressed Music Using Only Audio Fingerprints

    NARCIS (Netherlands)

    Doets, P.J.O.; Lagendijk, R.L.

    2008-01-01

    An audio fingerprint is a compact yet very robust representation of the perceptually relevant parts of an audio signal. It can be used for content-based audio identification, even when the audio is severely distorted. Audio compression changes the fingerprint slightly. We show that these small

  14. Gabor filter based fingerprint image enhancement

    Science.gov (United States)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition technology has become the most reliable biometric technology due to its uniqueness and invariance, which make it the most convenient and reliable technique for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need for modern information security. Meanwhile, the fingerprint preprocessing algorithm plays an important part in an Automatic Fingerprint Identification System. This article introduces the general steps in fingerprint recognition technology, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, fingerprint image enhancement affects the accuracy of the system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with Matlab 6.5 as the development tool. The result shows that the Gabor filter is effective in fingerprint image enhancement technology.
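
    A minimal sketch of the even-symmetric Gabor kernel this kind of enhancement is built on, tuned to a local ridge orientation and frequency; the parameter values are illustrative rather than those used in the paper.

    ```python
    import numpy as np

    def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=21):
        """Even-symmetric Gabor kernel for ridge orientation `theta` (radians)
        and ridge frequency `freq` (cycles per pixel)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        # Rotate coordinates so the cosine oscillates across the ridges.
        x_t = x * np.cos(theta) + y * np.sin(theta)
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-0.5 * ((x_t / sigma_x) ** 2 + (y_t / sigma_y) ** 2))
        return envelope * np.cos(2.0 * np.pi * freq * x_t)

    # Enhancement would convolve each image block with the kernel matching its
    # local orientation and frequency estimate (e.g. via scipy.ndimage.convolve).
    ```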

  15. Image Processing and Features Extraction of Fingerprint Images ...

    African Journals Online (AJOL)

    To demonstrate the importance of the image processing of fingerprint images prior to image enrolment or comparison, the set of fingerprint images in databases (a) and (b) of the FVC (Fingerprint Verification Competition) 2000 database were analyzed using a features extraction algorithm. This paper presents the results of ...

  16. DSP accelerator for the wavelet compression/decompression of high- resolution images

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  17. Tools for quality control of fingerprint databases

    Science.gov (United States)

    Swann, B. Scott; Libert, John M.; Lepley, Margaret A.

    2010-04-01

    Integrity of fingerprint data is essential to biometric and forensic applications. Accordingly, the FBI's Criminal Justice Information Services (CJIS) Division has sponsored development of software tools to facilitate quality control functions relative to maintaining its fingerprint data assets inherent to the Integrated Automated Fingerprint Identification System (IAFIS) and Next Generation Identification (NGI). This paper provides an introduction to two such tools. The first FBI-sponsored tool was developed by the National Institute of Standards and Technology (NIST) and examines and detects the spectral signature of the ridge-flow structure characteristic of friction ridge skin. The Spectral Image Validation/Verification (SIVV) utility differentiates fingerprints from non-fingerprints, including blank frames or segmentation failures erroneously included in data; provides a "first look" at image quality; and can identify anomalies in sample rates of scanned images. The SIVV utility might detect errors in individual 10-print fingerprints inaccurately segmented from the flat, multi-finger image acquired by one of the automated collection systems increasing in availability and usage. In such cases, the lost fingerprint can be recovered by re-segmentation from the now compressed multi-finger image record. The second FBI-sponsored tool, CropCoeff, was developed by MITRE and thoroughly tested by NIST. CropCoeff enables cropping of the replacement single print directly from the compressed data file, thus avoiding decompression and recompression of images that might degrade fingerprint features necessary for matching.

  18. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Science.gov (United States)

    Chen, Chi-Jim; Pai, Tun-Wen; Cheng, Mox

    2015-01-01

    A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, the reconstructed fingerprint image might appear truncated and distorted when the finger was swept across the sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates. PMID:25835186
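
    As a rough sketch of the final classification stage, a support vector machine can be trained on feature vectors computed from candidate images; the features, kernel and data below are placeholders, not the authors' actual setup.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Placeholder data: X holds hypothetical feature vectors for candidate images,
    # y marks 1 = truly truncated fingerprint, 0 = pseudo-truncated.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # margin-based, structural-risk-minimizing classifier
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    ```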

  19. A Support Vector Machine Approach for Truncated Fingerprint Image Detection from Sweeping Fingerprint Sensors

    Directory of Open Access Journals (Sweden)

    Chi-Jim Chen

    2015-03-01

    A sweeping fingerprint sensor converts fingerprints on a row-by-row basis through image reconstruction techniques. However, the reconstructed fingerprint image might appear truncated and distorted when the finger was swept across the sensor at a non-linear speed. If truncated fingerprint images were enrolled as reference targets and collected by an automated fingerprint identification system (AFIS), successful prediction rates for fingerprint matching applications would decrease significantly. In this paper, a novel and effective methodology with low computational time complexity was developed for detecting truncated fingerprints in real time. Several filtering rules were implemented to validate the existence of truncated fingerprints. In addition, a machine learning method, the support vector machine (SVM), based on the principle of structural risk minimization, was applied to reject pseudo-truncated fingerprints containing characteristics similar to truncated ones. The experimental results show that an accuracy rate of 90.7% was achieved by successfully identifying truncated fingerprint images from testing images before AFIS enrollment procedures. The proposed effective and efficient methodology can be extensively applied to all existing fingerprint matching systems as a preliminary quality control prior to construction of fingerprint templates.

  20. SVD compression for magnetic resonance fingerprinting in the time domain.

    Science.gov (United States)

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method is desired to obtain the quantitative images. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
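
    A compact sketch of the SVD compression step described above, assuming a dictionary matrix with one simulated signal evolution per row; the retained rank is an illustrative parameter, not the value used in the paper.

    ```python
    import numpy as np

    def compress_dictionary(D, rank):
        """Project an MR fingerprinting dictionary onto its top right singular vectors.

        D    : array, shape (n_entries, n_timepoints) -- simulated signal evolutions (assumed layout)
        rank : number of singular vectors (compressed time points) to keep
        """
        U, s, Vh = np.linalg.svd(D, full_matrices=False)
        Vr = Vh[:rank]                      # compressed time basis
        return D @ Vr.conj().T, Vr          # compressed dictionary and projection matrix

    def match(compressed_dict, Vr, signal):
        """Pattern recognition by inner products in the compressed time domain."""
        proj = Vr @ signal                  # project the measured signal evolution
        scores = np.abs(compressed_dict.conj() @ proj)
        return int(np.argmax(scores))       # index of the best-matching dictionary entry
    ```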

  1. Fingerprinting with Wow

    Science.gov (United States)

    Yu, Eugene; Craver, Scott

    2006-02-01

    Wow, or time warping caused by speed fluctuations in analog audio equipment, provides a wealth of applications in watermarking. Very subtle temporal distortion has been used to defeat watermarks and as a component of watermarking systems. In the image domain, the analogous warping of an image's canvas has been used to defeat watermarks and has also been proposed to prevent collusion attacks on fingerprinting systems. In this paper, we explore how subliminal levels of wow can be used for steganography and fingerprinting. We present both a low-bitrate robust solution and a higher-bitrate solution intended for steganographic communication. As already observed, such a fingerprinting algorithm naturally discourages collusion by averaging, owing to flanging effects when misaligned audio is averaged. Another advantage of warping is that even when imperceptible, it can be beyond the reach of compression algorithms. We use this opportunity to debunk the common misconception that steganography is impossible under "perfect compression."

  2. Enhancing security of fingerprints through contextual biometric watermarking.

    Science.gov (United States)

    Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M

    2007-07-04

    This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
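
    A rough illustration of embedding a watermark into wavelet-domain coefficients, using the PyWavelets package; the subband choice, embedding strength and lack of texture-region selection here are simplifying assumptions, not the authors' scheme.

    ```python
    import numpy as np
    import pywt

    def embed_watermark(fingerprint, watermark, alpha=0.02, wavelet="haar"):
        """Additively embed a watermark into the diagonal detail subband of a DWT."""
        LL, (LH, HL, HH) = pywt.dwt2(fingerprint.astype(float), wavelet)
        wm = np.resize(watermark.astype(float), HH.shape)        # fit the mark to the subband
        wm = wm / max(np.abs(wm).max(), 1.0)                     # normalize to [-1, 1]
        HH_marked = HH + alpha * np.abs(HH).max() * wm
        return pywt.idwt2((LL, (LH, HL, HH_marked)), wavelet)
    ```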

  3. A Compressed Sensing Framework for Magnetic Resonance Fingerprinting

    OpenAIRE

    Davies, Mike; Puy, Gilles; Vandergheynst, Pierre; Wiaux, Yves

    2013-01-01

    Inspired by the recently proposed Magnetic Resonance Fingerprinting (MRF) technique, we develop a principled compressed sensing framework for quantitative MRI. The three key components are: a random pulse excitation sequence following the MRF technique; a random EPI subsampling strategy and an iterative projection algorithm that imposes consistency with the Bloch equations. We show that theoretically, as long as the excitation sequence possesses an appropriate form of persistent excitation, w...

  4. Efficient Filtering of Noisy Fingerprint Images

    Directory of Open Access Journals (Sweden)

    Maria Liliana Costin

    2016-01-01

    Fingerprint identification is an important field in the wide domain of biometrics with many applications in different areas such as: the judicial system, mobile phones, access systems and airports. There are many elaborate algorithms for fingerprint identification, but none of them can guarantee that the results of identification are always 100% accurate. A first step in a fingerprint image analysis process consists of pre-processing or filtering. If the result of this step is not of good quality, the subsequent identification process can fail. A major difficulty arises in fingerprint identification if the images to be identified against a fingerprint image database are corrupted by different types of noise. The objectives of the paper are: the successful filtering of noisy digital images, a novel and more robust algorithm for identifying the best filtering algorithm, and the classification and ranking of the images. The choice of the best filtered images from a set of 9 algorithms is made with a dual method combining a fuzzy and an aggregation model. We propose in this paper a set of 9 novel filters designed for processing digital images using the following methods: quartiles, medians, averages, thresholds and histogram equalization, applied over the whole image or locally on small areas. Finally, the statistics reveal the classification and ranking of the best algorithms.

  5. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    This paper presents a novel approach to fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used in order to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones, from the available binary image candidates, according to a MAP selection rule. Two public domain collections of fingerprint images are used in order to objectively assess the performance of the proposed fingerprint image enhancement approach.

  6. Three-dimensional imaging of artificial fingerprint by optical coherence tomography

    Science.gov (United States)

    Larin, Kirill V.; Cheng, Yezeng

    2008-03-01

    Fingerprint recognition is one of the most popular methods of biometrics. However, due to surface topography limitations, fingerprint recognition scanners are easily spoofed, e.g. using artificial fingerprint dummies. Thus, biometric fingerprint identification devices need to be more accurate and secure to deal with different fraudulent methods, including dummy fingerprints. Previously, we demonstrated that Optical Coherence Tomography (OCT) images revealed the presence of artificial fingerprints (made from different household materials, such as cement and liquid silicone rubber) at all times, while the artificial fingerprints easily spoofed the commercial fingerprint reader. We also demonstrated that an analysis of the autocorrelation of the OCT images could be used in automatic recognition systems. Here, we exploited three-dimensional (3D) OCT imaging of the artificial fingerprint to generate vivid 3D images of both the artificial fingerprint layer and the real fingerprint layer beneath. The reconstructed 3D image not only indicates whether an artificial material intended to spoof the scanner is present above the real finger, but also provides the hacker's fingerprint. The results of these studies suggest that Optical Coherence Tomography could be a powerful real-time noninvasive method for accurately distinguishing artificial fingerprints from real fingerprints.

  7. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
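
    The NMSE figure of merit used above can be written down compactly; the sketch below normalizes by the energy of the original image, which is one common convention and may differ in detail from the dissertation's definition.

    ```python
    import numpy as np

    def nmse(original, reconstructed):
        """Normalized mean-square error of the difference image."""
        orig = np.asarray(original, dtype=float)
        diff = orig - np.asarray(reconstructed, dtype=float)
        return float(np.sum(diff ** 2) / np.sum(orig ** 2))
    ```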

  8. Missing data reconstruction using Gaussian mixture models for fingerprint images

    Science.gov (United States)

    Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary

    2016-05-01

    One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The proposed fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.

  9. Data Compression of Fingerprint Minutiae

    OpenAIRE

    VISHAL SHRIVASTAVA; SUMIT SHARMA

    2012-01-01

    Biometric techniques have several advantages over conventional personal identification techniques. Among the various commercially available biometric techniques such as face, fingerprint and iris, fingerprint-based techniques are the most widely accepted recognition systems. Fingerprints are traces or impressions of patterns created by the friction ridges of the skin on the fingers and thumbs. Steganography, usually used in smart cards, is a safe technique for authenticating a person. In steganography, biometric ch...

  10. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which can achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  11. Partial Fingerprint Image Enhancement using Region Division Technique and Morphological Transform

    International Nuclear Information System (INIS)

    Ahmad, A.; Arshad, I.; Raja, G.

    2015-01-01

    Fingerprints are the most renowned biometric trait for identification and verification. The quality of the fingerprint image plays a vital role in feature extraction and matching. Existing algorithms work well for good-quality fingerprint images but fail for partial fingerprint images, such as those obtained from excessively dry fingers or fingers affected by disease, which result in broken ridges. We propose an algorithm to enhance partial fingerprint images using morphological operations with a region division technique. The proposed method divides the low-quality image into six regions from top to bottom. The morphological operations choose an appropriate structuring element (SE) that joins broken ridges and thus enhances the image for further processing. The proposed method uses a line SE with a suitable angle theta and radius r in each region, based on the orientation of the ridges. The algorithm is applied to 14 low-quality fingerprint images from the FVC-2002 database. Experimental results show that the percentage accuracy has been improved using the proposed algorithm. The manual markup has been reduced and an accuracy of 76.16% with an Equal Error Rate (EER) of 3.16% is achieved. (author)

  12. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on the obtained 2D features of the fingerprint. However, a fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on a fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Some experiments are carried out by acquiring several 3D fingerprint datasets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

  13. Fingerprint image enhancement by differential hysteresis processing.

    Science.gov (United States)

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without any care, and are blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved to be satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.

  14. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and also reduce the time of fingerprint preprocessing, which is of great significance for improving the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, an existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is further improved by using the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  15. Straightforward fabrication of black nano silica dusting powder for latent fingerprint imaging

    Science.gov (United States)

    Komalasari, Isna; Krismastuti, Fransiska Sri Herwahyu; Elishian, Christine; Handayani, Eka Mardika; Nugraha, Willy Cahya; Ketrin, Rosi

    2017-11-01

    Imaging of latent fingerprint patterns (also known as fingermarks) is one of the most important and accurate detection methods in forensic investigation because of the individual character of each fingerprint. This detection technique relies on the mechanical adherence of fingerprint powder to the moisture and oily components of the skin left on a surface. The particle size of the fingerprint powder is one of the critical parameters for obtaining an excellent fingerprint image. This study develops a simple, cheap and straightforward method to fabricate a nano-sized black dusting fingerprint powder based on nano silica and applies the powder to visualize latent fingerprints. The nanostructured silica was prepared from tetraethoxysilane (TEOS) and then modified with nano carbon, methylene blue and sodium acetate to color the powder. Finally, as a proof of principle, the ability of this black nano silica dusting powder to image latent fingerprints is successfully demonstrated, and the results show that this fingerprint powder provides a clearer fingerprint pattern compared to the commercial one, highlighting the potential application of the nanostructured silica in forensic science.

  16. Accessible biometrics: A frustrated total internal reflection approach to imaging fingerprints.

    Science.gov (United States)

    Smith, Nathan D; Sharp, James S

    2017-05-01

    Fingerprints are widely used as a means of identifying persons of interest because of the highly individual nature of the spatial distribution and types of features (or minutiae) found on the surface of a finger. This individuality has led to their wide application in the comparison of fingerprints found at crime scenes with those taken from known offenders and suspects in custody. However, despite recent advances in machine vision technology and image processing techniques, fingerprint evidence is still widely being collected using outdated practices involving ink and paper - a process that can be both time consuming and expensive. Reduction of forensic service budgets increasingly requires that evidence be gathered and processed more rapidly and efficiently. However, many of the existing digital fingerprint acquisition devices have proven too expensive to roll out on a large scale. As a result, new low-cost imaging technologies are required to increase the quality and throughput of the processing of fingerprint evidence. Here we describe an inexpensive approach to digital fingerprint acquisition that is based upon frustrated total internal reflection imaging. The quality and resolution of the images produced are shown to be as good as those currently acquired using ink and paper based methods. The same imaging technique is also shown to be capable of imaging powdered fingerprints that have been lifted from a crime scene using adhesive tape or gel lifters. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  17. Oriented diffusion filtering for enhancing low-quality fingerprint images

    KAUST Repository

    Gottschlich, C.; Schönlieb, C.-B.

    2012-01-01

    To enhance low-quality fingerprint images, we present a novel method that first estimates the local orientation of the fingerprint ridge and valley flow and next performs oriented diffusion filtering, followed by a locally adaptive contrast enhancement step. By applying the authors' new approach to low-quality images of the FVC2004 fingerprint databases, the authors are able to show its competitiveness with other state-of-the-art enhancement methods for fingerprints like curved Gabor filtering. A major advantage of oriented diffusion filtering over those is its computational efficiency. Combining oriented diffusion filtering with curved Gabor filters led to additional improvements and, to the best of the authors' knowledge, the lowest equal error rates achieved so far using MINDTCT and BOZORTH3 on the FVC2004 databases. The recognition performance and the computational efficiency of the method suggest to include oriented diffusion filtering as a standard image enhancement add-on module for real-time fingerprint recognition systems. In order to facilitate the reproduction of these results, an implementation of the oriented diffusion filtering for Matlab and GNU Octave is made available for download. © 2012 The Institution of Engineering and Technology.

  18. Oriented diffusion filtering for enhancing low-quality fingerprint images

    KAUST Repository

    Gottschlich, C.

    2012-01-01

    To enhance low-quality fingerprint images, we present a novel method that first estimates the local orientation of the fingerprint ridge and valley flow and next performs oriented diffusion filtering, followed by a locally adaptive contrast enhancement step. By applying the authors' new approach to low-quality images of the FVC2004 fingerprint databases, the authors are able to show its competitiveness with other state-of-the-art enhancement methods for fingerprints like curved Gabor filtering. A major advantage of oriented diffusion filtering over those is its computational efficiency. Combining oriented diffusion filtering with curved Gabor filters led to additional improvements and, to the best of the authors' knowledge, the lowest equal error rates achieved so far using MINDTCT and BOZORTH3 on the FVC2004 databases. The recognition performance and the computational efficiency of the method suggest to include oriented diffusion filtering as a standard image enhancement add-on module for real-time fingerprint recognition systems. In order to facilitate the reproduction of these results, an implementation of the oriented diffusion filtering for Matlab and GNU Octave is made available for download. © 2012 The Institution of Engineering and Technology.

  19. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, the discrete cosine transform), JPEG 2000 (DWT, the discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter for a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
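
    A simplified sketch of the three-step idea for a single codec: fit a regression of measured IQ against the codec's quality parameter, then invert it to pick the setting that meets a target IQ. The (parameter, SSIM) pairs below are placeholders; in practice they would come from the compression runs of step one.

    ```python
    import numpy as np

    # Step 1 (assumed already done): measured SSIM at several quality settings of one codec.
    quality = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)   # placeholder data
    ssim    = np.array([0.55, 0.63, 0.70, 0.76, 0.81, 0.85, 0.89, 0.93, 0.97])

    # Step 2: regression model of IQ versus the compression parameter.
    model = np.poly1d(np.polyfit(quality, ssim, deg=2))

    # Step 3: pick the lowest quality setting predicted to reach the target IQ.
    target_ssim = 0.80
    candidates = np.arange(10, 91)
    chosen = candidates[np.argmax(model(candidates) >= target_ssim)]
    print(f"use quality setting ~{chosen} to reach SSIM >= {target_ssim}")
    ```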

  20. Image-based fingerprint verification system using LabVIEW

    Directory of Open Access Journals (Sweden)

    Sunil K. Singla

    2008-09-01

    Biometric-based identification/verification systems provide a solution to the security concerns of the modern world, where machines are replacing humans in every aspect of life. Fingerprints, because of their uniqueness, are the most widely used and highly accepted biometric. Fingerprint biometric systems are either minutiae-based or pattern learning (image) based. Minutiae-based algorithms depend upon the local discontinuities in the ridge flow pattern and are used when template size is important, while image-based matching algorithms use both the micro and macro features of a fingerprint and are used when a fast response is required. In the present paper an image-based fingerprint verification system is discussed. The proposed method uses a learning phase, which is not present in conventional image-based systems. The learning phase uses pseudo-random sub-sampling, which reduces the number of comparisons needed in the matching stage. The system has been developed using the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) toolbox version 6i. The availability of datalog files in LabVIEW makes it one of the most promising candidates for use as a database. Datalog files can access and manipulate data and complex data structures quickly and easily, making writing and reading much faster. After extensive experimentation involving a large number of samples and different learning sizes, high accuracy has been achieved with a learning image size of 100 x 100 and a threshold value of 700 (1000 being a perfect match).

  1. Algebra for applications cryptography, secret sharing, error-correcting, fingerprinting, compression

    CERN Document Server

    Slinko, Arkadii

    2015-01-01

    This book examines the relationship between mathematics and data in the modern world. Indeed, modern societies are awash with data which must be manipulated in many different ways: encrypted, compressed, shared between users in a prescribed manner, protected from unauthorised access and transmitted over unreliable channels. All of these operations can be understood only by a person with knowledge of basics in algebra and number theory. This book provides the necessary background in arithmetic, polynomials, groups, fields and elliptic curves that is sufficient to understand such real-life applications as cryptography, secret sharing, error-correcting codes, fingerprinting and compression of information. It is the first book to cover many recent developments in these topics. Based on a lecture course given to third-year undergraduates, it is self-contained, with numerous worked examples and exercises provided to test understanding. It can additionally be used for self-study.

  2. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  3. Self-Organizing Maps for Fingerprint Image Quality Assessment

    DEFF Research Database (Denmark)

    Olsen, Martin Aastrup; Tabassi, Elham; Makarov, Anton

    2013-01-01

    Fingerprint quality assessment is a crucial task which needs to be conducted accurately in various phases of the biometric enrolment and recognition processes. Neglecting quality measurement will adversely impact the accuracy and efficiency of biometric recognition systems (e.g. verification and identification)… machine learning techniques. We train a self-organizing map (SOM) to cluster blocks of fingerprint images based on their spatial information content. The output of the SOM is a high-level representation of the finger image, which forms the input to a Random Forest trained to learn the relationship between…

  4. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized using a Vider Sierra Plus digitizer. The digitized images are compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  5. Fingerprint matching algorithm for poor quality images

    Directory of Open Access Journals (Sweden)

    Vedpal Singh

    2015-04-01

    Full Text Available The main aim of this study is to establish an efficient platform for fingerprint matching for low-quality images. Generally, fingerprint matching approaches use the minutiae points for authentication. However, this is not such a reliable authentication method for low-quality images. To overcome this problem, the current study proposes a fingerprint matching methodology based on normalised cross-correlation, which would improve the performance, reduce miscalculations during authentication and decrease the computational complexity. The error rate of the proposed method is 5.4%, which is less than the two-dimensional (2D) dynamic programming (DP) error rate of 5.6%, while Lee's method produces 5.9% and the combined method has a 6.1% error rate. The genuine accept rate at a 1% false accept rate is 89.3%, but at the 0.1% value it is 96.7%, which is higher. The outcome of this study suggests that the proposed methodology has a low error rate with minimum computational effort as compared with existing methods such as Lee's method, 2D DP and the combined method.
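
    For illustration only, the following is a minimal sketch of zero-mean normalized cross-correlation used as a matching score between two aligned, same-sized fingerprint images; it shows the correlation idea, not the study's full matching pipeline or its alignment step.

      # Minimal sketch: zero-mean normalized cross-correlation (NCC) as a matching
      # score between two aligned images. Not the study's full pipeline.
      import numpy as np

      def ncc_score(a, b, eps=1e-9):
          a = a.astype(float) - a.mean()
          b = b.astype(float) - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
          return float((a * b).sum() / denom)     # in [-1, 1]; 1 = identical up to gain

      rng = np.random.default_rng(2)
      probe = rng.random((128, 128))
      noisy_copy = probe + 0.1 * rng.random((128, 128))
      different = rng.random((128, 128))

      print("genuine  :", ncc_score(probe, noisy_copy))
      print("impostor :", ncc_score(probe, different))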

  6. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image

  7. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
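
    As a concrete illustration of the sparsity-exploiting reconstruction idea discussed above, here is a toy sketch of iterative soft thresholding (ISTA) on a synthetic underdetermined system; it is not tied to any particular CT or MRI acquisition model, and the regularization weight and step size are illustrative choices.

      # Toy sparse-recovery sketch: ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
      # Illustrates sparsity-exploiting reconstruction only; real CT/MRI forward
      # models are far more involved.
      import numpy as np

      rng = np.random.default_rng(3)
      n, m, k = 200, 80, 8                       # signal length, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      y = A @ x_true

      lam = 0.05
      step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = squared largest singular value
      x = np.zeros(n)
      for _ in range(500):
          grad = A.T @ (A @ x - y)
          z = x - step * grad
          x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

      print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))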

  8. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven to be difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving the optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  9. Polarization-based and specular-reflection-based noncontact latent fingerprint imaging and lifting

    Science.gov (United States)

    Lin, Shih-Schön; Yemelyanov, Konstantin M.; Pugh, Edward N., Jr.; Engheta, Nader

    2006-09-01

    In forensic science the finger marks left unintentionally by people at a crime scene are referred to as latent fingerprints. Most existing techniques to detect and lift latent fingerprints require application of a certain material directly onto the exhibit. The chemical and physical processing applied to the fingerprint potentially degrades or prevents further forensic testing on the same evidence sample. Many existing methods also have deleterious side effects. We introduce a method to detect and extract latent fingerprint images without applying any powder or chemicals on the object. Our method is based on the optical phenomena of polarization and specular reflection together with the physiology of fingerprint formation. The recovered image quality is comparable to existing methods. In some cases, such as the sticky side of tape, our method shows unique advantages.

  10. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  11. Contrast enhancement of fingerprint images using intuitionistic type II fuzzy set

    Directory of Open Access Journals (Sweden)

    Devarasan Ezhilmaran

    2015-04-01

    Full Text Available A novel contrast enhancement of fingerprint images using intuitionistic type II fuzzy set theory is recommended in this work. The method uses the Hamacher T co-norm (S-norm), which generates a new membership function with the help of the upper and lower membership functions of the type II fuzzy set. Fingerprint identification is one of the very few techniques employed in forensic science to aid criminal investigations in daily life, and it also provides access control in financial security, visa-related services and other applications. Mostly, fingerprint images are poorly illuminated and hardly visible, so it is necessary to enhance the input images. The enhancement is useful for authentication and matching. Fingerprint enhancement is vital for identifying and authenticating people by matching their fingerprints with the ones stored in the database. The results of the proposed intuitionistic type II fuzzy set enhancement showed that it is more effective and, especially, very useful for forensic science operations. The experimental results were compared with non-fuzzy, fuzzy, intuitionistic fuzzy and type II fuzzy methods, in which the proposed method offered better results with good quality, less noise and low blur.

  12. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  13. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images, (ii) semi-subjectively, by assessing the visibility of blood vessels, and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.

  14. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easy to access. Thus, a suitable data filing system and precise data compression are needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter settings. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using the CCD-sensor film reader of the MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and the compressed images were then reconstructed. Each reconstructed image was compared with the non-compressed image with respect to several regions of interest by four radiologists. Compression and decompression of radiological images were performed promptly by employing the local filing system. Image quality was much more affected by the data compression ratio than by the parameter mode itself. In other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 on plain radiographs and about 8:1 on contrast studies. Image compression by this technique will be acceptable for diagnostic radiology. (author)

  15. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the compression factors achieved by JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  16. The application of infrared chemical imaging to the detection and enhancement of latent fingerprints: method optimization and further findings.

    Science.gov (United States)

    Tahtouh, Mark; Despland, Pauline; Shimmon, Ronald; Kalman, John R; Reedy, Brian J

    2007-09-01

    Fourier transform infrared (FTIR) chemical imaging allows the collection of fingerprint images from backgrounds that have traditionally posed problems for conventional fingerprint detection methods. In this work, the suitability of this technique for the imaging of fingerprints on a wider range of difficult surfaces (including polymer banknotes, various types of paper, and aluminum drink cans) has been tested. For each new surface, a systematic methodology was employed to optimize settings such as spectral resolution, number of scans, and pixel aggregation in order to reduce collection time and file-size without compromising spatial resolution and the quality of the final fingerprint image. The imaging of cyanoacrylate-fumed fingerprints on polymer banknotes has been improved, with shorter collection times for larger image areas. One-month-old fingerprints on polymer banknotes have been successfully fumed and imaged. It was also found that FTIR chemical imaging gives high quality images of cyanoacrylate-fumed fingerprints on aluminum drink cans, regardless of the printed background. Although visible and UV light sources do not yield fingerprint images of the same quality on difficult, nonporous backgrounds, in many cases they can be used to locate a fingerprint prior to higher quality imaging by the FTIR technique. Attempts to acquire FTIR images of fingerprints on paper-based porous surfaces that had been treated with established reagents such as ninhydrin were all unsuccessful due to the swamping effect of the cellulose constituents of the paper.

  17. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of radiologic images and to achieve a low bit rate in their digital representation without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  18. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications

  19. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so three composite-technique-based color image compression schemes are implemented to achieve images with high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the least values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  20. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm gives significantly better quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  1. Optical Methods in Fingerprint Imaging for Medical and Personality Applications.

    Science.gov (United States)

    Wang, Chia-Nan; Wang, Jing-Wein; Lin, Ming-Hsun; Chang, Yao-Lang; Kuo, Chia-Ming

    2017-10-23

    Over the years, analysis and induction of personality traits has been a topic for individual subjective conjecture or speculation, rather than a focus of inductive scientific analysis. This study proposes a novel framework for analysis and induction of personality traits. First, 14 personality constructs based on the "Big Five" personality factors were developed. Next, a new fingerprint image algorithm was used for classification, and the fingerprints were classified into eight types. The relationship between personality traits and fingerprint type was derived from the results of the questionnaire survey. After comparison of pre-test and post-test results, this study determined the induction ability of personality traits from fingerprint type. Experimental results showed that the left/right thumbprint type of a majority of subjects was left loop/right loop and that the personalities of individuals with this fingerprint type were moderate with no significant differences in the 14 personality constructs.

  2. Optical Methods in Fingerprint Imaging for Medical and Personality Applications

    Directory of Open Access Journals (Sweden)

    Chia-Nan Wang

    2017-10-01

    Full Text Available Over the years, analysis and induction of personality traits has been a topic for individual subjective conjecture or speculation, rather than a focus of inductive scientific analysis. This study proposes a novel framework for analysis and induction of personality traits. First, 14 personality constructs based on the “Big Five” personality factors were developed. Next, a new fingerprint image algorithm was used for classification, and the fingerprints were classified into eight types. The relationship between personality traits and fingerprint type was derived from the results of the questionnaire survey. After comparison of pre-test and post-test results, this study determined the induction ability of personality traits from fingerprint type. Experimental results showed that the left/right thumbprint type of a majority of subjects was left loop/right loop and that the personalities of individuals with this fingerprint type were moderate with no significant differences in the 14 personality constructs.

  3. Optical Methods in Fingerprint Imaging for Medical and Personality Applications

    Science.gov (United States)

    Wang, Jing-Wein; Lin, Ming-Hsun; Chang, Yao-Lang; Kuo, Chia-Ming

    2017-01-01

    Over the years, analysis and induction of personality traits has been a topic for individual subjective conjecture or speculation, rather than a focus of inductive scientific analysis. This study proposes a novel framework for analysis and induction of personality traits. First, 14 personality constructs based on the “Big Five” personality factors were developed. Next, a new fingerprint image algorithm was used for classification, and the fingerprints were classified into eight types. The relationship between personality traits and fingerprint type was derived from the results of the questionnaire survey. After comparison of pre-test and post-test results, this study determined the induction ability of personality traits from fingerprint type. Experimental results showed that the left/right thumbprint type of a majority of subjects was left loop/right loop and that the personalities of individuals with this fingerprint type were moderate with no significant differences in the 14 personality constructs. PMID:29065556

  4. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS); the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  5. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, based on existing experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS); the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.
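
    The two records above describe CSF-weighted wavelet coding only at a high level; the sketch below quantizes wavelet subbands with level-dependent weights standing in for a CSF model. The wavelet family, the level count and the weights are illustrative assumptions, not the paper's HVS model.

      # Hedged sketch: quantize wavelet subbands with level-dependent weights that
      # stand in for a contrast-sensitivity-function (CSF) model. The weights are
      # assumptions for illustration, not the paper's HVS model.
      import numpy as np
      import pywt

      def csf_weighted_compress(img, wavelet='db2', levels=3, base_q=8.0,
                                weights=(1.0, 2.0, 4.0)):       # coarse -> fine detail
          coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
          out = [coeffs[0]]                                      # keep the approximation
          for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
              q = base_q * weights[lvl]                          # finer detail, coarser step
              out.append(tuple(np.round(c / q) * q for c in (cH, cV, cD)))
          return pywt.waverec2(out, wavelet)

      rng = np.random.default_rng(4)
      img = rng.random((128, 128)) * 255
      rec = csf_weighted_compress(img)
      rec = rec[:img.shape[0], :img.shape[1]]                    # waverec2 may pad by one
      print("PSNR:", 10 * np.log10(255**2 / np.mean((img - rec) ** 2)))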

  6. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    Science.gov (United States)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    A new concept for a gyrator transform (GT) encryption scheme is proposed in this paper. We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in GT domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. In the future, the method of applying QR codes and fingerprints in GT domains possesses much potential for information security.

  7. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  8. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    Science.gov (United States)

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, precompression processing of such data sets was undertaken. Third, precompression processing was undertaken with the P. ginseng DNA (ITS2) sequence codes. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Such combined data can be compressed by Zlib, an open source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the abovementioned conversion process, it can be found that the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75% and the chemical fingerprint compression rate exceeds 99.65% via the filtration and digital merger compression algorithm processing. Therefore, the overall compression ratio even exceeds 99.36%. The capacity of the formed QR code is around 0.5k, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can form a QR code after data processing, and therefore the QR code can be a perfect carrier of the authenticity and quality of P. ginseng information. This study provides a theoretical
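
    The exact GATC2Bytes, inflection-filtering and digital-merger algorithms are not specified in the record, so the sketch below only illustrates the overall pipeline: pack each DNA base into 2 bits (an assumption about what a "GATC-to-bytes" step might do), append chemical-fingerprint data, compress with zlib, base64-encode so the payload is QR-safe text, and render a QR code with the third-party qrcode package. All sequence and peak values are stand-ins.

      # Hedged sketch of the pipeline: 2-bit base packing (assumed form of
      # "GATC2Bytes"), zlib compression, and QR-code generation. Not the paper's
      # exact algorithms or data.
      import base64
      import zlib
      import qrcode   # pip install qrcode[pil]

      CODE = {'G': 0, 'A': 1, 'T': 2, 'C': 3}

      def pack_bases(seq):
          """Pack 4 bases per byte (2 bits each)."""
          out = bytearray()
          for i in range(0, len(seq), 4):
              chunk, byte = seq[i:i+4], 0
              for j, base in enumerate(chunk):
                  byte |= CODE[base] << (2 * j)
              out.append(byte)
          return bytes(out)

      its2 = "GATCGATCCGTACGTAGCTAGCATCGATCG" * 8           # stand-in ITS2 sequence
      fingerprint = ",".join(f"{t:.2f}" for t in range(20))  # stand-in HPLC peak data

      payload = pack_bases(its2) + b"|" + fingerprint.encode()
      compressed = zlib.compress(payload, level=9)
      text = base64.b64encode(compressed).decode()            # QR-safe text payload

      print("raw chars:", len(its2) + len(fingerprint), "-> QR payload chars:", len(text))
      qrcode.make(text).save("ginseng_qr.png")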

  9. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. Combined with an adaptive probability model and predictive coding, the algorithm increases the coding compression rate while ensuring the quality of the decoded image. Using an adaptive model for each encoded image block dynamically estimates the probability of that block, and the decoded image block can accurately recover the encoded image according to the code book information. The results show that adopting adaptive arithmetic coding for image compression greatly improves the compression rate and constitutes an effective compression technology.
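
    The record combines two ingredients, predictive coding and an adaptive probability model. The sketch below illustrates only those ingredients under stated assumptions: pixels are predicted from the previous pixel in raster order, residual symbols are counted adaptively, and the ideal code length -log2 p(symbol) that an arithmetic coder would approach is accumulated. It is not the paper's coder.

      # Sketch of predictive coding plus an adaptive (count-based) probability
      # model. Instead of a full arithmetic coder, the ideal code length under the
      # adaptive model is accumulated; an arithmetic coder approaches this bound.
      import numpy as np

      def adaptive_code_length(img):
          counts = np.ones(512)                 # residuals in [-255, 255], Laplace-smoothed
          total = counts.sum()
          bits = 0.0
          prev = 0
          for pixel in img.astype(int).ravel():
              residual = pixel - prev           # predict from previous pixel in raster order
              symbol = residual + 255
              bits += -np.log2(counts[symbol] / total)
              counts[symbol] += 1               # adapt the model after coding each symbol
              total += 1
              prev = pixel
          return bits

      rng = np.random.default_rng(5)
      smooth = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) % 256
      bits = adaptive_code_length(smooth)
      print(f"estimated rate: {bits / smooth.size:.2f} bits/pixel (raw: 8.00)")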

  10. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ...background and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... [Figure 2: cross-sectional view of a medical image (statistical representation).] ... The Integer Wavelet Transform (IWT) is used for lossless processing.

  11. Distorted Fingerprint Verification System

    Directory of Open Access Journals (Sweden)

    Divya KARTHIKAESHWARAN

    2011-01-01

    Full Text Available Fingerprint verification is one of the most reliable personal identification methods. Fingerprint matching is affected by the non-linear distortion introduced into the fingerprint impression during the image acquisition process. This non-linear deformation changes both the position and orientation of minutiae. The proposed system operates in three stages: alignment-based fingerprint matching, fuzzy clustering and a classifier framework. First, an enhanced input fingerprint image is aligned with the template fingerprint image and a matching score is computed. To improve the performance of the system, fuzzy clustering based on distance and density is used to cluster the feature set obtained from the fingerprint matcher. Finally, a classifier framework is developed, and it is found that a cost-sensitive classifier produces better results. The system has been evaluated on a fingerprint database and the experimental results show that it produces a verification rate of 96%. This system plays an important role in forensic and civilian applications.

  12. High-speed biometrics ultrasonic system for 3D fingerprint imaging

    Science.gov (United States)

    Maev, Roman G.; Severin, Fedar

    2012-10-01

    The objective of this research is to develop a new robust fingerprint identification technology based upon forming surface-subsurface (under skin) ultrasonic 3D images of the finger pads. The presented work aims to create specialized ultrasonic scanning methods for biometric purposes. Preliminary research has demonstrated the applicability of acoustic microscopy for fingerprint reading. The additional information from internal skin layers and dermis structures contained in the scan can essentially improve confidence in the identification. Advantages of this system include high resolution and quick scanning time. Operating in pulse-echo mode provides spatial resolution up to 0.05 mm. The advantages of the proposed technology are the following: • Full-range scanning of the fingerprint area "nail to nail" (2.5 x 2.5 cm) can be done in less than 5 sec with a resolution of up to 1000 dpi. • Collection of information about the in-depth structure of the fingerprint, realized by the set of spherically focused 50 MHz acoustic lenses, provides a resolution of ~0.05 mm or better • In addition to fingerprints, this technology can identify sweat pores at the surface and under the skin • No sensitivity to the contamination of the finger's surface • Detection of blood velocity using the Doppler effect can be implemented to distinguish living specimens • Utilization as a polygraph device • Simple connectivity to fingerprint databases obtained with other techniques • The digitally interpolated images can then be enhanced allowing for greater resolution • The method can be applied to fingernails and underlying tissues, providing more information • A laboratory prototype of the biometrics system based on these described principles was designed, built and tested. It is the first step toward a practical implementation of this technique.

  13. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  14. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to be better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits, and image watermarking with effective tamper detection and lossless recovery.
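
    For reference, the following is a textbook LZW compress/decompress pair over byte strings, as a sketch of the lossless watermark compression step only; it is the standard algorithm, not the paper's implementation, and the ROI/key watermark bytes are placeholders.

      # Textbook LZW codec over byte strings; stand-in data, not the paper's format.
      def lzw_compress(data: bytes) -> list:
          table = {bytes([i]): i for i in range(256)}
          w, out = b"", []
          for byte in data:
              wc = w + bytes([byte])
              if wc in table:
                  w = wc
              else:
                  out.append(table[w])
                  table[wc] = len(table)        # add the new phrase to the dictionary
                  w = bytes([byte])
          if w:
              out.append(table[w])
          return out

      def lzw_decompress(codes: list) -> bytes:
          table = {i: bytes([i]) for i in range(256)}
          w = table[codes[0]]
          out = bytearray(w)
          for code in codes[1:]:
              entry = table[code] if code in table else w + w[:1]   # special cScSc case
              out += entry
              table[len(table)] = w + entry[:1]
              w = entry
          return bytes(out)

      watermark = b"ROI-bytes..." + b"\x00secret-key" * 10          # placeholder payload
      codes = lzw_compress(watermark)
      assert lzw_decompress(codes) == watermark
      print("bytes in:", len(watermark), "-> LZW codes out:", len(codes))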

  15. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
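
    The task-oriented metric above compares a manual segmentation with segmentations computed from original and compressed images. One common way to quantify such agreement is the Dice overlap coefficient, used here as a stand-in; the paper's exact similarity measure may differ, and the masks below are synthetic.

      # Stand-in segmentation-similarity metric: Dice overlap between a reference
      # (manual) mask and a mask segmented from a compressed image.
      import numpy as np

      def dice(mask_a, mask_b, eps=1e-9):
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

      rng = np.random.default_rng(6)
      manual = rng.random((64, 64)) > 0.7                     # reference segmentation
      flips = rng.random((64, 64)) > 0.97                     # distortion from lossy coding
      auto_compressed = np.logical_xor(manual, flips)         # segmentation after compression

      print("Dice vs. reference:", round(dice(manual, auto_compressed), 3))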

  16. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to a JPEG-compressed image. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it allows de-noising and enhancement of the contours of the image.
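
    The screening operation itself reduces to thresholding against a tiled mask. The sketch below shows that threshold step in the spatial domain with a Bayer mask; the paper performs the equivalent threshold in the DCT (compressed) domain, which is not reproduced here, and the mask choice is an illustrative assumption.

      # Spatial-domain sketch of halftoning by screening: threshold each pixel
      # against a tiled Bayer mask.
      import numpy as np

      BAYER_4x4 = (np.array([[ 0,  8,  2, 10],
                             [12,  4, 14,  6],
                             [ 3, 11,  1,  9],
                             [15,  7, 13,  5]]) + 0.5) / 16.0   # thresholds in (0, 1)

      def screen(gray):
          """gray: float image in [0, 1] -> binary halftone."""
          h, w = gray.shape
          tiled = np.tile(BAYER_4x4, (h // 4 + 1, w // 4 + 1))[:h, :w]
          return (gray > tiled).astype(np.uint8)

      ramp = np.tile(np.linspace(0, 1, 256), (64, 1))           # horizontal gray ramp
      halftone = screen(ramp)
      print("ink coverage:", halftone.mean())                   # ~ mean gray level of the ramp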

  17. SU-E-I-75: Development of New Biological Fingerprints for Patient Recognition to Identify Misfiled Images in a PACS Server

    Energy Technology Data Exchange (ETDEWEB)

    Shimizu, Y; Yoon, Y; Iwase, K; Yasumatsu, S; Matsunobu, Y [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, JP (Japan); Morishita, J [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, JP (Japan)

    2015-06-15

    Purpose: We are trying to develop an image-searching technique to identify misfiled images in a picture archiving and communication system (PACS) server by using five biological fingerprints: the whole lung field, cardiac shadow, superior mediastinum, lung apex, and right lower lung. Each biological fingerprint in a chest radiograph includes distinctive anatomical structures to identify misfiled images. The whole lung field was less effective for evaluating the similarity between two images than the other biological fingerprints. This was mainly due to the variation in the positioning for chest radiographs. The purpose of this study is to develop new biological fingerprints that could reduce the influence of differences in the positioning for chest radiography. Methods: Two hundred patients were selected randomly from our database (36,212 patients). These patients had two images each (current and previous images). Current images were used as the misfiled images in this study. A circumscribed rectangular area of the lung and the upper half of the rectangle were selected automatically as new biological fingerprints. These biological fingerprints were matched to all previous images in the database. The degrees of similarity between the two images were calculated for the same and different patients. The usefulness of the new biological fingerprints for automated patient recognition was examined in terms of receiver operating characteristic (ROC) analysis. Results: The areas under the ROC curves (AUCs) for the circumscribed rectangle of the lung, upper half of the rectangle, and whole lung field were 0.980, 0.994, and 0.950, respectively. The new biological fingerprints showed better performance in identifying the patients correctly than the whole lung field. Conclusion: We have developed new biological fingerprints: the circumscribed rectangle of the lung and the upper half of the rectangle. These new biological fingerprints would be useful for automated patient identification systems.

  18. SU-E-I-75: Development of New Biological Fingerprints for Patient Recognition to Identify Misfiled Images in a PACS Server

    International Nuclear Information System (INIS)

    Shimizu, Y; Yoon, Y; Iwase, K; Yasumatsu, S; Matsunobu, Y; Morishita, J

    2015-01-01

    Purpose: We are trying to develop an image-searching technique to identify misfiled images in a picture archiving and communication system (PACS) server by using five biological fingerprints: the whole lung field, cardiac shadow, superior mediastinum, lung apex, and right lower lung. Each biological fingerprint in a chest radiograph includes distinctive anatomical structures to identify misfiled images. The whole lung field was less effective for evaluating the similarity between two images than the other biological fingerprints. This was mainly due to the variation in the positioning for chest radiographs. The purpose of this study is to develop new biological fingerprints that could reduce the influence of differences in the positioning for chest radiography. Methods: Two hundred patients were selected randomly from our database (36,212 patients). These patients had two images each (current and previous images). Current images were used as the misfiled images in this study. A circumscribed rectangular area of the lung and the upper half of the rectangle were selected automatically as new biological fingerprints. These biological fingerprints were matched to all previous images in the database. The degrees of similarity between the two images were calculated for the same and different patients. The usefulness of the new biological fingerprints for automated patient recognition was examined in terms of receiver operating characteristic (ROC) analysis. Results: The areas under the ROC curves (AUCs) for the circumscribed rectangle of the lung, upper half of the rectangle, and whole lung field were 0.980, 0.994, and 0.950, respectively. The new biological fingerprints showed better performance in identifying the patients correctly than the whole lung field. Conclusion: We have developed new biological fingerprints: the circumscribed rectangle of the lung and the upper half of the rectangle. These new biological fingerprints would be useful for automated patient identification systems.
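
    The evaluation step in the two records above reduces to summarizing same-patient versus different-patient similarity scores with an ROC curve. The sketch below shows only that step, with synthetic stand-in scores and scikit-learn's AUC routine; the biological-fingerprint templates and the matcher that produce such scores are not reproduced.

      # Sketch of the ROC/AUC evaluation step only, on stand-in similarity scores.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      same_patient = np.clip(rng.normal(0.85, 0.08, 200), 0, 1)   # stand-in similarities
      diff_patient = np.clip(rng.normal(0.55, 0.15, 200), 0, 1)

      scores = np.concatenate([same_patient, diff_patient])
      labels = np.concatenate([np.ones(200), np.zeros(200)])
      print("AUC:", round(roc_auc_score(labels, scores), 3))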

  19. Efficient Hardware Implementation For Fingerprint Image Enhancement Using Anisotropic Gaussian Filter.

    Science.gov (United States)

    Khan, Tariq Mahmood; Bailey, Donald G; Khan, Mohammad A U; Kong, Yinan

    2017-05-01

    A real-time image filtering technique is proposed which could result in faster implementations of fingerprint image enhancement. One major hurdle associated with fingerprint filtering techniques is the expensive nature of their hardware implementations. To circumvent this, a modified anisotropic Gaussian filter is efficiently adopted in hardware by decomposing the filter into two orthogonal Gaussians and an oriented line Gaussian. An architecture is developed for dynamically controlling the orientation of the line Gaussian filter. To further improve the performance of the filter, the input image is homogenized by a local image normalization. The proposed structure meets both parallel compute-intensive and real-time demands on a mid-range reconfigurable FPGA. We manage to efficiently speed up the image-processing time and improve the resource utilization of the FPGA. Test results show an improved speed for the hardware architecture while maintaining reasonable enhancement benchmarks.
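
    A software sketch of two steps mentioned above follows: local mean/variance normalization of the input image, then smoothing with an explicitly built oriented (anisotropic) Gaussian kernel. The separable decomposition and the FPGA architecture are not reproduced, and the window size, sigmas and orientation are illustrative assumptions.

      # Sketch: local normalization followed by an oriented anisotropic Gaussian.
      import numpy as np
      from scipy.ndimage import uniform_filter, convolve

      def local_normalize(img, size=15, eps=1e-6):
          mean = uniform_filter(img, size)
          var = uniform_filter(img * img, size) - mean * mean
          return (img - mean) / np.sqrt(np.maximum(var, eps))

      def oriented_gaussian(theta, sigma_u=4.0, sigma_v=1.5, radius=8):
          y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
          u = x * np.cos(theta) + y * np.sin(theta)       # along the assumed ridge direction
          v = -x * np.sin(theta) + y * np.cos(theta)      # across the ridge direction
          k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
          return k / k.sum()

      rng = np.random.default_rng(8)
      img = rng.random((128, 128))
      normalized = local_normalize(img)
      enhanced = convolve(normalized, oriented_gaussian(np.deg2rad(30.0)))
      print("output range:", float(enhanced.min()), float(enhanced.max()))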

  20. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  1. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.

  2. Photogrammetric fingerprint unwrapping

    Science.gov (United States)

    Paar, Gerhard; del Pilar Caballo Perucha, Maria; Bauer, Arnold; Nauschnegg, Bernhard

    2008-04-01

    Fingerprints are important biometric cues. Compared to conventional fingerprint sensors the use of contact-free stereoscopic image acquisition of the front-most finger segment has a set of advantages: Finger deformation is avoided, the entire relevant area for biometric use is covered, some technical aspects like sensor maintenance and cleaning are facilitated, and access to a three-dimensional reconstruction of the covered area is possible. We describe a photogrammetric workflow for nail-to-nail fingerprint reconstruction: A calibrated sensor setup with typically 5 cameras and dedicated illumination acquires adjacent stereo pairs. Using the silhouettes of the segmented finger a raw cylindrical model is generated. After preprocessing (shading correction, dust removal, lens distortion correction), each individual camera texture is projected onto the model. Image-to-image matching on these pseudo ortho images and dense 3D reconstruction obtains a textured cylindrical digital surface model with radial distances around the major axis and a grid size in the range of 25-50 µm. The model allows for objective fingerprint unwrapping and novel fingerprint matching algorithms since 3D relations between fingerprint features are available as additional cues. Moreover, covering the entire region with relevant fingerprint texture is particularly important for establishing a comprehensive forensic database. The workflow has been implemented in portable C and is ready for industrial exploitation. Further improvement issues are code optimization, unwrapping method, illumination strategy to avoid highlights and to improve the initial segmentation, and the comparison of the unwrapping result to conventional fingerprint acquisition technology.

  3. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without compression has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system covers all pediatric inpatients and includes all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16 megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to take place in February 1988.

  4. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length and entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
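
    The structure of such a hybrid scheme can be sketched in a few lines: a lossy stage produces an approximation, the residual between original and approximation is encoded losslessly, and the decoder adds the two back together. In this sketch a coarse uniform quantizer stands in for the embedded wavelet coder and zlib stands in for the paper's run-length coder; both substitutions are assumptions made only to keep the example self-contained.

        import numpy as np
        import zlib

        def lossy_stage(img, step=8):
            # Placeholder for the embedded wavelet coder: coarse uniform quantization.
            return (np.round(img.astype(np.int32) / step) * step).clip(0, 255).astype(np.uint8)

        def hybrid_encode(img, step=8):
            approx = lossy_stage(img, step)
            residual = img.astype(np.int16) - approx.astype(np.int16)   # small-valued, highly compressible
            packed = zlib.compress(residual.astype(np.int8).tobytes(), 9)
            return approx, packed

        def hybrid_decode(approx, packed):
            residual = np.frombuffer(zlib.decompress(packed), dtype=np.int8)
            return (approx.astype(np.int16) + residual.reshape(approx.shape)).astype(np.uint8)

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)
        approx, packed = hybrid_encode(img)
        assert np.array_equal(img, hybrid_decode(approx, packed))       # exact (lossless) round trip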

  5. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the lossless LZW method and the lossy cosine-transform method. The results showed that the lossy method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the lossless method. (C.G.C.)

  6. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  7. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capturing process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detecting gate window has a precise phase delay at each cycle. After capturing enough points, we are able to reconstruct the whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns in order to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  8. Fingerprint pores extractor

    CSIR Research Space (South Africa)

    Mngenge, NA

    2012-11-01

    Full Text Available , this is not always the case because of diseases and harsh working conditions that affect fingerprints. In order to maintain a high level of security independent of varying fingerprint image quality, research suggests the use of other fingerprint features to complement...

  9. Fingerprint recognition with identical twin fingerprints.

    Science.gov (United States)

    Tao, Xunqiang; Chen, Xinjian; Yang, Xin; Tian, Jie

    2012-01-01

    Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing in identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than one fingerprint identification method as in previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of the fingerprint types for the corresponding fingers of identical twins which have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger from identical twins has a higher probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that the fingerprints from identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins which have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution over all fingers' fingerprint types. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.

  10. Fingerprint recognition with identical twin fingerprints.

    Directory of Open Access Journals (Sweden)

    Xunqiang Tao

    Full Text Available Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing in identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than one fingerprint identification method as in previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of the fingerprint types for the corresponding fingers of identical twins which have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger from identical twins has a higher probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that the fingerprints from identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins which have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution over all fingers' fingerprint types. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.

  11. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  12. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Daniel A. Bishop

    2010-01-01

    Full Text Available Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  13. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  14. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  15. Performance evaluation of breast image compression techniques

    International Nuclear Information System (INIS)

    Anastassopoulos, G.; Lymberopoulos, D.; Panayiotakis, G.; Bezerianos, A.

    1994-01-01

    Novel diagnosis-orienting teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, and to maximize the speed of data transmission. In this paper an estimation of all the factors of the process that affect the presentation of breast images is made, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors)

  16. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  17. Person recognition using fingerprints and top-view finger images

    Directory of Open Access Journals (Sweden)

    Panyayot Chaikan

    2010-03-01

    Full Text Available Our multimodal biometric system combines fingerprinting with a top-view finger image captured by a CCD camera without user intervention. The greyscale image is preprocessed to enhance its edges, skin furrows, and the nail shape before being manipulated by a bank of oriented filters. A square tessellation is applied to the filtered image to create a feature map, called a NailCode, which is employed in Euclidean distance computations. The NailCode reduces system errors by 17.68% in the verification mode, and by 6.82% in the identification mode.
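
    A rough sketch of this kind of pipeline is given below: a small bank of oriented difference kernels stands in for the paper's oriented filters, the filter responses are averaged over a square tessellation to form a feature vector, and two images are compared by the Euclidean distance between their vectors. The specific kernels, the 16-pixel cell size and the omission of the preprocessing stage are assumptions for illustration only.

        import numpy as np
        from scipy.ndimage import convolve

        # Four oriented difference kernels (0, 45, 90, 135 degrees) stand in for the
        # paper's bank of oriented filters.
        KERNELS = [
            np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),    # vertical structures
            np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),    # 45 degrees
            np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),    # horizontal structures
            np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),    # 135 degrees
        ]

        def feature_map(img, block=16):
            # Square tessellation of oriented-filter responses (a NailCode-like vector).
            feats = []
            for k in KERNELS:
                resp = np.abs(convolve(img.astype(float), k))
                h = (resp.shape[0] // block) * block
                w = (resp.shape[1] // block) * block
                cells = resp[:h, :w].reshape(h // block, block, w // block, block)
                feats.append(cells.mean(axis=(1, 3)).ravel())          # mean energy per cell
            return np.concatenate(feats)

        def match_score(img_a, img_b):
            return np.linalg.norm(feature_map(img_a) - feature_map(img_b))   # Euclidean distance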

  18. A topology based approach to categorization of fingerprint images

    DEFF Research Database (Denmark)

    Aabrandt, A.; Olsen, M. A.; Busch, C.

    2012-01-01

    In the proposed method, an image is viewed as a triangulated point cloud and the topology associated with this construct is summarized using its first Betti number - a number that indicates the number of distinct cycles in the triangulation associated with the particular image. This number is then compared against the first Betti numbers of “n” prototype images in order to perform classification (“fingerprint” vs “non-fingerprint”). The proposed method is compared against SIVV (a tool provided by NIST). Experimental results on fingerprint and iris databases demonstrate the potential of the scheme.
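
    For intuition, the first Betti number of the triangulation's edge graph can be computed as the cycle rank b1 = E - V + C, where E is the number of edges, V the number of vertices and C the number of connected components. The union-find sketch below implements exactly that graph-level quantity; the paper's construction on the full triangulated point cloud may differ in detail.

        def first_betti_number(num_vertices, edges):
            # Cycle rank b1 = E - V + C of a graph given as undirected edges.
            parent = list(range(num_vertices))

            def find(u):
                while parent[u] != u:
                    parent[u] = parent[parent[u]]
                    u = parent[u]
                return u

            edge_set = {tuple(sorted(e)) for e in edges}
            for u, v in edge_set:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
            components = len({find(v) for v in range(num_vertices)})
            return len(edge_set) - num_vertices + components

        # A single triangle has exactly one independent cycle.
        print(first_betti_number(3, [(0, 1), (1, 2), (2, 0)]))   # -> 1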

  19. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now sided with lossless compression, most applications suffer from compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on a 3D (3-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance (MR) image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
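
    A minimal sketch of the coding chain, assuming the PyWavelets package is available, is shown below: a 3-D wavelet decomposition of a volume followed by a scalar dead-zone quantizer of the detail coefficients. The scalar dead zone is a simplification standing in for the paper's multidimensional dead-zone lattice vector quantizer, and the wavelet, level, step and dead-zone width are illustrative choices.

        import numpy as np
        import pywt   # PyWavelets, assumed installed

        def compress_volume(vol, wavelet="bior4.4", level=2, step=8.0, deadzone=12.0):
            coeffs = pywt.wavedecn(vol, wavelet, level=level)       # 3-D DWT
            for detail in coeffs[1:]:                                # coeffs[0] (approximation) kept as is
                for key in detail:
                    c = detail[key]
                    detail[key] = np.where(np.abs(c) < deadzone, 0.0,
                                           np.round(c / step) * step)   # dead zone + uniform step
            return pywt.waverecn(coeffs, wavelet)

        vol = np.random.rand(32, 32, 32) * 255                      # stand-in for a CT volume
        rec = compress_volume(vol)[:32, :32, :32]
        print("RMSE:", np.sqrt(np.mean((vol - rec) ** 2)))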

  20. Acquiring a 2D rolled equivalent fingerprint image from a non-contact 3D finger scan

    Science.gov (United States)

    Fatehpuria, Abhishika; Lau, Daniel L.; Hassebrook, Laurence G.

    2006-04-01

    The use of fingerprints as a biometric is both the oldest mode of computer aided personal identification and the most relied-upon technology in use today. But current fingerprint scanning systems have some challenging and peculiar difficulties. Often skin conditions and imperfect acquisition circumstances cause the captured fingerprint image to be far from ideal. Also some of the acquisition techniques can be slow and cumbersome to use and may not provide the complete information required for reliable feature extraction and fingerprint matching. Most of the difficulties arise due to the contact of the fingerprint surface with the sensor platen. To attain a fast-capture, non-contact, fingerprint scanning technology, we are developing a scanning system that employs structured light illumination as a means for acquiring a 3-D scan of the finger with sufficiently high resolution to record ridge-level details. In this paper, we describe the postprocessing steps used for converting the acquired 3-D scan of the subject's finger into a 2-D rolled equivalent image.

  1. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
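
    The regularization step can be illustrated with a plain projected Landweber iteration: a gradient update toward consistency with the measurements followed by projection onto a constraint set (here, non-negativity). The guided-filter denoising step of the paper is omitted, and the step size, iteration count and toy binary measurement matrix are assumptions.

        import numpy as np

        def projected_landweber(y, A, n_iter=200, tau=None):
            # Recover a non-negative image x from compressive measurements y = A @ x.
            if tau is None:
                tau = 1.0 / np.linalg.norm(A, 2) ** 2     # step size below 2/||A||^2 for convergence
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x + tau * A.T @ (y - A @ x)           # Landweber (gradient) update
                x = np.clip(x, 0.0, None)                 # projection onto the constraint set
            return x

        # Toy example: 16x16 scene, 40% random binary measurement patterns.
        rng = np.random.default_rng(0)
        x_true = rng.random(256)
        A = rng.integers(0, 2, size=(102, 256)).astype(float)
        x_hat = projected_landweber(A @ x_true, A)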

  2. Evaluation of a single-pixel one-transistor active pixel sensor for fingerprint imaging

    Science.gov (United States)

    Xu, Man; Ou, Hai; Chen, Jun; Wang, Kai

    2015-08-01

    Since it first appeared in the iPhone 5S in 2013, fingerprint identification (ID) has rapidly gained popularity among consumers. Current fingerprint-enabled smartphones uniformly rely on a discrete sensor to perform fingerprint ID. This architecture not only incurs higher material and manufacturing costs, but also provides only static identification and limited authentication. Hence, as the demand for thinner, lighter, and more secure handsets grows, we propose a novel pixel architecture in which a photosensitive device embedded in a display pixel detects the light reflected from the finger touch, enabling high-resolution, high-fidelity and dynamic biometrics. To this end, an amorphous silicon (a-Si:H) dual-gate photo TFT working in both a fingerprint-imaging mode and a display-driving mode will be developed.

  3. Detection of visible and latent fingerprints using micro-X-ray fluorescence elemental imaging.

    Science.gov (United States)

    Worley, Christopher G; Wiltshire, Sara S; Miller, Thomasin C; Havrilla, George J; Majidi, Vahid

    2006-01-01

    Using micro-X-ray fluorescence (MXRF), a novel means of detecting fingerprints was examined in which the prints were imaged based on their elemental composition. MXRF is a nondestructive technique. Although this method requires a priori knowledge about the approximate location of a print, it offers a new and complementary means for detecting fingerprints that are also left pristine for further analysis (including potential DNA extraction) or archiving purposes. Sebaceous fingerprints and those made after perspiring were detected based on elements such as potassium and chlorine present in the print residue. Unique prints were also detected including those containing lotion, saliva, banana, or sunscreen. This proof-of-concept study demonstrates the potential for visualizing fingerprints by MXRF on surfaces that can be problematic using current methods.

  4. Magnetic Resonance Fingerprinting - a promising new approach to obtain standardized imaging biomarkers from MRI.

    Science.gov (United States)

    2015-04-01

    Current routine MRI examinations rely on the acquisition of qualitative images that have a contrast "weighted" for a mixture of (magnetic) tissue properties. Recently, a novel technique was introduced, namely MR Fingerprinting (MRF), with a completely different approach to data acquisition, post-processing and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution or 'fingerprint' that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern recognition algorithm to match the fingerprints to a predefined dictionary of predicted signal evolutions. These can then be translated into quantitative maps of the magnetic parameters of interest. MRF is a technique that could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points: • MR Fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density and diffusion. • MRF may offer multiparametric imaging with high reproducibility and high potential for multicenter/multivendor studies.
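
    The dictionary-matching step can be sketched very compactly: each measured signal evolution is compared against every precomputed dictionary entry by a normalized inner product, and the tissue parameters of the best match are reported. The array shapes and variable names below are illustrative assumptions, not the layout of any particular MRF implementation.

        import numpy as np

        def mrf_match(signal, dictionary, params):
            # dictionary : (n_atoms, n_timepoints) simulated signal evolutions.
            # params     : (n_atoms, n_params) tissue parameters (e.g. T1, T2) per atom.
            # Returns the parameters of the best-matching atom (maximum normalized inner product).
            d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
            s = signal / np.linalg.norm(signal)
            best = np.argmax(np.abs(d @ s))
            return params[best]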

  5. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and lossy image compression. The main idea of this approach is to extract the structural lines of a simplified image and then further compress the selected data.

  6. Fingerprint fake detection by optical coherence tomography

    Science.gov (United States)

    Meissner, Sven; Breithaupt, Ralph; Koch, Edmund

    2013-03-01

    The most established technique for identification at biometric access control systems is the human fingerprint. While every human fingerprint is unique, fingerprints can be faked very easily by using thin-layer fakes. Because commercial fingerprint scanners use only a two-dimensional image acquisition of the finger surface, they can hardly differentiate between real fingerprints and fingerprint fakes applied on thin-layer materials. A swept-source OCT system with an A-line rate of 20 kHz, a lateral and axial resolution of approximately 13 μm, a centre wavelength of 1320 nm and a bandwidth of 120 nm (FWHM) was used to acquire fingerprints and finger tips with overlying fakes. Three-dimensional volume stacks with dimensions of 4.5 mm x 4 mm x 2 mm were acquired. The layering arrangement of the imaged finger tips and faked finger tips was analyzed and subsequently classified into real and faked fingerprints. Additionally, sweat gland ducts were detected and used for the classification. The manual classification between real and faked fingerprints results in almost 100 % correctness. The outer as well as the internal fingerprint can be recognized in all real human fingers, whereas this was not possible in the image stacks of the faked fingerprints. Furthermore, in all image stacks of real human fingers the sweat gland ducts were detected. The number of sweat gland ducts differs between the test persons. The typical helix shape of the ducts was observed. In contrast, in images of faked fingerprints we observe abnormal layer arrangements and no sweat gland ducts connecting the papillae of the outer fingerprint and the internal fingerprint. We demonstrated that OCT is a very useful tool to enhance the performance of biometric control systems against attacks with thin-layer fingerprint fakes.

  7. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  8. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Science.gov (United States)

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and

  9. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
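
    The on-board part of the proposed methodology reduces, in essence, to two cheap degradations of the acquired cube. The sketch below block-averages the cube spatially to form the low-resolution hyperspectral image and averages groups of bands to form the high-resolution multispectral image, then reports the resulting fixed compression ratio; the degradation factors are illustrative assumptions.

        import numpy as np

        def onboard_degrade(hsi, spatial_factor=4, band_group=8):
            # hsi : (rows, cols, bands) hyperspectral cube.
            r, c, b = hsi.shape
            # Spatial degradation: block-average to a low-resolution hyperspectral image.
            lr = hsi[: r - r % spatial_factor, : c - c % spatial_factor, :]
            lr = lr.reshape(lr.shape[0] // spatial_factor, spatial_factor,
                            lr.shape[1] // spatial_factor, spatial_factor, b).mean(axis=(1, 3))
            # Spectral degradation: average band groups to a high-resolution multispectral image.
            ms = hsi[:, :, : b - b % band_group]
            ms = ms.reshape(r, c, ms.shape[2] // band_group, band_group).mean(axis=3)
            ratio = hsi.size / (lr.size + ms.size)        # fixed in advance by the two factors
            return lr, ms, ratio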

  10. Performance evaluation of breast image compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Anastassopoulos, G; Lymberopoulos, D [Wire Communications Laboratory, Electrical Engineering Department, University of Patras, Greece (Greece); Panayiotakis, G; Bezerianos, A [Medical Physics Department, School of Medicine, University of Patras, Greece (Greece)

    1994-12-31

    Novel diagnosis-orienting teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, and to maximize the speed of data transmission. In this paper an estimation of all the factors of the process that affect the presentation of breast images is made, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors). 12 refs, 4 figs.

  11. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

    It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. With lossy compression, the image cannot be totally recovered; we can only recover an approximation. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Image quality is an attribute with several definitions and interpretations, which ultimately depend on the subsequent use intended for the images. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of their influence on automatic tissue classification performed with these images.

  12. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  13. Usefulness of biological fingerprint in magnetic resonance imaging for patient verification.

    Science.gov (United States)

    Ueda, Yasuyuki; Morishita, Junji; Kudomi, Shohei; Ueda, Katsuhiko

    2016-09-01

    The purpose of our study is to investigate the feasibility of automated patient verification using multi-planar reconstruction (MPR) images generated from three-dimensional magnetic resonance (MR) imaging of the brain. Several anatomy-related MPR images generated from the three-dimensional fast scout scan of each MR examination were used as biological fingerprint images in this study. The database for this study consisted of 730 temporal pairs of MR examinations of the brain. We calculated the correlation value between current and prior biological fingerprint images of the same patient, and also for all combinations of two images from different patients, to evaluate the effectiveness of our method for patient verification. The best performance of our system was as follows: a half-total error rate of 1.59 % with a false acceptance rate of 0.023 % and a false rejection rate of 3.15 %, an equal error rate of 1.37 %, and a rank-one identification rate of 98.6 %. Our method makes it possible to verify the identity of the patient using only existing medical images, without additional equipment. Our method will also contribute to the management of patient misidentification errors caused by human error.
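
    The verification step reduces to a similarity score between the current and prior biological-fingerprint images. A minimal sketch using the Pearson correlation of the pixel values is given below; the decision threshold is an illustrative value, not the operating point tuned in the study.

        import numpy as np

        def correlation_value(current, prior):
            # Pearson correlation between two biological-fingerprint images of equal size.
            a = current.astype(float).ravel()
            b = prior.astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def same_patient(current, prior, threshold=0.9):
            # The operating threshold here is illustrative, not the one tuned in the study.
            return correlation_value(current, prior) >= threshold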

  14. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    International Nuclear Information System (INIS)

    Song, Ju Seop; Koh, Kwang Joon

    2000-01-01

    The aims of the present study are to determine the optimum compression rate in terms of file size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized to TIFF with a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results of a pilot study. The total number of images to be assessed was 336. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-rank scale. Each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity, and the kappa value for diagnostic agreement were calculated. The area (Az) values under the ROC curve were also calculated, and paired t-tests and one-way ANOVA tests were performed. Thereafter, the transmission time of the image files at each compression level was compared with that of the original image files. No significant difference was found between the original and the corresponding images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original files while maintaining the diagnostic information of the image. A 1:14 compressed image file may therefore be used instead of the original image, reducing storage needs and transmission time.

  15. Image encryption using fingerprint as key based on phase retrieval algorithm and public key cryptography

    Science.gov (United States)

    Zhao, Tieyu; Ran, Qiwen; Yuan, Lin; Chi, Yingying; Ma, Jing

    2015-09-01

    In this paper, a novel image encryption system with fingerprint used as a secret key is proposed based on the phase retrieval algorithm and RSA public key algorithm. In the system, the encryption keys include the fingerprint and the public key of RSA algorithm, while the decryption keys are the fingerprint and the private key of RSA algorithm. If the users share the fingerprint, then the system will meet the basic agreement of asymmetric cryptography. The system is also applicable for the information authentication. The fingerprint as secret key is used in both the encryption and decryption processes so that the receiver can identify the authenticity of the ciphertext by using the fingerprint in decryption process. Finally, the simulation results show the validity of the encryption scheme and the high robustness against attacks based on the phase retrieval technique.

  16. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years, and compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix between different bands of the hyperspectral images; then the wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  17. Wavelet compression algorithm applied to abdominal ultrasound images

    International Nuclear Information System (INIS)

    Lin, Cheng-Hsun; Pan, Su-Feng; LU, Chin-Yuan; Lee, Ming-Che

    2006-01-01

    We sought to investigate acceptable compression ratios for lossy wavelet compression of 640 x 480 x 8 abdominal ultrasound (US) images. We acquired 100 abdominal US images with normal and abnormal findings from the view station of a 932-bed teaching hospital. The US images were then compressed at quality factors (QFs) of 3, 10, 30, and 50, following the outcomes of a pilot study. This corresponded to average compression ratios of 4.3:1, 8.5:1, 20:1 and 36.6:1, respectively. Four objective measurements were carried out to examine and compare the image degradation between original and compressed images. Receiver operating characteristic (ROC) analysis was also introduced for subjective assessment. Five experienced and qualified radiologists, as reviewers blinded to the corresponding pathological findings, analysed 400 paired, randomly ordered images on two 17-inch thin film transistor/liquid crystal display (TFT/LCD) monitors. At ROC analysis, the average area under the curve (Az) for abdominal US images was 0.874 at a ratio of 36.6:1. The compressed image size was only 2.7% of the original US image at this ratio. The objective parameters showed that the higher the mean squared error (MSE) or root mean squared error (RMSE) values, the poorer the image quality, while higher signal-to-noise ratio (SNR) or peak signal-to-noise ratio (PSNR) values indicated better image quality. The average RMSE and PSNR at 36.6:1 for US were 4.84 ± 0.14 and 35.45 dB, respectively. This finding suggests that, on the basis of the patient sample, wavelet compression of abdominal US to a ratio of 36.6:1 did not adversely affect diagnostic performance or evaluation error in the radiologists' interpretation to a degree that would risk affecting diagnosis.
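
    The objective measures quoted above follow the standard definitions, sketched below for two images of equal size; the 8-bit peak value of 255 is an assumption matching the image depth used here.

        import numpy as np

        def rmse_psnr(original, compressed, max_value=255.0):
            # Root-mean-squared error and peak signal-to-noise ratio between two images.
            err = original.astype(float) - compressed.astype(float)
            rmse = np.sqrt(np.mean(err ** 2))
            psnr = 20.0 * np.log10(max_value / rmse) if rmse > 0 else float("inf")
            return rmse, psnr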

  18. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now sided with lossless compression, most applications suffer from compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on a 3D (3-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance (MR) image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform, visually and numerically, the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  19. Implementation of Minutiae Based Fingerprint Identification System Using Crossing Number Concept

    Directory of Open Access Journals (Sweden)

    Atul S. CHAUDHARI

    2014-01-01

    Full Text Available A biometric system is essentially a pattern recognition system which recognizes a person by determining the authenticity of a specific physiological (e.g., fingerprints, face, retina, iris) or behavioral (e.g., gait, signature) characteristic possessed by that person. Among all presently employed biometric techniques, fingerprint identification systems have received the most attention due to the long history of fingerprints and their extensive use in forensics. The fingerprint is a reliable biometric characteristic as it is unique and persistent. A fingerprint is the pattern of ridges and valleys on the surface of the fingertip. However, recognizing fingerprints in poor-quality images is still a very complex job, so the fingerprint image must be preprocessed before matching. It is very difficult to extract fingerprint features directly from a gray-scale fingerprint image. In this paper we propose a system which uses a minutiae-based matching algorithm for fingerprint identification. There are three main phases in the proposed algorithm. The first phase enhances the input fingerprint image by preprocessing it. The enhanced fingerprint image is converted into a thinned binary image, and minutiae are then extracted using the Crossing Number concept in the second phase. The third stage compares the input fingerprint image (after preprocessing and minutiae extraction) with the fingerprint images enrolled in the database and decides whether the input fingerprint matches a fingerprint stored in the database or not.
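
    The crossing number itself is easy to state: for each ridge pixel of the thinned binary image, CN is half the sum of absolute differences between successive pixels of its 8-neighbourhood taken in circular order, with CN = 1 marking a ridge ending and CN = 3 a bifurcation. The sketch below implements that rule directly; border handling and spurious-minutiae filtering are omitted.

        import numpy as np

        def crossing_number_minutiae(skeleton):
            # skeleton: thinned binary fingerprint image (1 = ridge pixel, 0 = background).
            endings, bifurcations = [], []
            rows, cols = skeleton.shape
            # Circular order of the 8-neighbourhood offsets.
            nb = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if skeleton[r, c] != 1:
                        continue
                    p = [int(skeleton[r + dr, c + dc]) for dr, dc in nb]
                    cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
                    if cn == 1:
                        endings.append((r, c))          # ridge ending
                    elif cn == 3:
                        bifurcations.append((r, c))     # ridge bifurcation
            return endings, bifurcations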

  20. Segmentation of forensic latent fingerprint images lifted contact-less from planar surfaces with optical coherence tomography

    CSIR Research Space (South Africa)

    Khutlang, R

    2015-07-01

    Full Text Available the substrate surface plus the latent fingerprint impression left on it. They are concatenated together to form a 2-D segmented image of the lifted fingerprint. After enhancement using contrast-limited adaptive histogram equalization, minutiae were extracted...

  1. Fingerprint Recognition Using Minutia Score Matching

    OpenAIRE

    Ravi, J.; Raja, K. B.; Venugopal, K. R.

    2010-01-01

    The popular biometric used to authenticate a person is the fingerprint, which is unique and permanent throughout a person’s life. Minutia matching is widely used for fingerprint recognition, and minutiae can be classified as ridge endings and ridge bifurcations. In this paper we present Fingerprint Recognition using the Minutia Score Matching method (FRMSM). For fingerprint thinning, the Block Filter is used, which scans the image at the boundary to preserve the quality of the image and extract the minutiae ...

  2. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communications between the remote terminal and the central server of the telemedicine system.

  3. Image compression with Iris-C

    Science.gov (United States)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCOCG colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.
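
    Exp-Golomb coding, one of the ingredients named above, maps a non-negative integer n to the binary form of n + 1 prefixed by as many zero bits as that form has digits minus one. A small order-0 encoder/decoder pair is sketched below; it works on bit strings for clarity rather than on a packed bitstream as a real codec would.

        def exp_golomb_encode(n):
            # Order-0 Exp-Golomb code for a non-negative integer, returned as a bit string.
            bits = bin(n + 1)[2:]                  # binary representation of n + 1
            return "0" * (len(bits) - 1) + bits    # leading zeros equal to its length minus one

        def exp_golomb_decode(bitstream):
            # Decode a concatenation of order-0 Exp-Golomb codes back into integers.
            out, i = [], 0
            while i < len(bitstream):
                zeros = 0
                while bitstream[i] == "0":
                    zeros += 1
                    i += 1
                out.append(int(bitstream[i:i + zeros + 1], 2) - 1)
                i += zeros + 1
            return out

        codes = "".join(exp_golomb_encode(n) for n in [0, 1, 2, 7])
        assert exp_golomb_decode(codes) == [0, 1, 2, 7]   # '1' '010' '011' '0001000'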

  4. Novelty detection-based internal fingerprint segmentation in optical coherence tomography images

    CSIR Research Space (South Africa)

    Khutlang, R

    2014-12-01

    Full Text Available present an automatic segmentation of the papillary layer method, in 3-D swept source optical coherence tomography (SS-OCT) images. The papillary contour represents the internal fingerprint, which does not suffer external skin problems. The slices composing...

  5. Novelty detection-based internal fingerprint segmentation in optical coherence tomography images

    CSIR Research Space (South Africa)

    Khutlang, Rethabile

    2017-08-01

    Full Text Available present an automatic segmentation of the papillary layer method, from images acquired using contact-less 3-D swept source optical coherence tomography (OCT). The papillary contour represents the internal fingerprint, which does not suffer from the external...

  6. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from the low compression ratios this kind of compression provides. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which takes into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first bit allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  7. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
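
    The two mechanisms named above, scan line skipping and run-length coding, can be illustrated with a short sketch. This is not the flight implementation: the skip factor, the run representation and the omission of the adaptive and convolutional-coding stages are simplifying assumptions.

```python
# Hedged sketch: keep every `skip`-th scan line and run-length encode each kept line.
import numpy as np

def compress_line_skip_rle(image, skip=2):
    """Return a list of run-length encoded scan lines, one per kept line."""
    encoded = []
    for row in image[::skip]:                 # scan line skipping
        runs, current, count = [], row[0], 1
        for pixel in row[1:]:                 # run-length coding of the kept line
            if pixel == current:
                count += 1
            else:
                runs.append((current, count))
                current, count = pixel, 1
        runs.append((current, count))
        encoded.append(runs)
    return encoded
```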

  8. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm with acceptable compression and security performance.
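
    A rough sketch of the measurement-in-two-directions and cycle-shift ideas follows. The Gaussian measurement matrices and the logistic map standing in for the hyper-chaotic system are assumptions made purely for illustration; they are not the authors' construction.

```python
# Illustrative sketch: 2D compressive-sensing measurement plus row cycle shifts.
import numpy as np

def compress_encrypt(image, ratio=0.5, seed=0.37):
    n, m = image.shape
    rng = np.random.default_rng(42)
    phi1 = rng.standard_normal((int(n * ratio), n))   # row-direction measurement
    phi2 = rng.standard_normal((int(m * ratio), m))   # column-direction measurement
    measured = phi1 @ image @ phi2.T                  # compression + first-stage encryption

    # Second stage: cycle-shift each row by an amount driven by a logistic map
    # (a stand-in for the hyper-chaotic system described in the abstract).
    x = seed
    shifted = np.empty_like(measured)
    for i, row in enumerate(measured):
        x = 3.99 * x * (1 - x)
        shifted[i] = np.roll(row, int(x * row.size))
    return shifted
```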

  9. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For a PACS (picture archiving and communication system), large-capacity recording media and a fast data transfer network are necessary. When a PACS is in operation, these technology requirements become a large problem, so we need image data compression to improve recording efficiency and the transmission ratio. There are two kinds of data compression methods: one is reversible compression and the other is irreversible compression. With reversible compression methods, the compressed-then-expanded image is exactly equal to the original image; the data compression ratio is roughly between 1/2 and 1/3. With irreversible data compression, on the other hand, the compressed-then-expanded image is distorted, but a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; data compression ratios of 1/10 to 1/20 are achieved in practice. It is important to decide the compression ratio according to the purpose and modality of the image. We must select the compression ratio carefully because the suitable ratio differs depending on whether the image is used for education, clinical diagnosis or reference. (author)

  10. Spatial compression algorithm for the analysis of very large multivariate images

    Science.gov (United States)

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
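
    The core idea, mapping an image into a smaller set of significant wavelet coefficients, can be sketched as follows. This is a simplified, assumed illustration using plain coefficient thresholding; it is not the patented block algorithm or its spectral-compression companion.

```python
# Sketch: keep only the largest-magnitude wavelet coefficients of an image.
import numpy as np
import pywt

def wavelet_compress(image, wavelet='db4', level=3, keep_fraction=0.05):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Zero out everything below the magnitude quantile implied by keep_fraction.
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr_thresh = pywt.threshold(arr, threshold, mode='hard')
    kept = pywt.array_to_coeffs(arr_thresh, slices, output_format='wavedec2')
    # Reconstruct from the reduced set of significant coefficients.
    return pywt.waverec2(kept, wavelet)
```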

  11. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communications systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to provide an assessment of the ability of a group of imaging professionals to determine whether an image has been compressed. To undertake this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that have undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  12. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the result indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application for the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30KB, which is considered a maximal tolerable image size for downloading on the WWW.

  13. A Novel Approach Based on PCNNs Template for Fingerprint Image Thinning

    NARCIS (Netherlands)

    Dacheng, X.; Bailiang, L.; Nijholt, Antinus; Kacprzyk, J.

    2009-01-01

    A PCNNs-based square-and-triangle-template method for binary fingerprint image thinning is proposed. The algorithm is iterative, in which a combined sequential and parallel processing is employed to accelerate execution. When a neuron satisfies the square template, the pixel corresponding to this

  14. Secure fingerprint identification based on structural and microangiographic optical coherence tomography.

    Science.gov (United States)

    Liu, Xuan; Zaki, Farzana; Wang, Yahui; Huang, Qiongdan; Mei, Xin; Wang, Jiangjun

    2017-03-10

    Optical coherence tomography (OCT) allows noncontact acquisition of fingerprints and hence is a highly promising technology in the field of biometrics. OCT can be used to acquire both structural and microangiographic images of fingerprints. Microangiographic OCT derives its contrast from the blood flow in the vasculature of viable skin tissue, and microangiographic fingerprint imaging is inherently immune to fake fingerprint attacks. Therefore, dual-modality (structural and microangiographic) OCT imaging of fingerprints will enable more secure acquisition of biometric data, which has not been investigated before. Our study on fingerprint identification based on structural and microangiographic OCT imaging is, we believe, highly innovative. In this study, we performed an OCT imaging study for fingerprint acquisition, and demonstrated the capability of dual-modality OCT imaging for the identification of fake fingerprints.

  15. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.

  16. Comparison of JPEG and wavelet compression on intraoral digital radiographic images

    International Nuclear Information System (INIS)

    Kim, Eun Kyung

    2004-01-01

    To determine the proper image compression method and ratio without image quality degradation in intraoral digital radiographic images, comparing the discrete cosine transform (DCT)-based JPEG with the wavelet-based JPEG 2000 algorithm. Thirty extracted sound teeth and thirty extracted teeth with occlusal caries were used for this study. Twenty plaster blocks were made with three teeth each. They were radiographically exposed using CDR sensors (Schick Inc., Long Island, USA). Digital images were compressed to JPEG format using Adobe Photoshop v. 7.0 and to JPEG 2000 format using the Jasper program, with compression ratios of 5 : 1, 9 : 1, 14 : 1 and 28 : 1. To evaluate the lesion detectability, receiver operating characteristic (ROC) analysis was performed by the three oral and maxillofacial radiologists. To evaluate the image quality, all the compressed images were assessed subjectively using 5 grades, in comparison to the original uncompressed images. Compressed images up to a compression ratio of 14 : 1 in JPEG and 28 : 1 in JPEG 2000 showed nearly the same lesion detectability as the original images. In the subjective assessment of image quality, images up to a compression ratio of 9 : 1 in JPEG and 14 : 1 in JPEG 2000 showed minute mean paired differences from the original images. The results showed that the clinically acceptable compression ratios were up to 9 : 1 for JPEG and 14 : 1 for JPEG 2000. The wavelet-based JPEG 2000 is a better compression method, compared to DCT-based JPEG, for intraoral digital radiographic images.

  17. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology; mandating compression technology. Image compression is currently used to transmit images from combat hospital to tertiary care centers with subspecialists and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  18. An Efficient Reconfigurable Architecture for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Satish S. Bhairannawar

    2016-01-01

    Full Text Available Fingerprint identification is an efficient biometric technique to authenticate human beings in real-time Big Data Analytics. In this paper, we propose an efficient Finite State Machine (FSM) based reconfigurable architecture for fingerprint recognition. The fingerprint image is resized, and the Compound Linear Binary Pattern (CLBP) is applied on the fingerprint, followed by a histogram to obtain histogram CLBP features. Discrete Wavelet Transform (DWT) Level 2 features are obtained by the same methodology. The novel matching score of CLBP is computed using the histogram CLBP features of the test image and the fingerprint images in the database. Similarly, the DWT matching score is computed using DWT features of the test image and the fingerprint images in the database. Further, the matching scores of CLBP and DWT are fused with an arithmetic equation using an improvement factor. The performance parameters such as TSR (Total Success Rate), FAR (False Acceptance Rate), and FRR (False Rejection Rate) are computed using the fusion scores with a correlation matching technique for the FVC2004 DB3 database. The proposed fusion-based VLSI architecture is synthesized on a Virtex xc5vlx30T-3 FPGA board using a Finite State Machine, resulting in optimized parameters.
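
    The abstract does not give the exact arithmetic fusion equation, so the sketch below only illustrates the general shape of that step: a histogram-based CLBP matching score and a DWT matching score combined with an assumed weighting (improvement) factor alpha. All names and the weighting form are illustrative assumptions, not the paper's equation.

```python
# Hedged sketch of the score-fusion step only.
import numpy as np

def histogram_score(hist_test, hist_ref):
    """Similarity between two feature histograms (Pearson correlation)."""
    a = (hist_test - hist_test.mean()) / (hist_test.std() + 1e-12)
    b = (hist_ref - hist_ref.mean()) / (hist_ref.std() + 1e-12)
    return float(np.mean(a * b))

def fused_score(clbp_score, dwt_score, alpha=0.6):
    """Weighted fusion of the CLBP and DWT matching scores (alpha assumed)."""
    return alpha * clbp_score + (1.0 - alpha) * dwt_score
```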

  19. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve similar results to a 16 bit HEVC codec.
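
    The byte-splitting step described above is straightforward to sketch: a 16 bit image is mapped to an MSB plane and an LSB plane, each of which can then be handed to an ordinary 8 bit codec. The codec calls themselves and the choice of compression parameters are omitted here.

```python
# Minimal sketch of splitting a 16 bit image into MSB and LSB 8 bit images.
import numpy as np

def split_16bit(image16):
    """Split a uint16 image into most- and least-significant-byte uint8 images."""
    msb = (image16 >> 8).astype(np.uint8)    # most significant bytes
    lsb = (image16 & 0xFF).astype(np.uint8)  # least significant bytes
    return msb, lsb

def merge_16bit(msb, lsb):
    """Inverse mapping: rebuild the 16 bit image from the two 8 bit planes."""
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)
```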

  20. Fingerprint enhancement using a multispectral sensor

    Science.gov (United States)

    Rowe, Robert K.; Nixon, Kristin A.

    2005-03-01

    The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.

  1. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging became a ''pace-setter'' due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks sufficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Material and methods: The purpose of our paper is an international review of the literature of compression technologies, different imaging procedures (e.g. DR, CT etc.), and targets (abdomen, etc.) and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  2. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging became a ''pace-setter'' due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks sufficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Material and methods: The purpose of our paper is an international review of the literature of compression technologies, different imaging procedures (e.g. DR, CT etc.), and targets (abdomen, etc.) and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  3. Diagnostic imaging of compression neuropathy

    International Nuclear Information System (INIS)

    Weishaupt, D.; Andreisek, G.

    2007-01-01

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.)

  4. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex activity, based on evidence; it consists of information from multiple sources: medical record text, sound recordings, images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  5. An efficient algorithm for MR image reconstruction and compression

    International Nuclear Information System (INIS)

    Wang, Hang; Rosenfeld, D.; Braun, M.; Yan, Hong

    1992-01-01

    In magnetic resonance imaging (MRI), the original data are sampled in the spatial frequency domain. The sampled data thus constitute a set of discrete Fourier transform (DFT) coefficients. The image is usually reconstructed by taking inverse DFT. The image data may then be efficiently compressed using the discrete cosine transform (DCT). A method of using DCT to treat the sampled data is presented which combines two procedures, image reconstruction and data compression. This method may be particularly useful in medical picture archiving and communication systems where both image reconstruction and compression are important issues. 11 refs., 3 figs

  6. The relationship between compression force, image quality and ...

    African Journals Online (AJOL)

    Theoretically, an increase in breast compression gives a reduction in thickness without changing the density, resulting in improved image quality and reduced radiation dose. Aim. This study investigates the relationship between compression force, phantom thickness, image quality and radiation dose. The existence of a ...

  7. Evaluation of compression ratio using JPEG 2000 on diagnostic images in dentistry

    International Nuclear Information System (INIS)

    Jung, Gi Hun; Han, Won Jeong; Yoo, Dong Soo; Kim, Eun Kyung; Choi, Soon Chul

    2005-01-01

    To find out the proper compression ratios without degrading image quality and affecting lesion detectability on diagnostic images used in dentistry compressed with the JPEG 2000 algorithm. Sixty Digora periapical images, sixty panoramic computed radiographic (CR) images, sixty computed tomography (CT) images, and sixty magnetic resonance (MR) images were compressed into JPEG 2000 with ratios of 10 levels from 5:1 to 50:1. To evaluate the lesion detectability, the images were graded with 5 levels (1 : definitely absent ; 2 : probably absent ; 3 : equivocal ; 4 : probably present ; 5 : definitely present), and then receiver operating characteristic analysis was performed using the original image as a gold standard. Also, to evaluate the image quality subjectively, the images were graded with 5 levels (1 : definitely unacceptable ; 2 : probably unacceptable ; 3 : equivocal ; 4 : probably acceptable ; 5 : definitely acceptable), and then a paired t-test was performed. In Digora, CR panoramic and CT images, compressed images up to ratios of 15:1 showed nearly the same lesion detectability as the original images, and in MR images up to ratios of 25:1. In Digora and CR panoramic images, compressed images up to ratios of 5:1 showed little difference between the original and reconstructed images in the subjective assessment of image quality; in CT images this held up to ratios of 10:1, and in MR images up to ratios of 15:1. We considered compression ratios up to 5:1 in Digora and CR panoramic images, up to 10:1 in CT images, and up to 15:1 in MR images as clinically applicable compression ratios.

  8. MEDICAL IMAGE COMPRESSION USING HYBRID CODER WITH FUZZY EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    K. Vidhya

    2011-02-01

    Full Text Available Medical imaging techniques produce prohibitive amounts of digitized clinical data. Compression of medical images is a must due to the large memory space required for transmission and storage. This paper presents an effective algorithm to compress and to reconstruct medical images. The proposed algorithm first extracts edge information of medical images by using a fuzzy edge detector. The images are decomposed using the Cohen-Daubechies-Feauveau (CDF) wavelet. The hybrid technique utilizes the efficient wavelet-based compression algorithms JPEG2000 and Set Partitioning In Hierarchical Trees (SPIHT). The wavelet coefficients in the approximation sub-band are encoded using the tier 1 part of JPEG2000. The wavelet coefficients in the detail sub-bands are encoded using SPIHT. Consistent quality images are produced by this method at a lower bit rate compared to other standard compression algorithms. Two main approaches to assess image quality are objective testing and subjective testing. The image quality is evaluated by objective quality measures. Objective measures correlate well with the perceived image quality for the proposed compression algorithm.

  9. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image under the condition of unknown sparse basis, this paper proposes an image reconstruction method based on blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is solved by the alternative minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which restrains the noise and improves the quality of reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with a stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.

  10. Reference point detection for improved fingerprint matching

    NARCIS (Netherlands)

    Ignatenko, T.; Kalker, A.A.C.M.; Veen, van der M.; Bazen, A.; Delp, E.J.; Wong, P.W.

    2006-01-01

    One of the important stages of fingerprint recognition is the registration of the fingerprints with respect to the original template. This is not a straightforward task as fingerprint images may have been subject to rotations and translations. Popular techniques for fingerprint registration use a

  11. Encryption of Stereo Images after Compression by Advanced Encryption Standard (AES

    Directory of Open Access Journals (Sweden)

    Marwah k Hussien

    2018-04-01

    Full Text Available New partial encryption schemes are proposed, in which a secure encryption algorithm is used to encrypt only part of the compressed data. Partial encryption is applied after the image compression algorithm. Only 0.0244%-25% of the original data is encrypted for two pairs of different grayscale images with the size (256 × 256) pixels. As a result, we see a significant reduction of time in the stage of encryption and decryption. In the compression step, the Orthogonal Search Algorithm (OSA) for motion estimation (the difference between stereo images) is used. The resulting disparity vector and the remaining image were compressed by Discrete Cosine Transform (DCT), quantization and arithmetic encoding. The compressed image was encrypted by the Advanced Encryption Standard (AES). The images were then decoded and compared with the original images. Experimental results showed good results in terms of Peak Signal-to-Noise Ratio (PSNR), Compression Ratio (CR) and processing time. The proposed partial encryption schemes are fast, secure and do not reduce the compression performance of the underlying selected compression methods.
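
    A hedged sketch of the partial-encryption idea follows: only a leading fraction of the compressed bitstream is AES-encrypted. AES-CTR from the Python cryptography package and the 5% default fraction are stand-ins; the paper's exact AES mode, encrypted fraction and bitstream layout are not reproduced here.

```python
# Sketch of partial encryption: AES-encrypt only the head of a compressed stream.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def partially_encrypt(compressed: bytes, key: bytes, fraction: float = 0.05):
    """Encrypt only the first `fraction` of the data (key must be 16/24/32 bytes)."""
    cut = max(1, int(len(compressed) * fraction))
    nonce = os.urandom(16)                      # CTR initial counter block
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    head = encryptor.update(compressed[:cut]) + encryptor.finalize()
    return nonce, head + compressed[cut:]       # keep the nonce for decryption
```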

  12. Interpretation of fingerprint image quality features extracted by self-organizing maps

    Science.gov (United States)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  13. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  14. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression, as a small error due to lossy compression may be considered a diagnostic error. JPEG XR is the latest image compression standard designed for a variety of applications and supports both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEG XR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation and ten images of each organ are tested. Performance of JPEGXR is compared with JPEG2000 and JPEGLS using mean square error, peak signal to noise ratio, mean absolute error and structural similarity index. JPEGXR shows improvement of 20.73 dB and 5.98 dB over JPEGLS and JPEG2000 respectively for various test images used in experimentation. (author)

  15. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

    Full Text Available In this paper a simple and fast image compression scheme is proposed; it is based on using the wavelet transform to decompose the image signal and then using polynomial approximation to prune the smoothing component of the image band. The architecture of the proposed coding scheme is highly synthetic: the error produced by the polynomial approximation, in addition to the detail sub-band data, is coded using both quantization and quadtree spatial coding. As a last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcomes of the previous stage. The test results indicate that the proposed system can produce a promising compression performance while preserving the image quality level.

  16. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
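
    The automatic-labeling idea above rests on the joint entropy between a compressed and an uncompressed image, which can be sketched as below. The bin count and any thresholds used to turn the entropy value into a training label are assumptions for illustration.

```python
# Sketch: joint Shannon entropy (in bits) of co-located pixel values in two images.
import numpy as np

def joint_entropy(img_a, img_b, bins=256):
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()   # joint probability of (uncompressed, compressed) values
    p = p[p > 0]            # ignore empty bins so log2 is defined
    return -np.sum(p * np.log2(p))
```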

  17. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Lo, S.C.; Huang, H.K.

    1986-01-01

    The full-frame bit-allocation algorithm for radiological image compression can achieve an acceptable compression ratio as high as 30:1. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transformed space with pixel depth kept accountable by a bit-allocation table. The cosine transform hardware design took an expandable modular approach based on the VME bus system with a maximum data transfer rate of 48 Mbytes/sec and a microprocessor (Motorola 68000 family). The modules are cascadable and microprogrammable to perform 1,024-point butterfly operations. A total of 18 stages would be required for transforming a 1,000 x 1,000 image. Multiplicative constants and addressing sequences are to be software loaded into the parameter buffers of each stage prior to streaming data through the processor stages. The compression rate for 1K x 1K images is expected to be faster than one image per sec

  18. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
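
    A minimal sketch of the genome-to-image transformation is given below, under the assumption of a fixed two-bits-per-base encoding and a chosen bitmap width; the published CoGI tool's exact encoding, its reference-selection step and its rectangular partition coder are not reproduced.

```python
# Illustrative sketch: map a genomic sequence to a 2-D binary bitmap.
import numpy as np

BASE_BITS = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}  # assumed encoding

def genome_to_bitmap(sequence, width=1024):
    bits = [b for base in sequence if base in BASE_BITS for b in BASE_BITS[base]]
    # Pad with zeros so the bit string fills whole rows of the bitmap.
    pad = (-len(bits)) % width
    bits.extend([0] * pad)
    return np.array(bits, dtype=np.uint8).reshape(-1, width)
```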

  19. MR imaging of medullary compression due to vertebral metastases

    International Nuclear Information System (INIS)

    Dooms, G.C.; Mathurin, P.; Maldague, B.; Cornelis, G.; Malghem, J.; Demeure, R.

    1987-01-01

    A prospective study was performed to assess the value of MR imaging for demonstrating medullary compression due to vertebral metastases in cancer patients clinically suspected of presenting with that complication. Twenty-five consecutive unselected patients were studied, and the MR imaging findings were confirmed by myelography, CT, and/or surgical and autopsy findings for each patient. The MR examinations were performed with a superconducting magnet (Philips Gyroscan S15) operating at 0.5-T. MR imaging demonstrated the metastases (single or multiple) mainly on T1-weighted images (TR = 0.45 sec and TE = 20 msec). Soft-tissue tumoral mass and/or deformity of a vertebral body secondary to metastasis, compressing the spinal cord, was equally demonstrated on T1- and heavily T2-weighted images (TR = 1.65 sec and TE = 100 msec). In the sagittal plane, MR imaging demonstrated the exact level of the compression (one or multiple levels) and its full extent. In conclusion, MR is the first imaging modality for studying cancer patients with clinically suspected medullary compression and obviates the need for more invasive procedures.

  20. EBLAST: an efficient high-compression image transformation 3. application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  1. Block-Based Compressed Sensing for Neutron Radiation Image Using WDFB

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2015-01-01

    Full Text Available An ideal compression method for neutron radiation images should have a high compression ratio while keeping the details of the original image. Compressed sensing (CS), which can break through the restrictions of the sampling theorem, is likely to offer an efficient compression scheme for the neutron radiation image. Combining the wavelet transform with directional filter banks, a novel nonredundant multiscale geometry analysis transform named Wavelet Directional Filter Banks (WDFB) is constructed and applied to represent the neutron radiation image sparsely. Then, the block-based CS technique is introduced and a high performance CS scheme for neutron radiation images is proposed. By performing a two-step iterative shrinkage algorithm, the L1 norm minimization problem is solved to reconstruct the neutron radiation image from random measurements. The experiment results demonstrate that the scheme not only clearly improves the quality of the reconstructed image but also retains more details of the original image.

  2. Evaluation of fingerprint deformation using optical coherence tomography

    Science.gov (United States)

    Gutierrez da Costa, Henrique S.; Maxey, Jessica R.; Silva, Luciano; Ellerbee, Audrey K.

    2014-02-01

    Biometric identification systems have important applications to privacy and security. The most widely used of these, print identification, is based on imaging patterns present in the fingers, hands and feet that are formed by the ridges, valleys and pores of the skin. Most modern print sensors acquire images of the finger when pressed against a sensor surface. Unfortunately, this pressure may result in deformations, characterized by changes in the sizes and relative distances of the print patterns, and such changes have been shown to negatively affect the performance of fingerprint identification algorithms. Optical coherence tomography (OCT) is a novel imaging technique that is capable of imaging the subsurface of biological tissue. Hence, OCT may be used to obtain images of subdermal skin structures from which one can extract an internal fingerprint. The internal fingerprint is very similar in structure to the commonly used external fingerprint and is of increasing interest in investigations of identity fraud. We proposed and tested metrics based on measurements calculated from external and internal fingerprints to evaluate the amount of deformation of the skin. Such metrics were used to test hypotheses about the differences of deformation between the internal and external images, variations with the type of finger and location inside the fingerprint.

  3. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P ... successful application of wavelets in image compression ... b) Soft threshold: in this case, all the coefficients x ... [8] http://www.jpeg.org Official site of the Joint Photographic Experts Group.

  4. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    Science.gov (United States)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  5. Medical image compression by using three-dimensional wavelet transformation

    International Nuclear Information System (INIS)

    Wang, J.; Huang, H.K.

    1996-01-01

    This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within two-dimensional (2-D) slices and then a second filter bank on the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices. The pixel distances within a slice are normally less than 1 mm and the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors use various filter banks in the slice direction and compare the compression results. The results from the 12 selected MR and CT image sets at various slice thicknesses show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set which has 1 mm slice distance. Compared with 2-D wavelet compression, compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal to noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance.
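
    A minimal sketch of the separable transform described above follows, assuming a biorthogonal 9/7-type wavelet ('bior4.4') within slices and a Haar transform along the slice direction, with only one decomposition level shown; the nonuniform multi-level scheme of the paper is not reproduced.

```python
# Sketch: 2-D wavelet within each slice, then a Haar transform across slices.
import numpy as np
import pywt

def separable_3d_wavelet(volume, in_slice_wavelet='bior4.4'):
    """volume: 3-D array indexed as (slice, row, column)."""
    # 2-D transform within each slice -> (cA, (cH, cV, cD)) per slice.
    per_slice = [pywt.dwt2(volume[k], in_slice_wavelet) for k in range(volume.shape[0])]
    # Stack the approximation subbands and apply a Haar transform along slices.
    approx_stack = np.stack([cA for cA, _ in per_slice], axis=0)
    low, high = pywt.dwt(approx_stack, 'haar', axis=0)
    return per_slice, low, high
```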

  6. Single exposure optically compressed imaging and visualization using random aperture coding

    Energy Technology Data Exchange (ETDEWEB)

    Stern, A [Electro Optical Unit, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Rivenson, Yair [Department of Electrical and Computer Engineering, Ben Gurion University of the Negev, Beer-Sheva 84105 (Israel); Javidi, Bahrain [Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157 (United States)], E-mail: stern@bgu.ac.il

    2008-11-01

    The common approach in digital imaging follows the sample-then-compress framework. According to this approach, in the first step as many pixels as possible are captured and in the second step the captured image is compressed by digital means. The recently introduced theory of compressed sensing provides the mathematical foundation necessary to combine these two steps in a single one, that is, to compress the information optically before it is recorded. In this paper we overview and extend an optical implementation of compressed sensing theory that we have recently proposed. With this new imaging approach the compression is accomplished inherently in the optical acquisition step. The primary feature of this imaging approach is a randomly encoded aperture realized by means of a random phase screen. The randomly encoded aperture implements random projection of the object field in the image plane. Using a single exposure, a randomly encoded image is captured which can be decoded by proper decoding algorithm.

  7. A JPEG backward-compatible HDR image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards in evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, the backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms were developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of perceived quality of the tone-mapped LDR images on environmental parameters and image content. Based on the results of subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward compatible manner to also deal with HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to the state of the art in HDR image compression.

  8. Dictionary Approaches to Image Compression and Reconstruction

    Science.gov (United States)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
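
    To make the dictionary idea concrete, the following is a small matching-pursuit (MP) sketch over a hypothetical overcomplete cosine dictionary; the paper's wavelet-packet dictionary and the BP/BOB/MOF variants are not reproduced here, and all names and sizes below are illustrative assumptions.

    import numpy as np

    def overcomplete_cosine_dictionary(n, n_atoms):
        # More atoms than signal dimensions -> overcomplete; columns are normalized.
        t = np.arange(n)[:, None]
        k = np.arange(n_atoms)[None, :]
        D = np.cos(np.pi * (t + 0.5) * k / n_atoms)
        return D / np.linalg.norm(D, axis=0)

    def matching_pursuit(x, D, n_nonzero=8):
        residual, coeffs = x.astype(float).copy(), np.zeros(D.shape[1])
        for _ in range(n_nonzero):
            corr = D.T @ residual                  # correlate residual with every atom
            j = int(np.argmax(np.abs(corr)))       # pick the best-matching atom
            coeffs[j] += corr[j]
            residual -= corr[j] * D[:, j]          # peel that atom off the residual
        return coeffs, residual

    patch = np.random.rand(64)                      # stand-in for an image patch
    D = overcomplete_cosine_dictionary(64, 256)
    c, r = matching_pursuit(patch, D, n_nonzero=12)
    print(np.count_nonzero(c), "atoms used, residual norm", round(float(np.linalg.norm(r)), 4))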

  9. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

    Differences in WSI file sizes of scanned images deemed “visually lossless” were significant. If we set the Hamamatsu Nanozoomer .NDPI file size (using its default “jpeg80” quality) as 100%, the size of a “visually lossless” JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs at their default settings) yielded similar results. A further optimization of JPEG2000 was done by treating the empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000. Variation was due to the proportion of empty slide area on the scan. We anticipate that wavelet-based image compression methods, such as JPEG2000, have a significant advantage in saving storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, the absolute cost savings can be substantial.

  10. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data. (author)
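
    The block structure above (prediction, residual calculation, entropy coding) can be illustrated with a toy sketch in which a simple neighbour-average predictor stands in for the paper's neural-network prediction block and zlib stands in for its entropy encoder; the point is only that a good predictor leaves a small residual that codes compactly.

    import numpy as np
    import zlib

    def predict(img):
        # Predict each pixel from its left and upper neighbours (causal, so decodable).
        pred = np.zeros(img.shape, dtype=np.int32)
        pred[1:, 1:] = (img[1:, :-1].astype(np.int32) + img[:-1, 1:].astype(np.int32)) // 2
        return pred

    def residual_code(img):
        residual = img.astype(np.int32) - predict(img)       # small values for smooth images
        return zlib.compress(residual.astype(np.int16).tobytes(), level=9)

    img = np.add.outer(np.arange(256), np.arange(256)).astype(np.uint8)  # smooth stand-in image
    raw = zlib.compress(img.tobytes(), level=9)
    print(len(raw), "bytes direct vs", len(residual_code(img)), "bytes prediction + residual")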

  11. Pornographic image recognition and filtering using incremental learning in compressed domain

    Science.gov (United States)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate while requiring less recognition time in the compressed domain.

  12. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  13. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources, such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  14. Cloud solution for histopathological image analysis using region of interest based compression.

    Science.gov (United States)

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

    Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole slide image contains many multi-resolution images stored in a pyramidal structure, with the highest resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression on this region and lossy compression on the empty regions is proposed in this paper. The resulting compression ratio, along with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the cloud.
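
    A minimal sketch of the region-of-interest idea in this record, assuming Pillow and NumPy: the tissue is located by simple intensity thresholding, kept in a lossless (PNG) layer, and the near-white background is pushed into a heavily compressed lossy (JPEG) layer. The threshold and formats are illustrative choices, not the authors' pipeline.

    import io
    import numpy as np
    from PIL import Image

    def roi_compress(rgb_array, white_threshold=230):
        gray = rgb_array.mean(axis=2)
        tissue_mask = gray < white_threshold          # background on slides is near-white

        background = rgb_array.copy()
        background[tissue_mask] = 255                 # blank out tissue in the lossy layer
        lossy = io.BytesIO()
        Image.fromarray(background).save(lossy, format="JPEG", quality=30)

        tissue = rgb_array.copy()
        tissue[~tissue_mask] = 0                      # keep only tissue in the lossless layer
        lossless = io.BytesIO()
        Image.fromarray(tissue).save(lossless, format="PNG", optimize=True)
        return lossy.getvalue(), lossless.getvalue(), tissue_mask

    if __name__ == "__main__":
        demo = np.full((512, 512, 3), 255, dtype=np.uint8)
        demo[100:300, 150:350] = np.random.randint(80, 200, (200, 200, 3), dtype=np.uint8)
        lossy_bytes, lossless_bytes, _ = roi_compress(demo)
        print(len(lossy_bytes), "bytes lossy background,", len(lossless_bytes), "bytes lossless tissue")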

  15. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    Full Text Available With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is completed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. Compressed images are received from the ordinary nodes and transmitted to the station by the cluster head node, and the station performs the image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme achieves higher quality of recovered images and lower total node energy consumption. It is beneficial for reducing the burden of energy consumption and prolonging the life of the whole network system, which has great significance for practical applications of WMSNs.
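
    A small sketch of the NMF step, using scikit-learn's NMF as a stand-in for whatever factorization the scheme implements on the ordinary nodes: the non-negative pixel matrix is factored into two thin matrices whose combined size is much smaller than the image, and the sink reconstructs the image from their product. Rank and sizes are illustrative.

    import numpy as np
    from sklearn.decomposition import NMF

    img = np.random.rand(128, 128)                # stand-in for a captured image block
    rank = 16                                     # compression knob: smaller rank -> fewer values to send

    model = NMF(n_components=rank, init="nndsvda", max_iter=400)
    W = model.fit_transform(img)                  # 128 x 16
    H = model.components_                         # 16 x 128
    reconstruction = W @ H                        # performed at the sink / station

    sent_values = W.size + H.size
    mse = np.mean((img - reconstruction) ** 2)
    print(f"values sent: {sent_values} vs {img.size}, MSE: {mse:.5f}")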

  16. High speed fluorescence imaging with compressed ultrafast photography

    Science.gov (United States)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition-rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescence lifetime imaging of cells and microspheres.

  17. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented on a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and the reconstruction is done on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
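
    The division of labour described above can be illustrated with a tiny compressive-sensing sketch: the on-board step is a single random matrix multiplication, and the ground step recovers the sparse signal with an L1 solver (scikit-learn's Lasso here). The sizes and solver are illustrative assumptions, not the project's actual pipeline.

    import numpy as np
    from sklearn.linear_model import Lasso

    n, m = 256, 96                                 # signal length, number of measurements (m << n)
    rng = np.random.default_rng(0)

    x = np.zeros(n)                                # a sparse stand-in "scene"
    x[rng.choice(n, 10, replace=False)] = rng.normal(size=10)

    A = rng.normal(size=(m, n)) / np.sqrt(m)       # random measurement matrix (on-board step)
    y = A @ x                                      # compressed measurements sent to ground

    solver = Lasso(alpha=1e-3, max_iter=50_000)    # ground-side sparse reconstruction
    solver.fit(A, y)
    x_hat = solver.coef_
    print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))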

  18. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    Science.gov (United States)

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose to patients can be reduced by many methods, and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with those of conventional and prone compression in general radiography. An experimental design with a quantitative approach was used. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.

  19. COMPRESSING BIOMEDICAL IMAGE BY USING INTEGER WAVELET TRANSFORM AND PREDICTIVE ENCODER

    OpenAIRE

    Anushree Srivastava*, Narendra Kumar Chaurasia

    2016-01-01

    Image compression has become an important process in today’s world of information exchange. It helps in effective utilization of high speed network resources. Medical image compression has an important role in medical field because they are used for future reference of patients. Medical data is compressed in such a way so that the diagnostics capabilities are not compromised or no medical information is lost. Medical imaging poses the great challenge of having compression algorithms that redu...

  20. A medium resolution fingerprint matching system

    Directory of Open Access Journals (Sweden)

    Ayman Mohammad Bahaa-Eldin

    2013-09-01

    Full Text Available In this paper, a novel minutiae-based fingerprint matching system is proposed. The system is suitable for medium resolution fingerprint images obtained by low cost commercial sensors. The paper presents a new thinning algorithm, a new feature extraction and representation, and a novel feature distance matching algorithm. The proposed system is rotation and translation invariant and is suitable for complete or partial fingerprint matching. The proposed algorithms are optimized to be executed in low-resource environments, both in CPU power and in memory space. The system was evaluated using a standard fingerprint dataset, and good performance and accuracy were achieved under certain image quality requirements. In addition, the proposed system compared favorably with state-of-the-art systems.
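
    Two of the stages mentioned above, thinning and feature extraction, can be sketched as follows; scikit-image's skeletonize stands in for the paper's own thinning algorithm, and minutiae are located with the classic crossing-number rule (1 = ridge ending, 3 = bifurcation). The matching stage is not reproduced.

    import numpy as np
    from skimage.morphology import skeletonize

    def minutiae(binary_ridges):
        skel = skeletonize(binary_ridges).astype(np.uint8)
        endings, bifurcations = [], []
        for r in range(1, skel.shape[0] - 1):
            for c in range(1, skel.shape[1] - 1):
                if not skel[r, c]:
                    continue
                # Crossing number: half the 0/1 transitions around the 8-neighbourhood.
                nb = [skel[r-1, c-1], skel[r-1, c], skel[r-1, c+1], skel[r, c+1],
                      skel[r+1, c+1], skel[r+1, c], skel[r+1, c-1], skel[r, c-1]]
                cn = sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2
                if cn == 1:
                    endings.append((r, c))
                elif cn == 3:
                    bifurcations.append((r, c))
        return endings, bifurcations

    if __name__ == "__main__":
        ridges = np.zeros((64, 64), dtype=bool)
        ridges[32, 10:54] = True                    # toy "ridge" for demonstration
        e, b = minutiae(ridges)
        print(len(e), "endings,", len(b), "bifurcations")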

  1. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

    Full text: Compression algorithms have been mooted to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blinded and by consensus of two experienced nuclear medicine physicians using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image or the wavelet image compared with the original bitmap. Use of the high compression algorithms described had no significant impact on reporting reproducibility or subjective image quality of cardiac Sestamibi perfusion studies.

  2. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors, etc.). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable to, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.

  3. Moving image compression and generalization capability of constructive neural networks

    Science.gov (United States)

    Ma, Liying; Khorasani, Khashayar

    2001-03-01

    To date numerous techniques have been proposed to compress digital images to ease their storage and transmission over communication channels. Recently, a number of image compression algorithms using neural networks (NNs) have been developed. In particular, several constructive feed-forward neural networks (FNNs) have been proposed by researchers for image compression, and promising results have been reported. At the previous SPIE AeroSense conference in 2000, we proposed to use a constructive one-hidden-layer feedforward neural network (OHL-FNN) for compressing digital images. In this paper, we first investigate the generalization capability of the proposed OHL-FNN in the presence of additive noise during network training and/or generalization. Extensive experimental results for different scenarios are presented. It is revealed that the constructive OHL-FNN is not as robust to additive noise in the input image as expected. Next, the constructive OHL-FNN is applied to moving images (video sequences). The first, or another specified, frame in a moving image sequence is used to train the network. The remaining moving images that follow are then generalized/compressed by this trained network. Three types of correlation-like criteria measuring the similarity of any two images are introduced. The relationship between the generalization capability of the constructed net and the similarity of images is investigated in some detail. It is shown that the constructive OHL-FNN is promising even for changing images such as those extracted from a football game.

  4. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from an image sequence. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.

  5. Optical Acquisition, Image and Data Compression

    Science.gov (United States)

    1988-07-30

    [The abstract of this report is garbled in the source scan. The legible fragments concern syntactic pattern-recognition methods, for which fingerprint patterns are cited as a suitable example of pattern vectors, and Hough-transform (HT) analysis of texture images, illustrated with a French canvas texture (Brodatz plate No. 20), including a line-thinning preprocessing step.]

  6. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have the lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram technique, followed by the correction of color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low resolution images for testing purposes, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images with varying resolutions and has been compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off technique in terms of compression ratio, PSNR, which determines the quality of the image, and computational complexity. (author)

  7. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security areas.

  8. MR Fingerprinting for Rapid Quantitative Abdominal Imaging.

    Science.gov (United States)

    Chen, Yong; Jiang, Yun; Pahwa, Shivani; Ma, Dan; Lu, Lan; Twieg, Michael D; Wright, Katherine L; Seiberlich, Nicole; Griswold, Mark A; Gulani, Vikas

    2016-04-01

    To develop a magnetic resonance (MR) "fingerprinting" technique for quantitative abdominal imaging. This HIPAA-compliant study had institutional review board approval, and informed consent was obtained from all subjects. To achieve accurate quantification in the presence of marked B0 and B1 field inhomogeneities, the MR fingerprinting framework was extended by using a two-dimensional fast imaging with steady-state free precession, or FISP, acquisition and a Bloch-Siegert B1 mapping method. The accuracy of the proposed technique was validated by using agarose phantoms. Quantitative measurements were performed in eight asymptomatic subjects and in six patients with 20 focal liver lesions. A two-tailed Student t test was used to compare the T1 and T2 results in metastatic adenocarcinoma with those in surrounding liver parenchyma and healthy subjects. Phantom experiments showed good agreement with standard methods in T1 and T2 after B1 correction. In vivo studies demonstrated that quantitative T1, T2, and B1 maps can be acquired within a breath hold of approximately 19 seconds. T1 and T2 measurements were compatible with those in the literature. Representative values included the following: liver, 745 msec ± 65 (standard deviation) and 31 msec ± 6; renal medulla, 1702 msec ± 205 and 60 msec ± 21; renal cortex, 1314 msec ± 77 and 47 msec ± 10; spleen, 1232 msec ± 92 and 60 msec ± 19; skeletal muscle, 1100 msec ± 59 and 44 msec ± 9; and fat, 253 msec ± 42 and 77 msec ± 16, respectively. T1 and T2 in metastatic adenocarcinoma were 1673 msec ± 331 and 43 msec ± 13, respectively, significantly different from surrounding liver parenchyma relaxation times of 840 msec ± 113 and 28 msec ± 3 (P < .0001 and P < .01) and those in hepatic parenchyma in healthy volunteers (745 msec ± 65 and 31 msec ± 6, P < .0001 and P = .021, respectively). A rapid technique for quantitative abdominal imaging was developed that allows simultaneous quantification of multiple tissue

  9. Fingerprint and Face Identification for Large User Population

    Directory of Open Access Journals (Sweden)

    Teddy Ko

    2003-06-01

    Full Text Available The main objective of this paper is to present the state of the art of current biometric (fingerprint and face) technology, lessons learned during the investigative analysis performed to ascertain the benefits of using combined fingerprint and facial technologies, and recommendations for the use of currently available fingerprint and face identification technologies to achieve optimum identification performance in applications with a large user population. Prior fingerprint and face identification test studies have shown that identification accuracy is strongly dependent on the image quality of the biometric inputs. Recommended methodologies for ensuring the capture of acceptable-quality fingerprint and facial images of subjects are also presented in this paper.

  10. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
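
    A simplified sketch of the encoding side of steps (1)-(5), assuming SciPy is available: blockwise DCT, a crude reduction of the AC coefficients, differential coding of the DC terms, and zlib standing in for the arithmetic coder. The lookup-table and concurrent binary search recovery of the discarded high frequencies described in the paper is not reproduced.

    import numpy as np
    import zlib
    from scipy.fft import dctn

    def encode(img, block=8, kept_ac=21):               # keep roughly 1/3 of the 63 AC terms
        h, w = img.shape
        dc, ac = [], []
        for r in range(0, h, block):
            for c in range(0, w, block):
                coeffs = dctn(img[r:r + block, c:c + block].astype(float), norm="ortho").flatten()
                dc.append(coeffs[0])
                # Crude stand-in for zig-zag ordering: keep only the first AC coefficients.
                ac.append(np.round(coeffs[1:1 + kept_ac]))
        dc_delta = np.diff(np.round(dc), prepend=0.0)    # differential coding of the DC components
        payload = np.concatenate([dc_delta, np.concatenate(ac)]).astype(np.int16)
        return zlib.compress(payload.tobytes(), level=9)

    img = np.add.outer(np.arange(128), np.arange(128)).astype(np.uint8)  # smooth test image
    print(len(encode(img)), "bytes vs", img.size, "bytes raw")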

  11. Longitudinal study of fingerprint recognition.

    Science.gov (United States)

    Yoon, Soweon; Jain, Anil K

    2015-07-14

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.

  12. Online fingerprint verification.

    Science.gov (United States)

    Upendra, K; Singh, S; Kumar, V; Verma, H K

    2007-01-01

    As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. With an increasing emphasis on emerging automatic personal identification applications, fingerprint-based identification is becoming more popular. The most widely used fingerprint representation is the minutiae-based representation. The main drawback of this representation is that it does not utilize a significant component of the rich discriminatory information available in fingerprints. Local ridge structures cannot be completely characterized by minutiae. Also, it is difficult to quickly match two fingerprint images containing different numbers of unregistered minutiae points. In this study a filter-bank-based representation, which eliminates these weaknesses, is implemented and the overall performance of the developed system is tested. The results have shown that this system can be used effectively for secure online verification applications.

  13. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  14. Diffusion tensor imaging in spinal cord compression

    International Nuclear Information System (INIS)

    Wang, Wei; Qin, Wen; Hao, Nanxin; Wang, Yibin; Zong, Genlin

    2012-01-01

    Background: Although diffusion tensor imaging has been successfully applied in brain research for decades, several main difficulties have hindered its extended utilization in spinal cord imaging. Purpose: To assess the feasibility and clinical value of diffusion tensor imaging and tractography for evaluating chronic spinal cord compression. Material and Methods: Single-shot spin-echo echo-planar DT sequences were scanned in 42 spinal cord compression patients and 49 healthy volunteers. The mean values of the apparent diffusion coefficient and fractional anisotropy were measured in regions of interest at the cervical and lower thoracic spinal cord. The patients were divided into two groups according to the high signal on T2WI (the SCC-HI group and the SCC-nHI group, for with or without high signal). A one-way ANOVA was used. Diffusion tensor tractography was used to visualize the morphological features of normal and impaired white matter. Results: There were no statistically significant differences in the apparent diffusion coefficient and fractional anisotropy values between the different spinal cord segments of the normal subjects. All of the patients in the SCC-HI group had increased apparent diffusion coefficient values and decreased fractional anisotropy values at the lesion level compared to the normal controls. However, there were no statistically significant diffusion index differences between the SCC-nHI group and the normal controls. In the diffusion tensor imaging maps, the normal spinal cord sections were depicted as fiber tracts that were color-encoded to a cephalocaudal orientation. The diffusion tensor images were compressed to different degrees in all of the patients. Conclusion: Diffusion tensor imaging and tractography are promising methods for visualizing spinal cord tracts and can provide additional information in clinical studies of spinal cord compression.

  15. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system which incorporates the sensitivities of the HVS coded with SPIHT quantization is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect the characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. The experiment is done on both monochrome and color DICOM images of MRI, CT, OT, and CR, as well as natural and benchmark images. Images reconstructed through our technique showed improvement in visual quality and the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance, perceptually and quantitatively, when compared to the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. The results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time when compared with JPEG2000 (Kakadu).

  16. Cellular automata codebooks applied to compact image compression

    Directory of Open Access Journals (Sweden)

    Radu DOGARU

    2006-12-01

    Full Text Available Emergent computation in semi-totalistic cellular automata (CA) is used to generate a set of basis vectors (a codebook). Such codebooks are convenient for simple and circuit-efficient compression schemes based on binary vector quantization, applied to the bitplanes of any monochrome or color image. Encryption is also naturally included when using these codebooks. Natural images require less than 0.5 bits per pixel (bpp) while the quality of the reconstructed images is comparable with traditional compression schemes. The proposed scheme is attractive for low-power, sensor-integrated applications.

  17. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    Science.gov (United States)

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.

  18. Enhance Criminal Investigation by Proposed Fingerprint Recognition System

    International Nuclear Information System (INIS)

    Hashem, S.H.; Maolod, A.T.; Mohammad, A.A.

    2014-01-01

    Law enforcement officers and forensic specialists spend hours thinking about how fingerprints solve crimes, and trying to find, collect, record and compare these unique identifiers that can connect a specific person to a specific crime. These individuals understand that a basic human feature that most people take for granted can be one of the most effective tools in crime solving. This research builds on our previous work to make it applicable to the criminal investigation field. The present study aims to strengthen fingerprint-based criminal investigation against alterations made intentionally to criminals' fingerprints. This is done by proposing a strategy that derives an optimal fingerprint feature vector for each person, which is then stored in a database for future matching. The strategy for selecting the optimal fingerprint feature vector considers 10 fingerprints for each criminal (taken at different times and under different circumstances, such as the finger being dirty, wet, trembling, etc.). The proposal begins by applying a proposed enrollment to all 10 fingerprints of each criminal; the enrollment includes the following sequence of steps: preprocessing of each of the 10 images, including enhancement, then two levels of feature extraction (the first level extracts arches, whorls, and loops, while the second extracts minutiae), and finally a proposed genetic algorithm that selects the optimal fingerprint, the master fingerprint, which in our view represents the most universal image containing the most detailed features for recognition. The master fingerprint forms the feature vector stored in the database. Matching is then performed by testing fingerprints against those stored in the database, while the performance of criminal fingerprint investigation is measured by calculating the False Reject Rate (FRR) and False Accept Rate (FAR) for the traditional system and the proposed one in the criminal detection field. The

  19. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Full Text Available Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the earth's surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data rate of the new-generation sensors is making the necessity of obtaining higher compression ratios more critical, making it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for the lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  20. Dynamic CT perfusion image data compression for efficient parallel processing.

    Science.gov (United States)

    Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A

    2016-03-01

    The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
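
    The two properties emphasized above, exploiting temporal redundancy and keeping random access to image elements, can be sketched with plain NumPy: per-frame differences keep the stored values small (and hence compressible), while any voxel's value at any time point can still be recovered without touching unrelated voxels. This is only an illustration, not the authors' GPU implementation.

    import numpy as np

    def compress_ctp(series):
        """series: (time, rows, cols) int16 array -> (first frame, per-frame deltas)."""
        deltas = np.diff(series.astype(np.int16), axis=0)   # temporal redundancy: deltas stay small
        return series[0].copy(), deltas

    def read_voxel(first, deltas, t, r, c):
        # Random access: only this voxel's delta history is touched.
        return int(first[r, c]) + int(deltas[:t, r, c].sum())

    rng = np.random.default_rng(1)
    base = rng.integers(0, 200, (64, 64), dtype=np.int16)
    series = np.stack([base + t for t in range(20)]).astype(np.int16)  # slowly varying "perfusion"
    first, deltas = compress_ctp(series)
    assert read_voxel(first, deltas, 7, 10, 10) == int(series[7, 10, 10])
    print("delta range:", int(deltas.min()), "to", int(deltas.max()))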

  1. Fingerprint Change: Not Visible, But Tangible.

    Science.gov (United States)

    Negri, Francesca V; De Giorgi, Annamaria; Bozzetti, Cecilia; Squadrilli, Anna; Petronini, Pier Giorgio; Leonardi, Francesco; Bisogno, Luigi; Garofano, Luciano

    2017-09-01

    Hand-foot syndrome, a chemotherapy-induced cutaneous toxicity, can cause an alteration in fingerprints causing a setback for cancer patients due to the occurrence of false rejections. A colon cancer patient was fingerprinted after not having been able to use fingerprint recognition devices after 6 months of adjuvant chemotherapy. The fingerprint images were digitally processed to improve fingerprint definition without altering the papillary design. No evidence of skin toxicity was present. Two months later, the situation returned to normal. The fingerprint evaluation conducted on 15 identification points highlighted the quantitative and qualitative fingerprint alteration details detected after the end of chemotherapy and 2 months later. Fingerprint alteration during chemotherapy has been reported, but to our knowledge, this particular case is the first ever reported without evident clinical signs. Alternative fingerprint identification methods as well as improved biometric identification systems are needed in case of unexpected situations. © 2017 American Academy of Forensic Sciences.

  2. Effect of CT digital image compression on detection of coronary artery calcification

    International Nuclear Information System (INIS)

    Zheng, L.M.; Sone, S.; Itani, Y.; Wang, Q.; Hanamura, K.; Asakura, K.; Li, F.; Yang, Z.G.; Wang, J.C.; Funasaka, T.

    2000-01-01

    Purpose: To test the effect of digital compression of CT images on the detection of small linear or spotted high-attenuation lesions such as coronary artery calcification (CAC). Material and methods: Fifty cases with and 50 without CAC were randomly selected from a population that had undergone spiral CT of the thorax for lung cancer screening. CT image data were compressed using JPEG (Joint Photographic Experts Group) or wavelet algorithms at ratios of 10:1, 20:1 or 40:1. Five radiologists reviewed the uncompressed and compressed images on a cathode-ray tube. Observer performance was evaluated with receiver operating characteristic analysis. Results: CT images compressed at a ratio as high as 20:1 were acceptable for primary diagnosis of CAC. There was no significant difference in the detection accuracy for CAC between the JPEG and wavelet algorithms at compression ratios up to 20:1. CT images were more vulnerable to image blurring with wavelet compression at relatively lower ratios, while 'blocking' artifacts occurred with JPEG compression at relatively higher ratios. Conclusion: JPEG and wavelet algorithms allow compression of CT images without compromising their diagnostic value at ratios up to 20:1 when detecting small linear or spotted high-attenuation lesions such as CAC, and there was no difference between the two algorithms in diagnostic accuracy.

  3. Observer detection of image degradation caused by irreversible data compression processes

    Science.gov (United States)

    Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David

    1991-05-01

    Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was designed to test whether induced errors were detected. The study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.

  4. A Multiresolution Image Completion Algorithm for Compressing Digital Color Images

    Directory of Open Access Journals (Sweden)

    R. Gomathi

    2014-01-01

    Full Text Available This paper introduces a new framework for image coding that uses an image inpainting method. In the proposed algorithm, the input image is subjected to image analysis to remove some of the portions purposefully. At the same time, edges are extracted from the input image and passed to the decoder in compressed form. The edges transmitted to the decoder act as assistant information and help the inpainting process fill the missing regions at the decoder. Textural synthesis and a new shearlet inpainting scheme based on the theory of the p-Laplacian operator are proposed for image restoration at the decoder. Shearlets have been mathematically proven to represent distributed discontinuities such as edges better than traditional wavelets and are a suitable tool for edge characterization. This novel shearlet p-Laplacian inpainting model can effectively reduce the staircase effect of the Total Variation (TV) inpainting model while still preserving edges as well as the TV model does. In the proposed scheme, a neural network is employed to enhance the compression ratio for image coding. Test results are compared with the JPEG 2000 and H.264 intra-coding algorithms. The results show that the proposed algorithm works well.

  5. RSA Key Development Using Fingerprint Image on Text Message

    Science.gov (United States)

    Rahman, Sayuti; Triana, Indah; Khairani, Sumi; Yasir, Amru; Sundari, Siti

    2017-12-01

    Along with the development of technology today, people can easily access information and communicate through various media, including the Internet. However, messages sent as plain text are not guaranteed to be secure: a sender may wish to send a secret message to a recipient, yet the message can become known to unauthorized people, defeating its purpose of being known only to the recipient. It is therefore necessary to secure the message using the RSA algorithm, with a fingerprint image used to generate the RSA key. This is a solution that enriches the security of a message; the fingerprint images first need to be processed by feature extraction before the RSA keys are generated.
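
    A toy sketch of the key-generation idea, which is not the paper's construction and not cryptographically vetted: the fingerprint image bytes are hashed, the digest seeds a deterministic PRNG, and the RSA primes are searched from that seed (SymPy's nextprime is assumed for brevity; the gcd(e, phi) check is omitted).

    import hashlib
    import random
    from sympy import nextprime

    def rsa_from_fingerprint(image_bytes, bits=512):
        seed = int.from_bytes(hashlib.sha256(image_bytes).digest(), "big")
        rng = random.Random(seed)                        # deterministic for the same image
        p = nextprime(rng.getrandbits(bits // 2) | (1 << (bits // 2 - 1)))
        q = nextprime(rng.getrandbits(bits // 2) | (1 << (bits // 2 - 1)))
        n, phi, e = p * q, (p - 1) * (q - 1), 65537
        d = pow(e, -1, phi)                              # modular inverse (Python 3.8+); assumes gcd(e, phi) == 1
        return (n, e), (n, d)

    public, private = rsa_from_fingerprint(b"stand-in for raw fingerprint pixel data")
    message = 42
    cipher = pow(message, public[1], public[0])
    print(pow(cipher, private[1], private[0]) == message)   # True if the round trip works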

  6. Influence of Skin Diseases on Fingerprint Recognition

    Science.gov (United States)

    Drahansky, Martin; Dolezel, Michal; Urbanek, Jaroslav; Brezinova, Eva; Kim, Tai-hoon

    2012-01-01

    There are many people who suffer from some of the skin diseases. These diseases have a strong influence on the process of fingerprint recognition. People with fingerprint diseases are unable to use fingerprint scanners, which is discriminating for them, since they are not allowed to use their fingerprints for the authentication purposes. First in this paper the various diseases, which might influence functionality of the fingerprint-based systems, are introduced, mainly from the medical point of view. This overview is followed by some examples of diseased finger fingerprints, acquired both from dactyloscopic card and electronic sensors. At the end of this paper the proposed fingerprint image enhancement algorithm is described. PMID:22654483

  7. Influence of Skin Diseases on Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Martin Drahansky

    2012-01-01

    Full Text Available There are many people who suffer from some of the skin diseases. These diseases have a strong influence on the process of fingerprint recognition. People with fingerprint diseases are unable to use fingerprint scanners, which is discriminating for them, since they are not allowed to use their fingerprints for the authentication purposes. First in this paper the various diseases, which might influence functionality of the fingerprint-based systems, are introduced, mainly from the medical point of view. This overview is followed by some examples of diseased finger fingerprints, acquired both from dactyloscopic card and electronic sensors. At the end of this paper the proposed fingerprint image enhancement algorithm is described.

  8. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
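
    As a concrete illustration of the compressed-sensing recovery the abstract builds on, the following Python sketch reconstructs a sparse signal from a few random projections with orthogonal matching pursuit. It assumes scikit-learn, and the signal is sparse in the identity basis, unlike real ghost-imaging data which needs a sparsifying transform.

    # Minimal compressed-sensing sketch: recover a sparse signal from a few random
    # projections with orthogonal matching pursuit (scikit-learn).
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 8                 # signal length, measurements, sparsity

    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x                                      # m << n measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(A, y)
    x_hat = omp.coef_

    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))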

  9. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available Abstract - The storage and transmission of imagery have become a more challenging task in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, one that reduces the requirements on storage media and transmission bandwidth. Compression techniques must not only perform well but also converge quickly in order to be applicable to real-time applications. Various image compression algorithms have been proposed, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed. The use of existing works is also highlighted, for developing novel techniques that address the challenging task of image storage and transmission in multimedia applications.

  10. A Fingerprint Image Encryption Scheme Based on Hyperchaotic Rössler Map

    Directory of Open Access Journals (Sweden)

    F. Abundiz-Pérez

    2016-01-01

    Full Text Available Currently, biometric identifiers are used to identify or authenticate users in a biometric system in order to increase the security of access control systems. Nevertheless, there are several attacks on the biometric system that aim to steal and recover the user's biometric trait. One of the most powerful attacks is extracting the fingerprint pattern while it is transmitted over communication lines between modules. In this paper, we present a novel fingerprint image encryption scheme based on the hyperchaotic Rössler map to provide high security and secrecy for the user's biometric trait, avoid identity theft, and increase the robustness of the biometric system. A complete security analysis is presented to justify the secrecy of the biometric trait when using our proposed scheme: at the statistical level it achieves an NPCR of 100%, low correlation, and uniform histograms. Therefore, it can be used in secure biometric access control systems.
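
    A short Python sketch of the NPCR (Number of Pixel Change Rate) figure quoted above, assuming NumPy arrays of equal size; the encryption routine itself is not part of the sketch.

    # Sketch of the NPCR metric: the fraction of positions at which two cipher
    # images differ, in percent (ideal value close to 100%).
    import numpy as np

    def npcr(cipher1, cipher2):
        """NPCR between two equal-sized cipher images."""
        diff = cipher1 != cipher2            # boolean map of changed pixels
        return 100.0 * np.count_nonzero(diff) / diff.size

    # Usage: encrypt an image twice, with one plaintext pixel flipped, and compare.
    # print(npcr(encrypt(img), encrypt(img_one_pixel_changed)))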

  11. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression of a new efficient compression technique (the JIS system) using a neural network that the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment is performed on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and loss. For comparison, the same data are also compressed with lossless JPEG. The personal computer (PC) is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times more efficient than the usual compression methods, reducing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  12. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.

  13. Effect of Aging and Surface Interactions on the Diffusion of Endogenous Compounds in Latent Fingerprints Studied by Mass Spectrometry Imaging.

    Science.gov (United States)

    O'Neill, Kelly C; Lee, Young Jin

    2018-05-01

    The ability to determine the age of fingerprints would be immeasurably beneficial in criminal investigations. We explore the possibility of determining the age of fingerprints by analyzing various compounds as they diffuse from the ridges to the valleys of fingerprints using matrix-assisted laser desorption/ionization mass spectrometry imaging. The diffusion of two classes of endogenous fingerprint compounds, fatty acids and triacylglycerols (TGs), was studied in fresh and aged fingerprints on four surfaces. We expected higher molecular weight TGs to diffuse more slowly than fatty acids and thus allow us to determine the age of older fingerprints. However, we found that interactions between endogenous compounds and the surface have a much stronger impact on diffusion than molecular weight. For example, diffusion of TGs is faster on hydrophilic plain glass or partially hydrophilic stainless steel surfaces than on a hydrophobic Rain-X treated surface. This result further complicates utilizing a diffusion model to age fingerprints. © 2017 American Academy of Forensic Sciences.

  14. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed according to the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals and, in particular, can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and then the spectra are cut and spliced into a composite spectrum by zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.
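
    A simplified Python sketch of the DCT "spectrum cutting" step, assuming SciPy: keep only a low-frequency corner of the 2-D DCT coefficients and reconstruct. The paper's zigzag splicing of multiple spectra and the chaotic encryption stage are not reproduced.

    # Simplified sketch of DCT spectrum cutting: keep only a low-frequency block
    # of the 2-D DCT coefficients and reconstruct. Uses SciPy's dctn/idctn.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_compress(image, keep=0.25):
        """Keep a (keep*rows) x (keep*cols) low-frequency corner of the DCT."""
        spectrum = dctn(image.astype(float), norm="ortho")
        r = int(image.shape[0] * keep)
        c = int(image.shape[1] * keep)
        cut = np.zeros_like(spectrum)
        cut[:r, :c] = spectrum[:r, :c]          # discard high-frequency coefficients
        return idctn(cut, norm="ortho")

    # Usage: compressed = dct_compress(gray_image, keep=0.25)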

  15. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of fractal image compression (FIC) because it is time consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with another lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block. The results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the better results, namely reduced encoding time with acceptable values of both compression quality parameters, C.R. (compression ratio) and PSNR (image quality). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and the reconstructed image quality as good as possible.
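
    A Python sketch of the entropy screening idea, assuming NumPy: compute the Shannon entropy of each block so that a range block is compared only against domain blocks of similar entropy, shrinking the full-search domain pool. The tolerance and block handling are illustrative.

    # Sketch of entropy screening: compute the Shannon entropy of each image block
    # so that a range block is only compared against domain blocks of similar
    # entropy, shrinking the full-search domain pool.
    import numpy as np

    def block_entropy(block, bins=256):
        """Shannon entropy (bits) of an 8-bit grayscale block."""
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def candidate_domains(range_block, domain_blocks, tol=0.5):
        """Keep only domain blocks whose entropy is within tol bits of the range."""
        target = block_entropy(range_block)
        return [d for d in domain_blocks if abs(block_entropy(d) - target) <= tol]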

  16. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metric (IQM) methods, which heuristically estimate the perceived MTF, has supported the view that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  17. Image Quality Assessment for Different Wavelet Compression Techniques in a Visual Communication Framework

    Directory of Open Access Journals (Sweden)

    Nuha A. S. Alwan

    2013-01-01

    Full Text Available Images with subband coding and threshold wavelet compression are transmitted over a Rayleigh communication channel with additive white Gaussian noise (AWGN), after quantization and 16-QAM modulation. A comparison is made between these two types of compression using both mean square error (MSE) and structural similarity (SSIM) image quality assessment (IQA) criteria applied to the reconstructed image at the receiver. The two methods yielded comparable SSIM but different MSE measures. In this work, we justify our results, which support previous findings in the literature that the MSE between two images is not indicative of structural similarity or of the visibility of errors. It is found that it is difficult to reduce the pointwise errors in subband-compressed images (higher MSE). However, the compressed images provide comparable SSIM, or perceived quality, for both types of compression provided that the retained energy after compression is the same.
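
    A Python sketch of the two quality criteria compared above, assuming scikit-image.

    # Sketch of the two IQA criteria compared in the abstract: pointwise MSE and
    # structural similarity (SSIM), via scikit-image.
    from skimage.metrics import mean_squared_error, structural_similarity

    def compare_quality(reference, reconstructed):
        mse = mean_squared_error(reference, reconstructed)
        ssim = structural_similarity(reference, reconstructed,
                                     data_range=reference.max() - reference.min())
        return mse, ssim

    # Two reconstructions can have similar SSIM yet very different MSE,
    # which is the effect the abstract reports for subband vs. threshold coding.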

  18. High Resolution Ultrasonic Method for 3D Fingerprint Representation in Biometrics

    Science.gov (United States)

    Maev, R. Gr.; Bakulin, E. Y.; Maeva, E. Y.; Severin, F. M.

    Biometrics is an important field which studies different possible ways of personal identification. Among a number of existing biometric techniques, fingerprint recognition stands alone, because a very large database of fingerprints has already been acquired. Also, fingerprints are important evidence that can be collected at a crime scene. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. The ultrasonic method of fingerprint imaging was originally introduced over a decade ago as the mapping of the reflection coefficient at the interface between the finger and a covering plate, and it has shown very good reliability, being free from the imperfections of the previous two methods. This work introduces a newer development of ultrasonic fingerprint imaging, focusing on the imaging of the internal structures of fingerprints (including sweat pores) with a raw acoustic resolution of about 500 dpi (0.05 mm), using a scanning acoustic microscope to obtain images and acoustic data in the form of a 3D data array. C-scans from different depths inside the fingerprint area of the fingers of several volunteers were obtained and showed good contrast of the ridges-and-valleys patterns and practically exact correspondence to standard ink-and-paper prints of the same areas. An important feature revealed in the acoustic images was the clear appearance of the sweat pores, which could provide an additional means of identification.

  19. FINGERPRINT MATCHING BASED ON PORE CENTROIDS

    Directory of Open Access Journals (Sweden)

    S. Malathi

    2011-05-01

    Full Text Available In recent years there has been exponential growth in the use of biometrics for user authentication applications. Automated Fingerprint Identification Systems have become a popular tool in many security and law enforcement applications. Most of these systems rely on minutiae (ridge ending and bifurcation) features. With the advancement in sensor technology, high resolution fingerprint images (1000 dpi) provide micro-level features (pores) that have proven to be useful for identification. In this paper, we propose a new strategy for fingerprint matching based on pores by reliably extracting the pore features. The extraction of pores is done by the marker-controlled watershed segmentation method, and the centroids of each pore are considered as feature vectors for matching two fingerprint images. Experimental results show that the proposed method has better performance, with lower false rates and higher accuracy.
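
    A Python sketch of pore extraction by marker-controlled watershed segmentation followed by centroid computation, assuming scikit-image and SciPy. The choice of markers (peaks of the distance transform of a binarized pore map) is an assumption; the abstract does not specify it.

    # Sketch of pore extraction by marker-controlled watershed and centroid
    # computation. The marker choice (distance-transform peaks) is an assumption.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.measure import regionprops
    from skimage.segmentation import watershed

    def pore_centroids(pore_mask):
        """Return centroids of pores in a binary pore mask (True = pore pixel)."""
        distance = ndi.distance_transform_edt(pore_mask)
        peaks = peak_local_max(distance, min_distance=3, labels=pore_mask.astype(int))
        markers = np.zeros_like(pore_mask, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = watershed(-distance, markers, mask=pore_mask)
        return [p.centroid for p in regionprops(labels)]

    # The centroid list is the feature vector set used for matching two prints.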

  20. An introduction to video image compression and authentication technology for safeguards applications

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1995-01-01

    Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals ever since the mid-1970's. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images

  1. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the smaller number of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of our proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system can be robust even with partial encryption and under brute-force attacks.

  2. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to reduce the coded file size; it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. It is demonstrated by experimental results that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  3. Multi-dimensional medical images compressed and filtered with wavelets

    International Nuclear Information System (INIS)

    Boyen, H.; Reeth, F. van; Flerackers, E.

    2002-01-01

    Full text: Using the standard wavelet decomposition methods, multi-dimensional medical images can be compressed and filtered by repeating the wavelet algorithm on 1D signals in an extra loop per extra dimension. In the non-standard decomposition for multi-dimensional images, the areas that must be zero-filled in the case of band- or notch-filters are more complex than geometric areas such as rectangles or cubes. Adding an additional dimension in this algorithm, up to 4D (e.g. a 3D beating heart), increases the geometric complexity of those areas even more. The aim of our study was to calculate the boundaries of the resulting complex geometric areas, so that we can use the faster non-standard decomposition to compress and filter multi-dimensional medical images. Because many 3D medical images taken by PET or SPECT cameras have only a few layers in the Z-dimension, and compressing images in a dimension with few voxels is usually not worthwhile, we provide a solution in which one can choose which dimensions will be compressed or filtered. With the proposal of non-standard decomposition on Daubechies' wavelets D2 to D20 by Steven Gollmer in 1992, 1D data can be compressed and filtered. Each additional level works only on the smoothed data, so the transformation time halves per extra level. Filtering is done by zero-filling a well-defined area after the wavelet transform and then performing the inverse transform. To be able to compress and filter up to 4D images with the faster non-standard wavelet decomposition method, we have investigated a new method for calculating the boundaries of the areas which must be zero-filled in the case of filtering. This is especially true for band- and notch-filtering. Contrary to the standard decomposition method, the areas are no longer rectangles in 2D or cubes in 3D or a row of cubes in 4D: they are rectangles expanded with a half-sized rectangle in the other direction for 2D, cubes expanded with half cubes in one and quarter cubes in the

  4. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    Science.gov (United States)

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    Considering the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image capturing and processing devices and algorithms, and advances in the development of novel stationary phases as well as various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. The obtained variables, such as the gray intensities of pixels along the solvent front, peak areas and mean peak values, were used as input data and compared to obtain the best classification models. Important steps in image analysis, namely baseline removal, denoising, target peak alignment and normalization, were pointed out. The numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Application of the NFIS (NIST Fingerprint Image Software) for the Extraction of Fingerprint Features

    Directory of Open Access Journals (Sweden)

    Noé Mosqueda Valadez

    2012-02-01

    Full Text Available This paper presents a description of fingerprints and their characteristics, as well as the extraction of their characteristic points by means of the NFIS (NIST Fingerprint Image Software) program developed by the NIST (National Institute of Standards and Technology) in conjunction with the FBI (Federal Bureau of Investigation), a description of some tools, and a general overview of an AFAS (Automatic Fingerprint Authentication System) and of an AFIS (Automatic Fingerprint Identification System).

  6. Subsurface Profile Mapping using 3-D Compressive Wave Imaging

    Directory of Open Access Journals (Sweden)

    Hazreek Z A M

    2017-01-01

    Full Text Available Geotechnical site investigation for subsurface profile mapping is commonly performed to provide valuable data for the design and construction stages, based on conventional drilling techniques. Past experience shows that drilling techniques, particularly the borehole method, suffer from limitations: they are expensive, time consuming and offer limited data coverage. Hence, this study performs subsurface profile mapping using 3-D compressive wave imaging in order to minimize those constraints of the conventional method. Field measurement and data analysis of the compressive wave (p-wave, vp) were performed using a seismic refraction survey (ABEM Terraloc MK 8, a 7 kg sledgehammer and 24 vertical geophones) and OPTIM (SeisOpt@Picker and SeisOpt@2D) software, respectively. The 3-D compressive wave distribution of the studied subsurface was then obtained using SURFER software. Based on the 3-D compressive wave image analyzed, it was found that the subsurface profile studied consists of three main layers representing top soil (vp = 376 - 600 m/s), weathered material (vp = 900 - 2600 m/s) and bedrock (vp > 3000 m/s). The thickness of each layer varied from 0 - 2 m (first layer), 2 - 20 m (second layer) and 20 m and over (third layer). Moreover, groundwater (vp = 1400 - 1600 m/s) starts to be detected at 2.0 m depth from the ground surface. This study has demonstrated that geotechnical site investigation data related to subsurface profiling can be obtained using 3-D compressive wave imaging. Furthermore, 3-D compressive wave imaging is performed on a non-destructive principle in ground exploration and is thus economical, less time consuming, offers large data coverage and is sustainable for the environment.

  7. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast and the figure of merit, in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can relate the compression outcomes and feature preservation characteristics to the choice of wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  8. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis

    Science.gov (United States)

    Cheng, Yezeng; Larin, Kirill V.

    2006-12-01

    Fingerprint recognition is one of the most widely used methods of biometrics. This method relies on the surface topography of a finger and, thus, is potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.

  9. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the on-board data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs the zero-padding technique, which also helps to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
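
    A Python sketch of one hybrid DWT-DCT step in the spirit of the abstract, assuming PyWavelets and SciPy: one DWT level followed by a DCT of the approximation subband. The zero-padding replacement for thresholding and quantization is not reproduced.

    # Sketch of a hybrid DWT-DCT cascade: one wavelet level, then a DCT of the
    # approximation (LL) subband. Only the forward transform chain is shown.
    import pywt
    from scipy.fft import dctn

    def hybrid_dwt_dct(image, wavelet="db2"):
        LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), wavelet)
        return dctn(LL, norm="ortho"), (LH, HL, HH)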

  10. Lossless compression of multispectral images using spectral information

    Science.gov (United States)

    Ma, Long; Shi, Zelin; Tang, Xusheng

    2009-10-01

    Multispectral images are available for different purposes due to developments in spectral imaging systems. The sizes of multispectral images are enormous, so the transmission and storage of these volumes of data require substantial time and memory resources. That is why compression algorithms must be developed. A salient property of multispectral images is that strong spectral correlation exists throughout almost all bands. This fact is successfully used to predict each band based on the previous bands. We propose to use spectral linear prediction and entropy coding with context modeling for encoding multispectral images. Linear prediction predicts the value of the next sample and computes the difference between the predicted value and the original value. This difference is usually small, so it can be encoded with fewer bits than the original value. The technique predicts each image band using a number of previous bands along the image spectrum. Each pixel is predicted using information provided by pixels in the previous bands at the same spatial position. As in JPEG-LS, the proposed coder represents the mapped residuals by using an adaptive Golomb-Rice code with context modeling. This residual coding is context adaptive, where the context used for the current sample is identified by a context quantization function of three gradients. Then, context-dependent Golomb-Rice code and bias parameters are estimated sample by sample. The proposed scheme was compared with three algorithms applied to the lossless compression of multispectral images, namely JPEG-LS, Rice coding, and JPEG2000. Simulation tests performed on AVIRIS images have demonstrated that the proposed compression scheme is suitable for multispectral images.
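
    A Python sketch of the (non-adaptive) Golomb-Rice coding of a mapped residual, of the kind referred to above; the context modeling and adaptive parameter estimation are omitted.

    # Sketch of Golomb-Rice coding of a nonnegative mapped prediction residual:
    # unary-coded quotient plus a k-bit remainder.
    def rice_encode(value, k):
        """Return the Rice code of `value` (>= 0) with parameter k, as a bit string."""
        quotient, remainder = value >> k, value & ((1 << k) - 1)
        return "1" * quotient + "0" + format(remainder, f"0{k}b")

    def map_residual(residual):
        """Map a signed residual to a nonnegative integer (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
        return 2 * residual if residual >= 0 else -2 * residual - 1

    # Example: map_residual(-3) == 5, and rice_encode(5, k=2) == "1001".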

  11. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the large number of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, target detection is approximately an order of magnitude faster on CS-MUSI data.
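
    A Python sketch of a standard spectral matched filter of the kind used for such target detection, assuming NumPy; the exact formulation used in the paper may differ.

    # Sketch of a spectral matched filter: pixels are scored by their correlation
    # with the target signature after whitening by the background covariance.
    import numpy as np

    def matched_filter_scores(cube, target):
        """cube: (rows, cols, bands) hyperspectral data; target: (bands,) signature."""
        pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
        w = np.linalg.solve(cov, target - mu)
        w /= (target - mu) @ w                      # normalize so the target scores 1
        scores = (pixels - mu) @ w
        return scores.reshape(cube.shape[:2])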

  12. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weighted fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  13. Image compression software for the SOHO LASCO and EIT experiments

    Science.gov (United States)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle and Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  14. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
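
    A Python sketch of the NCD computation for two linearized images, using zlib as the standard compressor C, where NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).

    # Sketch of the Normalized Compression Distance between two linearized images,
    # with zlib standing in for the compressor C (compressed length in bytes).
    import zlib

    def ncd(x_bytes, y_bytes):
        cx = len(zlib.compress(x_bytes))
        cy = len(zlib.compress(y_bytes))
        cxy = len(zlib.compress(x_bytes + y_bytes))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Usage with a row-major ("raster scan") linearization of two grayscale images:
    # d = ncd(img_a.tobytes(), img_b.tobytes())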

  15. Privacy protection schemes for fingerprint recognition systems

    Science.gov (United States)

    Marasco, Emanuela; Cukic, Bojan

    2015-05-01

    The deployment of fingerprint recognition systems has always raised concerns related to personal privacy. A fingerprint is permanently associated with an individual and, generally, it cannot be reset if compromised in one application. Given that fingerprints are not a secret, potential misuses besides personal recognition represent privacy threats and may lead to public distrust. Privacy mechanisms control access to personal information and limit the likelihood of intrusions. In this paper, image- and feature-level schemes for privacy protection in fingerprint recognition systems are reviewed. Storing only key features of a biometric signature can reduce the likelihood of biometric data being used for unintended purposes. In biometric cryptosystems and biometric-based key release, the biometric component verifies the identity of the user, while the cryptographic key protects the communication channel. In transformation-based approaches, only a transformed version of the original biometric signature is stored. Different applications can use different transforms. Matching is performed in the transformed domain, which enables the preservation of low error rates. Since such templates do not reveal information about individuals, they are referred to as cancelable templates. A compromised template can be re-issued using a different transform. At the image level, de-identification schemes can remove identifiers disclosed for objectives unrelated to the original purpose, while permitting other authorized uses of personal information. Fingerprint images can be de-identified by, for example, mixing fingerprints or removing the gender signature. In both cases, degradation of matching performance is minimized.

  16. High-speed reconstruction of compressed images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  17. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  18. Defense of fake fingerprint attacks using a swept source laser optical coherence tomography setup

    Science.gov (United States)

    Meissner, Sven; Breithaupt, Ralph; Koch, Edmund

    2013-03-01

    The most established technique for identification at biometric access control systems is the human fingerprint. While every human fingerprint is unique, fingerprints can be faked very easily by using thin-layer fakes. Because commercial fingerprint scanners use only a two-dimensional image acquisition of the finger surface, they can hardly differentiate between real fingerprints and fingerprint fakes applied on thin-layer materials. A swept source OCT system with an A-line rate of 20 kHz, a lateral and axial resolution of approximately 13 μm, a centre wavelength of 1320 nm and a bandwidth of 120 nm (FWHM) was used to acquire fingerprints and fingertips with overlying fakes. Three-dimensional volume stacks with dimensions of 4.5 mm x 4 mm x 2 mm were acquired. The layering arrangement of the imaged fingertips and faked fingertips was analyzed and subsequently classified into real and faked fingerprints. Additionally, sweat gland ducts were detected and used for the classification. The manual classification between real and faked fingerprints results in almost 100% correctness. The outer as well as the internal fingerprint can be recognized in all real human fingers, whereas this was not possible in the image stacks of the faked fingerprints. Furthermore, in all image stacks of real human fingers the sweat gland ducts were detected. The number of sweat gland ducts differs between the test persons. The typical helix shape of the ducts was observed. In contrast, in images of faked fingerprints we observe abnormal layer arrangements and no sweat gland ducts connecting the papillae of the outer fingerprint and the internal fingerprint. We demonstrated that OCT is a very useful tool to enhance the performance of biometric control systems against attacks by thin-layer fingerprint fakes.

  19. Statistical Analysis of Compression Methods for Storing Binary Image for Low-Memory Systems

    Directory of Open Access Journals (Sweden)

    Roman Slaby

    2013-01-01

    Full Text Available The paper is focused on a statistical comparison of selected compression methods used for the compression of binary images. The aim is to assess which of the presented compression methods for low-memory systems requires the smaller number of bytes of memory. To assess the success rate of converting the input image to a binary image, correlation functions are used. The correlation function is one of the methods of the OCR algorithm used for the digitization of printed symbols. The use of compression methods is necessary for systems based on low-power micro-controllers. Saving on the data stream is very important for such systems with limited memory, as is the time required for decoding the compressed data. The success rate of the selected compression algorithms is evaluated using the basic characteristics of exploratory analysis. The examined samples represent the number of bytes needed to compress the test images, which represent alphanumeric characters.

  20. Magnetic resonance imaging of vascular compression in trigeminal neuralgia and hemifacial spasms

    International Nuclear Information System (INIS)

    Nagaseki, Yoshishige; Horikoshi, Tohru; Omata, Tomohiro; Sugita, Masao; Nukui, Hideaki; Sakamoto, Hajime; Kumagai, Hiroshi; Sasaki, Hideo; Tsuji, Reizou.

    1991-01-01

    We show how neurosurgical planning can benefit from better visualization of the precise vascular compression of the nerve provided by the oblique-sagittal gradient-echo method (OS-GR image) using magnetic resonance imaging (MRI). The scans of 3 patients with trigeminal neuralgia (TN) and of 15 with hemifacial spasm (HFS) were analyzed for the presence and appearance of vascular compression of the nerves. Imaging sequences consisted of an OS-GR image (TR/TE: 200/20, 3-mm-thick slice) cut along each nerve shown in the axial view, scanned at an angle of 105 degrees between the dorsal line of the brain stem and the line corresponding to the pontomedullary junction. In the OS-GR images of the TN cases, vascular compression of the root entry zone (REZ) of the trigeminal nerve was well visualized as high-intensity lines in the 2 cases whose vessels were confirmed intraoperatively. In the other case, with atypical facial pain, vascular compression was confirmed at the rostral distal site on the fifth nerve, apart from the REZ. In the 15 cases of HFS, twelve OS-GR images (80%) demonstrated vascular compression at the REZ of the facial nerves from the direction of the caudoventral side. During surgery on these 12 cases, vascular compression corresponding to the findings of the OS-GR images was confirmed in 11 cases (the exception being 1 case whose facial nerve was not compressed by any vessel). Among the 10 OS-GR images of the non-affected side, two false-positive findings were visualized. It is concluded that OS-GR images obtained by means of MRI may serve as a useful planning aid prior to microvascular decompression for cases of TN and HFS. (author)

  1. AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.

    Science.gov (United States)

    Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S

    2017-09-01

    Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    OpenAIRE

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2016-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experi...
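
    For reference, the Cramér-Rao bound mentioned above, in its usual form for an unbiased estimator (LaTeX):

    % Cramér-Rao bound: for any unbiased estimator \hat{\theta} of \theta,
    % the covariance is bounded below by the inverse Fisher information matrix.
    \[
      \operatorname{cov}(\hat{\theta}) \succeq I(\theta)^{-1},
      \qquad
      I(\theta)_{ij} = \mathbb{E}\!\left[
        \frac{\partial \log p(x;\theta)}{\partial \theta_i}\,
        \frac{\partial \log p(x;\theta)}{\partial \theta_j}
      \right].
    \]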

  3. Compression and Processing of Space Image Sequences of Northern Lights and Sprites

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Martins, Bo; Jensen, Ole Riis

    1999-01-01

    Compression of image sequences of auroral activity, such as northern lights and thunderstorms with sprites, is investigated.

  4. Application of a Noise Adaptive Contrast Sensitivity Function to Image Data Compression

    Science.gov (United States)

    Daly, Scott J.

    1989-08-01

    The visual contrast sensitivity function (CSF) has found increasing use in image compression as new algorithms optimize the display-observer interface in order to reduce the bit rate and increase the perceived image quality. In most compression algorithms, increasing the quantization intervals reduces the bit rate at the expense of introducing more quantization error, a potential image quality degradation. The CSF can be used to distribute this error as a function of spatial frequency such that it is undetectable by the human observer. Thus, instead of being mathematically lossless, the compression algorithm can be designed to be visually lossless, with the advantage of a significantly reduced bit rate. However, the CSF is strongly affected by image noise, changing in both shape and peak sensitivity. This work describes a model of the CSF that includes these changes as a function of image noise level by using the concepts of internal visual noise, and tests this model in the context of image compression with an observer study.

  5. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information, and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  6. ORIENTATION FIELD RECONSTRUCTION OF ALTERED FINGERPRINT USING ORTHOGONAL WAVELETS

    Directory of Open Access Journals (Sweden)

    Mini M.G.

    2016-11-01

    Full Text Available The ridge orientation field is an important feature for fingerprint matching and fingerprint reconstruction. Matching of an altered fingerprint against its unaltered mates can be done by extracting the available features in the altered fingerprint and using them along with an approximated ridge orientation. This paper presents a method for approximating the ridge orientation field of altered fingerprints. In the proposed method, the sine and cosine of the doubled orientation of the fingerprint are decomposed using orthogonal wavelets and reconstructed using only the approximation coefficients. No prior information about the singular points is needed for the orientation approximation. The method is also found suitable for orientation estimation of low quality fingerprint images.
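
    A Python sketch of the orientation approximation described above, assuming PyWavelets and NumPy: decompose the doubled-angle components with an orthogonal wavelet, keep only the approximation coefficients, and recombine into a smoothed orientation field. The wavelet choice and decomposition level are illustrative.

    # Sketch of orientation approximation: wavelet-decompose cos(2*theta) and
    # sin(2*theta), keep only the approximation coefficients, and recombine.
    import numpy as np
    import pywt

    def approximate_orientation(theta, wavelet="db4", level=3):
        """theta: ridge orientation field in radians; returns the smoothed field."""
        smoothed = []
        for component in (np.cos(2 * theta), np.sin(2 * theta)):
            coeffs = pywt.wavedec2(component, wavelet, level=level)
            # Zero every detail subband so only the approximation survives.
            coeffs = [coeffs[0]] + [
                tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]
            ]
            rec = pywt.waverec2(coeffs, wavelet)
            smoothed.append(rec[:theta.shape[0], :theta.shape[1]])
        cos2, sin2 = smoothed
        return 0.5 * np.arctan2(sin2, cos2)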

  7. An effective one-dimensional anisotropic fingerprint enhancement algorithm

    Science.gov (United States)

    Ye, Zhendong; Xie, Mei

    2012-01-01

    Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so the enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray level values along ridges and valleys. Then we utilize the structure tensor approach to estimate the orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm which combines the advantages of the one-dimensional Gabor filtering method and an anisotropic method to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the database of the Fingerprint Verification Competition 2004, and the results show that our algorithm performs well in less time.
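
    A Python sketch of the structure-tensor orientation estimation step mentioned above, assuming SciPy; the Gabor and anisotropic filtering stages are not reproduced.

    # Sketch of structure-tensor ridge orientation estimation (SciPy only).
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def ridge_orientation(image, sigma=5.0):
        """Pixel-wise ridge orientation (radians) from the smoothed structure tensor."""
        gx = sobel(image.astype(float), axis=1)
        gy = sobel(image.astype(float), axis=0)
        gxx = gaussian_filter(gx * gx, sigma)
        gyy = gaussian_filter(gy * gy, sigma)
        gxy = gaussian_filter(gx * gy, sigma)
        # Orientation of the dominant gradient direction; ridges run perpendicular.
        return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2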

  8. Study of noninvasive detection of latent fingerprints using UV laser

    Science.gov (United States)

    Li, Hong-xia; Cao, Jing; Niu, Jie-qing; Huang, Yun-gang; Mao, Lin-jie; Chen, Jing-rong

    2011-06-01

    Latent fingerprints present a considerable challenge in forensics, and a noninvasive procedure that captures a digital image of the latent fingerprints is significant in the field of criminal investigation. The capability of photography technologies using a 266 nm UV Nd:YAG solid state laser as the excitation light source to provide detailed images of unprocessed latent fingerprints is demonstrated. Unprocessed latent fingerprints were developed on various non-absorbent and absorbent substrates. The special absorption, reflection, scattering and fluorescence characteristics of the various residues in fingerprints (fatty acid esters, proteins, carboxylic acid salts, etc.) under UV light were exploited to weaken or eliminate the background disturbance and increase the brightness contrast between the fingerprints and the background. Using the 266 nm UV laser as the excitation light source, fresh and old latent fingerprints on the surfaces of four types of non-absorbent objects (magazine cover, glass, back of a cellphone, wood desktop paintwork) and two types of absorbent objects (manila envelope, notebook paper) were noninvasively detected and visualized through reflection photography and fluorescence photography, and the results meet the fingerprint identification requirements in forensic science.

  9. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    Science.gov (United States)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  10. An Improved Method to Watermark Images Sensitive to Blocking Artifacts

    OpenAIRE

    Afzel Noore

    2007-01-01

    A new digital watermarking technique for images that are sensitive to blocking artifacts is presented. Experimental results show that the proposed MDCT based approach produces highly imperceptible watermarked images and is robust to attacks such as compression, noise, filtering and geometric transformations. The proposed MDCT watermarking technique is applied to fingerprints for ensuring security. The face image and demographic text data of an individual are used as multi...

  11. A fingerprint classification algorithm based on combination of local and global information

    Science.gov (United States)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection methods commonly consider only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. First we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the singular point group. Finally the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor quality fingerprint images.
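
    As a concrete example of local singular-point detection, the following Python sketch computes the Poincaré index over the orientation field (cores give an index near +1/2, deltas near -1/2); this classic detector is illustrative and not necessarily the exact one used in the paper.

    # Sketch of singular-point detection with the Poincaré index: sum the wrapped
    # orientation differences around each 2x2 cell of the orientation field.
    import numpy as np

    def poincare_index(theta):
        """theta: orientation field in radians (values in [0, pi))."""
        def wrap(d):                   # wrap orientation differences into (-pi/2, pi/2]
            return (d + np.pi / 2) % np.pi - np.pi / 2

        rows, cols = theta.shape
        index = np.zeros((rows - 1, cols - 1))
        for i in range(rows - 1):
            for j in range(cols - 1):
                ring = [theta[i, j], theta[i, j + 1], theta[i + 1, j + 1], theta[i + 1, j]]
                total = sum(wrap(ring[(k + 1) % 4] - ring[k]) for k in range(4))
                index[i, j] = total / (2 * np.pi)
        return index                    # ~ +0.5 at cores, -0.5 at deltas, 0 elsewhere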

  12. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. The application of the Orthogonal Matching Pursuit CS reconstruction algorithm, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform results in a remarkable peak at the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  13. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals Introduction Quantization Differential Coding Transform Coding Variable-Length Coding: Information Theory Results (II) Run-Length and Dictionary Coding: Information Theory Results (III) Part II: Still Image Compression Still Image Coding: Standard JPEG Wavelet Transform for Image Coding: JPEG2000 Nonstandard Still Image Coding Part III: Motion Estimation and Compensation Motion Analysis and Motion Compensation Block Matching Pel-Recursive Technique Optical Flow Further Discussion and Summary on 2-D Motion Estimation Part IV: Video Compression Fundam

  14. Fast hybrid fractal image compression using an image feature and neural network

    International Nuclear Information System (INIS)

    Zhou Yiming; Zhang Chao; Zhang Zengke

    2008-01-01

    Since fractal image compression can maintain high-resolution reconstructed images at very high compression ratios, it has great potential to improve the efficiency of image storage and image transmission. On the other hand, fractal image encoding is time consuming because of the best-match search between range blocks and domain blocks, which greatly limits the algorithm's practical application. To solve this problem, two strategies are adopted to improve the fractal image encoding algorithm in this paper. First, based on the definition of an image feature, a necessary condition for the best-match search and the FFC algorithm are proposed; they reduce the search space considerably by excluding most inappropriate domain blocks for each range block before the best-match search. Second, on the basis of the FFC algorithm, and in order to reduce the mapping error during the best-match search, a special neural network is constructed to modify the mapping scheme for subblocks whose pixel values fluctuate greatly (the FNFC algorithm). Experimental results show that the proposed algorithms obtain good reconstructed image quality and need much less time than the baseline encoding algorithm.
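
    As a rough illustration of the kind of search-space pruning the FFC idea relies on, the sketch below (Python/NumPy) precomputes a simple per-block feature and compares a range block only against domain blocks whose feature lies within a tolerance; the standard-deviation feature, the tolerance, and the least-squares affine fit are illustrative assumptions, not the authors' exact definitions.

        import numpy as np

        def block_feature(block):
            # Illustrative per-block feature: the standard deviation of its pixels.
            return float(np.std(block))

        def encode_range_block(range_block, domain_blocks, tol=5.0):
            # Find the best matching domain block for one range block, searching
            # only domains whose feature is within 'tol' of the range's feature.
            # 'domain_blocks' are assumed already contracted to the range size.
            r = range_block.astype(np.float64).ravel()
            r_feat = block_feature(range_block)
            best = (None, 0.0, 0.0, np.inf)   # (domain index, scale s, offset o, error)
            for idx, dom in enumerate(domain_blocks):
                if abs(block_feature(dom) - r_feat) > tol:
                    continue                  # necessary-condition filter
                d = dom.astype(np.float64).ravel()
                var_d = np.var(d)
                s = 0.0 if var_d == 0 else np.cov(d, r, bias=True)[0, 1] / var_d
                o = r.mean() - s * d.mean()   # least-squares affine map r ~ s*d + o
                err = np.mean((s * d + o - r) ** 2)
                if err < best[3]:
                    best = (idx, s, o, err)
            return best                       # widen 'tol' if no domain passed the filter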

  15. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    The method compresses 16 bit depth infrared images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into two 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed with an H.264/AVC codec. Such codecs are usually available in efficient implementations, and we compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...
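
    A minimal sketch of the byte-plane splitting step described above (Python/NumPy); the recombination shown assumes the two decoded planes are simply merged back, and the actual codecs are not reproduced here.

        import numpy as np

        def split_16bit(image16):
            # Map a 16 bit depth image into two 8 bit depth images (MSB and LSB planes).
            img = image16.astype(np.uint16)
            msb = (img >> 8).astype(np.uint8)    # most significant bytes
            lsb = (img & 0xFF).astype(np.uint8)  # least significant bytes
            return msb, lsb

        def merge_16bit(msb, lsb):
            # Recombine the two decoded 8 bit planes into a 16 bit image.
            return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

        # A synthetic 16 bit frame survives the split/merge round trip losslessly.
        frame = np.random.randint(0, 2**16, size=(240, 320), dtype=np.uint16)
        m, l = split_16bit(frame)
        assert np.array_equal(merge_16bit(m, l), frame)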

  16. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for high-dimensional data is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), their data volume must be reduced, which calls for real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors in favour of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: the wavelet transform (WT), vector quantization (VQ) and entropy coding (EC). First, we studied and implemented the parallelism of each algorithm in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for tree-structured vector quantization (VQ) and 2 for Huffman coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from all of the 3x3x2 available combinations. Because, for technological reasons, real time is not always reached (i.e., not for all combinations of compression parameters), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging the entropy coding into the vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other on-board applications. (author) [fr

  17. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. First, the transform-domain coefficients of the original image are scrambled by the Arnold map. The watermark is then attached to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features a compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
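
    For readers unfamiliar with the Arnold-map scrambling step, a minimal sketch for a square N x N coefficient array follows; the map matrix [[1, 1], [1, 2]] and the iteration count are generic textbook choices, not parameters taken from the paper.

        import numpy as np

        def arnold_scramble(arr, iterations=1):
            # Arnold cat map on a square array: (x, y) -> (x + y, x + 2y) mod N.
            n = arr.shape[0]
            assert arr.shape[0] == arr.shape[1], "the Arnold map needs a square array"
            out = arr.copy()
            for _ in range(iterations):
                nxt = np.empty_like(out)
                for x in range(n):
                    for y in range(n):
                        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = nxt
            return out

        def arnold_unscramble(arr, iterations=1):
            # Inverse map: (x, y) -> (2x - y, -x + y) mod N.
            n = arr.shape[0]
            out = arr.copy()
            for _ in range(iterations):
                nxt = np.empty_like(out)
                for x in range(n):
                    for y in range(n):
                        nxt[(2 * x - y) % n, (-x + y) % n] = out[x, y]
                out = nxt
            return out

        # Round-trip check on a small block of transform coefficients.
        coeffs = np.arange(64, dtype=np.float64).reshape(8, 8)
        assert np.array_equal(arnold_unscramble(arnold_scramble(coeffs, 3), 3), coeffs)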

  18. Piezoelectric micromachined ultrasonic transducers for fingerprint sensing

    Science.gov (United States)

    Lu, Yipeng

    Fingerprint identification is the most prevalent biometric technology due to its uniqueness, universality and convenience. Over the past two decades, a variety of physical mechanisms have been exploited to capture an electronic image of a human fingerprint. Among these, capacitive fingerprint sensors are the ones most widely used in consumer electronics because they are fabricated using conventional complementary metal oxide semiconductor (CMOS) integrated circuit technology. However, capacitive fingerprint sensors are extremely sensitive to finger contamination and moisture. This thesis will introduce an ultrasonic fingerprint sensor using a PMUT array, which offers a potential solution to this problem. In addition, it has the potential to increase security, as it allows images to be collected at various depths beneath the epidermis, providing images of the sub-surface dermis layer and blood vessels. Firstly, PMUT sensitivity is maximized by optimizing the layer stack and electrode design, and the coupling coefficient is doubled via series transduction. Moreover, a broadband PMUT with 97% fractional bandwidth is achieved by utilizing a thinner structure excited at two adjacent mechanical vibration modes with overlapping bandwidth. In addition, we proposed waveguide PMUTs, which function to direct acoustic waves, confine acoustic energy, and provide mechanical protection for the PMUT array. Furthermore, PMUT arrays were fabricated with different processes to form the membrane, including front-side etching with a patterned sacrificial layer, front-side etching with additional anchor, cavity SOI wafers and eutectic bonding. Additionally, eutectic bonding allows the PMUT to be integrated with CMOS circuits. PMUTs were characterized in the mechanical, electrical and acoustic domains. Using transmit beamforming, a narrow acoustic beam was achieved, and high-resolution (sub-100 μm) and short-range (~1 mm) pulse-echo ultrasonic imaging was demonstrated using a steel

  19. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Directory of Open Access Journals (Sweden)

    Xie Xiang

    2007-01-01

    Full Text Available In order to decrease the communication bandwidth and save transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image and suitable for hardware design. The algorithm provides a low average compression rate (in bits/pixel) with high image quality (in dB) for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm provides lossless compression for the region of interest (ROI) and high-quality compression for the other regions. The ROI can be selected arbitrarily by varying the ROI parameters. The VLSI architecture of the compression algorithm is also given. Its hardware design has been implemented in a CMOS process.

  20. Chlorophyll Fluorescence Imaging Uncovers Photosynthetic Fingerprint of Citrus Huanglongbing

    Directory of Open Access Journals (Sweden)

    Haiyan Cen

    2017-08-01

    Full Text Available Huanglongbing (HLB) is one of the most destructive diseases of citrus and poses a serious threat to global citrus production. This research aimed to explore the use of chlorophyll fluorescence imaging combined with feature selection to characterize and detect HLB disease. Chlorophyll fluorescence images of citrus leaf samples were measured by an in-house chlorophyll fluorescence imaging system. The commonly used chlorophyll fluorescence parameters provided a first screening of HLB disease. To further explore the photosynthetic fingerprint of HLB-infected leaves, three feature selection methods combined with supervised classifiers were employed to identify the unique fluorescence signature of HLB and perform a three-class classification (i.e., healthy, HLB-infected, and nutrient-deficient leaves). Unlike the commonly used fluorescence parameters, this data-driven approach using the combination of mean fluorescence parameters and image features gave the best classification performance, with an accuracy of 97%, and provided a better interpretation of the spatial heterogeneity of photochemical and non-photochemical components in HLB-infected citrus leaves. These results imply the potential of the proposed approach for citrus HLB disease diagnosis and also provide valuable insight into the photosynthetic response to HLB disease.

  1. An investigation of fake fingerprint detection approaches

    Science.gov (United States)

    Ahmad, Asraful Syifaa'; Hassan, Rohayanti; Othman, Razib M.

    2017-10-01

    As one of the most reliable biometric technologies, fingerprint recognition is widely used for security due to its permanence and uniqueness. However, it is also vulnerable to certain types of attacks, including the presentation of fake fingerprints to the sensor, which requires the development of new and efficient protection measures. In particular, the aim is to identify the most recent literature related to fake fingerprint recognition, focusing only on software-based approaches. A systematic review is performed by analyzing 146 primary studies from a gross collection of 34 research papers to determine the taxonomy, approaches, online public databases, and limitations of fake fingerprint detection. Fourteen software-based approaches are briefly described, four limitations of fake fingerprint images are revealed, and two known fake fingerprint databases are briefly addressed in this review. This work therefore provides an overview of the current understanding of fake fingerprint recognition and identifies future research possibilities.

  2. Contributions to HEVC Prediction for Medical Image Compression

    OpenAIRE

    Guarda, André Filipe Rodrigues

    2016-01-01

    Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolutions, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compressi...

  3. An Investigation on the Problem of Thinning in Fingerprint Processing

    Directory of Open Access Journals (Sweden)

    I. O. Omeiza

    2012-06-01

    Full Text Available A high-integrity thinning procedure for binarised fingerprints is proposed in this paper. Several authors and software developers have approached the thinning problem in fingerprint processing differently. Their approaches produced, in most cases, fingerprint skeletons with low reliability, thus requiring an additional minutiae-pruning stage to discard the erroneous minutiae in the obtained skeletons. The work involves a careful blending of some existing algorithms to achieve optimal performance in thinning binarised fingerprint images. The algorithms considered are the Zhang and Suen parallel algorithm for thinning digital patterns, the improved parallel thinning algorithm by Holt and colleagues, and the template-based thinning algorithm by Stentiford and Mortimer. The idea of combining these stand-alone algorithms to improve the quality of the obtained object skeletons in general image processing was first suggested in a text by Parker in 1998; however, his work does not specifically address fingerprints. This work examines and demonstrates the plausibility of this thinning approach in the particular case of the fingerprint application domain. The thinning procedure obtained satisfactory skeletons for fingerprint applications.
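
    Since the Zhang and Suen parallel algorithm is the first building block mentioned, a compact sketch of it is shown below, operating on a binary image whose ridge pixels equal 1; this is the generic textbook formulation, not the blended procedure proposed in the paper.

        import numpy as np

        def zhang_suen_thin(img):
            # Zhang-Suen thinning repeated until stable.
            # img: 2D array of 0/1 values; returns the thinned skeleton.
            skel = img.astype(np.uint8).copy()
            changed = True
            while changed:
                changed = False
                for step in (0, 1):
                    to_delete = []
                    for i in range(1, skel.shape[0] - 1):
                        for j in range(1, skel.shape[1] - 1):
                            if skel[i, j] != 1:
                                continue
                            # Neighbours P2..P9, clockwise from the pixel above.
                            p = [skel[i-1, j], skel[i-1, j+1], skel[i, j+1], skel[i+1, j+1],
                                 skel[i+1, j], skel[i+1, j-1], skel[i, j-1], skel[i-1, j-1]]
                            b = sum(p)                                 # non-zero neighbours
                            a = sum((p[k] == 0 and p[(k+1) % 8] == 1)  # 0 -> 1 transitions
                                    for k in range(8))
                            if not (2 <= b <= 6 and a == 1):
                                continue
                            if step == 0:
                                cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                            else:
                                cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                            if cond:
                                to_delete.append((i, j))
                    for i, j in to_delete:
                        skel[i, j] = 0
                    changed = changed or bool(to_delete)
            return skel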

  4. DWI-based neural fingerprinting technology: a preliminary study on stroke analysis.

    Science.gov (United States)

    Ye, Chenfei; Ma, Heather Ting; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool, through techniques such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI), to assess the neural physiological changes under stroke. Quantitative analysis of MRI images would help medical doctors to localize the stroke area in the diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure the physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder would have its unique physiological characteristics, which could be reflected by DWI images on different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed from the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI could classify the subtypes with 100% accuracy. However, the classification accuracy was worse when semiautomatic and automatic methods were used for ROI segmentation. The preliminary results showed the promising potential of DWI-based neural fingerprinting technology for stroke subtype classification. Further studies will be carried out to enhance the fingerprinting accuracy and to extend its application to other clinical practices.

  5. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.

  6. Sparse BLIP: BLind Iterative Parallel imaging reconstruction using compressed sensing.

    Science.gov (United States)

    She, Huajun; Chen, Rong-Rong; Liang, Dong; DiBella, Edward V R; Ying, Leslie

    2014-02-01

    To develop a sensitivity-based parallel imaging reconstruction method that iteratively reconstructs both the coil sensitivities and the MR image simultaneously based on their prior information. The parallel magnetic resonance imaging reconstruction problem can be formulated as a multichannel sampling problem where solutions are sought analytically. However, the channel functions given by the coil sensitivities in parallel imaging are not known exactly, and the estimation error usually leads to artifacts. In this study, we propose a new reconstruction algorithm, termed Sparse BLIP (Sparse BLind Iterative Parallel imaging reconstruction using compressed sensing). The proposed algorithm reconstructs both the sensitivity functions and the image simultaneously from undersampled data. It enforces a sparseness constraint on the image, as in compressed sensing, but differs from compressed sensing in that the sensing matrix is unknown and an additional constraint is enforced on the sensitivities as well. Both phantom and in vivo imaging experiments were carried out with retrospective undersampling to evaluate the performance of the proposed method. Experiments show improvement of Sparse BLIP reconstruction when compared with Sparse SENSE, JSENSE, IRGN-TV, and L1-SPIRiT reconstructions with the same number of measurements. The proposed Sparse BLIP algorithm reduces reconstruction errors when compared to state-of-the-art parallel imaging methods. Copyright © 2013 Wiley Periodicals, Inc.

  7. CMOS Compressed Imaging by Random Convolution

    OpenAIRE

    Jacques, Laurent; Vandergheynst, Pierre; Bibet, Alexandre; Majidzadeh, Vahid; Schmid, Alexandre; Leblebici, Yusuf

    2009-01-01

    We present a CMOS imager with built-in capability to perform Compressed Sensing. The adopted sensing strategy is the random Convolution due to J. Romberg. It is achieved by a shift register set in a pseudo-random configuration. It acts as a convolutive filter on the imager focal plane, the current issued from each CMOS pixel undergoing a pseudo-random redirection controlled by each component of the filter sequence. A pseudo-random triggering of the ADC reading is finally applied to comp...

  8. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Chan, K.K.; Ishimitsu, Y.; Lo, S.C.; Huang, H.K.

    1987-01-01

    The full-frame bit allocation algorithm for radiological image compression developed in the authors' laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operations: a two-dimensional discrete cosine transform and pixel quantization in the transform space, with pixel depth kept accountable by a bit allocation table. Their design took an expandable modular approach based on the VME bus system, which has a maximum data transfer rate of 48 Mbytes per second and a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. Their design allows for a maximum image size of 2K x 2K

  9. A preliminary study of DTI Fingerprinting on stroke analysis.

    Science.gov (United States)

    Ma, Heather T; Ye, Chenfei; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo

    2014-01-01

    DTI (Diffusion Tensor Imaging) is a well-known MRI (Magnetic Resonance Imaging) technique which provides useful structural information about the human brain. However, quantitative measurement of the physiological variation among subtypes of ischemic stroke is not available. An automatic quantitative method for DTI analysis would enhance the application of DTI in clinics. In this study, we proposed a DTI Fingerprinting technology to quantitatively analyze white matter tissue, which was applied to stroke classification. The TBSS (Tract Based Spatial Statistics) method was employed to generate masks automatically. To evaluate the clustering performance of the automatic method, a lesion ROI (Region of Interest) was manually drawn on the DWI images as a reference. The results from DTI Fingerprinting were compared with those obtained from the reference ROIs. They indicate that DTI Fingerprinting can identify different states of ischemic stroke and has promising potential to provide a more comprehensive measure of DTI data. Further development should be carried out to improve DTI Fingerprinting technology for clinical use.

  10. A Novel Medical Image Watermarking in Three-dimensional Fourier Compressed Domain

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2015-09-01

    Full Text Available Digital watermarking is a research hotspot in the field of image security and is used to protect digital image copyright. In order to ensure medical image information security, a novel medical image digital watermarking algorithm in the three-dimensional Fourier compressed domain is proposed. The algorithm takes advantage of the characteristics of the three-dimensional Fourier compressed domain, the encryption features of a Legendre chaotic neural network, and the robustness of difference hashing, and is a robust zero-watermarking algorithm. On one hand, the original watermark image is encrypted in order to enhance security, implemented with the Legendre chaotic neural network. On the other hand, the construction of the zero-watermark adopts difference hashing in the three-dimensional Fourier compressed domain. The algorithm does not need to select a region of interest and avoids modifying the medical image content. The specific implementation of the algorithm and the experimental results are given in the paper. The simulation results show that the algorithm possesses desirable robustness to common and geometric attacks.
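
    As a rough illustration of the difference-hashing and zero-watermarking ideas mentioned above, the sketch below computes a generic 2D difference hash and binds it to a watermark bit string with XOR; the paper's construction operates in the three-dimensional Fourier compressed domain and uses a chaotic-neural-network-encrypted watermark, neither of which is reproduced here.

        import numpy as np

        def difference_hash(image, hash_size=8):
            # Generic difference hash: box-average the image down to
            # (hash_size, hash_size + 1) and compare horizontal neighbours.
            h, w = image.shape
            rows = np.array_split(np.arange(h), hash_size)
            cols = np.array_split(np.arange(w), hash_size + 1)
            small = np.array([[image[np.ix_(r, c)].mean() for c in cols] for r in rows])
            return (small[:, 1:] > small[:, :-1]).astype(np.uint8).ravel()

        def zero_watermark(host_image, watermark_bits):
            # Zero-watermarking idea: bind the host's hash to the (already
            # encrypted) 0/1 watermark bits with XOR instead of modifying the host.
            return np.bitwise_xor(difference_hash(host_image), watermark_bits.astype(np.uint8))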

  11. Dual photon excitation microscopy and image threshold segmentation in live cell imaging during compression testing.

    Science.gov (United States)

    Moo, Eng Kuan; Abusara, Ziad; Abu Osman, Noor Azuan; Pingguan-Murphy, Belinda; Herzog, Walter

    2013-08-09

    Morphological studies of live connective tissue cells are imperative to helping understand cellular responses to mechanical stimuli. However, photobleaching is a constant problem to accurate and reliable live cell fluorescent imaging, and various image thresholding methods have been adopted to account for photobleaching effects. Previous studies showed that dual photon excitation (DPE) techniques are superior over conventional one photon excitation (OPE) confocal techniques in minimizing photobleaching. In this study, we investigated the effects of photobleaching resulting from OPE and DPE on morphology of in situ articular cartilage chondrocytes across repeat laser exposures. Additionally, we compared the effectiveness of three commonly-used image thresholding methods in accounting for photobleaching effects, with and without tissue loading through compression. In general, photobleaching leads to an apparent volume reduction for subsequent image scans. Performing seven consecutive scans of chondrocytes in unloaded cartilage, we found that the apparent cell volume loss caused by DPE microscopy is much smaller than that observed using OPE microscopy. Applying scan-specific image thresholds did not prevent the photobleaching-induced volume loss, and volume reductions were non-uniform over the seven repeat scans. During cartilage loading through compression, cell fluorescence increased and, depending on the thresholding method used, led to different volume changes. Therefore, different conclusions on cell volume changes may be drawn during tissue compression, depending on the image thresholding methods used. In conclusion, our findings confirm that photobleaching directly affects cell morphology measurements, and that DPE causes less photobleaching artifacts than OPE for uncompressed cells. When cells are compressed during tissue loading, a complicated interplay between photobleaching effects and compression-induced fluorescence increase may lead to interpretations in

  12. Extracting subsurface fingerprints using optical coherence tomography

    CSIR Research Space (South Africa)

    Akhoury, SS

    2015-02-01

    Full Text Available Physiologists have found... approach to extract the subsurface fingerprint representation using a high-resolution imaging technology known as Optical Coherence Tomography (OCT). ...

  13. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    Science.gov (United States)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.

  14. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  15. Performance characterization of structured light-based fingerprint scanner

    Science.gov (United States)

    Hassebrook, Laurence G.; Wang, Minghao; Daley, Raymond C.

    2013-05-01

    Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation's image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.

  16. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  17. Evaluation of the distortions of the digital chest image caused by the data compression

    International Nuclear Information System (INIS)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi.

    1988-01-01

    Image data compression methods using orthogonal transforms (the discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform and Slant transform) were analyzed. In terms of the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was used for the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, and so on). By our score analysis, satisfactory data compression ratios are 1/5 and 1/10. The ROC analysis used normal chest images superimposed with artificial coin lesions. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio. (author)

  18. Evaluation of the distortions of the digital chest image caused by the data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi

    1988-08-01

    Image data compression methods using orthogonal transforms (the discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform and Slant transform) were analyzed. In terms of the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was used for the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, and so on). By our score analysis, satisfactory data compression ratios are 1/5 and 1/10. The ROC analysis used normal chest images superimposed with artificial coin lesions. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio.

  19. Multiband CCD Image Compression for Space Camera with Large Field of View

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available A space multiband CCD camera compression encoder requires low complexity, high robustness, and high performance, because the captured image information is very precious and because the encoder usually works on a satellite where resources such as power, memory, and processing capacity are limited. However, traditional compression approaches, such as JPEG2000, 3D transforms, and PCA, have high complexity. The Consultative Committee for Space Data Systems-Image Data Compression (CCSDS-IDC) algorithm decreases the average PSNR by 2 dB compared with JPEG2000. In this paper, we propose a low-complexity compression algorithm based on a deep coupling among a post-transform in the wavelet domain, compressive sensing, and distributed source coding. In our algorithm, we integrate three low-complexity and high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-level information redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

  20. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aiming at low energy consumption for the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of the block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
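
    A toy sketch of the encoder-side idea, i.e. allocating more random measurements to blocks whose gradient field suggests lower sparsity; the linear rate mapping and the Gaussian sensing matrix are assumptions for illustration, and the MMSE-trained linear decoder is omitted.

        import numpy as np

        def block_gradient_energy(block):
            # Simple gradient-field statistic, a stand-in for the "block sparse degree".
            gy, gx = np.gradient(block.astype(np.float64))
            return float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))

        def adaptive_cs_measure(image, block=16, min_rate=0.1, max_rate=0.5, seed=0):
            # Give each block a number of random projections that grows with its
            # gradient energy; returns (row, col, sensing matrix, measurements) tuples.
            h, w = image.shape
            assert h % block == 0 and w % block == 0, "pad the image to a block multiple"
            rng = np.random.default_rng(seed)
            n = block * block
            energy = np.array([[block_gradient_energy(image[i:i+block, j:j+block])
                                for j in range(0, w, block)]
                               for i in range(0, h, block)])
            rates = min_rate + (max_rate - min_rate) * (energy - energy.min()) / (np.ptp(energy) + 1e-12)
            measurements = []
            for bi, i in enumerate(range(0, h, block)):
                for bj, j in enumerate(range(0, w, block)):
                    m = max(1, int(round(rates[bi, bj] * n)))
                    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
                    x = image[i:i+block, j:j+block].astype(np.float64).ravel()
                    measurements.append((i, j, phi, phi @ x))
            return measurements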

  1. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aiming at low energy consumption for the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of the block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  2. Sparse representations and compressive sensing for imaging and vision

    CERN Document Server

    Patel, Vishal M

    2013-01-01

    Compressed sensing or compressive sensing is a new concept in signal processing where one measures a small number of non-adaptive linear combinations of the signal.  These measurements are usually much smaller than the number of samples that define the signal.  From these small numbers of measurements, the signal is then reconstructed by non-linear procedure.  Compressed sensing has recently emerged as a powerful tool for efficiently processing data in non-traditional ways.  In this book, we highlight some of the key mathematical insights underlying sparse representation and compressed sensing and illustrate the role of these theories in classical vision, imaging and biometrics problems.

  3. Case study of 3D fingerprints applications.

    Directory of Open Access Journals (Sweden)

    Feng Liu

    Full Text Available Human fingers are 3D objects. More information can be provided if three-dimensional (3D) fingerprints are available, compared with two-dimensional (2D) fingerprints. Thus, this paper first collects 3D finger point-cloud data by the structured-light illumination method. Additional features from the 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. It is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospects of 3D fingerprint recognition.
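
    The Equal Error Rate quoted above is the operating point at which the false accept and false reject rates coincide; a small helper for estimating it from lists of genuine and impostor match scores is sketched below, assuming higher scores indicate a better match.

        import numpy as np

        def equal_error_rate(genuine_scores, impostor_scores):
            # Scan candidate thresholds and return the EER and the threshold at
            # which the false reject and false accept rates are closest.
            genuine = np.asarray(genuine_scores, dtype=float)
            impostor = np.asarray(impostor_scores, dtype=float)
            thresholds = np.unique(np.concatenate([genuine, impostor]))
            best_gap, eer, best_t = np.inf, 1.0, thresholds[0]
            for t in thresholds:
                frr = np.mean(genuine < t)    # genuine pairs wrongly rejected
                far = np.mean(impostor >= t)  # impostor pairs wrongly accepted
                if abs(frr - far) < best_gap:
                    best_gap, eer, best_t = abs(frr - far), (frr + far) / 2.0, t
            return eer, best_t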

  4. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  5. Visualization of latent fingerprints beneath opaque electrical tapes by optical coherence tomography

    Science.gov (United States)

    Liu, Kangkang; Zhang, Ning; Meng, Li; Li, Zhigang; Xu, Xiaojing

    2018-03-01

    Electrical tape is an important type of trace evidence found at crime scenes. For example, it is frequently used to insulate wires in explosive devices in many criminal cases. Suspects' fingerprints are often left on the adhesive side of the tape, which can provide very useful clues for the investigation and make individual identification possible. The most commonly used method to detect and visualize those latent fingerprints is to peel off each layer of the tape first and then apply chemical methods to develop the fingerprints on the tape. However, the peeling-off and chemical development process degrades and contaminates the fingerprints and thus adversely affects the accuracy of identification. Optical coherence tomography (OCT) is a novel forensic imaging modality based on low-coherence interferometry, which has the advantages of non-destructiveness, micrometer-level resolution and cross-sectional imaging. In this study, a fiber-based spectral-domain OCT (SD-OCT) system with ~6 μm resolution was employed to image a fingerprint sandwiched between two opaque electrical tapes without any pre-processing procedure such as peeling off. Three-dimensional (3D) OCT reconstruction was performed and the subsurface image was produced to visualize the latent fingerprints. The results demonstrate that OCT is a promising tool for recovering latent fingerprints hidden beneath opaque electrical tape non-destructively and rapidly.

  6. DWI-Based Neural Fingerprinting Technology: A Preliminary Study on Stroke Analysis

    Directory of Open Access Journals (Sweden)

    Chenfei Ye

    2014-01-01

    Full Text Available Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool, through techniques such as diffusion weighted imaging (DWI) and diffusion tensor imaging (DTI), to assess the neural physiological changes under stroke. Quantitative analysis of MRI images would help medical doctors to localize the stroke area in the diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only provide localization of the disorder rather than measure the physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder would have its unique physiological characteristics, which could be reflected by DWI images on different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology was proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed from the signal intensity of the region of interest (ROI) on the DWI images under different gradients. The fingerprint derived from the manually drawn ROI could classify the subtypes with 100% accuracy. However, the classification accuracy was worse when semiautomatic and automatic methods were used for ROI segmentation. The preliminary results showed the promising potential of DWI-based neural fingerprinting technology for stroke subtype classification. Further studies will be carried out to enhance the fingerprinting accuracy and to extend its application to other clinical practices.

  7. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of the decoded images, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission. First, the edge direction in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is used to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
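
    For reference, the AMBTC step that the quadtree variant builds on encodes each block with its mean, its first absolute central moment, and a bitmap; a minimal sketch of plain AMBTC (not the authors' adaptive bit-plane extension) follows.

        import numpy as np

        def ambtc_encode(block):
            # Keep the block mean, the first absolute central moment, and a bitmap
            # telling which pixels sit at or above the mean.
            x = block.astype(np.float64)
            mean = x.mean()
            alpha = np.mean(np.abs(x - mean))   # first absolute central moment
            return mean, alpha, x >= mean

        def ambtc_decode(mean, alpha, bitmap):
            # Rebuild the block with two levels derived from the mean and the moment.
            m = bitmap.size
            q = int(bitmap.sum())
            high = mean + m * alpha / (2 * q) if q else mean
            low = mean - m * alpha / (2 * (m - q)) if q < m else mean
            return np.where(bitmap, high, low)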

  8. Alternatives to the discrete cosine transform for irreversible tomographic image compression

    International Nuclear Information System (INIS)

    Villasenor, J.D.

    1993-01-01

    Full-frame irreversible compression of medical images is currently being performed using the discrete cosine transform (DCT). Although the DCT is the optimum fast transform for video compression applications, the authors show here that it is outperformed by the discrete Fourier transform (DFT) and discrete Hartley transform (DHT) for images obtained using positron emission tomography (PET) and magnetic resonance imaging (MRI), and possibly for certain types of digitized radiographs. The difference occurs because PET and MRI images are characterized by a roughly circular region D of non-zero intensity bounded by a region R in which the image intensity is essentially zero. Clipping R to its minimum extent can reduce the number of low-intensity pixels, but the practical requirement that images be stored on a rectangular grid means that a significant region of zero intensity must remain an integral part of the image to be compressed. With this constraint imposed, the DCT loses its advantage over the DFT because neither transform introduces significant artificial discontinuities. The DFT and DHT have the further important advantage of requiring less computation time than the DCT

  9. Accelerated Air-coupled Ultrasound Imaging of Wood Using Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yiming Fang

    2015-12-01

    Full Text Available Air-coupled ultrasound has shown excellent sensitivity and specificity for the nondestructive imaging of wood-based materials. However, it is time-consuming due to the high scanning density required by the Nyquist law. This study investigated the feasibility of applying compressed sensing techniques to air-coupled ultrasound imaging, aiming to reduce the number of scanning lines and thus accelerate the imaging. First, an undersampled scanning strategy specified by a random binary matrix was proposed to fit the compressed sensing framework. The undersampled scanning can be easily implemented, and only minor modification of the existing imaging system is required. Then, the discrete cosine transform was selected experimentally as the representation basis. Finally, the orthogonal matching pursuit algorithm was utilized to reconstruct the wood images. Experiments on three real air-coupled ultrasound images indicated the potential of the present method to accelerate air-coupled ultrasound imaging of wood: the same quality of ACU images can be obtained with the scanning time cut in half.
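
    A minimal sketch of the reconstruction chain described above: scan lines kept according to a random binary pattern, a DCT representation basis, and orthogonal matching pursuit. The per-column formulation and the use of SciPy's DCT are assumptions for illustration, not the paper's exact setup.

        import numpy as np
        from scipy.fft import idct

        def omp(A, y, n_nonzero):
            # Basic orthogonal matching pursuit: greedily pick the atom most
            # correlated with the residual, then re-fit by least squares.
            residual = y.astype(np.float64).copy()
            support, coef = [], np.zeros(0)
            for _ in range(n_nonzero):
                idx = int(np.argmax(np.abs(A.T @ residual)))
                if idx not in support:
                    support.append(idx)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            s = np.zeros(A.shape[1])
            s[support] = coef
            return s

        def recover_column(column, keep_rows, n_nonzero=20):
            # Recover one image column sampled only at 'keep_rows' (the random
            # binary scanning pattern), assuming the column is sparse under the DCT.
            n = column.shape[0]
            psi = idct(np.eye(n), axis=0, norm='ortho')  # x = psi @ s
            a = psi[keep_rows, :]                        # effective sensing matrix
            y = column[keep_rows].astype(np.float64)
            s = omp(a, y, n_nonzero)
            return psi @ s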

  10. MR imaging of spinal factors and compression of the spinal cord in cervical myelopathy

    International Nuclear Information System (INIS)

    Kokubun, Shoichi; Ozawa, Hiroshi; Sakurai, Minoru; Ishii, Sukenobu; Tani, Shotaro; Sato, Tetsuaki.

    1992-01-01

    Magnetic resonance (MR) images of 109 surgical patients with cervical spondylotic myelopathy were retrospectively reviewed to examine whether MR imaging could replace conventional radiological procedures in determining spinal factors and spinal cord compression in this disease. MR imaging was useful in determining spondylotic herniation, the continuous type of ossification of the posterior longitudinal ligament, and calcification of the yellow ligament, probably replacing CT myelography, discography, and CT discography. When a total defect of the subarachnoid space on T2-weighted images and a block on myelograms were compared in determining spinal cord compression, the spinal cord was affected more extensively, by 1.3 intervertebral distances (IVD), on T2-weighted images. When an indentation of one third or more of the anteroposterior diameter of the spinal cord was used as the criterion for spinal cord compression, the difference in the affected extent between myelography and MR imaging was 0.2 IVD on T1-weighted images and 0.6 IVD on T2-weighted images. However, when a block was seen over 3 or more IVD on myelograms, the range of spinal cord compression tended to be larger on T1-weighted images. For a small range of spinal cord compression, T1-weighted imaging seems to be helpful in determining the range of decompression. When T2-weighted imaging is used, the range of decompression becomes large, frequently including posterior decompression. (N.K.)

  11. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  12. Effect of image compression and scaling on automated scoring of immunohistochemical stainings and segmentation of tumor epithelium

    Directory of Open Access Journals (Sweden)

    Konsti Juho

    2012-03-01

    Full Text Available Background Digital whole-slide scanning of tissue specimens produces large images demanding increasing storage capacity. To reduce the need for extensive data storage systems, image files can be compressed and scaled down. The aim of this article is to study the effect of different levels of image compression and scaling on automated image analysis of immunohistochemical (IHC) stainings and automated tumor segmentation. Methods Two tissue microarray (TMA) slides containing 800 samples of breast cancer tissue immunostained against Ki-67 protein and two TMA slides containing 144 samples of colorectal cancer immunostained against EGFR were digitized with a whole-slide scanner. The TMA images were JPEG2000 wavelet compressed with four compression ratios: lossless, and 1:12, 1:25 and 1:50 lossy compression. Each of the compressed breast cancer images was furthermore scaled down either to 1:1, 1:2, 1:4, 1:8, 1:16, 1:32, 1:64 or 1:128. Breast cancer images were analyzed using an algorithm that quantitates the extent of staining in Ki-67 immunostained images, and EGFR immunostained colorectal cancer images were analyzed with an automated tumor segmentation algorithm. The automated tools were validated by comparing the results from losslessly compressed and non-scaled images with results from conventional visual assessments. Percentage agreement and kappa statistics were calculated between results from compressed and scaled images and results from lossless and non-scaled images. Results Both of the studied image analysis methods showed good agreement between visual and automated results. In the automated IHC quantification, an agreement of over 98% and a kappa value of over 0.96 was observed between losslessly compressed and non-scaled images and combined compression ratios up to 1:50 and scaling down to 1:8. In automated tumor segmentation, an agreement of over 97% and a kappa value of over 0.93 was observed between losslessly compressed images and
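
    The agreement figures quoted above are straightforward to reproduce; a small helper computing percentage agreement and Cohen's kappa between two categorical result vectors is sketched below.

        import numpy as np

        def agreement_and_kappa(labels_a, labels_b):
            # Percentage agreement and Cohen's kappa between two categorical ratings,
            # e.g. results from compressed/scaled images vs. lossless non-scaled images.
            a = np.asarray(labels_a)
            b = np.asarray(labels_b)
            p_observed = float(np.mean(a == b))
            categories = np.union1d(a, b)
            # Chance agreement from the marginal frequency of each category.
            p_expected = float(sum(np.mean(a == c) * np.mean(b == c) for c in categories))
            kappa = 1.0 if p_expected == 1.0 else (p_observed - p_expected) / (1.0 - p_expected)
            return 100.0 * p_observed, kappa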

  13. Optimization of compressive 4D-spatio-spectral snapshot imaging

    Science.gov (United States)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method for the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. Then, these elemental images, with one-dimensional spectral information and different perspectives, are captured by the coded aperture snapshot spectral imager (CASSI), which senses the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only a single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  14. Combining Biometric Fractal Pattern and Particle Swarm Optimization-Based Classifier for Fingerprint Recognition

    Directory of Open Access Journals (Sweden)

    Chia-Hung Lin

    2010-01-01

    Full Text Available This paper proposes combining the biometric fractal pattern and a particle swarm optimization (PSO)-based classifier for fingerprint recognition. Fingerprints have arch, loop, whorl, and accidental morphologies, and embed singular points, which establish fingerprint individuality. An automatic fingerprint identification system consists of two stages: digital image processing (DIP) and pattern recognition. DIP is used to convert the image to binary form, filter out noise, and locate the reference point. For the binary images, Katz's algorithm is employed to estimate the fractal dimension (FD) from a two-dimensional (2D) image. Biometric features are extracted as fractal patterns using different FDs. A probabilistic neural network (PNN) classifier compares the fractal patterns against a small-scale database. A PSO algorithm is used to tune the optimal parameters and improve the accuracy. For 30 subjects in the laboratory, the proposed classifier demonstrates greater efficiency and higher accuracy in fingerprint recognition.
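
    Katz's estimator mentioned above has a compact closed form; the sketch below (Python/NumPy) computes it for a 1-D waveform, using a synthetic ridge-profile signal as a hypothetical stand-in for data extracted from the binarized fingerprint.

        import numpy as np

        def katz_fd(y):
            """Katz fractal dimension of a 1-D waveform (Katz, 1988)."""
            y = np.asarray(y, dtype=float)
            x = np.arange(len(y), dtype=float)
            # Total length of the curve: sum of distances between successive points
            L = np.sum(np.hypot(np.diff(x), np.diff(y)))
            # Planar extent: maximum distance from the first point to any other point
            d = np.max(np.hypot(x - x[0], y - y[0]))
            n = len(y) - 1                      # number of steps along the curve
            return np.log10(n) / (np.log10(n) + np.log10(d / L))

        # Hypothetical ridge-profile waveform sampled from a binarized fingerprint row
        profile = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * np.random.randn(200)
        print(katz_fd(profile))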

  15. Fingerprint segmentation: an investigation of various techniques and a parameter study of a variance-based method

    CSIR Research Space (South Africa)

    Msiza, IS

    2011-09-01

    Full Text Available Fingerprint image segmentation plays an important role in any fingerprint image analysis implementation and it should, ideally, be executed during the initial stages of a fingerprint manipulation process. After careful consideration of various...

  16. Spatial correlation genetic algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Wu, M.-S.; Teng, W.-C.; Jeng, J.-H.; Hsieh, J.-G.

    2006-01-01

    Fractal image compression explores the self-similarity property of a natural image and utilizes the partitioned iterated function system (PIFS) to encode it. This technique is of great interest both in theory and in application. However, it is time-consuming in the encoding process, and this drawback renders it impractical for real-time applications. The time is mainly spent on the search for the best-match block in a large domain pool. In this paper, a spatial correlation genetic algorithm (SC-GA) is proposed to speed up the encoder. There are two stages in the SC-GA method. The first stage makes use of spatial correlations in images, for both the domain pool and the range pool, to exploit local optima. The second stage operates on the whole image to explore more adequate similarities if the local optima are not satisfactory. With the aid of spatial correlation in images, the encoding time is 1.5 times faster than that of the traditional genetic algorithm method, while the quality of the retrieved image is almost the same. Moreover, about half of the matched blocks come from the correlated space, so fewer bits are required to represent the fractal transform and therefore the compression ratio is also improved.
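
    For orientation, the range/domain matching step that the SC-GA accelerates can be written as a brute-force search; the sketch below (Python/NumPy) fits the affine gray-level transform s*D + o for each candidate domain block by least squares and keeps the best match. Block sizes and the search step are illustrative assumptions, and no genetic or spatial-correlation speed-up is included.

        import numpy as np

        def downsample2(block):
            """Average 2x2 neighborhoods to shrink a domain block to range-block size."""
            return 0.25 * (block[0::2, 0::2] + block[1::2, 0::2] +
                           block[0::2, 1::2] + block[1::2, 1::2])

        def best_domain_match(image, range_block, domain_size=16, step=8):
            """Exhaustive search for the domain block minimizing ||s*D + o - R||^2."""
            best = None
            R = range_block.ravel()
            H, W = image.shape
            for i in range(0, H - domain_size + 1, step):
                for j in range(0, W - domain_size + 1, step):
                    D = downsample2(image[i:i + domain_size, j:j + domain_size]).ravel()
                    # Least-squares contrast s and brightness o for s*D + o ~ R
                    A = np.vstack([D, np.ones_like(D)]).T
                    (s, o), *_ = np.linalg.lstsq(A, R, rcond=None)
                    err = np.sum((s * D + o - R) ** 2)
                    if best is None or err < best[0]:
                        best = (err, i, j, s, o)
            return best   # (error, domain row, domain col, contrast, brightness)

        img = np.random.rand(64, 64)          # stand-in for a grayscale image
        print(best_domain_match(img, img[0:8, 0:8]))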

  17. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    Full Text Available The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  18. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...
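
    The general pattern of treating the lowest-frequency subband losslessly while quantizing the high-frequency subbands can be sketched as follows (Python with PyWavelets). Plain uniform scalar quantization is used here instead of the paper's optimized vector quantization, and the wavelet choice and step size are illustrative assumptions.

        import numpy as np
        import pywt

        def compress_subbands(image, wavelet="bior4.4", levels=3, q_step=8.0):
            """Wavelet-decompose an image, keep the approximation band exactly,
            and uniformly quantize the detail subbands."""
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            approx, details = coeffs[0], coeffs[1:]
            q_details = [tuple(np.round(band / q_step) for band in level)
                         for level in details]
            return approx, q_details

        def decompress_subbands(approx, q_details, wavelet="bior4.4", q_step=8.0):
            details = [tuple(band * q_step for band in level) for level in q_details]
            return pywt.waverec2([approx] + details, wavelet)

        img = np.random.rand(128, 128) * 255      # stand-in for a medical image
        a, d = compress_subbands(img)
        rec = decompress_subbands(a, d)
        print(np.sqrt(np.mean((img - rec[:128, :128]) ** 2)))   # RMS reconstruction error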

  19. Electronic fingerprinting of the dead.

    Science.gov (United States)

    Rutty, G N; Stringer, K; Turk, E E

    2008-01-01

    To date, a number of methods exist for the capture of fingerprints from cadavers that can then be used in isolation as a primary method for the identification of the dead. We report the use of a handheld, mobile wireless unit used in conjunction with a personal digital assistant (PDA) device for the capture of fingerprints from the dead. We also consider a handheld single-digit fingerprint scanner that utilises a USB laptop connection for the electronic capture of cadaveric fingerprints. Both are single-operator units that, if ridge detail is preserved, can collect a 10-set of finger pad prints in approximately 45 and 90 s, respectively. We present our observations on the restrictions as to when such devices can be used with cadavers. We do, however, illustrate that the images are of sufficient quality to allow positive identification from finger pad prints of the dead. With the development of mobile, handheld, biometric, PDA-based units for the police, we hypothesize that, under certain circumstances, devices such as these could be used for the accelerated acquisition of fingerprint identification data with the potential for rapid near-patient identification in the future.

  20. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    Science.gov (United States)

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R2 >0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.

  1. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A mixture of Gaussians distribution is applied in the compressive image space to model the background image and to perform foreground detection. For each moving target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask can yield better detection results. However, using the random Gaussian and Toeplitz phase masks can achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  2. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively set. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on predicting the noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on many real-life color images acquired by digital cameras and are shown to provide more than a two-fold increase in average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ-compressed images.
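
    The step of adaptively setting a scaling factor for the default JPEG quantization table can be illustrated with the sketch below (Python/NumPy). The base table is the standard JPEG luminance quantization table (ISO/IEC 10918-1, Annex K); the libjpeg-style scaling rule is an assumption used only for illustration and is not taken from the paper.

        import numpy as np

        # Standard JPEG luminance quantization table (ISO/IEC 10918-1, Annex K)
        BASE_LUMA_QTABLE = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

        def scaled_qtable(scale_percent):
            """Scale the default table with a libjpeg-style rule: a larger scaling
            factor gives larger quantization steps and stronger compression."""
            q = np.floor((BASE_LUMA_QTABLE * scale_percent + 50) / 100)
            return np.clip(q, 1, 255)

        # A noisier or more blurred image tolerates coarser quantization, so a larger
        # scaling factor can be selected without introducing visible distortions.
        print(scaled_qtable(50))     # finer quantization steps
        print(scaled_qtable(200))    # coarser quantization steps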

  3. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  4. Holistic processing of fingerprints by expert forensic examiners.

    Science.gov (United States)

    Vogelsang, Macgregor D; Palmeri, Thomas J; Busey, Thomas A

    2017-01-01

    Holistic processing is often characterized as a process by which objects are perceived as a whole rather than a compilation of individual features. This mechanism may play an important role in the development of perceptual expertise because it allows for rapid integration across image regions. The present work explores whether holistic processing is present in latent fingerprint examiners, who compare fingerprints collected from crime scenes against a set of standards taken from a suspect. We adapted a composite task widely used in the face recognition and perceptual expertise literatures, in which participants were asked to match only a particular half of a fingerprint with a previous image while ignoring the other half. We tested both experts and novices, using both upright and inverted fingerprints. For upright fingerprints, we found weak evidence for holistic processing, but with no differences between experts and novices with respect to holistic processing. For inverted fingerprints, we found stronger evidence of holistic processing, with weak evidence for differences between experts and novices. These relatively weak holistic processing effects contrast with robust evidence for holistic processing with faces and with objects in other domains of perceptual expertise. The data constrain models of holistic processing by demonstrating that latent fingerprint experts and novices may not substantively differ in terms of the amount of holistic processing and that inverted stimuli actually produced more evidence for holistic processing than upright stimuli. Important differences between the present fingerprint stimuli and those in the literature include the lack of verbal labels for experts and the absence of strong vertical asymmetries, both of which might contribute to stronger holistic processing signatures in other stimulus domains.

  5. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000 performs better than AVC, the state of the art video compression standard.

  6. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports on a new information-preserving data compression algorithm for computed tomographic (CT) images. This algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data obtained with the above process are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run-length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
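
    Steps 1, 2 and 5 of the algorithm (background elimination below -900 H.U., neighbor differencing along matrix lines, and bit-plane run-length coding) can be sketched as follows (Python/NumPy); the sign handling and the specially designed difference code of steps 3-4 are omitted, so this is an illustrative simplification rather than the published algorithm.

        import numpy as np

        def preprocess_and_difference(ct_slice, background_hu=-900):
            """Clip air pixels below the threshold, then encode each pixel as the
            difference from its left neighbor along every matrix line."""
            img = np.where(ct_slice < background_hu, background_hu, ct_slice)
            diff = img.astype(np.int32)
            diff[:, 1:] = img[:, 1:] - img[:, :-1]        # horizontal differences
            return diff

        def run_lengths(bit_plane):
            """Run-length encode one bit plane as (bit value, run length) pairs."""
            flat = bit_plane.ravel()
            change = np.flatnonzero(np.diff(flat)) + 1
            starts = np.concatenate(([0], change))
            ends = np.concatenate((change, [flat.size]))
            return [(int(flat[s]), int(e - s)) for s, e in zip(starts, ends)]

        ct = np.random.randint(-1000, 1000, size=(64, 64))   # stand-in for a CT slice
        diff = preprocess_and_difference(ct)
        codes = np.abs(diff).astype(np.uint16)                # signs handled separately in practice
        plane0 = (codes >> 0) & 1                             # least significant bit plane
        print(len(run_lengths(plane0)), "runs in bit plane 0")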

  7. Virtual endoscopic images by 3D FASE cisternography for neurovascular compression

    International Nuclear Information System (INIS)

    Ishimori, Takashi; Nakano, Satoru; Kagawa, Masahiro

    2003-01-01

    Three-dimensional fast asymmetric spin echo (3D FASE) cisternography provides high spatial resolution and excellent contrast as a water image acquisition technique. It is also useful for the evaluation of various anatomical regions. This study investigated the usefulness and limitations of virtual endoscopic images obtained by 3D FASE MR cisternography in the preoperative evaluation of patients with neurovascular compression. The study included 12 patients with neurovascular compression: 10 with hemifacial spasm and two with trigeminal neuralgia. The diagnosis was surgically confirmed in all patients. The virtual endoscopic images obtained were judged to be of acceptable quality for interpretation in all cases. The areas of compression identified in preoperative diagnosis with virtual endoscopic images showed good agreement with those observed from surgery, except in one case in which the common trunk of the anterior inferior cerebellar artery and posterior inferior cerebellar artery (AICA-PICA) bifurcated near the root exit zone of the facial nerve. The veins are displayed in some cases but not in others. The main advantage of generating virtual endoscopic images is that such images can be used for surgical simulation, allowing the neurosurgeon to perform surgical procedures with greater confidence. (author)

  8. Angstrom-Resolution Magnetic Resonance Imaging of Single Molecules via Wave-Function Fingerprints of Nuclear Spins

    Science.gov (United States)

    Ma, Wen-Long; Liu, Ren-Bao

    2016-08-01

    Single-molecule sensitivity of nuclear magnetic resonance (NMR) and angstrom resolution of magnetic resonance imaging (MRI) are the highest challenges in magnetic microscopy. Recent development in dynamical-decoupling- (DD) enhanced diamond quantum sensing has enabled single-nucleus NMR and nanoscale NMR. Similar to conventional NMR and MRI, current DD-based quantum sensing utilizes the "frequency fingerprints" of target nuclear spins. The frequency fingerprints by their nature cannot resolve different nuclear spins that have the same noise frequency or differentiate different types of correlations in nuclear-spin clusters, which limit the resolution of single-molecule MRI. Here we show that this limitation can be overcome by using "wave-function fingerprints" of target nuclear spins, which is much more sensitive than the frequency fingerprints to the weak hyperfine interaction between the targets and a sensor under resonant DD control. We demonstrate a scheme of angstrom-resolution MRI that is capable of counting and individually localizing single nuclear spins of the same frequency and characterizing the correlations in nuclear-spin clusters. A nitrogen-vacancy-center spin sensor near a diamond surface, provided that the coherence time is improved by surface engineering in the near future, may be employed to determine with angstrom resolution the positions and conformation of single molecules that are isotope labeled. The scheme in this work offers an approach to breaking the resolution limit set by the "frequency gradients" in conventional MRI and to reaching the angstrom-scale resolution.

  9. Establishing physical criteria for stopping lossy compression of digital medical images

    International Nuclear Information System (INIS)

    Perez Diaz, M

    2008-01-01

    Full text: A key difficulty in storing and/or transmitting digital medical images obtained with modern technologies is the size in bytes they occupy. One way to address this is the implementation of compression algorithms (codecs), with or without losses. Lossy codecs in particular allow significant reductions in image size, but if they are not applied on solid scientific criteria, useful diagnostic information can be lost. This talk describes and assesses the image quality obtained after applying current compression codecs, based on the analysis of physical parameters such as spatial resolution, random noise, contrast and the image generation devices. The open problem for Medical Physics and Image Processing is to establish objective criteria for stopping lossy compression, based on traditional univariate and bivariate metrics such as the mean square error introduced by each compression rate, the peak signal-to-noise ratio and the contrast-to-noise ratio, as well as more modern metrics such as the Structural Similarity Index, distance measures, the singular value decomposition of the image matrix, and correlation and spectral measurements. A review is also made of physical approaches for predicting image quality using mathematical observers such as the Hotelling and channelized Hotelling observers with Gabor functions or Laguerre-Gauss polynomials. Finally, the correlation of these objective methods with the subjective assessment of image quality obtained from ROC analysis based on diagnostic performance curves is analyzed. (author)

  10. Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager

    Science.gov (United States)

    Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza

    2012-01-01

    Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to traditional ASIC (application specific integrated circuit) designs and can be integrated as intellectual property (IP) for part of, e.g., a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
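
    The essence of the FL algorithm, adaptive linear prediction with sign-algorithm weight updates, can be sketched in a simplified 1-D form (Python/NumPy). The filter order and step size are illustrative assumptions; the actual compressor predicts across spectral bands and entropy-codes the residuals.

        import numpy as np

        def sign_lms_residuals(samples, order=3, mu=1e-4):
            """Predict each sample from the previous `order` samples with a linear
            filter adapted by the sign algorithm; return the prediction residuals."""
            x = np.asarray(samples, dtype=float)
            w = np.zeros(order)
            residuals = np.empty_like(x)
            residuals[:order] = x[:order]            # first samples sent uncoded
            for n in range(order, len(x)):
                context = x[n - order:n][::-1]       # most recent sample first
                prediction = np.dot(w, context)
                e = x[n] - prediction
                residuals[n] = e
                w += mu * np.sign(e) * context       # sign-algorithm weight update
            return residuals

        signal = np.cumsum(np.random.randn(1000)) + 100.0   # smooth stand-in signal
        res = sign_lms_residuals(signal)
        # Residuals are far more compressible than the raw samples
        print(np.std(signal), np.std(res[3:]))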

  11. Magnetic Resonance Fingerprinting - a promising new approach to obtain standardized imaging biomarkers from MRI

    OpenAIRE

    2015-01-01

    Current routine MRI examinations rely on the acquisition of qualitative images that have a contrast 'weighted' for a mixture of (magnetic) tissue properties. Recently, a novel approach was introduced, namely MR Fingerprinting (MRF), with a completely different approach to data acquisition, post-processing and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudorandomized acquisition that causes ...

  12. Inter frame motion estimation and its application to image sequence compression: an introduction

    International Nuclear Information System (INIS)

    Cremy, C.

    1996-01-01

    With the constant development of new communication technologies, like digital TV and teleconferencing, and of image analysis applications, there is a growing volume of data to manage. Compression techniques are required for the transmission and storage of these data. Dealing with original images would require the use of expensive high-bandwidth communication devices and huge storage media. Image sequence compression can be achieved by means of interframe estimation, which consists of identifying redundant information in zones where there is little motion between two frames. This paper is an introduction to some motion estimation techniques, such as gradient techniques, pel-recursive methods and block matching, and to their application to image sequence compression. (Author) 17 refs
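
    The block-matching technique mentioned above can be sketched compactly (Python/NumPy): for each block of the current frame, an exhaustive search finds the displacement in the reference frame that minimizes the sum of absolute differences. Block size and search range are illustrative assumptions.

        import numpy as np

        def block_match(ref, cur, block=8, search=4):
            """For each block of the current frame, find the displacement within
            +/- search pixels that minimizes the sum of absolute differences (SAD)."""
            H, W = cur.shape
            vectors = {}
            for by in range(0, H - block + 1, block):
                for bx in range(0, W - block + 1, block):
                    target = cur[by:by + block, bx:bx + block]
                    best = (np.inf, (0, 0))
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > H or x + block > W:
                                continue
                            sad = np.sum(np.abs(ref[y:y + block, x:x + block] - target))
                            if sad < best[0]:
                                best = (sad, (dy, dx))
                    vectors[(by, bx)] = best[1]
            return vectors

        prev_frame = np.random.rand(32, 32)
        next_frame = np.roll(prev_frame, shift=(2, 1), axis=(0, 1))   # simulated global motion
        # Expect (-2, -1): the content moved down/right, so its source lies up/left
        print(block_match(prev_frame, next_frame)[(8, 8)])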

  13. Chemical Visualization of Sweat Pores in Fingerprints Using GO-Enhanced TOF-SIMS.

    Science.gov (United States)

    Cai, Lesi; Xia, Meng-Chan; Wang, Zhaoying; Zhao, Ya-Bin; Li, Zhanping; Zhang, Sichun; Zhang, Xinrong

    2017-08-15

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) has been used in imaging of small molecules (SIMS was used to detect and image relatively high mass molecules such as poisons, alkaloids (>600 Da) and controlled drugs, and antibiotics (>700 Da) in fingerprints. Detailed features of fingerprints, such as the number and distribution of sweat pores in a ridge and even the delicate morphology of a single pore, were clearly revealed in SIMS images of relatively high mass molecules. These detailed features, combined with the identified chemical composition, were sufficient to establish a human identity and link a suspect to a crime scene. The wide detectable mass range and high spatial resolution make GO-enhanced TOF-SIMS a promising tool for accurate and fast analysis of fingerprints, especially in fragmentary fingerprint analysis.

  14. A STRONG SECURITY PROTOCOL AGAINST FINGERPRINT DATABASE ATTACKS

    Directory of Open Access Journals (Sweden)

    U. Latha

    2013-08-01

    Full Text Available Biometric data are subject to ongoing changes, which creates a crucial problem for fingerprint databases. To deal with this, a security protocol is proposed to protect the fingerprint information from unauthorized users. The proposed system is comprised of three phases, namely fingerprint reconstruction, feature extraction and development of a trigon-based security protocol. In fingerprint reconstruction, fingerprint images with different crack variance levels are reconstructed by the M-band Dual Tree Complex Wavelet Transform (DTCWT). After that, features are extracted by binarization. A set of fingerprint images is utilized to evaluate the performance of the security protocol, and the results from this process confirm the robustness of the proposed trigon-based security protocol. The implementation results show the effectiveness of the proposed trigon-based security protocol in protecting the fingerprint information and the improvement achieved in image reconstruction and the security process.

  15. Differential diagnosis of benign and malignant vertebral compression fractures with MR imaging

    International Nuclear Information System (INIS)

    Staebler, A.; Krimmel, K.; Seiderer, M.; Gaertner, C.; Fritsch, S.; Raum, W.

    1992-01-01

    42 patients with known malignancy and vertebral compressions underwent MRI. Sagittal T1-weighted spin-echo images pre- and post-Gd-DTPA, out-of-phase long-TR gradient-echo (GE) images and short-TI inversion recovery (STIR) images were obtained at 1.0 T. In 39 of 42 cases a correct differentiation between osteoporotic and tumorous vertebral compression fractures was possible by quantification and correlation of SE and GE signal intensities. Gd-DTPA did not improve differential diagnosis, since both tumour infiltration and bone marrow oedema in acute compression fracture showed comparable enhancement. STIR sequences were most sensitive for pathology but unspecific, owing to a comparable amount of water in tumour tissue and bone marrow oedema. Susceptibility-induced signal reduction in GE images and morphologic criteria proved to be most reliable for the differentiation of benign and tumour-related fractures. (orig./GDG)

  16. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    Science.gov (United States)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transformation, has advantages compared with other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios, from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves of the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases, with small fluctuations, as the compression ratio increases.

  17. A novel hand-type detection technique with fingerprint sensor

    Science.gov (United States)

    Abe, Narishige; Shinzaki, Takashi

    2013-05-01

    In large-scale biometric authentication systems such as US-VISIT (USA), a ten-fingerprint scanner that simultaneously captures four fingerprints at a time is used. In traditional systems, specific hand types (left or right) are indicated, but it is difficult to detect the hand type because of hand rotation and the opening and closing of fingers. In this paper, we evaluated features extracted from hand images (captured by a general optical scanner) that are considered to be effective for detecting hand type. Furthermore, we extended this knowledge to real fingerprint images and evaluated the accuracy with which the hand type is detected. We obtained an accuracy of about 80% with only three fingers (index, middle, ring finger).

  18. Correspondence normalized ghost imaging on compressive sensing

    International Nuclear Information System (INIS)

    Zhao Sheng-Mei; Zhuang Peng

    2014-01-01

    Ghost imaging (GI) offers great potential with respect to conventional imaging techniques. It is an open problem in GI systems that a long acquisition time is required for reconstructing images with good visibility and signal-to-noise ratios (SNRs). In this paper, we propose a new scheme to get good performance with a shorter reconstruction time. We call it correspondence normalized ghost imaging based on compressive sensing (CCNGI). In the scheme, we enhance the signal-to-noise performance by normalizing the reference beam intensity to eliminate the noise caused by laser power fluctuations, and we reduce the reconstruction time by using both compressive sensing (CS) and time-correspondence imaging (CI) techniques. It is shown that the quality of the images is improved and the reconstruction time is reduced using the CCNGI scheme. For the two-grayscale "double-slit" image, the mean square errors (MSEs) for the GI and normalized GI (NGI) schemes with 5000 measurements are 0.237 and 0.164, respectively, while the MSE is 0.021 for the CCNGI scheme with 2500 measurements. For the eight-grayscale "lena" object, the peak signal-to-noise ratios (PSNRs) are 10.506 and 13.098 using the GI and NGI schemes, respectively, while the value rises to 16.198 using the CCNGI scheme. The results also show that a high-fidelity GI reconstruction has been achieved using only 44% of the number of measurements corresponding to the Nyquist limit for the two-grayscale "double-slit" object. The quality of the reconstructed images using CCNGI is almost the same as that from GI via sparsity constraints (GISC), with a shorter reconstruction time. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
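
    The second-order correlation that GI-type reconstructions rely on can be sketched as follows (Python/NumPy). This shows plain correlation GI and a normalized variant on a synthetic double-slit object; the compressive-sensing and time-correspondence refinements that define CCNGI are not included.

        import numpy as np

        rng = np.random.default_rng(0)
        H = W = 32
        n_meas = 5000

        obj = np.zeros((H, W))
        obj[:, 10:12] = 1.0                         # synthetic two-grayscale "double-slit"
        obj[:, 20:22] = 1.0

        patterns = rng.random((n_meas, H, W))        # reference speckle patterns
        bucket = patterns.reshape(n_meas, -1) @ obj.ravel()   # single-pixel (bucket) signal

        # Traditional GI: correlate bucket fluctuations with the reference patterns
        gi = np.tensordot(bucket - bucket.mean(),
                          patterns - patterns.mean(axis=0), axes=1) / n_meas

        # Normalized GI: divide each bucket value by the total reference intensity,
        # which suppresses noise from source power fluctuations
        ref_sum = patterns.reshape(n_meas, -1).sum(axis=1)
        b_norm = bucket / ref_sum
        ngi = np.tensordot(b_norm - b_norm.mean(),
                           patterns - patterns.mean(axis=0), axes=1) / n_meas

        print(np.corrcoef(gi.ravel(), obj.ravel())[0, 1],
              np.corrcoef(ngi.ravel(), obj.ravel())[0, 1])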

  19. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices in presenting information in a succinct form. The conventional DCT structure used for image compression has low complexity and is area efficient. The 2D DCT also provides reasonable data compression, but its implementation requires more multipliers and adders, leading to larger area and higher power consumption. Taking account of all this, this paper deals with a VLSI architecture for image compression using a ROM-free, distributed-arithmetic-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is well suited to real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms, and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared with previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used for providing the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
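
    Independently of the hardware architecture above, the underlying DCT-based compression step can be sketched in a few lines (Python with SciPy): 8x8 blocks are transformed and only the largest-magnitude coefficients are kept. This is a generic illustration, not the adder-minimized design of the paper.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            """2-D DCT of an 8x8 block, keeping only the `keep` largest coefficients."""
            c = dctn(block, norm="ortho")
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0
            return c

        def decompress_block(coeffs):
            return idctn(coeffs, norm="ortho")

        img = np.random.rand(64, 64) * 255           # stand-in for an image tile
        out = np.zeros_like(img)
        for i in range(0, 64, 8):
            for j in range(0, 64, 8):
                out[i:i+8, j:j+8] = decompress_block(compress_block(img[i:i+8, j:j+8]))
        print(np.sqrt(np.mean((img - out) ** 2)))    # reconstruction RMS error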

  20. Simultaneous optical image compression and encryption using error-reduction phase retrieval algorithm

    International Nuclear Information System (INIS)

    Liu, Wei; Liu, Shutian; Liu, Zhengjun

    2015-01-01

    We report a simultaneous image compression and encryption scheme based on solving a typical optical inverse problem. The secret images to be processed are multiplexed as the input intensities of a cascaded diffractive optical system. At the output plane, compressed complex-valued data with far fewer measurements can be obtained by utilizing an error-reduction phase retrieval algorithm. The magnitude of the output image can serve as the final ciphertext, while its phase serves as the decryption key. Therefore the compression and encryption are completed simultaneously without additional encoding and filtering operations. The proposed strategy can be straightforwardly applied to existing optical security systems that involve diffraction and interference. Numerical simulations are performed to demonstrate the validity and security of the proposal. (paper)
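
    The error-reduction phase retrieval loop underlying the scheme can be sketched as follows (Python/NumPy): the algorithm alternates between imposing the measured magnitude in the output plane and a support/positivity constraint in the input plane. A single Fourier-transform model and a known support are simplifying assumptions; the paper's system is a cascaded diffractive setup with multiplexed input images.

        import numpy as np

        def error_reduction(measured_mag, support, n_iter=200, seed=0):
            """Recover an input field consistent with |FFT(field)| = measured_mag
            and with zero amplitude outside the given support mask."""
            rng = np.random.default_rng(seed)
            field = support * rng.random(measured_mag.shape)
            for _ in range(n_iter):
                spectrum = np.fft.fft2(field)
                # Output-plane constraint: keep the phase, impose the measured magnitude
                spectrum = measured_mag * np.exp(1j * np.angle(spectrum))
                field = np.fft.ifft2(spectrum)
                # Input-plane constraint: real, non-negative, zero outside the support
                field = np.maximum(field.real, 0.0) * support
            return field

        true_img = np.zeros((64, 64)); true_img[24:40, 24:40] = 1.0   # toy secret image
        supp = np.zeros_like(true_img); supp[16:48, 16:48] = 1.0       # assumed known support
        mag = np.abs(np.fft.fft2(true_img))                            # "ciphertext" magnitude
        rec = error_reduction(mag, supp)
        print(np.linalg.norm(rec - true_img) / np.linalg.norm(true_img))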

  1. Effects of compression and individual variability on face recognition performance

    Science.gov (United States)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images. The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers. The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both

  2. Integrating dynamic and distributed compressive sensing techniques to enhance image quality of the compressive line sensing system for unmanned aerial vehicles application

    Science.gov (United States)

    Ouyang, Bing; Hou, Weilin; Caimi, Frank M.; Dalgleish, Fraser R.; Vuorenkoski, Anni K.; Gong, Cuiling

    2017-07-01

    The compressive line sensing imaging system adopts distributed compressive sensing (CS) to acquire data and reconstruct images. Dynamic CS uses Bayesian inference to capture the correlated nature of the adjacent lines. An image reconstruction technique that incorporates dynamic CS in the distributed CS framework was developed to improve the quality of reconstructed images. The effectiveness of the technique was validated using experimental data acquired in an underwater imaging test facility. Results that demonstrate contrast and resolution improvements will be presented. The improved efficiency is desirable for unmanned aerial vehicles conducting long-duration missions.

  3. Nucleus fingerprinting for the unique identification of Feulgen-stained nuclei

    Science.gov (United States)

    Friedrich, David; Brozio, Matthias; Bell, André; Biesterfeld, Stefan; Böcking, Alfred; Aach, Til

    2012-03-01

    DNA Image Cytometry is a method for non-invasive cancer diagnosis which measures the DNA content of Feulgen-stained nuclei. DNA content is measured using a microscope system equipped with a digital camera as a densitometer and estimating the DNA content from the absorption of light when passing through the nuclei. However, a DNA Image Cytometry measurement is only valid if each nucleus is only measured once. To assist the user in preventing multiple measurements of the same nucleus, we have developed a unique digital identifier for the characterization of Feulgen-stained nuclei, the so called Nucleus Fingerprint. Only nuclei with a new fingerprint can be added to the measurement. This fingerprint is based on basic nucleus features, the contour of the nucleus and the spatial relationship to nuclei in the vicinity. Based on this characterization, a classifier for testing two nuclei for identity is presented. In a pairwise comparison of ~40000 pairs of mutually different nuclei, 99.5% were classified as different. In another 450 tests, the fingerprints of the same nucleus recorded a second time were in all cases judged identical. We therefore conclude that our Nucleus Fingerprint approach robustly prevents the repeated measurement of nuclei in DNA Image Cytometry.

  4. Verification and Validation of a Fingerprint Image Registration Software

    Directory of Open Access Journals (Sweden)

    Liu Yan

    2006-01-01

    Full Text Available The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of a fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is at the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  5. Multi-scale simulations of field ion microscopy images—Image compression with and without the tip shank

    International Nuclear Information System (INIS)

    Niewieczerzał, Daniel; Oleksy, Czesław; Szczepkowicz, Andrzej

    2012-01-01

    Multi-scale simulations of field ion microscopy images of faceted and hemispherical samples are performed using a 3D model. It is shown that faceted crystals have compressed images even in cases with no shank. The presence of the shank increases the compression of images of faceted crystals quantitatively in the same way as for hemispherical samples. It is hereby proven that the shank does not influence significantly the local, relative variations of the magnification caused by the atomic-scale structure of the sample. -- Highlights: ► Multi-scale simulations of field ion microscopy images. ► Faceted and hemispherical samples with and without shank. ► Shank causes overall compression, but does not influence local magnification effects. ► Image compression linearly increases with the shank angle. ► Shank changes compression of image of faceted tip in the same way as for smooth sample.

  6. Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging

    Directory of Open Access Journals (Sweden)

    T. Knopp

    2015-01-01

    Full Text Available Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, a matrix compression technique has recently been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
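
    The row-wise (local) thresholding and compressed-row storage described above can be sketched as follows (Python with SciPy). The basis transformation is omitted and the dense matrix is synthetic, so the keep fraction and sizes are purely illustrative.

        import numpy as np
        from scipy.sparse import csr_matrix

        def compress_system_matrix(S, keep_fraction=0.05):
            """Zero all entries below a per-row threshold and store the result in
            compressed sparse row (CSR) format."""
            S = np.asarray(S, dtype=float)
            compressed = S.copy()
            for r in range(S.shape[0]):
                row = np.abs(S[r])
                # Local threshold: keep only the largest `keep_fraction` of this row
                thresh = np.quantile(row, 1.0 - keep_fraction)
                compressed[r, row < thresh] = 0.0
            return csr_matrix(compressed)

        rng = np.random.default_rng(1)
        dense = rng.standard_normal((500, 400)) * rng.random((500, 1))   # rows of very different scale
        sparse = compress_system_matrix(dense, keep_fraction=0.05)
        print(sparse.nnz / dense.size)        # fraction of entries left to process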

  7. Network-based Fingerprint Authentication System Using a Mobile Device

    OpenAIRE

    Zhang, Qihu

    2016-01-01

    Abstract— Fingerprint-based user authentication is highly effective in networked services such as electronic payment, but conventional authentication solutions have problems in cost, usability and security. To resolve these problems, we propose a touch-less fingerprint authentication solution, in which a mobile device's built-in camera is used to capture fingerprint image, and then it is sent to the server to determine the identity of the user. We designed and implemented a prototype as an a...

  8. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    Directory of Open Access Journals (Sweden)

    Christian Schou Oxvig

    2014-10-01

    Full Text Available Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment, thereby accelerating the acquisition of AFM images. Magni also provides researchers in compressed sensing with a selection of algorithms for reconstructing undersampled general images, and offers a consistent and rigorous way to efficiently evaluate the researchers' own reconstruction algorithms in terms of phase transitions. The package also serves as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research.

  9. Fast 3D magnetic resonance fingerprinting for a whole-brain coverage.

    Science.gov (United States)

    Ma, Dan; Jiang, Yun; Chen, Yong; McGivney, Debra; Mehta, Bhairav; Gulani, Vikas; Griswold, Mark

    2018-04-01

    The purpose of this study was to accelerate the acquisition and reconstruction time of 3D magnetic resonance fingerprinting scans. A 3D magnetic resonance fingerprinting scan was accelerated by using a single-shot spiral trajectory with an undersampling factor of 48 in the x-y plane, and an interleaved sampling pattern with an undersampling factor of 3 through plane. Further acceleration came from reducing the waiting time between neighboring partitions. The reconstruction time was accelerated by applying singular value decomposition compression in k-space. Finally, a 3D premeasured B1 map was used to correct for the B1 inhomogeneity. The T1 and T2 values of the International Society for Magnetic Resonance in Medicine/National Institute of Standards and Technology MRI phantom showed a good agreement with the standard values, with an average concordance correlation coefficient of 0.99 and a coefficient of variation of 7% in the repeatability scans. The results from in vivo scans also showed high image quality in both transverse and coronal views. This study applied a fast acquisition scheme for a fully quantitative 3D magnetic resonance fingerprinting scan with a total acceleration factor of 144 as compared with the Nyquist rate, such that 3D T1, T2, and proton density maps can be acquired with whole-brain coverage at clinical resolution in less than 5 min. Magn Reson Med 79:2190-2197, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
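
    The SVD compression used to shorten the reconstruction can be sketched as follows (Python/NumPy): the fingerprint dictionary and the measured signals are projected onto the first few left singular vectors, so matching happens in a much lower-dimensional subspace. The toy exponential-decay dictionary below is a stand-in assumption, not a Bloch simulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n_timepoints, n_atoms, rank = 1000, 2000, 25

        # Toy dictionary: each column is a decay curve for one candidate relaxation value
        t = np.linspace(0, 1, n_timepoints)[:, None]
        decays = rng.uniform(0.05, 0.5, n_atoms)[None, :]
        D = np.exp(-t / decays)

        # Compress along the time dimension with a truncated SVD
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        Ur = U[:, :rank]                       # compression basis (time -> rank)
        D_c = Ur.T @ D                         # compressed dictionary, rank x n_atoms

        # Match a noisy measured signal in the compressed domain
        truth = 1234
        signal = D[:, truth] + 0.01 * rng.standard_normal(n_timepoints)
        sig_c = Ur.T @ signal
        scores = (D_c.T @ sig_c) / (np.linalg.norm(D_c, axis=0) * np.linalg.norm(sig_c))
        print(int(np.argmax(scores)), truth)   # best-matching atom vs. ground truth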

  10. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme allows the acceleration of data mining, such as principal component analysis, as they can be performed on the compressed data representation, providing a decrease in the factorisation time of a single image from five minutes to under a second. Using this workflow the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and tissue that provides improved mechanical load tolerance, a similar interface was found in the ligament construct.

  11. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and a limited energy supply impose strong limits on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and the C code from JasPer. JPEG2000 provides a practical set of features not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). The performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory, by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  12. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

    To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and indexing rule for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images was analytically defined in the polygonal pixel structure, converted to conventional rectangular-pixel-based images, and correctly displayed, which verified the correctness of the image definition, conversion description and conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small-animal PET scanner, a compression ratio of 12.6:1 in system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular-pixel-based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot-rod phantom was measured and reconstructed on these two grids with reasonable time cost. The resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)

  13. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed

    2013-04-01

    Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing, such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on the Haar and modified Haar transforms are presented. The standard Haar wavelet transform with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank. The vertical and horizontal Haar filters are composed to construct four two-dimensional filters; these filters are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd bases (N=3 and N=5) to generate many solution sets; these sets are tested using the energy function or a numerical method to obtain the optimum one. The Haar transform is simple, efficient in memory usage due to the high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented MATLAB simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on the Haar and modified Haar techniques in attaining an efficient compression ratio (CR), achieving a higher peak signal-to-noise ratio (PSNR), and producing much smoother images compared with standard JPEG, especially at high CR. Finally, a comparison between the standard JPEG, Haar, and modified Haar techniques confirms the superior capability of the modified Haar approach.
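
    A one-level 2-D Haar decomposition via the averaging and differencing filters described above can be sketched directly in NumPy; only the standard N=2 Haar case is shown, not the modified odd-base variants.

        import numpy as np

        def haar2d_level(img):
            """One level of the 2-D Haar transform: LL, LH, HL, HH subbands."""
            img = np.asarray(img, dtype=float)
            # Rows: averages and differences of horizontal pixel pairs
            lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
            hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
            # Columns: repeat in the vertical direction
            ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
            lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
            hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
            hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
            return ll, lh, hl, hh

        # Smooth synthetic image: most detail coefficients are near zero, which is
        # what makes thresholding them an effective compression step.
        img = 128 + 100 * np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
        ll, lh, hl, hh = haar2d_level(img)
        print(np.mean(np.abs(hh) < 1.0))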

  14. PROMISE: parallel-imaging and compressed-sensing reconstruction of multicontrast imaging using SharablE information.

    Science.gov (United States)

    Gong, Enhao; Huang, Feng; Ying, Kui; Wu, Wenchuan; Wang, Shi; Yuan, Chun

    2015-02-01

    A typical clinical MR examination includes multiple scans to acquire images with different contrasts for complementary diagnostic information. The multicontrast scheme requires long scanning times. The combination of partially parallel imaging and compressed sensing (CS-PPI) has been used to reconstruct accelerated scans. However, there are several unsolved problems in existing methods. The target of this work is to improve existing CS-PPI methods for multicontrast imaging, especially for two-dimensional imaging. If the same field of view is scanned in multicontrast imaging, there is a significant amount of sharable information. It is proposed in this study to use manifold sharable information among multicontrast images to enhance CS-PPI in a sequential way. Coil sensitivity information and structure-based adaptive regularization, extracted from previously reconstructed images, were applied to enhance the following reconstructions. The proposed method is called Parallel-imaging and compressed-sensing Reconstruction Of Multicontrast Imaging using SharablE information (PROMISE). Using L1-SPIRiT as a CS-PPI example, results on multicontrast brain and carotid scans demonstrated that a lower error level and better detail preservation can be achieved by exploiting manifold sharable information. Moreover, the advantage of PROMISE persists even in the presence of interscan motion. Using the sharable information among multicontrast images can enhance CS-PPI with tolerance to motion. © 2014 Wiley Periodicals, Inc.

  15. An SDN-Based Fingerprint Hopping Method to Prevent Fingerprinting Attacks

    Directory of Open Access Journals (Sweden)

    Zheng Zhao

    2017-01-01

    Full Text Available Fingerprinting attacks are one of the most severe threats to network security. A fingerprinting attack aims to obtain the operating-system information of target hosts in order to prepare for future attacks. In this paper, a fingerprint hopping method (FPH) is proposed, based on software-defined networks, to defend against fingerprinting attacks. FPH introduces the idea of moving target defense to present a hopping fingerprint to fingerprinting attackers. The interaction between the fingerprinting attack and its defense is modeled as a signaling game, and the equilibria of the game are analyzed to develop an optimal defense strategy. Experiments show that FPH can resist fingerprinting attacks effectively.

  16. Optimal experiment design for magnetic resonance fingerprinting.

    Science.gov (United States)

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
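
    As a rough illustration of the estimation-theoretic machinery (not the authors' specific MR signal model), the Cramér-Rao bound for a signal s(θ) observed in i.i.d. real Gaussian noise of known standard deviation sigma can be computed from the Jacobian of the signal with respect to the tissue parameters:

        import numpy as np

        def crb_gaussian(signal_model, theta, sigma, eps=1e-6):
            """Cramer-Rao bound on the covariance of any unbiased estimator of theta,
            assuming the measured signal is signal_model(theta) plus i.i.d. Gaussian
            noise of standard deviation sigma. Returns the inverse Fisher information."""
            theta = np.asarray(theta, dtype=float)
            s0 = signal_model(theta)
            # Finite-difference Jacobian of the signal with respect to the parameters
            J = np.empty((s0.size, theta.size))
            for k in range(theta.size):
                d = np.zeros_like(theta)
                d[k] = eps
                J[:, k] = (signal_model(theta + d) - s0) / eps
            fisher = (J.T @ J) / sigma**2
            return np.linalg.inv(fisher)

    Acquisition parameters (flip angles, repetition times) can then be chosen to minimize a scalar function of this bound, which is the spirit of the optimal experiment design described above.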

  17. A network identity authentication system based on Fingerprint identification technology

    Science.gov (United States)

    Xia, Hong-Bin; Xu, Wen-Bo; Liu, Yuan

    2005-10-01

    Fingerprint verification is one of the most reliable personal identification methods. However, most automatic fingerprint identification systems (AFIS) do not run in an Internet/Intranet environment to meet today's increasing electronic commerce requirements. This paper describes the design and implementation of a prototype identity authentication system based on fingerprint biometrics that can run in an Internet environment. In our system, COM and ASP technologies are used to integrate fingerprint technology with Web database technology; the fingerprint image preprocessing algorithms are programmed into COM components, which are deployed on the Internet Information Server. The system's design and structure are proposed, and the key points are discussed. The prototype identity authentication system based on fingerprints has been successfully tested and evaluated on our university's distance education applications in an Internet environment.

  18. Compressive Sampling for Non-Imaging Remote Classification

    Science.gov (United States)

    2013-10-22

    ... a spectro-polarization imager, a compressive coherence imager to resolve objects through turbulence ... The relay lens for UV-CASSI, which focuses the aperture code onto the monochrome detector ... below in Fig. 3, with a silicon UV-sensitive detector on the left, and a UV ...

  19. A compression algorithm for medical images and a display with the decoding function

    International Nuclear Information System (INIS)

    Gotoh, Toshiyuki; Nakagawa, Yukihiro; Shiohara, Morito; Yoshida, Masumi

    1990-01-01

    This paper describes an efficient image compression method for medical images and a high-speed display with a decoding function. In our method, an input image is divided into blocks, and either Discrete Cosine Transform (DCT) coding or Block Truncation Coding (BTC) is adaptively applied to each block to improve image quality. The display we developed receives the compressed data from the host computer and reconstructs images of good quality at high speed using four decoding microprocessors on which our algorithm is implemented in a pipeline. Experiments verified that our method and display are effective. (author)
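
    For reference, the BTC half of such an adaptive scheme is compact enough to sketch; this is the classic two-level block truncation coder that preserves each block's mean and variance (a generic textbook version, not necessarily the exact variant used by the authors):

        import numpy as np

        def btc_encode(block):
            """Classic Block Truncation Coding of one block: keep a bitmap plus two
            reconstruction levels chosen to preserve the block mean and variance."""
            m, s = block.mean(), block.std()
            bitmap = block >= m
            q = bitmap.sum()                      # number of pixels at or above the mean
            n = block.size
            if q in (0, n):                       # flat block: a single level suffices
                return bitmap, m, m
            a = m - s * np.sqrt(q / (n - q))      # low reconstruction level
            b = m + s * np.sqrt((n - q) / q)      # high reconstruction level
            return bitmap, a, b

        def btc_decode(bitmap, a, b):
            return np.where(bitmap, b, a)

    Blocks where this two-level approximation is too crude would instead be routed to the DCT coder in an adaptive scheme of the kind described above.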

  20. Magni: A Python Package for Compressive Sampling and Reconstruction of Atomic Force Microscopy Images

    DEFF Research Database (Denmark)

    Oxvig, Christian Schou; Pedersen, Patrick Steffen; Arildsen, Thomas

    2014-01-01

    Magni is an open source Python package that embraces compressed sensing and Atomic Force Microscopy (AFM) imaging techniques. It provides AFM-specific functionality for undersampling and reconstructing images from AFM equipment and thereby accelerating the acquisition of AFM images. Magni also pr...... as a convenient platform for researchers in compressed sensing aiming at obtaining a high degree of reproducibility of their research....

  1. Clinical evaluation of the JPEG2000 compression rate of CT and MR images for long term archiving in PACS

    International Nuclear Information System (INIS)

    Cha, Soon Joo; Kim, Sung Hwan; Kim, Yong Hoon

    2006-01-01

    We wanted to evaluate an acceptable compression rate of JPEG2000 for long-term archiving of CT and MR images in PACS. Nine CT images and 9 MR images that had small or minimal lesions were randomly selected from the PACS at our institute. All the images were compressed at ratios of 5:1, 10:1, 20:1, 40:1 and 80:1 with the JPEG2000 compression protocol. Pairs of original and compressed images were compared by 9 radiologists working independently. We designed a JPEG2000 viewing program that displays two images on one monitor system for easy and quick evaluation. All the observers performed the comparison study twice, on 5-megapixel grey-scale LCD monitors and on 2-megapixel color LCD monitors, respectively. The PSNR (Peak Signal to Noise Ratio) values were calculated for quantitative comparison. On MR and CT, all the images at 5:1 compression showed no difference from the original images for all 9 observers, and only one observer could detect an image difference on one CT image at 10:1 compression, and only on the 5-megapixel monitor. At the 20:1 compression rate, clinically significant image deterioration was found in 50% of the images on the 5-megapixel monitor and in 30% of the images on the 2-megapixel monitor. PSNR values larger than 44 dB were calculated for all the compressed images. The clinically acceptable image compression rate for long-term archiving with the JPEG2000 compression protocol is 10:1 for MR and CT; if this is applied to PACS, it would reduce the cost and burden of the system.
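
    The PSNR figures quoted here (values above 44 dB) follow the usual definition; a minimal sketch for 8-bit images, assuming the standard formula rather than any vendor-specific variant:

        import numpy as np

        def psnr(original, compressed, peak=255.0):
            """Peak signal-to-noise ratio in dB between two images of equal size."""
            mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
            if mse == 0:
                return float("inf")   # the images are identical
            return 10.0 * np.log10(peak ** 2 / mse)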

  2. Edge-based compression of cartoon-like images with homogeneous diffusion

    DEFF Research Database (Denmark)

    Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim

    2011-01-01

    Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...

  3. Multiview Depth-Image Compression Using an Extended H.264 Encoder

    NARCIS (Netherlands)

    Morvan, Y.; Farin, D.S.; With, de P.H.N.; Blanc-Talon, J.; Philips, W.

    2007-01-01

    This paper presents a predictive-coding algorithm for the compression of multiple depth-sequences obtained from a multi-camera acquisition setup. The proposed depth-prediction algorithm works by synthesizing a virtual depth-image that matches the depth-image (of the predicted camera). To generate

  4. IMPROVED COMPRESSION OF XML FILES FOR FAST IMAGE TRANSMISSION

    Directory of Open Access Journals (Sweden)

    S. Manimurugan

    2011-02-01

    Full Text Available The eXtensible Markup Language (XML) is a format that is widely used as a tool for data exchange and storage. It is being increasingly used in secure transmission of image data over wireless networks and the World Wide Web. Verbose in nature, XML files can be tens of megabytes long. Thus, to reduce their size and to allow faster transmission, compression becomes vital. Several general purpose compression tools have been proposed without satisfactory results. This paper proposes a novel technique using a modified BWT for compressing XML files in a lossless fashion. The experimental results show that the performance of the proposed technique outperforms both general purpose and XML-specific compressors.
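
    The Burrows-Wheeler transform (BWT) at the core of such schemes can be illustrated in a few lines; this is the textbook transform with an end-of-string sentinel, not the paper's modified variant:

        def bwt(s, sentinel="\0"):
            """Textbook Burrows-Wheeler transform: sort all rotations of s and return
            the last column, which groups characters with similar contexts together."""
            s = s + sentinel
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return "".join(row[-1] for row in rotations)

        # Repeated structure in XML-like text becomes runs of equal symbols,
        # which downstream run-length or entropy coders exploit.
        print(bwt("<a><b></b></a>"))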

  5. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by going beyond the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.

  6. USING H.264/AVC-INTRA FOR DCT BASED SEGMENTATION DRIVEN COMPOUND IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    S. Ebenezer Juliet

    2011-08-01

    Full Text Available This paper presents a one-pass block classification algorithm for efficient coding of compound images which consist of multimedia elements like text, graphics and natural images. The objective is to minimize the loss of visual quality of text during compression by separating the text information, which needs higher spatial resolution than the pictures and background. It segments computer screen images into text/graphics and picture/background classes based on the DCT energy in each 4x4 block, and then compresses both text/graphics pixels and picture/background blocks by H.264/AVC with a variable quantization parameter. Experimental results show that the single H.264/AVC-INTRA coder with variable quantization outperforms single coders such as JPEG and JPEG-2000 for compound images. The proposed method also improves the PSNR value significantly over standard JPEG and JPEG-2000 while keeping competitive compression ratios.
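
    The one-pass block classification step can be sketched as a simple high-frequency-energy test on each 4x4 DCT; the threshold below is illustrative only, not the value used by the authors:

        import numpy as np
        from scipy.fft import dctn

        def classify_blocks(img, block=4, threshold=500.0):
            """Label each 4x4 block as text/graphics (True) or picture/background (False)
            according to its high-frequency (AC) DCT energy."""
            h, w = img.shape
            labels = np.zeros((h // block, w // block), dtype=bool)
            for i in range(0, h - h % block, block):
                for j in range(0, w - w % block, block):
                    coeffs = dctn(img[i:i+block, j:j+block].astype(float), norm="ortho")
                    ac_energy = (coeffs ** 2).sum() - coeffs[0, 0] ** 2   # drop the DC term
                    labels[i // block, j // block] = ac_energy > threshold
            return labels

    Text/graphics blocks, which concentrate energy in high-frequency coefficients, would then be quantized more finely than picture/background blocks.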

  7. REMOTELY SENSED IMAGE COMPRESSION BASED ON WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    Heung K. Lee

    1996-06-01

    Full Text Available In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on a KITSAT-1 image as well as LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by the peak signal to noise ratio (PSNR) and classification capability.
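
    As a small illustration of the entropy-coding stage, a minimal run-length encoder for the zero-dominated stream of scanned, quantized wavelet coefficients might look as follows (the paper additionally applies Hilbert-curve scanning and Huffman coding, which are omitted here):

        def run_length_encode(seq):
            """Encode a 1-D sequence as (value, run length) pairs; the long zero runs
            produced by quantized wavelet subbands compress particularly well."""
            out = []
            for v in seq:
                if out and out[-1][0] == v:
                    out[-1][1] += 1
                else:
                    out.append([v, 1])
            return [tuple(p) for p in out]

        print(run_length_encode([0, 0, 0, 5, 5, 0, 0, 0, 0, -3]))
        # [(0, 3), (5, 2), (0, 4), (-3, 1)]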

  8. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach to perform both peak-picking spectra and denoising m/z-images simultaneously, whereas the state of the art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with the numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  9. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
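
    A least-squares predictor of the kind designed per region can be sketched as follows: each pixel is predicted from its causal neighbours (west, north and north-west here), with weights fitted by ordinary least squares over the region. This is a generic illustration; the paper's neighbourhood and region shapes differ:

        import numpy as np

        def fit_ls_predictor(region):
            """Fit weights predicting each pixel from its W, N and NW neighbours, then
            return the weights and the prediction residuals for the region."""
            region = region.astype(float)
            X, y = [], []
            for i in range(1, region.shape[0]):
                for j in range(1, region.shape[1]):
                    X.append([region[i, j-1], region[i-1, j], region[i-1, j-1]])
                    y.append(region[i, j])
            X, y = np.array(X), np.array(y)
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            residuals = y - X @ w      # small residuals entropy-code cheaply
            return w, residuals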

  10. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China); Chen, Jeon-Hor; Nie Ke; Lin Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States); Liu Dongxu; Sun Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes with the Avizo® 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc® software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under

  11. Impact of Finger Type in Fingerprint Authentication

    Science.gov (United States)

    Gafurov, Davrondzhon; Bours, Patrick; Yang, Bian; Busch, Christoph

    Nowadays the fingerprint verification system is the most widespread and accepted biometric technology, exploring various features of the human fingers for this purpose. In general, every normal person has 10 fingers of different sizes. Although it is claimed that recognition performance with little fingers can be less accurate compared to other finger types, to the best of our knowledge this has not been investigated yet. This paper presents our study on the influence of finger type on fingerprint recognition performance. For the analysis we employ two fingerprint verification software packages (one public and one commercial). We conducted tests on the GUC100 multi-sensor fingerprint database, which contains fingerprint images of all 10 fingers from 100 subjects. Our analysis indeed confirms that performance with small fingers is less accurate than performance with the other fingers of the hand. It also appears that the best performance is obtained with the thumb or index fingers. For example, performance deterioration from the best finger (i.e. index or thumb) to the worst fingers (i.e. small ones) can be in the range of 184%-1352%.

  12. Single-photon compressive imaging with some performance benefits over raster scanning

    International Nuclear Information System (INIS)

    Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Guang-Jie; Zhao, Qing

    2014-01-01

    A single-photon imaging system based on compressed sensing has been developed to image objects under ultra-low illumination. With this system, we have successfully realized imaging at the single-photon level with a single-pixel avalanche photodiode without point-by-point raster scanning. From analysis of the signal-to-noise ratio in the measurement we find that our system has much higher sensitivity than conventional ones based on point-by-point raster scanning, while the measurement time is also reduced. - Highlights: • We design a single photon imaging system with compressed sensing. • A single point avalanche photodiode is used without raster scanning. • The Poisson shot noise in the measurement is analyzed. • The sensitivity of our system is proved to be higher than that of raster scanning

  13. Edge-Based Image Compression with Homogeneous Diffusion

    Science.gov (United States)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
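
    The decoding step, inpainting with the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored edge-adjacent grey values as fixed data; a minimal Jacobi-iteration sketch (periodic boundary handling via np.roll is used only for brevity):

        import numpy as np

        def diffusion_inpaint(values, known, iterations=2000):
            """Fill the unknown pixels with the steady state of homogeneous diffusion:
            repeatedly replace each unknown pixel by the mean of its four neighbours
            while keeping the known (edge-adjacent) pixels fixed."""
            u = values.astype(float).copy()
            for _ in range(iterations):
                avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                       np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
                u[~known] = avg[~known]
            return u

    In practice the steady state would be computed with a faster solver (multigrid or conjugate gradients), but the fixed point is the same harmonic interpolation of the stored edge values.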

  14. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  15. Efficiency and Flexibility of Fingerprint Scheme Using Partial Encryption and Discrete Wavelet Transform to Verify User in Cloud Computing.

    Science.gov (United States)

    Yassin, Ali A

    2014-01-01

    Now the security of digital images is considered more and more essential, and the fingerprint plays a main role in the world of imaging. Furthermore, fingerprint recognition is a biometric verification scheme that applies pattern recognition techniques to an individual's fingerprint image. In the cloud environment, an adversary has the ability to intercept information, so it must be secured from eavesdroppers. Unfortunately, encryption and decryption functions are slow and often computationally demanding. Fingerprint techniques require extra hardware and software and can be masqueraded by artificial gummy fingers (spoof attacks). Additionally, when a large number of users are being verified at the same time, the mechanism becomes slow. In this paper, we employ partial encryption of the user's fingerprint together with the discrete wavelet transform to obtain a new scheme of fingerprint verification. Our proposed scheme can overcome these problems: it incurs little extra cost, reduces the computational requirements for huge volumes of fingerprint images, and resists well-known attacks. In addition, experimental results illustrate that our proposed scheme performs well for user fingerprint verification.
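
    The idea of partial encryption after a wavelet transform can be sketched as follows: only the low-frequency approximation subband, which carries most of the visually recognisable structure, is masked with a keyed pseudorandom signal. This is a simplified illustration using PyWavelets and an additive mask driven by an integer key; it is not a cryptographically strong cipher and not the authors' exact construction:

        import numpy as np
        import pywt

        def partial_encrypt(fingerprint, key, scale=128.0):
            """Mask only the LL subband of a single-level Haar DWT with a keyed
            pseudorandom additive signal; the detail subbands stay in the clear."""
            LL, details = pywt.dwt2(fingerprint.astype(float), "haar")
            mask = np.random.default_rng(key).uniform(-scale, scale, size=LL.shape)
            return LL + mask, details

        def partial_decrypt(LL_enc, details, key, scale=128.0):
            mask = np.random.default_rng(key).uniform(-scale, scale, size=LL_enc.shape)
            return pywt.idwt2((LL_enc - mask, details), "haar")

    Because only one subband is processed, the cost grows with a quarter of the coefficients per decomposition level, which is the source of the claimed computational savings.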

  16. A Fingerprint Encryption Scheme Based on Irreversible Function and Secure Authentication

    Directory of Open Access Journals (Sweden)

    Yijun Yang

    2015-01-01

    Full Text Available A fingerprint encryption scheme based on an irreversible function has been designed in this paper. Since the fingerprint template includes almost the entire information of a user's fingerprint, personal authentication can be determined only by the fingerprint features. This paper proposes an irreversible transforming function (using the improved SHA1 algorithm) to transform the original minutiae which are extracted from the thinned fingerprint image. Then, the Chinese remainder theorem is used to obtain the biokey from the integration of the transformed minutiae and the private key. The results show that the scheme has better performance in security and efficiency compared with other irreversible function schemes.

  17. A new modified fast fractal image compression algorithm

    DEFF Research Database (Denmark)

    Salarian, Mehdi; Nadernejad, Ehsan; MiarNaimi, Hossein

    2013-01-01

    In this paper, a new fractal image compression algorithm is proposed, in which the time of the encoding process is considerably reduced. The algorithm exploits a domain pool reduction approach, along with the use of innovative predefined values for contrast scaling factor, S, instead of searching...

  18. Fingerprint recognition system by use of graph matching

    Science.gov (United States)

    Shen, Wei; Shen, Jun; Zheng, Huicheng

    2001-09-01

    Fingerprint recognition is an important subject in biometrics used to identify or verify persons by physiological characteristics, and it has found wide applications in different domains. In the present paper, we present a fingerprint recognition system that combines singular points and structures. The principal processing steps of our system are: preprocessing and ridge segmentation, singular point extraction and selection, graph representation, and fingerprint recognition by graph matching. Our fingerprint recognition system is implemented and tested on many fingerprint images, and the experimental results are satisfactory. Different techniques are used in our system, such as fast calculation of the orientation field, local fuzzy dynamical thresholding, algebraic analysis of connections, and fingerprint representation and matching by graphs. We find that for a fingerprint database that is not very large, the recognition rate is very high even without using a prior coarse category classification. This system works well for both one-to-few and one-to-many problems.

  19. Efficient burst image compression using H.265/HEVC

    Science.gov (United States)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware is entering consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of a camera operation allows e.g. selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or the artistic aspects of the photo (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality and the fact that these kind of image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random access properties while achieving coding efficiency superior to existing image coders. The results indicate that using HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.

  20. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)

  1. Diffusion-Weighted Imaging for Predicting New Compression Fractures Following Percutaneous Vertebroplasty

    International Nuclear Information System (INIS)

    Sugimoto, T.

    2008-01-01

    Background: Percutaneous vertebroplasty (PVP) is a technique that structurally stabilizes a fractured vertebral body. However, some patients return to the hospital due to recurrent back pain following PVP, and such pain is sometimes caused by new compression fractures. Purpose: To investigate whether the apparent diffusion coefficient (ADC) of adjacent vertebral bodies as assessed by diffusion-weighted imaging before PVP could predict the onset of new compression fractures following PVP. Material and Methods: 25 patients with osteoporotic compression fractures who underwent PVP were enrolled in this study. ADC was measured for 49 vertebral bodies immediately above and below each vertebral body injected with bone cement before and after PVP. By measuring ADC for each adjacent vertebral body, ADC was compared between vertebral bodies with a new compression fracture within 1 month and those without new compression fractures. In addition, the mean ADC of adjacent vertebral bodies per patient was calculated. Results: Mean preoperative ADC for the six adjacent vertebral bodies with new compression fractures was 0.55×10⁻³ mm²/s (range 0.36-1.01×10⁻³ mm²/s), and for the 43 adjacent vertebral bodies without new compression fractures 0.20×10⁻³ mm²/s (range 0-0.98×10⁻³ mm²/s) (P ...). The mean ADC per patient for the patients with new compression fractures was ...×10⁻³ mm²/s (range 0.21-1.01×10⁻³ mm²/s), and that for the 19 patients without new compression fractures 0.17×10⁻³ mm²/s (range 0.01-0.43×10⁻³ mm²/s) (P<0.001). Conclusion: The ADC of adjacent vertebral bodies as assessed by diffusion-weighted imaging before PVP might be one of the predictors of new compression fractures following PVP

  2. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, such that the burden of decompression computation can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the overmerging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results even at low rates of bits per pixel (bpp) are satisfactory.

  3. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  4. Development of a compressive sampling hyperspectral imager prototype

    Science.gov (United States)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2013-10-01

    Compressive sensing (CS) is a new technology that investigates the chance to sample signals at a lower rate than the traditional sampling theory. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the contrary, the main CS disadvantage is made by the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2-dim detector arrays of high frame rate. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".

  5. On the Use of Normalized Compression Distances for Image Similarity Detection

    Directory of Open Access Journals (Sweden)

    Dinu Coltuc

    2018-01-01

    Full Text Available This paper investigates the usefulness of the normalized compression distance (NCD) for image similarity detection. Instead of the direct NCD between images, the paper considers the correlation between NCD-based feature vectors extracted for each image. The vectors are derived by computing the NCD between the original image and sequences of translated (rotated) versions. Feature vectors for simple transforms (circular translations in the horizontal, vertical and diagonal directions, and rotations around the image center) and several standard compressors are generated and tested in a very simple experiment of similarity detection between the original image and two filtered versions (median and moving average). The promising vector configurations (geometric transform, lossless compressor) are further tested for similarity detection on the 24 images of the Kodak set subjected to some common image processing. While the direct computation of NCD fails to detect image similarity even in the case of simple median and moving-average filtering in 3 × 3 windows, for certain transforms and compressors the proposed approach appears to provide robustness at similarity detection against smoothing, lossy compression, contrast enhancement, noise addition, and some robustness against geometrical transforms (scaling, cropping and rotation).
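
    The normalized compression distance underlying these feature vectors has a one-line definition; a minimal sketch using zlib as an off-the-shelf lossless compressor (any of the standard compressors mentioned above could be substituted):

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized compression distance:
            NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
            where C(.) is the length of the compressed argument."""
            cx = len(zlib.compress(x))
            cy = len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Toy example: a string is closer to a shifted copy of itself than to unrelated bytes
        a = b"fingerprint image data " * 50
        print(ncd(a, a[5:] + a[:5]), ncd(a, bytes(range(256)) * 5))

    For images, x and y would be the serialized pixel data of the original and of a translated or rotated version, and the resulting distances form the feature vector described above.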

  6. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    Science.gov (United States)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, together with erosion and dilation operations, is applied to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in the GI-based encryption, while the CGI not only reduces the data amount of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.

  7. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRB (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the used prediction coefficients. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.

  8. Advanced Fingerprint Analysis Project Fingerprint Constituents

    Energy Technology Data Exchange (ETDEWEB)

    GM Mong; CE Petersen; TRW Clauss

    1999-10-29

    The work described in this report was focused on generating fundamental data on fingerprint components which will be used to develop advanced forensic techniques to enhance fluorescent detection, and visualization of latent fingerprints. Chemical components of sweat gland secretions are well documented in the medical literature and many chemical techniques are available to develop latent prints, but there have been no systematic forensic studies of fingerprint sweat components or of the chemical and physical changes these substances undergo over time.

  9. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

    Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  10. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    Science.gov (United States)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it; while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged; and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potentiality of accessing complementary information from scientific studies of laser-driven shock compression.

  11. Evaluation of onboard hyperspectral-image compression techniques for a parallel push-broom sensor

    Energy Technology Data Exchange (ETDEWEB)

    Briles, S.

    1996-04-01

    A single hyperspectral imaging sensor can produce frames with spatially continuous rows of differing, but adjacent, spectral wavelengths. If the frame sample rate of the sensor is such that subsequent hyperspectral frames are spatially shifted by one row, then the sensor can be thought of as a parallel (in wavelength) push-broom sensor. An examination of data compression techniques for such a sensor is presented. The compression techniques are intended to be implemented onboard a space-based platform and to have implementation speeds that match the data rate of the sensor. The data partitions examined extend from operating on a single hyperspectral frame individually to operating on a data cube comprising the two spatial axes and the spectral axis. The compression algorithms investigated utilize JPEG-based image compression, wavelet-based compression and differential pulse code modulation. Algorithm performance is quantitatively presented in terms of root-mean-squared error and root-mean-squared correlation coefficient error. Implementation issues are considered in algorithm development.

  12. Real-time Image Generation for Compressive Light Field Displays

    International Nuclear Information System (INIS)

    Wetzstein, G; Lanman, D; Hirsch, M; Raskar, R

    2013-01-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.

  13. Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information

    International Nuclear Information System (INIS)

    Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho

    2011-01-01

    Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
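
    A multiple-logistic-regression predictor of this kind is straightforward to reproduce in outline; the sketch below uses scikit-learn with entirely hypothetical feature values (compression ratio and section thickness) and labels, only to show the shape of the model, not the study's data or coefficients:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical header-derived features: [compression ratio, section thickness (mm)]
        X = np.array([[4, 5.0], [6, 1.25], [8, 5.0], [10, 0.625], [5, 2.5], [10, 5.0]])
        y = np.array([0, 1, 0, 1, 0, 1])   # 1 = readers could distinguish the image pair

        model = LogisticRegression().fit(X, y)
        # Predicted probability that compression artifacts would be visible for a new image
        print(model.predict_proba([[8, 1.25]])[0, 1])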

  14. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    International Nuclear Information System (INIS)

    Kim, Tae Jung; Lee, Kyoung Ho; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon; Hahn, Seokyung; Lee, Kyung Won

    2009-01-01

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.

  15. Regional variance of visually lossless threshold in compressed chest CT images: Lung versus mediastinum and chest wall

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Jung [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)], E-mail: kholee@snubhrad.snu.ac.kr; Kim, Bohyoung; Kim, Kil Joong; Chun, Eun Ju; Bajpai, Vasundhara; Kim, Young Hoon [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of); Hahn, Seokyung [Medical Research Collaborating Center, Seoul National University Hospital, 28 Yongon-dong, Chongno-gu, Seoul 110-744 (Korea, Republic of); Seoul National University College of Medicine (Korea, Republic of); Lee, Kyung Won [Department of Radiology, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Seoul National University College of Medicine, Institute of Radiation Medicine, Seoul National University Medical Research Center (Korea, Republic of)

    2009-03-15

    Objective: To estimate the visually lossless threshold (VLT) for the Joint Photographic Experts Group (JPEG) 2000 compression of chest CT images and to demonstrate the variance of the VLT between the lung and mediastinum/chest wall. Subjects and methods: Eighty images were compressed reversibly (as negative control) and irreversibly to 5:1, 10:1, 15:1 and 20:1. Five radiologists determined if the compressed images were distinguishable from their originals in the lung and mediastinum/chest wall. Exact tests for paired proportions were used to compare the readers' responses between the reversible and irreversible compressions and between the lung and mediastinum/chest wall. Results: At reversible, 5:1, 10:1, 15:1, and 20:1 compressions, 0%, 0%, 3-49% (p < .004, for three readers), 69-99% (p < .001, for all readers), and 100% of the 80 image pairs were distinguishable in the lung, respectively; and 0%, 0%, 74-100% (p < .001, for all readers), 100%, and 100% were distinguishable in the mediastinum/chest wall, respectively. The image pairs were less frequently distinguishable in the lung than in the mediastinum/chest wall at 10:1 (p < .001, for all readers) and 15:1 (p < .001, for two readers). In 321 image comparisons, the image pairs were indistinguishable in the lung but distinguishable in the mediastinum/chest wall, whereas there was no instance of the opposite. Conclusion: For JPEG2000 compression of chest CT images, the VLT is between 5:1 and 10:1. The lung is more tolerant to the compression than the mediastinum/chest wall.

  16. Compressive Sampling based Image Coding for Resource-deficient Visual Communication.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen

    2016-04-14

    In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled, pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that would otherwise be discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, and therefore the proposed scheme has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive compared with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
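
    The encoder side described above can be read as a local random binary (+1/-1) convolution followed by polyphase down-sampling; the sketch below is a schematic interpretation with an arbitrary kernel size and down-sampling factor, not the paper's exact configuration:

        import numpy as np
        from scipy.ndimage import convolve

        def cs_downsample(img, factor=2, ksize=3, seed=0):
            """Replace the usual anti-alias low-pass filter with a local random binary
            (+1/-1) convolution kernel, then keep every factor-th pixel. The result is
            itself an image of local random measurements."""
            rng = np.random.default_rng(seed)
            kernel = rng.choice([-1.0, 1.0], size=(ksize, ksize))
            filtered = convolve(img.astype(float), kernel, mode="reflect")
            return filtered[::factor, ::factor], kernel

    The down-sampled measurement image can then be passed to any standard codec, and the kernel (or its seed) is shared with the decoder for sparsity-based reconstruction.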

  17. Performance evaluation of objective quality metrics for HDR image compression

    Science.gov (United States)

    Valenzise, Giuseppe; De Simone, Francesca; Lauga, Paul; Dufaux, Frederic

    2014-09-01

    Due to the much larger luminance and contrast characteristics of high dynamic range (HDR) images, well-known objective quality metrics, widely used for the assessment of low dynamic range (LDR) content, cannot be directly applied to HDR images in order to predict their perceptual fidelity. To overcome this limitation, advanced fidelity metrics, such as the HDR-VDP, have been proposed to accurately predict visually significant differences. However, their complex calibration may make them difficult to use in practice. A simpler approach consists in computing arithmetic or structural fidelity metrics, such as PSNR and SSIM, on perceptually encoded luminance values but the performance of quality prediction in this case has not been clearly studied. In this paper, we aim at providing a better comprehension of the limits and the potentialities of this approach, by means of a subjective study. We compare the performance of HDR-VDP to that of PSNR and SSIM computed on perceptually encoded luminance values, when considering compressed HDR images. Our results show that these simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.

  18. Multispectral imaging for biometrics

    Science.gov (United States)

    Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.

    2005-03-01

    Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.

  19. Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager

    Science.gov (United States)

    Duong, Tuan A. (Inventor)

    2015-01-01

    A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.

  20. A Novel 1D Hybrid Chaotic Map-Based Image Compression and Encryption Using Compressed Sensing and Fibonacci-Lucas Transform

    Directory of Open Access Journals (Sweden)

    Tongfeng Zhang

    2016-01-01

    Full Text Available A one-dimensional (1D) hybrid chaotic system is constructed from three different 1D chaotic maps in a parallel-then-cascade fashion. The proposed chaotic map has a larger key space and exhibits a better uniform distribution property in some parametric ranges compared with existing 1D chaotic maps. Meanwhile, by combining compressive sensing (CS) and the Fibonacci-Lucas transform (FLT), a novel image compression and encryption scheme is proposed that exploits the advantages of the 1D hybrid chaotic map. The whole encryption procedure includes compression by CS, scrambling with FLT, and diffusion after linear scaling. The Bernoulli measurement matrix in CS is generated by the proposed 1D hybrid chaotic map owing to its excellent uniform distribution. To enhance the security and complexity, the transform kernel of FLT varies in each permutation round according to the generated chaotic sequences. Further, the key streams used in the diffusion process depend on the chaotic map as well as the plain image, which allows the scheme to resist chosen-plaintext attack (CPA). Experimental results and security analyses demonstrate the validity of our scheme in terms of high security and robustness against noise and cropping attacks.
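
    A minimal sketch of one step described above, generating a Bernoulli CS measurement matrix from a chaotic sequence; a plain logistic map stands in for the paper's 1D hybrid map, and the initial value, control parameter, and thresholding rule are illustrative assumptions.

```python
import numpy as np

def logistic_sequence(x0, mu, n):
    """Iterate the logistic map x <- mu * x * (1 - x)."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

def bernoulli_measurement_matrix(m, n, x0=0.37, mu=3.99):
    """Threshold a chaotic sequence into a +/- 1/sqrt(m) Bernoulli matrix."""
    seq = logistic_sequence(x0, mu, m * n)
    return np.where(seq.reshape(m, n) > 0.5, 1.0, -1.0) / np.sqrt(m)

phi = bernoulli_measurement_matrix(64, 256)  # compress 256-sample columns to 64 measurements
print(phi.shape)
```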

  1. Does an increase in compression force really improve visual image quality in mammography? – An initial investigation

    International Nuclear Information System (INIS)

    Mercer, C.E.; Hogg, P.; Cassidy, S.; Denton, E.R.E.

    2013-01-01

    Objective: Literature speculates that visual image quality (IQ) and compression force levels may be directly related. This small study investigates whether a relationship exists between compression force levels and visual IQ. Method: To investigate how visual IQ varies with different levels of compression force, 39 clients who had received markedly different amounts of compression force on each of their three sequential screens were selected from a 6 year screening period. The images from the 3 screening episodes for all women were scored visually using 3 different IQ scales. Results: Correlation coefficients between the 3 IQ scales were positive and high (0.82, 0.9 and 0.85). For each scale, the IQ scores did not vary significantly across the three screens, even though different compression levels had been applied. Kappa, IQ scale 1: 0.92, 0.89, 0.89. ANOVA, IQ scale 2: p = 0.98, p = 0.55, p = 0.56. ICC, IQ scale 3: 0.97, 0.93, 0.91. Conclusion: For the 39 clients there is no difference in visual IQ when different amounts of compression are applied. We believe that further work should be conducted into compression force and image quality, as ‘higher levels’ of compression force may not be justified in the attainment of suitable visual image quality.

  2. Improved MR imaging evaluation of chondromalacia patellae with use of a vise for cartilage compression

    International Nuclear Information System (INIS)

    Koenig, H.; Dinkelaker, F.; Wolf, K.J.

    1990-01-01

    This paper reports on earlier and more precise evaluation of chondromalacia patellae by means of MR imaging performed with a specially constructed vise for compression of the retropatellar cartilage. Two volunteers and 18 patients were examined 1-4 weeks before arthroscopy and cartilage biopsy. Imaging parameters included spin-echo (SE) (1,600/22 + 110 msec) and fast low-angle shot (FLASH) (30/12 msec, 10 degrees and 30 degrees excitation angles) sequences, 4-mm section thickness, and sagittal and axial views. For cartilage compression, we used a wooden vise. FLASH imaging was done without and with compression of the retropatellar cartilage. Cartilage thickness and signal intensities were measured.

  3. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with the creation and study of a method for increasing the robustness to JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the digital watermark by modifying the frequency coefficients of the discrete Hadamard transform. The frequency coefficients used for embedding are chosen because of the sharp change of their values after modification at maximum JPEG compression. The blocks of pixels used for embedding are chosen according to their entropy. The new algorithm was analyzed for resistance to image compression, noise, filtering, resizing, color change and histogram equalization. The Elham algorithm, which has good resistance to JPEG compression, was chosen for the comparative analysis. Nine gray-scale images were selected as objects of protection. Imperceptibility of the embedded distortions was assessed by the peak signal-to-noise ratio, which should be no lower than 43 dB for the distortions to remain imperceptible. Robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed robustness. The computing experiment comprised: embedding the watermark into each test image with the new algorithm and with the Elham algorithm; introducing distortions into the protected object; and extracting the embedded information and comparing it with the original. The parameters of the algorithms were chosen so as to introduce approximately the same level of distortion into the images. Main Results. The preliminary processing of the digital watermark presented in the paper makes it possible to significantly reduce the volume of information embedded in the still image. The results of the numerical experiment have shown that the

  4. Videos and images from 25 years of teaching compressible flow

    Science.gov (United States)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  5. Efficient internal and surface fingerprint extraction and blending using optical coherence tomography.

    Science.gov (United States)

    Darlow, Luke Nicholas; Connan, James

    2015-11-01

    Optical coherence tomography provides a 3D representation of fingertip skin where surface and internal fingerprints are found. These fingerprints are topographically identical. However, the surface skin is prone to damage, distortion, and spoofing; and the internal fingerprint is difficult to access and extract. This research presents a novel scaling-resolution approach to fingerprint zone detection and extraction. Furthermore, a local-quality-based blending procedure is also proposed. The accuracy of the zone-detection algorithm is comparable to an earlier work, yielding a mean-squared error of 25.9 and structural similarity of 95.8% (compared to a ground-truth estimate). Blending the surface and internal fingerprints improved the National Institute of Science and Technology's Fingerprint Image Quality scores and the average maximum match scores (when matched against conventional surface counterparts). The fingerprint blending procedure was able to combine high-quality regions from both fingerprints, thus mitigating surface wrinkles and anomalous poor-quality regions. Furthermore, spoof detection via a surface-to-internal fingerprint comparison was proposed and tested.
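
    A hedged sketch of the general idea of local-quality-based blending: the surface and internal fingerprint images are combined pixel-wise with weights derived from a local quality estimate (plain local contrast is used here as a stand-in; the paper's actual quality measure and window size are not reproduced).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=15):
    """Local standard deviation as a crude per-pixel quality estimate."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(sq_mean - mean**2, 0.0))

def blend_fingerprints(surface, internal, size=15, eps=1e-6):
    """Pixel-wise blend weighted by the local quality of each print."""
    qs, qi = local_contrast(surface, size), local_contrast(internal, size)
    w = qs / (qs + qi + eps)
    return w * surface + (1.0 - w) * internal

surface = np.random.rand(300, 300)   # stand-ins for extracted surface/internal prints
internal = np.random.rand(300, 300)
blended = blend_fingerprints(surface, internal)
print(blended.shape)
```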

  6. Uniform Local Binary Pattern for Fingerprint Liveness Detection in the Gaussian Pyramid

    Directory of Open Access Journals (Sweden)

    Yujia Jiang

    2018-01-01

    Full Text Available Fingerprint recognition schemes are widely used in our daily life, for example in door security, identification, and phone verification. However, an existing problem is that fingerprint recognition systems are easily tricked by fake fingerprints. Therefore, designing a fingerprint liveness detection module for fingerprint recognition systems is necessary. To solve the above problem and discriminate true fingerprints from fake ones, a novel software-based liveness detection approach using the uniform local binary pattern (ULBP) in a spatial pyramid is applied to recognize fingerprint liveness in this paper. Firstly, a preprocessing operation for each fingerprint is necessary. Then, to address image rotation and scale invariance, three-layer spatial pyramids of fingerprints are introduced in this paper. Next, texture information for the three-layer spatial pyramids is described using the uniform local binary pattern to extract features of the given fingerprints. The accuracy of our proposed method has been compared with several state-of-the-art methods in fingerprint liveness detection. Experiments based on standard databases, taken from the Liveness Detection Competition 2013 composed of four different fingerprint sensors, have been carried out. Finally, a classifier model based on the extracted features is trained using an SVM classifier. Experimental results show that our proposed method can achieve high recognition accuracy compared with other methods.
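
    The feature pipeline described above can be illustrated roughly as follows: uniform LBP histograms are computed at each level of a small Gaussian pyramid and concatenated into one feature vector for the SVM. The number of levels and the P and R values are assumptions; the paper's exact settings may differ.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import pyramid_gaussian

def ulbp_pyramid_features(gray, levels=3, P=8, R=1.0):
    """Concatenate uniform-LBP histograms computed at each pyramid level."""
    feats = []
    for img in pyramid_gaussian(gray, max_layer=levels - 1):
        codes = local_binary_pattern(img, P, R, method="uniform")  # values in 0..P+1
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)   # feed this vector to an SVM classifier

print(ulbp_pyramid_features(np.random.rand(128, 128)).shape)  # (30,) for 3 levels, P=8
```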

  7. Towards secondary fingerprint classification

    CSIR Research Space (South Africa)

    Msiza, IS

    2011-07-01

    Full Text Available an accuracy figure of 76.8%. This small difference between the two figures is indicative of the validity of the proposed secondary classification module. Keywords: fingerprint core; fingerprint delta; primary classification; secondary classification. ..., namely, the fingerprint core and the fingerprint delta. Forensically, a fingerprint core is defined as the innermost turning point where the fingerprint ridges form a loop, while the fingerprint delta is defined as the point where these ridges form a...

  8. Subband directional vector quantization in radiological image compression

    Science.gov (United States)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  9. Video compression and DICOM proxies for remote viewing of DICOM images

    Science.gov (United States)

    Khorasani, Elahe; Sheinin, Vadim; Paulovicks, Brent; Jagmohan, Ashish

    2009-02-01

    Digital medical images are rapidly growing in size and volume. A typical study includes multiple image "slices." These images have a special format and a communication protocol referred to as DICOM (Digital Imaging Communications in Medicine). Storing, retrieving, and viewing these images are handled by DICOM-enabled systems. DICOM images are stored in central repository servers called PACS (Picture Archival and Communication Systems). Remote viewing stations are DICOM-enabled applications that can query the PACS servers and retrieve the DICOM images for viewing. Modern medical images are quite large, reaching as much as 1 GB per file. When the viewing station is connected to the PACS server via a high-bandwidth local LAN, downloading of the images is relatively efficient and does not cause significant wasted time for physicians. Problems arise when the viewing station is located in a remote facility that has a low-bandwidth link to the PACS server. If the link between the PACS and remote facility is in the range of 1 Mbit/sec, downloading medical images is very slow. To overcome this problem, medical images are compressed to reduce the size for transmission. This paper describes a method of compression that maintains diagnostic quality of images while significantly reducing the volume to be transmitted, without any change to the existing PACS servers and viewer software, and without requiring any change in the way doctors retrieve and view images today.

  10. Effects of JPEG data compression on magnetic resonance imaging evaluation of small vessels ischemic lesions of the brain

    International Nuclear Information System (INIS)

    Kuriki, Paulo Eduardo de Aguiar; Abdala, Nitamar; Nogueira, Roberto Gomes; Carrete Junior, Henrique; Szejnfeld, Jacob

    2006-01-01

    Objective: to establish the maximum achievable JPEG compression ratio without affecting quantitative and qualitative magnetic resonance imaging analysis of ischemic lesions in small vessels of the brain. Material and method: fifteen DICOM images were converted to JPEG with compression ratios of 1:10 to 1:60 and were assessed together with the original images by three neuroradiologists. The number, morphology and signal intensity of the lesions were analyzed. Results: lesions were properly identified up to a 1:30 ratio. More lesions were identified at a 1:10 ratio than in the original images. Morphology and edges were properly evaluated up to a 1:40 ratio. Compression did not affect signal. Conclusion: small lesions (< 2 mm) were identified, and at all compression ratios the JPEG algorithm generated image noise that misled observers into identifying more lesions in the JPEG images than in the DICOM images, thus generating false-positive results. (author)

  11. Independent transmission of sign language interpreter in DVB: assessment of image compression

    Science.gov (United States)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf viewers that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with deaf viewers. They examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.

  12. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  13. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    Science.gov (United States)

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging, or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in the reconstruction of image details and in robustness against phase-estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  14. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....
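
    The general idea of fitting a latent-variable space on a reduced set of pixels and then projecting the full image onto it can be sketched as below; plain random pixel sampling is used purely as an illustrative stand-in for the paper's structure-preserving downsampling, and the sample fraction and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_on_subsample(hsi_cube, n_components=10, sample_frac=0.05, seed=0):
    """Fit PCA on a pixel subsample, then project every pixel of the cube."""
    h, w, bands = hsi_cube.shape
    pixels = hsi_cube.reshape(-1, bands)
    rng = np.random.default_rng(seed)
    idx = rng.choice(pixels.shape[0], int(sample_frac * pixels.shape[0]), replace=False)
    pca = PCA(n_components=n_components).fit(pixels[idx])
    scores = pca.transform(pixels).reshape(h, w, n_components)
    return scores, pca

scores, model = pca_on_subsample(np.random.rand(100, 100, 120))  # stand-in hyperspectral cube
print(scores.shape)
```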

  15. An analytical look at the effects of compression on medical images

    OpenAIRE

    Persons, Kenneth; Palisson, Patrice; Manduca, Armando; Erickson, Bradley J.; Savcenko, Vladimir

    1997-01-01

    This article will take an analytical look at how lossy Joint Photographic Experts Group (JPEG) and wavelet image compression techniques affect medical image content. It begins with a brief explanation of how the JPEG and wavelet algorithms work, and describes in general terms what effect they can have on image quality (removal of noise, blurring, and artifacts). It then focuses more specifically on medical image diagnostic content and explains why subtle pathologies, that may be difficult for...

  16. Ridge Distance Estimation in Fingerprint Images: Algorithm and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Tian Jie

    2004-01-01

    Full Text Available It is important to accurately estimate the ridge distance, an intrinsic texture property of a fingerprint image. Up to now, only a few articles have touched directly upon ridge distance estimation. Little has been published providing detailed evaluation of methods for ridge distance estimation, in particular the traditional spectral analysis method applied in the frequency domain. In this paper, a novel method operating on non-overlapping blocks, called the statistical method, is presented to estimate the ridge distance. Direct estimation ratio (DER) and estimation accuracy (EA) are defined and used as parameters, along with time consumption (TC), to evaluate the performance of these two methods for ridge distance estimation. Based on a comparison of the performance of these two methods, a third, hybrid method is developed to combine the merits of both. Experimental results indicate that DER is 44.7%, 63.8%, and 80.6%; EA is 84%, 93%, and 91%; and TC is , , and seconds, with the spectral analysis method, statistical method, and hybrid method, respectively.
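
    For reference, the traditional spectral analysis approach mentioned above can be sketched as follows: the dominant ring in the 2D FFT magnitude of a block gives the ridge frequency, whose reciprocal is the ridge distance. The block size and the allowed period range are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ridge_distance_spectral(block, min_period=3, max_period=25):
    """Ridge distance (pixels) from the dominant ring of the block's FFT magnitude."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean())))
    h, w = block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    # Restrict the peak search to radii corresponding to plausible ridge periods.
    mask = (radius >= h / max_period) & (radius <= h / min_period)
    peak_radius = radius[mask][np.argmax(spectrum[mask])]
    return h / peak_radius

# Synthetic block with an 8-pixel ridge period; the estimate should be close to 8.
block = np.sin(2 * np.pi * np.arange(64) / 8.0)[None, :] * np.ones((64, 1))
print(round(ridge_distance_spectral(block), 1))
```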

  17. On Scientific Data and Image Compression Based on Adaptive Higher-Order FEM

    Czech Academy of Sciences Publication Activity Database

    Šolín, Pavel; Andrš, David

    2009-01-01

    Roč. 1, č. 1 (2009), s. 56-68 ISSN 2070-0733 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords: data compression * image compression * adaptive hp-FEM Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://www.global-sci.org/aamm

  18. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    Science.gov (United States)

    2013-04-01


  19. Data compression of scanned halftone images

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Kim S.

    1994-01-01

    with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution...... scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding was modified and optimized...

  20. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    Science.gov (United States)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single pixel imaging applications in this study. In a single pixel imaging scheme, N = r · c samples should be taken for an r × c pixel image, where · denotes multiplication. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires. Therefore, to solve the slow data acquisition problem in Terahertz (THz) single pixel imaging, CS is a good candidate. However, changing the mask for each measurement is a challenging problem since there are no commercial Spatial Light Modulators (SLM) for the THz band yet; therefore circular masks are suggested, so that shifting by one or two columns is enough to change the mask for each measurement. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total variation based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy since it allows the image to be reconstructed from fewer samples.
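
    The circular-mask idea can be illustrated as below: every measurement mask is a cyclic shift of one base binary row, so sliding the mask strip by one position changes the pattern between single-pixel measurements. The base row here is random and the measurement count is an assumption; the paper derives the row from a cyclic-S (Hadamard-based) matrix instead.

```python
import numpy as np

def cyclic_masks(n_pixels, n_measurements, seed=0):
    """Each mask is a cyclic shift of one base binary row."""
    rng = np.random.default_rng(seed)
    base = rng.integers(0, 2, n_pixels)               # stand-in for a cyclic-S row
    return np.stack([np.roll(base, k) for k in range(n_measurements)])

masks = cyclic_masks(n_pixels=9 * 7, n_measurements=32)   # ~50% compression of a 9 x 7 image
measurements = masks @ np.random.rand(9 * 7)               # simulated single-pixel readings
print(masks.shape, measurements.shape)
```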

  1. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  2. Secure biometric image sensor and authentication scheme based on compressed sensing.

    Science.gov (United States)

    Suzuki, Hiroyuki; Suzuki, Masamichi; Urabe, Takuya; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2013-11-20

    It is important to ensure the security of biometric authentication information, because its leakage causes serious risks, such as replay attacks using the stolen biometric data, and also because it is almost impossible to replace raw biometric information. In this paper, we propose a secure biometric authentication scheme that protects such information by employing an optical data ciphering technique based on compressed sensing. The proposed scheme is based on two-factor authentication, the biometric information being supplemented by secret information that is used as a random seed for a cipher key. In this scheme, a biometric image is optically encrypted at the time of image capture, and a pair of restored biometric images for enrollment and verification are verified in the authentication server. If any of the biometric information is exposed to risk, it can be reenrolled by changing the secret information. Through numerical experiments, we confirm that finger vein images can be restored from the compressed sensing measurement data. We also present results that verify the accuracy of the scheme.
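
    The two-factor idea can be sketched as follows: the captured biometric image is reduced to compressive measurements by a random matrix whose seed plays the role of the secret information, so re-enrolment amounts to choosing a new seed. The matrix type, sizes, and the purely digital simulation are assumptions; the actual system performs the measurement step optically at capture time.

```python
import numpy as np

def cs_encrypt(image, m, secret_seed):
    """Compressive measurements with a matrix seeded by the user's secret."""
    x = image.reshape(-1).astype(float)
    rng = np.random.default_rng(secret_seed)
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x                      # measurement vector sent for enrolment/verification

finger = np.random.rand(32, 32)         # stand-in for a captured finger-vein image
y = cs_encrypt(finger, m=256, secret_seed=20131120)
print(y.shape)
```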

  3. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    Science.gov (United States)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to the traditional wireless sensor network, which operates on one dimensional data, such as temperature or pressure values, a WVSN operates on two dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and the communication from VSN to server should consume as little energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platform used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.

  4. Still Image Compression Algorithm Based on Directional Filter Banks

    OpenAIRE

    Chunling Yang; Duanwu Cao; Li Ma

    2010-01-01

    Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by researching the relationship between directional decomposition and the ringing artifact, an improved decomposition ...

  5. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality of the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  6. A System for Compressive Spectral and Polarization Imaging at Short Wave Infrared (SWIR) Wavelengths

    Science.gov (United States)

    2017-10-18


  7. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    Science.gov (United States)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

  8. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    Science.gov (United States)

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients is changed in WBCT as it has more directions. The differences in parent–child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method has been tested on fMRI images; the results indicate that the processing time of the proposed method is less than that of existing wavelet-based set partitioning in hierarchical trees and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method also yields better compression performance than the wavelet-based SPECK coder. The objective results show that the proposed method achieves good compression ratios while maintaining a peak signal-to-noise ratio above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all
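
    The "code the most frequent value only once" idea can be sketched roughly as follows: store the dominant quantized value once, plus only the positions and values of the exceptions. This is an illustrative simplification, not the paper's exact modified binary array coder.

```python
import numpy as np

def sparse_exception_code(seq):
    """Store the dominant value once, plus positions/values of the exceptions."""
    values, counts = np.unique(seq, return_counts=True)
    dominant = values[np.argmax(counts)]
    idx = np.flatnonzero(seq != dominant)
    return dominant, idx, seq[idx]

def sparse_exception_decode(dominant, idx, vals, length):
    out = np.full(length, dominant)
    out[idx] = vals
    return out

q = np.zeros(1000, dtype=np.int16)       # mostly-zero quantized coefficients
q[[3, 400, 777]] = [5, -2, 7]
code = sparse_exception_code(q)
assert np.array_equal(sparse_exception_decode(*code, len(q)), q)
```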

  9. Optical image encryption scheme with multiple light paths based on compressive ghost imaging

    Science.gov (United States)

    Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan

    2018-02-01

    An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and secret images vary and can be used as the main keys with original POM and the logistic map algorithm coefficient in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.

  10. Cyclops: single-pixel imaging lidar system based on compressive sensing

    Science.gov (United States)

    Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.

    2017-11-01

    Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide the very accurate, wide range, high-resolution distance measurements required for the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have been commercially available and used for a long time in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of the compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster scanning and array based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large format photodetector arrays. Furthermore, the failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged

  11. Cerebral magnetic resonance imaging of compressed air divers in diving accidents.

    Science.gov (United States)

    Gao, G K; Wu, D; Yang, Y; Yu, T; Xue, J; Wang, X; Jiang, Y P

    2009-01-01

    To investigate the characteristics of cerebral magnetic resonance imaging (MRI) of compressed air divers involved in diving accidents, we conducted an observational case series study. Brain MRI was examined and analysed in seven compressed air divers with cerebral arterial gas embolism (CAGE). The cerebral injuries showed several characteristics: (1) multiple lesions; (2) larger size; (3) predilection for the parietal and frontal lobes; (4) both cortical grey matter and subcortical white matter can be affected; (5) the cerebellum is also a target of air embolism. Brain MRI is a sensitive method for detecting cerebral lesions in compressed air divers involved in diving accidents. The MRI should be performed on divers involved in diving accidents within 5 days.

  12. Uses of software in digital image analysis: a forensic report

    Science.gov (United States)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. Its applications in forensic science range from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which are described in three categories: image compression, image enhancement & restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison and foot-wear sole impressions, using the software Canvas and Corel Draw.

  13. Simultaneous heating and compression of irradiated graphite during synchrotron microtomographic imaging

    Science.gov (United States)

    Bodey, A. J.; Mileeva, Z.; Lowe, T.; Williamson-Brown, E.; Eastwood, D. S.; Simpson, C.; Titarenko, V.; Jones, A. N.; Rau, C.; Mummery, P. M.

    2017-06-01

    Nuclear graphite is used as a neutron moderator in fission power stations. To investigate the microstructural changes that occur during such use, it has been studied for the first time by X-ray microtomography with in situ heating and compression. This experiment was the first to involve simultaneous heating and mechanical loading of radioactive samples at Diamond Light Source, and represented the first study of radioactive materials at the Diamond-Manchester Imaging Branchline I13-2. Engineering methods and safety protocols were developed to ensure the safe containment of irradiated graphite as it was simultaneously compressed to 450N in a Deben 10kN Open-Frame Rig and heated to 300°C with dual focused infrared lamps. Central to safe containment was a double containment vessel which prevented escape of airborne particulates while enabling compression via a moveable ram and the transmission of infrared light to the sample. Temperature measurements were made in situ via thermocouple readout. During heating and compression, samples were simultaneously rotated and imaged with polychromatic X-rays. The resulting microtomograms are being studied via digital volume correlation to provide insights into how thermal expansion coefficients and microstructure are affected by irradiation history, load and heat. Such information will be key to improving the accuracy of graphite degradation models which inform safety margins at power stations.

  14. High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering

    Directory of Open Access Journals (Sweden)

    Nelson Eduardo Diaz

    2015-09-01

    Full Text Available The coded aperture snapshot spectral imaging system (CASSI) is an imaging architecture which senses the three dimensional information of a scene with two dimensional (2D) focal plane array (FPA) coded projection measurements. A reconstruction algorithm takes advantage of the sparsity of the compressive measurements to recover the underlying 3D data cube. Traditionally, CASSI uses block-unblock coded apertures (BCA) to spatially modulate the light. In CASSI the quality of the reconstructed images depends on the design of these coded apertures and the FPA dynamic range. This work presents a new CASSI architecture based on grayscale coded apertures (GCA) which reduce FPA saturation and increase the dynamic range of the reconstructed images. The set of GCA is calculated in a real-time adaptive manner, exploiting the information from the FPA compressive measurements. Extensive simulations show the improvement attained in the quality of the reconstructed images when GCA are employed. In addition, a comparison between traditional coded apertures and GCA is made with respect to noise tolerance.

  15. Fingerprint detection and using intercalated CdSe nanoparticles on non-porous surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Algarra, Manuel, E-mail: malgarra67@gmail.com [Centro de Geología da Universidade do Porto, Departamento de Geociências, Ambiente e Ordenamemto do Territorio do Porto, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Radotić, Ksenija; Kalauzi, Aleksandar; Mutavdžić, Dragosav; Savić, Aleksandar [Institute for Multidisciplinary Research, University of Belgrade, Kneza Višeslava 1, 11000 Beograd (Serbia); Jiménez-Jiménez, José; Rodríguez-Castellón, Enrique [Departamento de Química Inorgánica, Facultad de Ciencias, Universidad de Málaga, Campus de Teatinos s/n, 29071Málaga (Spain); Silva, Joaquim C.G. Esteves da [Centro de Investigação em Química (CIQ-UP). Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Guerrero-González, Juan José [Policía Científica, Cuerpo Nacional de Policía, Málaga (Spain)

    2014-02-17

    Graphical abstract: -- Highlights: •Fluorescent nanocomposite based on the inclusion of CdSe quantum dots in porous phosphate heterostructures. •Characterized by FTIR, XRD and fluorescence spectroscopies. •Deconvolution of the emission spectra was confirmed by using the multivariate curve resolution (MCR) method. •Application for fingerprint detection and analysis on different non-porous surfaces. -- Abstract: A fluorescent nanocomposite based on the inclusion of CdSe quantum dots in porous phosphate heterostructures, functionalized with amino groups (PPH-NH{sub 2}@CdSe), was synthesized, characterized and used for fingerprint detection. The main aims of this work were, first, to develop an easy-to-use chemical powder for detecting latent fingerprints, especially on non-porous surfaces; intercalation in the PPH structure keeps the fluorescent nanoparticles from spreading, and for that reason very good fluorescent images can be obtained. The fingerprints, obtained on different non-porous surfaces such as iron tweezers, a mobile telephone screen and the magnetic band of a credit card, treated with this powder emit a pale orange luminescence under ultraviolet excitation. Further image processing consists of contrast enhancement, which allows positive matches to be obtained against the information supplied from a police database and proved to be more effective than using the non-processed images. Experimental results illustrate the effectiveness of the proposed methods.

  16. Fingerprint detection and using intercalated CdSe nanoparticles on non-porous surfaces

    International Nuclear Information System (INIS)

    Algarra, Manuel; Radotić, Ksenija; Kalauzi, Aleksandar; Mutavdžić, Dragosav; Savić, Aleksandar; Jiménez-Jiménez, José; Rodríguez-Castellón, Enrique; Silva, Joaquim C.G. Esteves da; Guerrero-González, Juan José

    2014-01-01

    Graphical abstract: -- Highlights: •Fluorescent nanocomposite based on the inclusion of CdSe quantum dots in porous phosphate heterostructures. •Characterized by FTIR, XRD and fluorescence spectroscopies. •Deconvolution of the emission spectra was confirmed by using the multivariate curve resolution (MCR) method. •Application for fingerprint detection and analysis on different non-porous surfaces. -- Abstract: A fluorescent nanocomposite based on the inclusion of CdSe quantum dots in porous phosphate heterostructures, functionalized with amino groups (PPH-NH2@CdSe), was synthesized, characterized and used for fingerprint detection. The main aims of this work were, first, to develop an easy-to-use chemical powder for detecting latent fingerprints, especially on non-porous surfaces; intercalation in the PPH structure keeps the fluorescent nanoparticles from spreading, and for that reason very good fluorescent images can be obtained. The fingerprints, obtained on different non-porous surfaces such as iron tweezers, a mobile telephone screen and the magnetic band of a credit card, treated with this powder emit a pale orange luminescence under ultraviolet excitation. Further image processing consists of contrast enhancement, which allows positive matches to be obtained against the information supplied from a police database and proved to be more effective than using the non-processed images. Experimental results illustrate the effectiveness of the proposed methods.

  17. A Framework for Reproducible Latent Fingerprint Enhancements.

    Science.gov (United States)

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
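
    A reproducible-enhancement pipeline in the spirit of the one described above can be sketched with open-source tools; scikit-image routines stand in for the IDL histogram equalization functions, and a set of Gaussian smoothings is used as a crude stand-in for the paper's fractional-diffusion slow-motion smoothing. All parameter values are assumptions.

```python
import numpy as np
from skimage import exposure, filters

def enhance_latent(gray):
    """Global HE, adaptive HE, and a suite of progressively smoother versions."""
    eq = exposure.equalize_hist(gray)                          # global histogram equalization
    aeq = exposure.equalize_adapthist(gray, clip_limit=0.03)   # adaptive histogram equalization
    smoothed = [filters.gaussian(aeq, sigma=s) for s in (0.5, 1.0, 2.0)]
    return eq, aeq, smoothed   # keep the whole suite as an audit trail

latent = np.random.rand(256, 256)   # stand-in for a scanned latent print
eq, aeq, suite = enhance_latent(latent)
print(len(suite))
```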

  18. Touchless fingerprint biometrics

    CERN Document Server

    Labati, Ruggero Donida; Scotti, Fabio

    2015-01-01

    Offering the first comprehensive analysis of touchless fingerprint-recognition technologies, Touchless Fingerprint Biometrics gives an overview of the state of the art and describes relevant industrial applications. It also presents new techniques to efficiently and effectively implement advanced solutions based on touchless fingerprinting. The most accurate current biometric technologies in touch-based fingerprint-recognition systems require a relatively high level of user cooperation to acquire samples of the concerned biometric trait. With the potential for reduced constraints, reduced hardw

  19. Finite-element modeling of compression and gravity on a population of breast phantoms for multimodality imaging simulation.

    Science.gov (United States)

    Sturgeon, Gregory M; Kiarashi, Nooshin; Lo, Joseph Y; Samei, E; Segars, W P

    2016-05-01

    The authors are developing a series of computational breast phantoms based on breast CT data for imaging research. In this work, the authors develop a program that will allow a user to alter the phantoms to simulate the effect of gravity and compression of the breast (craniocaudal or mediolateral oblique) making the phantoms applicable to multimodality imaging. This application utilizes a template finite-element (FE) breast model that can be applied to their presegmented voxelized breast phantoms. The FE model is automatically fit to the geometry of a given breast phantom, and the material properties of each element are set based on the segmented voxels contained within the element. The loading and boundary conditions, which include gravity, are then assigned based on a user-defined position and compression. The effect of applying these loads to the breast is computed using a multistage contact analysis in FEBio, a freely available and well-validated FE software package specifically designed for biomedical applications. The resulting deformation of the breast is then applied to a boundary mesh representation of the phantom that can be used for simulating medical images. An efficient script performs the above actions seamlessly. The user only needs to specify which voxelized breast phantom to use, the compressed thickness, and orientation of the breast. The authors utilized their FE application to simulate compressed states of the breast indicative of mammography and tomosynthesis. Gravity and compression were simulated on example phantoms and used to generate mammograms in the craniocaudal or mediolateral oblique views. The simulated mammograms show a high degree of realism illustrating the utility of the FE method in simulating imaging data of repositioned and compressed breasts. The breast phantoms and the compression software can become a useful resource to the breast imaging research community. These phantoms can then be used to evaluate and compare imaging

  20. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm rests on the difficulty of computing discrete logarithms modulo a large prime. ElGamal belongs to the class of asymmetric key algorithms and enlarges the file size, therefore data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, then the result of the compression was encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved a compression ratio and space saving with average values of 62.49% and 37.51%, respectively.
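
    For reference, Elias delta coding of positive integers (the compression stage named above; the ElGamal encryption stage is omitted here) can be written compactly as below; returning bit strings as text is purely for clarity.

```python
def elias_delta_encode(n: int) -> str:
    assert n >= 1
    nbits = n.bit_length()                 # N = number of bits of n
    lbits = nbits.bit_length()             # L = number of bits of N
    return "0" * (lbits - 1) + format(nbits, "b") + format(n, "b")[1:]

def elias_delta_decode(bits: str) -> int:
    zeros = len(bits) - len(bits.lstrip("0"))         # L - 1 leading zeros
    nbits = int(bits[zeros:2 * zeros + 1], 2)         # read N
    rest = bits[2 * zeros + 1:2 * zeros + nbits]      # remaining N - 1 bits of n
    return int("1" + rest, 2)

for n in (1, 2, 10, 1000):
    code = elias_delta_encode(n)
    assert elias_delta_decode(code) == n
    print(n, code)
```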

  1. Compression of digital images in radiology. Results of a consensus conference; Kompression digitaler Bilddaten in der Radiologie. Ergebnisse einer Konsensuskonferenz

    Energy Technology Data Exchange (ETDEWEB)

    Loose, R. [Klinikum Nuernberg-Nord (Germany). Inst. fuer Diagnostische und Interventionelle Radiologie; Braunschweig, R. [BG Kliniken Bergmannstrost, Halle/Saale (Germany). Klinik fuer Bildgebende Diagnostik und Interventionsradiologie; Kotter, E. [Universitaetsklinikum Freiburg (Germany). Abt. Roentgendiagnostik; Mildenberger, P. [Mainz Univ. (Germany). Klinik und Poliklinik fuer Diagnostische und Interventionelle Radiologie; Simmler, R.; Wucherer, M. [Klinikum Nuernberg (Germany). Inst. fuer Medizinische Physik

    2009-01-15

    Purpose: Recommendations for lossy compression of digital radiological DICOM images in Germany by means of a consensus conference. The compression of digital radiological images has been evaluated in many studies. Even though the results demonstrate full diagnostic image quality for modality-dependent compression between 1:5 and 1:200, there are only a few clinical applications. Materials and Methods: A consensus conference with approx. 80 interested participants (radiology, industry, physics, and agencies), without individual invitation, was organized by the working groups AGIT and APT of the German Roentgen Society DRG to determine compression factors without loss of diagnostic image quality for different anatomical regions for CT, CR/DR, MR, and RF/XA examinations. The consensus level was specified as at least 66%. Results: For the individual modalities the following compression factors were recommended: CT (brain) 1:5, CT (all other applications) 1:8, CR/DR (all applications except mammography) 1:10, CR/DR (mammography) 1:15, MR (all applications) 1:7, RF/XA (fluoroscopy, DSA, cardiac angio) 1:6. The recommended compression ratios are valid for JPEG and JPEG 2000/wavelet compression. Conclusion: The results may be understood as recommendations and indicate limits of compression factors with no expected reduction of diagnostic image quality. They are similar to the current national recommendations for Canada and England. (orig.)

  2. Image compression using the W-transform

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  3. Transparent Fingerprint Sensor System for Large Flat Panel Display.

    Science.gov (United States)

    Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee

    2018-01-19

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel size is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses a capacitance, which is formed by a cover glass material between a human finger and an electrode of each pixel of the sensor array. Three methods are reviewed for estimating the self-capacitance. The measurement result demonstrates that the transparent fingerprint sensor system has the ability to differentiate a human finger's ridges and valleys through the fingerprint sensor array.

  4. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    Science.gov (United States)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
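
    A hedged sketch of the vector-quantization step is shown below: each "vector" stacks the co-located pixel values from all spectral channels, and a small codebook is trained with k-means as a stand-in for whatever codebook design procedure the study used. The difference-mapped shift-extended Huffman stage, the CAMS data, and the reported codebook sizes are not reproduced; all array sizes here are illustrative.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    rng = np.random.default_rng(0)
    channels, rows, cols = 7, 64, 64                 # stand-in for a 7-band scene
    cube = rng.integers(0, 256, (channels, rows, cols)).astype(float)

    # One vector per pixel location: shape (rows*cols, channels).
    vectors = cube.reshape(channels, -1).T

    codebook_size = 64                               # hypothetical codebook size
    codebook, _ = kmeans2(vectors, codebook_size, minit="points")

    # Encode: every pixel location becomes the index of its nearest codeword.
    indices, _ = vq(vectors, codebook)

    # Decode and measure reconstruction error (RMS over all bands).
    reconstructed = codebook[indices].T.reshape(channels, rows, cols)
    rms = np.sqrt(np.mean((cube - reconstructed) ** 2))
    print(f"codebook={codebook_size}, RMS error={rms:.2f}")
    ```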

  5. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT) ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    A. Sreenivasa Murthy

    2014-11-01

    Full Text Available With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. This has to be achieved while keeping the quality and fidelity of the data acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all wavelet transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal, and Demeyer wavelet types. The resulting image quality is measured objectively, using the peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).
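
    Since the comparison above rests on PSNR and the compression ratio, a minimal sketch of those two objective measures is given below; the SPIHT coder and the wavelet families themselves are not reproduced, and the synthetic data are only for illustration.

    ```python
    import numpy as np

    def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
        return original_bytes / compressed_bytes

    # Example with synthetic data: a noisy copy of an 8-bit image.
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(img, noisy):.2f} dB, CR = {compression_ratio(256 * 256, 8192):.1f}:1")
    ```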

  6. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Full Text Available Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital watermarking is the process of embedding into the protected object a digital watermark that is invisible to the human eye. However, there are quite a number of harmful influences capable of destroying a watermark embedded in a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. This article presents a new algorithm that is a modification of the Elham algorithm. The algorithm embeds a watermark into the frequency coefficients of the discrete Hadamard transform of selected image blocks. The image blocks used for embedding the digital watermark are chosen on the basis of a preset threshold on the entropy of their pixels. The low-frequency coefficients used for embedding are chosen by comparing the values of the discrete cosine transform coefficients with a predetermined threshold, which depends on the product of the embedded watermark coefficient and a change coefficient. The resistance of the new algorithm to JPEG compression, noise, filtering, color change, resizing, and histogram equalization is analyzed in detail. The analysis consists of comparing the watermark extracted from the damaged image with the embedded logo. The ability of the algorithm to embed a watermark with a minimum level of image distortion is also analyzed. It is established that, compared with the original Elham algorithm, the new algorithm shows full resistance to JPEG compression, as well as improved resistance to noise, brightness change, and histogram equalization. The developed algorithm can be used for copyright protection of static images. Further studies will be used to study the
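
    To make the embedding idea concrete, the hedged sketch below hides one bit in a low-frequency coefficient of an 8x8 Walsh-Hadamard transform by quantizing that coefficient to an even or odd multiple of a strength parameter. This is not the modified Elham algorithm itself: the entropy-based block selection, the DCT comparison rule, the coefficient choice, and the parameter alpha are all assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(8).astype(float)                        # 8x8 Walsh-Hadamard matrix

    def embed_bit(block: np.ndarray, bit: int, alpha: float = 8.0) -> np.ndarray:
        """Embed one bit by forcing the parity of a quantized Hadamard coefficient."""
        coeffs = H @ block.astype(float) @ H / 8.0       # 2D Walsh-Hadamard transform
        q = np.round(coeffs[1, 1] / alpha)
        if int(q) % 2 != bit:                            # match the coefficient parity to the bit
            q += 1
        coeffs[1, 1] = q * alpha
        return H @ coeffs @ H / 8.0                      # inverse transform

    def extract_bit(block: np.ndarray, alpha: float = 8.0) -> int:
        coeffs = H @ block.astype(float) @ H / 8.0
        return int(np.round(coeffs[1, 1] / alpha)) % 2

    block = np.random.randint(0, 256, (8, 8))
    assert extract_bit(embed_bit(block, 1)) == 1
    assert extract_bit(embed_bit(block, 0)) == 0
    ```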

  7. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    Objective and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  8. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed videos show improvement in artifact reduction of the proposed algorithm over other directional and spatial fuzzy filters.

  9. Content Based Image Matching for Planetary Science

    Science.gov (United States)

    Deans, M. C.; Meyer, C.

    2006-12-01

    Planetary missions generate large volumes of data. With the MER rovers still functioning on Mars, PDS contains over 7200 released images from the Microscopic Imagers alone. These data products are only searchable by keys such as the Sol, spacecraft clock, or rover motion counter index, with little connection to the semantic content of the images. We have developed a method for matching images based on the visual textures in images. For every image in a database, a series of filters compute the image response to localized frequencies and orientations. Filter responses are turned into a low dimensional descriptor vector, generating a 37 dimensional fingerprint. For images such as the MER MI, this represents a compression ratio of 99.9965% (the fingerprint is approximately 0.0035% the size of the original image). At query time, fingerprints are quickly matched to find images with similar appearance. Image databases containing several thousand images are preprocessed offline in a matter of hours. Image matches from the database are found in a matter of seconds. We have demonstrated this image matching technique using three sources of data. The first database consists of 7200 images from the MER Microscopic Imager. The second database consists of 3500 images from the Narrow Angle Mars Orbital Camera (MOC-NA), which were cropped into 1024×1024 sub-images for consistency. The third database consists of 7500 scanned archival photos from the Apollo Metric Camera. Example query results from all three data sources are shown. We have also carried out user tests to evaluate matching performance by hand labeling results. User tests verify approximately 20% false positive rate for the top 14 results for MOC NA and MER MI data. This means typically 10 to 12 results out of 14 match the query image sufficiently. This represents a powerful search tool for databases of thousands of images where the a priori match probability for an image might be less than 1%. Qualitatively, correct
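
    The descriptor idea can be sketched as follows: oriented, band-pass "filter responses" are summarized into a short vector, and images are matched by comparing vectors. The sketch below takes the responses directly from the Fourier magnitude in radial-frequency by orientation bins; the actual filter bank, the 37-dimensional fingerprint, and the MER/MOC/Apollo preprocessing are not reproduced, and the 4x8 binning is purely illustrative.

    ```python
    import numpy as np

    def texture_fingerprint(image: np.ndarray, n_radial: int = 4, n_theta: int = 8) -> np.ndarray:
        img = image.astype(float)
        img -= img.mean()
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

        h, w = img.shape
        y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.hypot(x, y) / (0.5 * min(h, w))      # normalized radial frequency
        theta = np.mod(np.arctan2(y, x), np.pi)          # orientation folded to [0, pi)

        descriptor = []
        for r in range(n_radial):
            r_lo, r_hi = r / n_radial, (r + 1) / n_radial
            for t in range(n_theta):
                t_lo, t_hi = t * np.pi / n_theta, (t + 1) * np.pi / n_theta
                mask = (radius >= r_lo) & (radius < r_hi) & (theta >= t_lo) & (theta < t_hi)
                descriptor.append(spectrum[mask].mean() if mask.any() else 0.0)
        d = np.asarray(descriptor)
        return d / (np.linalg.norm(d) + 1e-12)           # normalize for matching

    # Query-time matching reduces to comparing short vectors, e.g. by Euclidean distance.
    a = texture_fingerprint(np.random.rand(128, 128))
    b = texture_fingerprint(np.random.rand(128, 128))
    print("descriptor length:", a.size, "distance:", np.linalg.norm(a - b))
    ```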

  10. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive.......The feature vector of a bitmap initially constitutes a lossy representation of the contour(s) of the bitmap. The initial feature space is usually too large but can be reduced automatically by use of a predictive code length or predictive error criterion.

  11. Laser speckle decorrelation for fingerprint acquisition

    International Nuclear Information System (INIS)

    Schirripa Spagnolo, Giuseppe; Cozzella, Lorenzo

    2012-01-01

    Biometry is gaining popularity as a physical security approach in situations where a high level of security is necessary. Currently, biometric solutions are embedded in a very large and heterogeneous group of applications. One of the most sensitive is airport security access to boarding gates. More airports are introducing biometric solutions based on face, fingerprint or iris recognition for passenger identification. In particular, fingerprints are the most widely used biometric, and they are mandatorily included in electronic identification documents. One important issue, which is difficult to address in traditional fingerprint acquisition systems, is preventing contact between subsequent users, since sebum left on the sensor can be a potential vector for contagious diseases. Currently, non-contact devices are used to overcome this problem. In this paper, a new contact device based on laser speckle decorrelation is presented. Our system has the advantage of being compact and low-cost compared with existing contactless systems, allowing enhancement of the sebum pattern imaging contrast in a simple and low-cost way. Furthermore, it avoids the spreading of contagious diseases. (paper)

  12. In vivo metabolic fingerprinting of neutral lipids with hyperspectral stimulated Raman scattering microscopy.

    Science.gov (United States)

    Fu, Dan; Yu, Yong; Folick, Andrew; Currie, Erin; Farese, Robert V; Tsai, Tsung-Huang; Xie, Xiaoliang Sunney; Wang, Meng C

    2014-06-18

    Metabolic fingerprinting provides valuable information on the physiopathological states of cells and tissues. Traditional imaging mass spectrometry and magnetic resonance imaging are unable to probe the spatial-temporal dynamics of metabolites at the subcellular level due to either lack of spatial resolution or inability to perform live cell imaging. Here we report a complementary metabolic imaging technique that is based on hyperspectral stimulated Raman scattering (hsSRS). We demonstrated the use of hsSRS imaging in quantifying two major neutral lipids: cholesteryl ester and triacylglycerol in cells and tissues. Our imaging results revealed previously unknown changes of lipid composition associated with obesity and steatohepatitis. We further used stable-isotope labeling to trace the metabolic dynamics of fatty acids in live cells and live Caenorhabditis elegans with hsSRS imaging. We found that unsaturated fatty acid has preferential uptake into lipid storage while saturated fatty acid exhibits toxicity in hepatic cells. Simultaneous metabolic fingerprinting of deuterium-labeled saturated and unsaturated fatty acids in living C. elegans revealed that there is a lack of interaction between the two, unlike previously hypothesized. Our findings provide new approaches for metabolic tracing of neutral lipids and their precursors in living cells and organisms, and could potentially serve as a general approach for metabolic fingerprinting of other metabolites.

  13. Combining nonlinear multiresolution system and vector quantization for still image compression

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with PSNR 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
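
    The nonlinear pyramid idea can be sketched in a few lines: the coarse image comes from median filtering and decimation, and the detail image is the residual after re-expansion, so reconstruction is exact before any quantization. The PCVQ codebook and the feedback of quantization error through the pyramid are not shown; the filter size and interpolation order below are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, zoom

    def median_pyramid_level(image: np.ndarray, size: int = 3):
        """One level of an edge-preserving (median-filter) Laplacian-style pyramid."""
        img = image.astype(float)
        smoothed = median_filter(img, size=size)
        coarse = smoothed[::2, ::2]                        # decimate by 2
        expanded = zoom(coarse, 2, order=1)[: img.shape[0], : img.shape[1]]
        detail = img - expanded                            # small, edge-localized residual
        return coarse, detail

    def reconstruct(coarse: np.ndarray, detail: np.ndarray) -> np.ndarray:
        expanded = zoom(coarse, 2, order=1)[: detail.shape[0], : detail.shape[1]]
        return expanded + detail

    img = np.random.randint(0, 256, (128, 128)).astype(float)
    coarse, detail = median_pyramid_level(img)
    assert np.allclose(reconstruct(coarse, detail), img)   # lossless before quantization
    ```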

  14. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    Science.gov (United States)

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low-complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder combining Golomb-Rice and unary encoding. All of these components have been heavily optimized for low power and low cost, and the system is lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for a large buffer memory. The compression algorithm is capable of working with white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field-programmable gate array (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using a pig's intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution for wireless capsule endoscopy with a lossless and yet acceptable level of compression. PMID:25375753
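
    A hedged sketch of the Golomb-Rice part of such a predictive coder is given below: signed prediction residuals are mapped to non-negative integers and coded as a unary quotient plus a k-bit remainder. The YEF conversion, the predictor, the unary fallback, and the hardware pipeline are not reproduced, and the parameter k is illustrative.

    ```python
    def rice_encode(value: int, k: int) -> str:
        """Golomb-Rice code: unary quotient, '0' terminator, then k remainder bits."""
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, f"0{k}b") if k > 0 else "")

    def rice_decode(bits: str, k: int) -> int:
        q = 0
        while bits[q] == "1":
            q += 1
        r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
        return (q << k) | r

    def zigzag(residual: int) -> int:
        """Map signed residuals to non-negative integers (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
        return (residual << 1) if residual >= 0 else (-residual << 1) - 1

    for e in (-5, -1, 0, 3, 17):
        assert rice_decode(rice_encode(zigzag(e), k=2), k=2) == zigzag(e)
    ```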

  15. Optical image encryption using chaos-based compressed sensing and phase-shifting interference in fractional wavelet domain

    Science.gov (United States)

    Liu, Qi; Wang, Ying; Wang, Jun; Wang, Qiong-Hua

    2018-02-01

    In this paper, a novel optical image encryption system combining compressed sensing with phase-shifting interference in the fractional wavelet domain is proposed. To improve encryption efficiency, the volume of the original image data is reduced by compressed sensing. The compacted image is then encoded through double random phase encoding in the asymmetric fractional wavelet domain. In the encryption system, three pseudo-random sequences, generated by a three-dimensional chaos map, are used as the measurement matrix of compressed sensing and as the two random-phase masks in the asymmetric fractional wavelet transform. This not only simplifies key storage and transmission, but also enhances the nonlinearity of the cryptosystem to resist some common attacks. Further, the holograms, obtained by two-step-only quadrature phase-shifting interference, make the cryptosystem immune to noise and occlusion attacks. Compression and encryption are achieved simultaneously in the final result. Numerical experiments have verified the security and validity of the proposed algorithm.

  16. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry.

    Science.gov (United States)

    Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A

    2007-06-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log

  17. Tracking lung tissue motion and expansion/compression with inverse consistent image registration and spirometry

    International Nuclear Information System (INIS)

    Christensen, Gary E.; Song, Joo Hyun; Lu, Wei; Naqa, Issam El; Low, Daniel A.

    2007-01-01

    Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log
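
    A hedged numpy sketch of the expansion/compression measure described above: if u(x) is the displacement field produced by the registration, the transformation is x + u(x), and the log of its Jacobian determinant is positive where tissue expands and negative where it compresses. The registration algorithm itself is not reproduced; a small synthetic displacement field stands in for its output.

    ```python
    import numpy as np

    def log_jacobian(displacement: np.ndarray) -> np.ndarray:
        """displacement: (3, Z, Y, X) in voxel units; returns log|J| of x + u(x) per voxel."""
        grads = np.empty((3, 3) + displacement.shape[1:])
        for i in range(3):                       # du_i / dx_j by central differences
            for j in range(3):
                grads[i, j] = np.gradient(displacement[i], axis=j)
        for i in range(3):                       # Jacobian of x + u(x): add the identity
            grads[i, i] += 1.0
        # Move the 3x3 matrix axes last so np.linalg.det works voxel-wise.
        det = np.linalg.det(np.moveaxis(grads, (0, 1), (-2, -1)))
        return np.log(np.clip(det, 1e-6, None))  # clip guards against folding

    u = 0.05 * np.random.randn(3, 16, 16, 16)    # synthetic small deformation
    lj = log_jacobian(u)
    print("mean log-Jacobian (near 0 for volume-preserving motion):", lj.mean())
    ```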

  18. Compression of Multispectral Images with Comparatively Few Bands Using Posttransform Tucker Decomposition

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Full Text Available Up to now, data compression for multispectral charge-coupled device (CCD) images with comparatively few bands (MSCFBs) has been done independently on each multispectral channel. This compression codec is called a “monospectral compressor.” The monospectral compressor does not have a stage for removing spectral redundancy. To fill this gap, we propose an efficient compression approach for MSCFBs. In our approach, the one-dimensional discrete cosine transform (1D-DCT) is performed along the spectral dimension to exploit spectral information, and the posttransform (PT) in the 2D-DWT domain is performed on each spectral band to exploit spatial information. A deep coupling approach between the PT and Tucker decomposition (TD) is proposed to remove residual spectral redundancy between bands and residual spatial redundancy within each band. Experimental results on a multispectral CCD camera data set show that the proposed compression algorithm can obtain a better compression performance and significantly outperforms traditional TD-based compression algorithms in the 2D-DWT and 3D-DCT domains.

  19. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Science.gov (United States)

    Zhang, Jin-Yu; Meng, Xiang-Bing; Xu, Wei; Zhang, Wei; Zhang, Yong

    2014-01-01

    This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with a differential evolution algorithm. The fitting compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the study investigated the fitting compression performance for long time series with the improved model and validated the algorithm through practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method. PMID:24696649

  20. Research on the Compression Algorithm of the Infrared Thermal Image Sequence Based on Differential Evolution and Double Exponential Decay Model

    Directory of Open Access Journals (Sweden)

    Jin-Yu Zhang

    2014-01-01

    Full Text Available This paper proposes a new thermal wave image sequence compression algorithm that combines a double exponential decay fitting model with a differential evolution algorithm. The fitting compression results and precision of the proposed method were benchmarked against those of traditional methods via experiment; the study investigated the fitting compression performance for long time series with the improved model and validated the algorithm through practical thermal image sequence compression and reconstruction. The results show that the proposed algorithm is a fast and highly precise infrared image data processing method.
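
    A hedged sketch of the underlying idea follows: the temporal decay of a single pixel in the thermal-wave sequence is fitted with a double-exponential model using differential evolution, so the time series can be stored as a handful of model parameters. The paper's model refinements, bounds, and per-pixel organization are not reproduced; the values below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def double_exp(t, a1, tau1, a2, tau2, c):
        return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

    t = np.linspace(0.1, 10.0, 200)                       # frame times for one pixel
    signal = double_exp(t, 3.0, 0.8, 1.5, 4.0, 0.2) + 0.02 * np.random.randn(t.size)

    def cost(params):
        return np.mean((double_exp(t, *params) - signal) ** 2)

    bounds = [(0, 10), (0.05, 10), (0, 10), (0.05, 20), (-1, 1)]   # assumed search ranges
    result = differential_evolution(cost, bounds, seed=0, tol=1e-8)
    print("fitted parameters:", np.round(result.x, 3))

    # Compression view: 200 samples per pixel are replaced by 5 parameters, and the
    # sequence is reconstructed by evaluating double_exp at the original frame times.
    ```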

  1. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF fast imaging with steady-state precession sequence and more than 15-fold for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the use of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
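
    A hedged numpy sketch of the compression step: the MRF dictionary D (entries by time points) is projected onto a low-rank subspace obtained with a randomized SVD, and fingerprint matching is then carried out in that compressed space. The dictionary sizes, the chosen rank, and the polynomial-fitting refinement of the paper are not reproduced.

    ```python
    import numpy as np

    def randomized_svd(D, rank, n_oversample=10, n_iter=2, seed=0):
        rng = np.random.default_rng(seed)
        omega = rng.standard_normal((D.shape[1], rank + n_oversample))
        Y = D @ omega
        for _ in range(n_iter):                      # power iterations sharpen the subspace
            Y = D @ (D.T @ Y)
        Q, _ = np.linalg.qr(Y)
        U_small, s, Vt = np.linalg.svd(Q.T @ D, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

    # Synthetic stand-in for a large MRF dictionary with a few dominant temporal modes.
    rng = np.random.default_rng(1)
    D = rng.standard_normal((10000, 30)) @ rng.standard_normal((30, 400))

    U, s, Vt = randomized_svd(D, rank=30)
    D_compressed = D @ Vt.T                          # project fingerprints to the rank-k space
    rel_err = np.linalg.norm(D - D_compressed @ Vt) / np.linalg.norm(D)
    print(f"compressed shape: {D_compressed.shape}, relative error: {rel_err:.2e}")
    ```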

  2. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    Science.gov (United States)

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme derived from H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map. Every block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while maintaining performance comparable to H.264 for natural images.
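
    A hedged sketch of the idea behind the BCIM mode is given below: a block is described by a few base colors plus an index map, which suits text and graphics regions with few distinct levels. The rate-distortion mode decision, the H.264 integration, and the actual base-color selection are not reproduced; the simple 1-D k-means palette is only illustrative.

    ```python
    import numpy as np

    def bcim_encode(block: np.ndarray, n_colors: int = 4, n_iter: int = 10):
        """Return (base_colors, index_map) for a gray-level block."""
        pixels = block.reshape(-1, 1).astype(float)
        # Initialize base colors from evenly spaced quantiles of the block.
        base = np.quantile(pixels, np.linspace(0, 1, n_colors)).reshape(-1, 1)
        for _ in range(n_iter):                            # plain 1-D k-means
            idx = np.argmin(np.abs(pixels - base.T), axis=1)
            for c in range(n_colors):
                if np.any(idx == c):
                    base[c] = pixels[idx == c].mean()
        idx = np.argmin(np.abs(pixels - base.T), axis=1)
        return base.ravel(), idx.reshape(block.shape).astype(np.uint8)

    def bcim_decode(base: np.ndarray, index_map: np.ndarray) -> np.ndarray:
        return base[index_map]

    # A synthetic "text-like" block: only a few distinct gray levels.
    block = np.random.choice([0, 64, 192, 255], size=(16, 16))
    base, index_map = bcim_encode(block)
    print("max reconstruction error:", np.abs(bcim_decode(base, index_map) - block).max())
    ```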

  3. Lossless Image Compression Based on Multiple-Tables Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Rung-Ching Chen

    2009-01-01

    Full Text Available This paper presents a lossless image compression method based on multiple-tables arithmetic coding (MTAC) to encode a gray-level image f. First, the MTAC method employs a median edge detector (MED) to reduce the entropy rate of f. The gray levels of two adjacent pixels in an image are usually similar. A base-switching transformation approach is then used to reduce the spatial redundancy of the image. The gray levels of some pixels in an image are more common than those of others. Finally, the arithmetic encoding method is applied to reduce the coding redundancy of the image. To promote high performance of the arithmetic encoding method, the MTAC method first classifies the data and then encodes each cluster of data using a distinct code table. The experimental results show that, in most cases, the MTAC method provides a higher efficiency in use of storage space than the lossless JPEG2000 does.
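
    The MED predictor mentioned above can be sketched in a few lines: each pixel is predicted from its left (a), upper (b), and upper-left (c) neighbors, and only the residual is passed to later stages. The sketch follows the usual median edge detector definition (as in LOCO-I/JPEG-LS); the base-switching transform and the multiple-table arithmetic coder of MTAC are not reproduced.

    ```python
    import numpy as np

    def med_predict_residuals(image: np.ndarray) -> np.ndarray:
        """Median edge detector (MED) prediction; first row/column are kept verbatim."""
        img = image.astype(int)
        residuals = img.copy()
        for y in range(1, img.shape[0]):
            for x in range(1, img.shape[1]):
                a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]
                if c >= max(a, b):
                    pred = min(a, b)           # edge detected above or to the left
                elif c <= min(a, b):
                    pred = max(a, b)
                else:
                    pred = a + b - c           # smooth region: planar prediction
                residuals[y, x] = img[y, x] - pred
        return residuals

    img = np.random.randint(0, 256, (64, 64))
    res = med_predict_residuals(img)
    # The residuals typically have a much lower entropy rate than the raw pixels.
    ```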

  4. Study of key technology of ghost imaging via compressive sensing for a phase object based on phase-shifting digital holography

    International Nuclear Information System (INIS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2015-01-01

    In this article, a compressive sensing algorithm is used to improve imaging resolution and to realize ghost imaging of a phase object via compressive sensing based on phase-shifting digital holography, building on a theoretical analysis of lensless Fourier imaging for ghost imaging based on phase-shifting digital holography. The algorithm uses a bucket detector to measure the total light intensity of the interference, and the four-step phase-shifting method is used to obtain the total intensity of the differential interference light. An experimental platform was built following software simulation, and the experimental results show that the algorithm of ghost imaging via compressive sensing based on phase-shifting digital holography can obtain a high-resolution phase distribution of the phase object. With the same number of samples, the phase clarity of the distribution obtained by ghost imaging via compressive sensing based on phase-shifting digital holography is higher than that obtained by ghost imaging based on phase-shifting digital holography alone. This study further extends the application range of ghost imaging and obtains the phase distribution of phase objects. (letter)

  5. Choice of word length in the design of a specialized hardware for lossless wavelet compression of medical images

    Science.gov (United States)

    Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis

    1997-11-01

    Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while keeping the word length constant as the scale increases.

  6. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images corrupted by multiplicative noise is suggested. The method chooses the best thresholding rule and threshold value using proposed criteria that provide the best nonlinear approximations and take quantization errors into consideration. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in terms of criteria such as PSNR, SSIM, etc.
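
    For reference, the sketch below shows hard, soft, and one common semisoft (firm) thresholding rule applied to wavelet detail coefficients; the paper's fusion criterion, its handling of multiplicative noise, and its threshold selection are not reproduced, and the exact form of its semisoft rule may differ.

    ```python
    import numpy as np

    def hard_threshold(x, t):
        return np.where(np.abs(x) > t, x, 0.0)

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def semisoft_threshold(x, t1, t2):
        """Zero below t1, identity above t2, linear shrinkage in between (firm rule)."""
        ax = np.abs(x)
        shrunk = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
        return np.where(ax <= t1, 0.0, np.where(ax >= t2, x, shrunk))

    coeffs = np.random.randn(8) * 3
    print(semisoft_threshold(coeffs, t1=1.0, t2=2.5))
    ```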

  7. Comparing subjective and objective quality assessment of HDR images compressed with JPEG-XT

    DEFF Research Database (Denmark)

    Mantel, Claire; Ferchiu, Stefan Catalin; Forchhammer, Søren

    2014-01-01

    In this paper a subjective test in which participants evaluate the quality of JPEG-XT compressed HDR images is presented. Results show that for the selected test images and display, the subjective quality reached its saturation point starting around 3 bpp. Objective evaluations are obtained...

  8. Observation of Compressive Deformation Behavior of Nuclear Graphite by Digital Image Correlation

    International Nuclear Information System (INIS)

    Kim, Hyunju; Kim, Eungseon; Kim, Minhwan; Kim, Yongwan

    2014-01-01

    Polycrystalline nuclear graphite has been proposed as a fuel element, moderator and reflector blocks, and core support structures in a very high temperature gas-cooled reactor. During reactor operation, graphite core components and core support structures are subjected to various stresses. It is therefore important to understand the mechanism of deformation and fracture of nuclear graphites, and their significance to structural integrity assessment methods. Digital image correlation (DIC) is a powerful tool to measure the full-field displacement distribution on the surface of specimens. In this study, to gain an understanding of the compressive deformation characteristics, the formation of the strain field during a compression test was examined using a commercial DIC system, and the compressive deformation behavior of nuclear graphite was characterized. The non-linear load-displacement characteristic prior to the peak load was shown to be mainly dominated by the presence of localized strains, which resulted in a permanent displacement. Young's modulus was properly calculated from the measured strain

  9. Clustered DPCM with removing noise spectra for the lossless compression of hyperspectral images

    Science.gov (United States)

    Wu, Jiaji; Xu, Jianglei

    2013-10-01

    The clustered DPCM (C-DPCM) lossless compression method by Jarno et al. for hyperspectral images achieves a good compression effect. It can be divided into three components: clustering, prediction, and coding. In the prediction part, it solves a multiple linear regression model for each of the clusters in every band. Because the effect of noise spectra is not considered, there is still room for improvement. This paper proposes a C-DPCM method with Removing Noise Spectra (C-DPCM-RNS) for the lossless compression of hyperspectral images. The prediction part of C-DPCM-RNS consists of two training passes. The prediction coefficients obtained from the first pass are used in the linear predictor to compute all predicted values and the differences between the original and predicted values in the current band of the current class. Only the non-noise spectra are used in the second pass. The resulting prediction coefficients from the second pass are used for prediction and sent to the decoder. The two training passes remove part of the interference from noise spectra and reach a better compression effect than other methods based on regression prediction.
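
    A hedged sketch of the per-band regression at the heart of C-DPCM: pixels of the current band are predicted from the co-located pixels of a few previous bands by least squares, and only the residuals go on to the coder. The clustering stage, the noise-spectra removal of C-DPCM-RNS, and the number of previous bands used are not reproduced.

    ```python
    import numpy as np

    def regression_residuals(prev_bands: np.ndarray, cur_band: np.ndarray) -> np.ndarray:
        """prev_bands: (n_prev, H, W); cur_band: (H, W). Returns prediction residuals."""
        X = prev_bands.reshape(prev_bands.shape[0], -1).T      # (H*W, n_prev)
        X = np.hstack([X, np.ones((X.shape[0], 1))])           # intercept term
        y = cur_band.ravel().astype(float)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # least-squares predictor
        return (y - X @ coef).reshape(cur_band.shape)

    rng = np.random.default_rng(0)
    bands = rng.normal(size=(4, 64, 64)).cumsum(axis=0)        # correlated synthetic bands
    res = regression_residuals(bands[:3], bands[3])
    print("residual std vs. band std:", res.std(), bands[3].std())
    ```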

  10. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera

    Directory of Open Access Journals (Sweden)

    Gregory A. Howland

    2013-02-01

    Full Text Available We implement a double-pixel compressive-sensing camera to efficiently characterize, at high resolution, the spatially entangled fields that are produced by spontaneous parametric down-conversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor up to n^2/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the entangled photons’ classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.

  11. Pattern-based compression of multi-band image data for landscape analysis

    CERN Document Server

    Myers, Wayne L; Patil, Ganapati P

    2006-01-01

    This book describes an integrated approach to using remotely sensed data in conjunction with geographic information systems for landscape analysis. Remotely sensed data are compressed into an analytical image-map that is compatible with the most popular geographic information systems as well as freeware viewers. The approach is most effective for landscapes that exhibit a pronounced mosaic pattern of land cover. The image maps are much more compact than the original remotely sensed data, which enhances utility on the internet. As value-added products, distribution of image-maps is not affected by copyrights on original multi-band image data.

  12. Physics and fingerprints

    Science.gov (United States)

    Voss-de Haan, Patrick

    2006-08-01

    This article discusses a variety of aspects in the detection and development of fingerprints and the physics involved in it. It gives an introduction to some basic issues like composition and properties of fingerprint deposits and a rudimentary framework of dactyloscopy; it covers various techniques for the visualization of latent fingerprints; and it concludes with a view of current research topics. The techniques range from very common procedures, such as powdering and cyanoacrylate fuming, to more demanding methods, for example luminescence and vacuum metal deposition, to fairly unusual approaches like autoradiography. The emphasis is placed on the physical rather than the forensic aspects of these topics while trying to give the physicist—who is not dealing with fingerprinting and forensic science on a daily basis—a feeling for the problems and solutions in the visualization of latent fingerprints.

  13. Low-complexity Compression of High Dynamic Range Infrared Images with JPEG compatibility

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    ......Then we compress each image by a JPEG baseline encoder and include the residual image bit stream into the application part of the JPEG header of the base image. As a result, the base image can be reconstructed by a JPEG baseline decoder. If the JPEG bit stream size of the residual image is higher than the raw data size, then we include the raw residual image instead. If the residual image contains only zero values or the quality factor for it is 0, then we do not include the residual image in the header. Experimental results show that compared with JPEG-XT Part 6 with ’global Reinhard’ tone-mapping......

  14. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  15. High Resolution Ultrasonic Method for 3D Fingerprint Recognizable Characteristics in Biometrics Identification

    Science.gov (United States)

    Maev, R. Gr.; Bakulin, E. Yu.; Maeva, A.; Severin, F.

    Biometrics is a rapidly evolving scientific and applied discipline that studies possible ways of personal identification by means of unique biological characteristics. Such identification is important in various situations requiring restricted access to certain areas, information, and personal data, and for cases of medical emergencies. A number of automated biometric techniques have been developed, including fingerprint, hand shape, eye and facial recognition, thermographic imaging, etc. All these techniques differ in the recognizable parameters, usability, accuracy, and cost. Among these, fingerprint recognition stands alone, since a very large database of fingerprints has already been acquired. Also, fingerprints are key evidence left at a crime scene and can be used to identify suspects. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. We introduce a new development in ultrasonic fingerprint imaging. The proposed method obtains a scan only once and then varies the C-scan gate position and width to visualize acoustic reflections from any appropriate depth inside the skin. Also, B-scans and A-scans can be recreated from any position using such a data array, which gives control over the visualization options. By setting the C-scan gate deeper inside the skin, the distribution of the sweat pores (which are located along the ridges) can be easily visualized. This distribution should be unique for each individual, so it provides a means of personal identification that is not affected by any changes (accidental or intentional) of the fingers' surface conditions. This paper discusses different setups, acoustic parameters of the system, signal and image processing options, and possible ways of 3-dimensional visualization that could be used as a recognizable characteristic in biometric identification.

  16. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, low-resolution (LR) image is treated as the compressed version of high-resolution (HR) image. Dictionary Training and Sparse Recovery are the two phases of the method. K-Singular Value Decomposition (K-SVD) method is used for dictionary training and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purpose and thereby the structural self similarity inherent in the LR image is exploited. In the sparse recovery phase the sparse representation coefficients with respect to the trained dictionary for LR image patches are derived using Improved TV Minimization method. HR image can be reconstructed by the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.

  17. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    Science.gov (United States)

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.

  18. Characterization of Diesel and Gasoline Compression Ignition Combustion in a Rapid Compression-Expansion Machine using OH* Chemiluminescence Imaging

    Science.gov (United States)

    Krishnan, Sundar Rajan; Srinivasan, Kalyan Kumar; Stegmeir, Matthew

    2015-11-01

    Direct-injection compression ignition combustion of diesel and gasoline were studied in a rapid compression-expansion machine (RCEM) using high-speed OH* chemiluminescence imaging. The RCEM (bore = 84 mm, stroke = 110-250 mm) was used to simulate engine-like operating conditions at the start of fuel injection. The fuels were supplied by a high-pressure fuel cart with an air-over-fuel pressure amplification system capable of providing fuel injection pressures up to 2000 bar. A production diesel fuel injector was modified to provide a single fuel spray for both diesel and gasoline operation. Time-resolved combustion pressure in the RCEM was measured using a Kistler piezoelectric pressure transducer mounted on the cylinder head and the instantaneous piston displacement was measured using an inductive linear displacement sensor (0.05 mm resolution). Time-resolved, line-of-sight OH* chemiluminescence images were obtained using a Phantom V611 CMOS camera (20.9 kHz @ 512 x 512 pixel resolution, ~ 48 μs time resolution) coupled with a short wave pass filter (cut-off ~ 348 nm). The instantaneous OH* distributions, which indicate high temperature flame regions within the combustion chamber, were used to discern the characteristic differences between diesel and gasoline compression ignition combustion. The authors gratefully acknowledge facilities support for the present work from the Energy Institute at Mississippi State University.

  19. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    Science.gov (United States)

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.

  20. Fingerprints in cancer cells

    International Nuclear Information System (INIS)

    Servomaa, K.

    1994-01-01

    Gene research has shown that factors causing cancer, or carcinogens, may leave marks typical of each particular carcinogen (fingerprints) in the genotype of the cell. Radiation, for instance, may leave such fingerprints in a cancer cell. In particular, the discovery of a gene called p53 has yielded much new information on fingerprints. It has been discovered, for example, that toxic fungi and UV radiation each leave fingerprints in the p53 gene. Based on the detection of fingerprints, it may be possible in the future to tell a cancer patient which factor had triggered the malignancy

  1. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.

  2. Image acquisition system using on sensor compressed sampling technique

    Science.gov (United States)

    Gupta, Pravir Singh; Choi, Gwan Seong

    2018-01-01

    Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.

  3. Magnetic resonance imaging validation of pituitary gland compression and distortion by typical sellar pathology.

    Science.gov (United States)

    Cho, Charles H; Barkhoudarian, Garni; Hsu, Liangge; Bi, Wenya Linda; Zamani, Amir A; Laws, Edward R

    2013-12-01

    Identification of the normal pituitary gland is an important component of presurgical planning, defining many aspects of the surgical approach and facilitating normal gland preservation. Magnetic resonance imaging is a proven imaging modality for optimal soft-tissue contrast discrimination in the brain. This study is designed to validate the accuracy of localization of the normal pituitary gland with MRI in a cohort of surgical patients with pituitary mass lesions, and to evaluate for correlation between presurgical pituitary hormone values and pituitary gland characteristics on neuroimaging. Fifty-eight consecutive patients with pituitary mass lesions were included in the study. Anterior pituitary hormone levels were measured preoperatively in all patients. Video recordings from the endoscopic or microscopic surgical procedures were available for evaluation in 47 cases. Intraoperative identification of the normal gland was possible in 43 of 58 cases. Retrospective MR images were reviewed in a blinded fashion for the 43 cases, emphasizing the position of the normal gland and the extent of compression and displacement by the lesion. There was excellent agreement between imaging and surgery in 84% of the cases for normal gland localization, and in 70% for compression or noncompression of the normal gland. There was no consistent correlation between preoperative pituitary dysfunction and pituitary gland localization on imaging, gland identification during surgery, or pituitary gland compression. Magnetic resonance imaging proved to be accurate in identifying the normal gland in patients with pituitary mass lesions, and was useful for preoperative surgical planning.

  4. Transparent Fingerprint Sensor System for Large Flat Panel Display

    Directory of Open Access Journals (Sweden)

    Wonkuk Seo

    2018-01-01

    In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom read-out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To acquire fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass between a human finger and the electrode of each pixel of the sensor array. Three methods for estimating the self-capacitance are reviewed. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate the ridges and valleys of a human finger through the fingerprint sensor array.

  5. Phase Imaging: A Compressive Sensing Approach

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Sebastian; Stevens, Andrew; Browning, Nigel D.; Pohl, Darius; Nielsch, Kornelius; Rellinghaus, Bernd

    2017-07-01

    Since Wolfgang Pauli posed the question in 1933 whether the probability densities |Ψ(r)|² (real-space image) and |Ψ(q)|² (reciprocal-space image) uniquely determine the wave function Ψ(r) [1], the so-called Pauli problem has sparked numerous methods in all fields of microscopy [2, 3]. Reconstructing the complete wave function Ψ(r) = a(r)e^(-iφ(r)), with the amplitude a(r) and the phase φ(r), from the recorded intensity opens the possibility to directly study the electric and magnetic properties of the sample through the phase. In transmission electron microscopy (TEM), electron holography is by far the most established method for phase reconstruction [4]. Requiring a high stability of the microscope as well as the installation of a biprism in the TEM, holography cannot be applied to any microscope straightforwardly. Recently, a phase-retrieval approach was proposed using conventional TEM electron diffractive imaging (EDI). Using the SAD aperture as a reciprocal-space constraint, a localized sample structure can be reconstructed from its diffraction pattern and a real-space image using the hybrid input-output algorithm [5]. We present an alternative approach using compressive phase retrieval [6]. Our approach does not require a real-space image. Instead, random complementary pairs of checkerboard masks are cut into a 200 nm Pt foil covering a conventional TEM aperture (cf. Figure 1). Used as SAD apertures, diffraction patterns are subsequently recorded from the same sample area, whereby every mask blocks different parts of gold particles on a carbon support (cf. Figure 2). The compressive sensing problem has the following formulation. First, we note that the complex-valued reciprocal-space wave function is the Fourier transform of the (also complex-valued) real-space wave function, Ψ(q) = F[Ψ(r)], and consequently the diffraction pattern image is given by |Ψ(q)|² = |F[Ψ(r)]|². We want to find Ψ(r) given a few differently coded diffraction pattern measurements y_n

  6. FINGERPRINT VERIFICATION IN PERSONAL IDENTIFICATION BY APPLYING LOCAL WALSH HADAMARD TRANSFORM AND GABOR COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    K N Pushpalatha

    2017-05-01

    In an era of advanced computer technology, innumerable services such as access to bank accounts, access to secured data, or entry to important national organizations require authentication of the genuine individual. Among all biometric personal identification systems, the fingerprint recognition system is the most accurate and economical technology. In this paper we propose a fingerprint recognition system using the Local Walsh Hadamard Transform (LWHT) with Phase Magnitude Histograms (PMHs) for feature extraction. Fingerprints display oriented, texture-like patterns. Gabor filters have the property of capturing global and local texture information from blurred or unclear images, and the filter bank provides orientation features which are robust to image distortion and rotation. The LWHT algorithm is compared with two other approaches, viz. Gabor coefficients and directional features. The three methods are compared using images from the FVC 2006 fingerprint database. The observations show that the values of TSR, FAR and FRR are improved compared to the existing algorithms.
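
    A rough sketch of block-wise Walsh-Hadamard feature extraction in the spirit of the LWHT step above; the block size, the magnitude histogram and the normalization are illustrative assumptions, not the authors' exact pipeline.

      import numpy as np
      from scipy.linalg import hadamard

      def local_wht_features(img, block=8, bins=16):
          # Per-block 2-D Walsh-Hadamard transform followed by a magnitude histogram.
          h = hadamard(block).astype(float)
          feats = []
          rows, cols = img.shape
          for r in range(0, rows - block + 1, block):
              for c in range(0, cols - block + 1, block):
                  patch = img[r:r + block, c:c + block].astype(float)
                  coeffs = h @ patch @ h.T / block
                  hist, _ = np.histogram(np.abs(coeffs), bins=bins)
                  feats.append(hist)
          return np.concatenate(feats)

      rng = np.random.default_rng(0)
      img = (rng.random((64, 64)) * 255).astype(np.uint8)
      print(local_wht_features(img).shape)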

  7. Intelligent fuzzy approach for fast fractal image compression

    Science.gov (United States)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a large number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computations of FIC. In the first phase, range and domain blocks are arranged based on their edge property. In the second phase, an imperialist competitive algorithm (ICA) is applied according to the classified blocks. To maintain the quality of the retrieved image and accelerate the operation of the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations so that it ran 463 times faster than the full-search algorithm, while the quality of the retrieved image did not change considerably.
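
    To make concrete the cost that the classification and ICA search are trying to avoid, the sketch below shows the brute-force range/domain MSE matching at the heart of fractal coding, with a least-squares contrast/brightness fit; the edge-based arrangement and the imperialist competitive algorithm themselves are not reproduced.

      import numpy as np

      def best_domain_for_range(range_blk, domain_blks):
          # Full search: fit s*D + o to the range block and keep the lowest MSE.
          best = (None, np.inf, 0.0, 0.0)
          r = range_blk.ravel().astype(float)
          for idx, d in enumerate(domain_blks):
              dv = d.ravel().astype(float)
              s, o = np.polyfit(dv, r, 1)      # least-squares contrast and brightness
              mse = np.mean((r - (s * dv + o)) ** 2)
              if mse < best[1]:
                  best = (idx, mse, s, o)
          return best                           # (domain index, MSE, contrast, brightness)

      rng = np.random.default_rng(0)
      domains = [rng.random((8, 8)) for _ in range(500)]
      print(best_domain_for_range(rng.random((8, 8)), domains))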

  8. Three-dimensional short-range MR angiography and multiplanar reconstruction images in the evaluation of neurovascular compression in hemifacial spasm

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Woo Suk; Kim, Eui Jong; Lee, Jae Gue; Rhee, Bong Arm [Kyunghee Univ. Hospital, Seoul (Korea, Republic of)

    1998-08-01

    To evaluate the diagnostic efficacy of three-dimensional (3D) short-range MR angiography (MRA) and multiplanar reconstruction (MPR) imaging in hemifacial spasm (HS). Materials and Methods: Two hundred patients with HS were studied using a 1.5T MRI system with a 3D time-of-flight (TOF) MRA sequence. To reconstruct short-range MRA, 6-10 source images near the 7-8th cranial nerve complex were processed using a maximum-intensity projection technique. In addition, an MPR technique was used to investigate neurovascular compression. We observed the relationship between the root-exit zone (REZ) of the 7th cranial nerve and the compressive vessel, and identified the compressive vessels on symptomatic sides. To investigate neurovascular contact, asymptomatic contralateral sides were also evaluated. Results: MRI showed that in 197 of 200 patients there was vascular compression of, or contact with, the facial nerve REZ on symptomatic sides. One of the three remaining patients was suffering from an acoustic neurinoma on the symptomatic side, while in two patients there were no definite abnormal findings. Compressive vessels were demonstrated in all 197 patients; 80 cases involved the anterior inferior cerebellar artery (AICA), 74 the posterior inferior cerebellar artery (PICA), 13 the vertebral artery (VA), 16 the VA and AICA, eight the VA and PICA, and six the AICA and PICA. In all 197 patients, compressive vessels were reconstructed on one 3D short-range MRA image without discontinuity from the vertebral or basilar arteries. 3D MPR studies provided additional information such as the direction of compression and the course of the compressive vessel. In 31 patients there was neurovascular contact on the contralateral side at the 7-8th cranial nerve complex. Conclusion: In patients with HS, 3D short-range MRA and MPR images are excellent and very helpful for the investigation of neurovascular compression and the identification of compressive vessels.

  9. Three-dimensional short-range MR angiography and multiplanar reconstruction images in the evaluation of neurovascular compression in hemifacial spasm

    International Nuclear Information System (INIS)

    Choi, Woo Suk; Kim, Eui Jong; Lee, Jae Gue; Rhee, Bong Arm

    1998-01-01

    To evaluate the diagnostic efficacy of three-dimensional (3D) short-range MR angiography (MRA) and multiplanar reconstruction (MPR) imaging in hemifacial spasm (HS). Materials and Methods: Two hundred patients with HS were studied using a 1.5T MRI system with a 3D time-of-flight (TOF) MRA sequence. To reconstruct short-range MRA, 6-10 source images near the 7-8th cranial nerve complex were processed using a maximum-intensity projection technique. In addition, an MPR technique was used to investigate neurovascular compression. We observed the relationship between the root-exit zone (REZ) of the 7th cranial nerve and the compressive vessel, and identified the compressive vessels on symptomatic sides. To investigate neurovascular contact, asymptomatic contralateral sides were also evaluated. Results: MRI showed that in 197 of 200 patients there was vascular compression of, or contact with, the facial nerve REZ on symptomatic sides. One of the three remaining patients was suffering from an acoustic neurinoma on the symptomatic side, while in two patients there were no definite abnormal findings. Compressive vessels were demonstrated in all 197 patients; 80 cases involved the anterior inferior cerebellar artery (AICA), 74 the posterior inferior cerebellar artery (PICA), 13 the vertebral artery (VA), 16 the VA and AICA, eight the VA and PICA, and six the AICA and PICA. In all 197 patients, compressive vessels were reconstructed on one 3D short-range MRA image without discontinuity from the vertebral or basilar arteries. 3D MPR studies provided additional information such as the direction of compression and the course of the compressive vessel. In 31 patients there was neurovascular contact on the contralateral side at the 7-8th cranial nerve complex. Conclusion: In patients with HS, 3D short-range MRA and MPR images are excellent and very helpful for the investigation of neurovascular compression and the identification of compressive vessels.

  10. Balanced sparse model for tight frames in compressed sensing magnetic resonance imaging.

    Directory of Open Access Journals (Sweden)

    Yunsong Liu

    Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this new technology, magnetic resonance images are usually reconstructed by enforcing their sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model achieves the solutions of all three models. It is found that the balanced model has a performance comparable with the analysis model. Besides, both of them achieve better results than the synthesis model no matter what value the balancing parameter takes. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B).

  11. Nanotag luminescent fingerprint anti-counterfeiting technology

    DEFF Research Database (Denmark)

    Radziwon, Michal Jędrzej; Rubahn, Horst-Günter; Johansen, Stefan

    2012-01-01

    We describe a method to fabricate, transfer and validate via image processing nanofibre-based, unique security marks ("nanotags") for anti-counterfeiting purposes. Epitaxial surface growth of oligophenylenes on a heated muscovite mica crystal results in a thin film of mutually aligned nanofibres. This fingerprint can be transferred onto an adhesive tape as a product label, imaged using low-magnification microscopy, digitalised and stored in a database. Infrared surface heating, enforced cooling and load-lock transfer make the fabrication process fast and scalable to mass production.

  12. Magnetic resonance imaging of malignant extradural tumors with acute spinal cord compression

    International Nuclear Information System (INIS)

    Lien, H.H.; Blomlie, V.; Heimdal, K.; Norwegian Radium Hospital, Oslo; Norwegian Radium Hospital, Oslo

    1990-01-01

    Thirty-six cancer patients with extradural spinal metastatic disease and acute symptoms of spinal cord compression underwent magnetic resonance (MR) imaging at 1.5 T. Cord involvement was found in all 36, 7 of whom had lesions at 2 different sites. Vertebral metastases in addition to those corresponding to the cord compressions were detected in 27 patients, and 18 of these had widespread deposits. MR displayed the extent of the tumors in the craniocaudal and lateral directions. The ability to identify multiple sites of cord and vertebral involvement and to delineate tumor accurately makes MR the examination of choice in cancer patients with suspected spinal cord compression. It obviates the need for myelography and postmyelography CT in this group of patients. (orig.)

  13. Medical image compression with fast Hartley transform

    International Nuclear Information System (INIS)

    Paik, C.H.; Fox, M.D.

    1988-01-01

    The purpose of data compression is the storage and transmission of images with minimal memory for storage and minimal bandwidth for transmission, while maintaining robustness in the presence of transmission noise or storage-medium errors. Here, the fast Hartley transform (FHT) is used for the transformation and a new thresholding method is devised. The FHT is used instead of the fast Fourier transform (FFT), providing calculation at least as fast as the fastest FFT algorithm. This real-valued transform requires only half the memory array space for storing transform coefficients and allows easy implementation on very-large-scale integrated circuits, because the same formula is used for both forward and inverse transformation and the algorithm is conceptually straightforward. Threshold values were adaptively selected according to the correlation factor of each block of the equally divided blocks of the image. This approach therefore provided a coding scheme that retained maximum information within minimum image bandwidth. Overall, the results suggest that the Hartley-transform adaptive-thresholding approach yields improved fidelity, shorter decoding time, and greater robustness in the presence of noise than previous approaches.
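
    The discrete Hartley transform can be computed from the FFT as H = Re(F) - Im(F) and is, up to a 1/N factor, its own inverse; the per-block threshold below, tied to a simple block statistic, is only a schematic stand-in for the correlation-factor rule described in the abstract.

      import numpy as np

      def dht2(block):
          # 2-D discrete Hartley transform via the FFT: H = Re(F) - Im(F).
          F = np.fft.fft2(block)
          return F.real - F.imag

      def idht2(H):
          # The DHT is its own inverse up to a factor of 1/N.
          return dht2(H) / H.size

      def compress_block(block, base_keep=0.1):
          H = dht2(block.astype(float))
          # Keep more coefficients in "busier" blocks (schematic adaptive threshold).
          keep = min(base_keep * (1.0 + block.std() / 128.0), 1.0)
          thresh = np.quantile(np.abs(H), 1.0 - keep)
          H[np.abs(H) < thresh] = 0.0
          return H

      rng = np.random.default_rng(0)
      blk = rng.random((16, 16)) * 255
      rec = idht2(compress_block(blk))
      print("RMS error:", np.sqrt(np.mean((blk - rec) ** 2)))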

  14. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing the compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce the compression force by up to 32.5% with minimal effects on image quality and MGD.

  15. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

    Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. The MR images showed four patterns on T1-weighted images: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low-signal portions changed to high signal intensity on T2-weighted images. In 7 of 15 patients (11 compression fractures) there was a history of trauma, and the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR images. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine.

  16. Statistical Prior Aided Separate Compressed Image Sensing for Green Internet of Multimedia Things

    Directory of Open Access Journals (Sweden)

    Shaohua Wu

    2017-01-01

    In this paper, we propose an image compression and reconstruction strategy under the compressed sensing (CS) framework to enable green computation and communication for the Internet of Multimedia Things (IoMT). The core idea is to exploit the statistics of image representations in the wavelet domain to aid the design of the reconstruction method. Specifically, the energy distribution of natural images in the wavelet domain is well characterized by an exponential decay model, which is then used in a two-step separate image reconstruction method, by which the row-wise (or column-wise) intermediates and the column-wise (or row-wise) final results are reconstructed sequentially. Both the intermediates and the final results are constrained to conform to the statistical prior by using a weight matrix. Two recovery strategies with different levels of complexity, namely direct recovery with a fixed weight matrix (DR-FM) and iterative recovery with a refined weight matrix (IR-RM), are designed to obtain different recovery quality. Extensive simulations show that both DR-FM and IR-RM achieve much better image reconstruction quality with much faster recovery speed than traditional methods.

  17. Diagnostic imaging of compression neuropathy; Bildgebende Diagnostik von Nervenkompressionssyndromen

    Energy Technology Data Exchange (ETDEWEB)

    Weishaupt, D.; Andreisek, G. [Universitaetsspital, Institut fuer Diagnostische Radiologie, Zuerich (Switzerland)

    2007-03-15

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remains the cornerstone of the diagnostic work-up, in certain cases imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [German] Compression-induced damage to peripheral nerves can be the cause of persistent pain in the region of the ankle and foot. An early diagnosis is decisive in order to direct the patient to the correct therapy and to avoid or reduce potential damage. Although clinical examination and electrophysiological work-up are the most important elements in the diagnosis of peripheral nerve compression syndromes, imaging can be decisive when it comes to determining the level of the nerve lesion or narrowing the differential diagnosis. In certain cases imaging can even identify the cause of the nerve compression. In other cases imaging is important for treatment planning, in particular when the lesion is to be addressed surgically. Magnetic resonance imaging (MRI) and ultrasound enable

  18. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    J. Soraghan

    2007-01-01

    Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results show the MLVQ to be significantly better than JPEG 2000 and recent VQ techniques for various test images.
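
    For readers unfamiliar with lattice quantization, the sketch below snaps a vector to the nearest point of the D_n lattice (integer vectors with an even coordinate sum) using the classic round-then-fix rule; the multistage structure, LVQ scale factor, subband thresholding and Golomb coding of the paper are not reproduced here.

      import numpy as np

      def nearest_dn_point(x):
          # Round every coordinate, then fix the parity if the sum is odd by
          # re-rounding the coordinate with the largest rounding error.
          f = np.rint(x)
          if int(f.sum()) % 2 != 0:
              i = np.argmax(np.abs(x - f))
              f[i] += 1.0 if x[i] > f[i] else -1.0
          return f

      def lvq_quantize(vec, scale):
          # Scale, snap to the lattice, rescale: one lattice-VQ stage.
          return nearest_dn_point(np.asarray(vec, float) / scale) * scale

      print(lvq_quantize([0.7, -1.2, 2.9, 0.1], scale=0.5))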

  19. A New Multistage Lattice Vector Quantization with Adaptive Subband Thresholding for Image Compression

    Directory of Open Access Journals (Sweden)

    Salleh MFM

    2007-01-01

    Lattice vector quantization (LVQ) reduces coding complexity and computation due to its regular structure. A new multistage LVQ (MLVQ) using an adaptive subband thresholding technique is presented and applied to image compression. The technique concentrates on reducing the quantization error of the quantized vectors by "blowing out" the residual quantization errors with an LVQ scale factor. The significant coefficients of each subband are identified using an optimum adaptive thresholding scheme for each subband. A variable-length coding procedure using Golomb codes is used to compress the codebook index, which produces a very efficient and fast technique for entropy coding. Experimental results show the MLVQ to be significantly better than JPEG 2000 and recent VQ techniques for various test images.

  20. Learning-based compressed sensing for infrared image super resolution

    Science.gov (United States)

    Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi

    2016-05-01

    This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high-resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low-resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multiple sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.

  1. Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.

    Science.gov (United States)

    Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo

    2018-01-01

    The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time of magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. The LR approximation also improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to an LR approximation alone and to an alternating direction method of multipliers approach without an LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to the other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
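
    The low rank step itself is easy to illustrate: the dictionary of simulated signal evolutions is compressed with a truncated SVD so that matching (and, in the full method, the reconstruction) operates on a handful of singular components instead of hundreds of time points. The sketch below shows only that compression step, on a synthetic stand-in dictionary constructed to be approximately low rank, as real MRF dictionaries are.

      import numpy as np

      rng = np.random.default_rng(0)
      n_timepoints, n_atoms, rank = 600, 3000, 10

      # Stand-in dictionary: columns are signal evolutions with strong temporal correlation.
      D = rng.standard_normal((n_timepoints, 12)) @ rng.standard_normal((12, n_atoms))

      # Truncated SVD defines the low rank temporal subspace.
      U, s, Vt = np.linalg.svd(D, full_matrices=False)
      Ur = U[:, :rank]                        # kept temporal basis vectors

      D_c = Ur.T @ D                          # rank x n_atoms compressed dictionary
      signal = D[:, 42] + 0.01 * rng.standard_normal(n_timepoints)
      s_c = Ur.T @ signal                     # measured evolution projected into the subspace

      scores = np.abs(D_c.T @ s_c) / np.linalg.norm(D_c, axis=0)
      print("matched atom:", int(np.argmax(scores)))   # expected: 42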

  2. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    Science.gov (United States)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above
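
    A toy version of the Gaussian set-up discussed above: Gaussian fingerprints are embedded additively, a subset of users mounts an averaging collusion attack followed by additive Gaussian noise (the "test channel"), and a plain correlation detector scores each user. This is only meant to make the threat model concrete; it does not reproduce the capacity analysis.

      import numpy as np

      rng = np.random.default_rng(0)
      n, n_users, n_colluders = 4096, 20, 5
      host = 10.0 * rng.standard_normal(n)

      fingerprints = rng.standard_normal((n_users, n))   # one Gaussian fingerprint per user
      copies = host + fingerprints                        # fingerprinted copies

      guilty = rng.choice(n_users, n_colluders, replace=False)
      pirate = copies[guilty].mean(axis=0) + 0.5 * rng.standard_normal(n)

      # Correlation detector against the residual (host assumed known to the detector).
      scores = fingerprints @ (pirate - host) / n
      accused = np.argsort(scores)[-n_colluders:]
      print("guilty: ", sorted(guilty.tolist()))
      print("accused:", sorted(accused.tolist()))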

  3. Compression-Based Tools for Navigation with an Image Database

    Directory of Open Access Journals (Sweden)

    Giovanni Motta

    2012-01-01

    We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. The second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.

  4. MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout.

    Science.gov (United States)

    Jiang, Yun; Ma, Dan; Seiberlich, Nicole; Gulani, Vikas; Griswold, Mark A

    2015-12-01

    This study explores the possibility of using gradient-echo-based sequences other than balanced steady-state free precession (bSSFP) in the magnetic resonance fingerprinting (MRF) framework to quantify the relaxation parameters. An MRF method based on a fast imaging with steady-state precession (FISP) sequence structure is presented. A dictionary containing possible signal evolutions within the physiological range of T1 and T2 was created using the extended phase graph formalism according to the acquisition parameters. The proposed method was evaluated in a phantom and a human brain. T1, T2, and proton density were quantified directly from the undersampled data by the pattern recognition algorithm. T1 and T2 values from the phantom demonstrate that the results of MRF-FISP are in good agreement with traditional gold-standard methods. T1 and T2 values in the brain are within the range of previously reported values. MRF-FISP enables fast and accurate quantification of the relaxation parameters. It is immune to the banding artifacts of bSSFP due to B0 inhomogeneities, which could improve the ability to use MRF for applications beyond brain imaging. © 2014 Wiley Periodicals, Inc.
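
    The pattern recognition step mentioned above is essentially a normalized inner-product search over the dictionary; the minimal sketch below uses a random stand-in dictionary and made-up T1/T2 grids purely to show the matching mechanics.

      import numpy as np

      rng = np.random.default_rng(0)
      t1_grid = np.arange(100, 3000, 20)       # ms, illustrative grid
      t2_grid = np.arange(10, 300, 5)          # ms, illustrative grid
      n_tr = 600                               # time points in the MRF acquisition

      entries = [(t1, t2) for t1 in t1_grid for t2 in t2_grid if t2 < t1]
      D = rng.standard_normal((len(entries), n_tr))        # stand-in signal evolutions
      D /= np.linalg.norm(D, axis=1, keepdims=True)        # unit-norm dictionary atoms

      def match(signal):
          # Pick the entry whose evolution best correlates with the measured signal.
          idx = int(np.argmax(np.abs(D @ (signal / np.linalg.norm(signal)))))
          t1, t2 = entries[idx]
          pd = np.linalg.norm(signal)          # proton density ~ scale vs. the unit-norm atom
          return t1, t2, pd

      voxel = 2.5 * D[1234] + 0.05 * rng.standard_normal(n_tr)
      print(match(voxel))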

  5. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    The method uses two 8 bit H.264/AVC codecs, which are usually available in efficient implementations, and compares their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can

  6. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    Science.gov (United States)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through the segmentation of lesions. The algorithm distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.

  7. Dealing with Insufficient Location Fingerprints in Wi-Fi Based Indoor Location Fingerprinting

    Directory of Open Access Journals (Sweden)

    Kai Dong

    2017-01-01

    The development of the Internet of Things has accelerated research into the indoor location fingerprinting technique, which provides value-added localization services for existing WLAN infrastructures without the need for any specialized hardware. The deployment of a fingerprinting-based localization system requires an extremely large number of measurements of received signal strength information to generate a location fingerprint database. Nonetheless, this requirement can rarely be satisfied in most indoor environments. In this paper, we target a particular but common situation in which the collected measurements of received signal strength information are insufficient, and show the limitations of existing location fingerprinting methods in dealing with inadequate location fingerprints. We also introduce a novel method to reduce noise in the measured received signal strength based on maximum likelihood estimation, and compute locations from inadequate location fingerprints by using the stochastic gradient descent algorithm. Our experimental results show that the proposed method can achieve better localization performance even when only a small quantity of RSS measurements is available. Especially when the number of observations at each location is small, our proposed method has an evident superiority in localization accuracy.

  8. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the more time saved. In such communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple and single compression, which will help in finding better compression outputs and in developing compression algorithms.
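
    The single-versus-multiple-compression comparison is easy to reproduce with a general-purpose lossless codec; the sketch below runs zlib over a block of text once and twice and reports the sizes (a second pass over already-compressed data usually gains little or even loses), which is the kind of comparison the paper discusses.

      import zlib

      text = ("business data processing generates large volumes of repetitive text " * 200).encode()

      once = zlib.compress(text, level=9)
      twice = zlib.compress(once, level=9)     # "multiple compression": compress the output again

      print("original size :", len(text))
      print("compressed x1 :", len(once))
      print("compressed x2 :", len(twice))
      assert zlib.decompress(zlib.decompress(twice)) == text   # lossless round trip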

  9. Preclinical Magnetic Resonance Fingerprinting (MRF) at 7 T: Effective Quantitative Imaging for Rodent Disease Models

    Science.gov (United States)

    Gao, Ying; Chen, Yong; Ma, Dan; Jiang, Yun; Herrmann, Kelsey A.; Vincent, Jason A.; Dell, Katherine M.; Drumm, Mitchell L.; Brady-Kalnay, Susann M.; Griswold, Mark A.; Flask, Chris A.; Lu, Lan

    2015-01-01

    High field, preclinical magnetic resonance imaging (MRI) scanners are now commonly used to quantitatively assess disease status and efficacy of novel therapies in a wide variety of rodent models. Unfortunately, conventional MRI methods are highly susceptible to respiratory and cardiac motion artifacts resulting in potentially inaccurate and misleading data. We have developed an initial preclinical, 7.0 T MRI implementation of the highly novel Magnetic Resonance Fingerprinting (MRF) methodology that has been previously described for clinical imaging applications. The MRF technology combines a priori variation in the MRI acquisition parameters with dictionary-based matching of acquired signal evolution profiles to simultaneously generate quantitative maps of T1 and T2 relaxation times and proton density. This preclinical MRF acquisition was constructed from a Fast Imaging with Steady-state Free Precession (FISP) MRI pulse sequence to acquire 600 MRF images with both evolving T1 and T2 weighting in approximately 30 minutes. This initial high field preclinical MRF investigation demonstrated reproducible and differentiated estimates of in vitro phantoms with different relaxation times. In vivo preclinical MRF results in mouse kidneys and brain tumor models demonstrated an inherent resistance to respiratory motion artifacts as well as sensitivity to known pathology. These results suggest that MRF methodology may offer the opportunity for quantification of numerous MRI parameters for a wide variety of preclinical imaging applications. PMID:25639694

  10. Design And Implementation Of Bank Locker Security System Based On Fingerprint Sensing Circuit And RFID Reader

    Directory of Open Access Journals (Sweden)

    Khaing Mar Htwe

    2015-07-01

    The main goal of this system is to design a locker security system using RFID and fingerprint recognition. In this system, only an authenticated person can open the door. The security system implements door locking using passive RFID, which can activate, authenticate and validate the user and unlock the door in real time for secure access. The advantage of using passive RFID is that it functions without a battery, and passive tags are lighter and less expensive than active tags. The system consists of a fingerprint reader, a microcontroller, an RFID reader and a PC. The RFID reader reads the ID number from the passive tag. The fingerprint sensor checks the incoming image against the enrolled data and sends a confirmation signal to the C program. If both the RFID check and the fingerprint image confirmation match, the microcontroller drives the door motor according to the sensors at the door edges. This system is more secure than other systems because a two-code protection method is used.

  11. Detection of cores in fingerprints with improved dimension reduction

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.

    In this paper, we present a statistical approach to core detection in fingerprint images that is based on the likelihood ratio, using models of variation of core templates and randomly chosen templates. Additionally, we propose an alternative dimension reduction method. Unlike standard linear

  12. On the Separation of Quantum Noise for Cardiac X-Ray Image Compression

    NARCIS (Netherlands)

    de Bruijn, F.J.; Slump, Cornelis H.

    1996-01-01

    In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors.

  13. Fast downscaled inverses for images compressed with M-channel lapped transforms.

    Science.gov (United States)

    de Queiroz, R L; Eschbach, R

    1997-01-01

    Compressed images may be decompressed and displayed or printed using different devices at different resolutions. Full decompression followed by rescaling in the space domain is a very expensive method. We studied downscaled inverses in which the image is decompressed partially and a reduced inverse transform is used to recover the image. In this fashion, fewer transform coefficients are used and the synthesis process is simplified. We studied the design of fast inverses for a given forward transform. General solutions are presented for M-channel finite impulse response (FIR) filterbanks, of which block and lapped transforms are a subset. Designs of faster inverses are presented for popular block and lapped transforms.
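
    For block transforms the idea of a downscaled inverse is easy to see with the DCT: keep only the low-frequency quarter of a block's coefficients and apply a smaller inverse transform, producing a half-resolution block directly. The sketch below does this for a single 8x8 block with orthonormal DCTs; lapped transforms require the more general filterbank solutions developed in the paper.

      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(0)
      block = rng.random((8, 8)) * 255
      coeffs = dctn(block, norm="ortho")            # what a block codec would have stored

      # Downscaled inverse: 4x4 low-frequency corner + 4x4 inverse DCT.
      # The factor 0.5 (sqrt(4/8) per dimension) keeps the block mean consistent.
      half_res = idctn(coeffs[:4, :4] * 0.5, norm="ortho")

      print("original mean:", block.mean(), " half-res mean:", half_res.mean())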

  14. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  15. Collusion-resistant audio fingerprinting system in the modulated complex lapped transform domain.

    Directory of Open Access Journals (Sweden)

    Jose Juan Garcia-Hernandez

    The collusion-resistant fingerprinting paradigm seems to be a practical solution to the piracy problem, as it allows media owners to detect any unauthorized copy and trace it back to the dishonest users. Despite the billions lost by the music industry, most collusion-resistant fingerprinting systems are devoted to digital images and very few to audio signals. In this paper, state-of-the-art collusion-resistant fingerprinting ideas are extended to audio signals, and the corresponding parameters and operating conditions are proposed. Moreover, in order to carry out fingerprint detection using just a fraction of the pirated audio clip, block-based embedding and its corresponding detector are proposed. Extensive simulations show the robustness of the proposed system against the average collusion attack. Moreover, by using an efficient Fast Fourier Transform core and standard computer machines, it is shown that the proposed system is suitable for real-world scenarios.

  16. An image compression method for space multispectral time delay and integration charge coupled device camera

    International Nuclear Information System (INIS)

    Li Jin; Jin Long-Xu; Zhang Ran-Feng

    2013-01-01

    Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit-plane extractor to parse the differences in the original image and its wavelet-transformed coefficients. The output of the bit-plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
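
    The bit-plane extractor mentioned above is a very small operation on its own; a sketch for an 8-bit band is shown below (the wavelet transform, entropy coder and LDPC-based Slepian-Wolf stages of the scheme are not shown).

      import numpy as np

      def bit_planes(img, n_bits=8):
          # Split an unsigned 8-bit image into binary bit planes, most significant first.
          img = img.astype(np.uint8)
          return [(img >> b) & 1 for b in range(n_bits - 1, -1, -1)]

      rng = np.random.default_rng(0)
      band = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

      planes = bit_planes(band)
      rebuilt = sum(p.astype(np.uint16) << (7 - i) for i, p in enumerate(planes)).astype(np.uint8)
      assert np.array_equal(rebuilt, band)          # the decomposition is lossless
      print("planes per band:", len(planes))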

  17. Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression

    Science.gov (United States)

    Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin

    1994-04-01

    The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.

  18. A parallelizable compression scheme for Monte Carlo scatter system matrices in PET image reconstruction

    International Nuclear Information System (INIS)

    Rehfeld, Niklas; Alber, Markus

    2007-01-01

    Scatter correction techniques in iterative positron emission tomography (PET) reconstruction increasingly utilize Monte Carlo (MC) simulations which are very well suited to model scatter in the inhomogeneous patient. Due to memory constraints the results of these simulations are not stored in the system matrix, but added or subtracted as a constant term or recalculated in the projector at each iteration. This implies that scatter is not considered in the back-projector. The presented scheme provides a method to store the simulated Monte Carlo scatter in a compressed scatter system matrix. The compression is based on parametrization and B-spline approximation and allows the formation of the scatter matrix based on low statistics simulations. The compression as well as the retrieval of the matrix elements are parallelizable. It is shown that the proposed compression scheme provides sufficient compression so that the storage in memory of a scatter system matrix for a 3D scanner is feasible. Scatter matrices of two different 2D scanner geometries were compressed and used for reconstruction as a proof of concept. Compression ratios of 0.1% could be achieved and scatter induced artifacts in the images were successfully reduced by using the compressed matrices in the reconstruction algorithm
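
    The parametrize-and-approximate idea can be illustrated on one row of a noisy, low-statistics scatter estimate: fit a smoothing B-spline and store only its knots and coefficients, evaluating the spline whenever a matrix element is needed. The row model and smoothing factor below are purely illustrative.

      import numpy as np
      from scipy.interpolate import splrep, splev

      rng = np.random.default_rng(0)
      n_bins = 256
      x = np.arange(n_bins, dtype=float)

      # Low-statistics Monte Carlo estimate of one scatter-matrix row: smooth shape + noise.
      true_row = np.exp(-((x - 90.0) / 45.0) ** 2)
      noisy_row = true_row + 0.05 * rng.standard_normal(n_bins)

      tck = splrep(x, noisy_row, s=n_bins * 0.05 ** 2)   # compressed form: knots + coefficients
      t, c, k = tck
      print("stored values:", len(t) + len(c), "instead of", n_bins)

      recovered = splev(x, tck)                           # element retrieval = spline evaluation
      print("RMS vs. true row:", np.sqrt(np.mean((recovered - true_row) ** 2)))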

  19. Fingerprint separation: an application of ICA

    Science.gov (United States)

    Singh, Meenakshi; Singh, Deepak Kumar; Kalra, Prem Kumar

    2008-04-01

    Among all existing biometric techniques, fingerprint-based identification is the oldest method, which has been successfully used in numerous applications. Fingerprint-based identification is the most recognized tool in biometrics because of its reliability and accuracy. Fingerprint identification is done by matching questioned and known friction skin ridge impressions from fingers, palms, and toes to determine if the impressions are from the same finger (or palm, toe, etc.). There are many fingerprint matching algorithms which automate and facilitate the job of fingerprint matching, but for any of these algorithms matching can be difficult if the fingerprints are overlapped or mixed. In this paper, we have proposed a new algorithm for separating overlapped or mixed fingerprints so that the performance of the matching algorithms will improve when they are fed with these inputs. Independent Component Analysis (ICA) has been used as a tool to separate the overlapped or mixed fingerprints.
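
    A toy version of the separation idea: two synthetic "fingerprint" sources are linearly mixed into two observed images and FastICA recovers the independent components. Real overlapped prints are a single observation and far harder; this only illustrates the ICA machinery the paper builds on.

      import numpy as np
      from sklearn.decomposition import FastICA

      h, w = 64, 64
      yy, xx = np.mgrid[0:h, 0:w]
      s1 = np.sin(0.4 * xx)                        # vertical ridge pattern
      s2 = np.sin(0.4 * (xx + yy) / np.sqrt(2))    # diagonal ridge pattern
      S = np.vstack([s1.ravel(), s2.ravel()])      # sources as rows

      A = np.array([[0.7, 0.3],
                    [0.4, 0.6]])                   # mixing, e.g. two overlapped impressions
      X = A @ S                                    # observed mixtures

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(X.T).T         # estimated sources (up to order and scale)
      print("recovered shape:", recovered.shape)   # reshape each row to (h, w) to view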

  20. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more detail, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST uses a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
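
    For reference, the plain IST iteration that BAIST modifies looks like the sketch below: a gradient step on the data-fidelity term followed by soft thresholding. The backtracking step, nonlocal regularization and adaptive regularizer of BAIST are not included.

      import numpy as np

      def soft(v, t):
          # Soft thresholding: the shrinkage step of IST.
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ist(A, y, lam=0.05, n_iter=300):
          # Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative shrinkage-thresholding.
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft(x + A.T @ (y - A @ x) / L, lam / L)
          return x

      rng = np.random.default_rng(0)
      n, m, k = 200, 80, 5
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_hat = ist(A, A @ x_true)

      support_ok = np.array_equal(np.sort(np.argsort(-np.abs(x_hat))[:k]),
                                  np.sort(np.flatnonzero(x_true)))
      print("support recovered:", support_ok, " max error:", np.abs(x_hat - x_true).max())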