WorldWideScience

Sample records for robust image watermarking

  1. Copyright Protection of Color Imaging Using Robust-Encoded Watermarking

    Directory of Open Access Journals (Sweden)

    M. Cedillo-Hernandez

    2015-04-01

    In this paper we present a robust-encoded watermarking method applied to color images for copyright protection, which is robust against several geometric and signal-processing distortions. The trade-off between payload, robustness and imperceptibility is a very important aspect that has to be considered when a watermark algorithm is designed. In our proposed scheme, prior to being embedded into the image, the watermark signal is encoded with a convolutional encoder, whose forward error correction achieves better robustness. The embedding is then carried out in the discrete cosine transform (DCT) domain of the image, using an image-normalization technique to achieve robustness against geometric and signal-processing distortions. The embedded watermark bits are extracted and decoded using the Viterbi algorithm. To determine the presence or absence of the watermark in the image, we compute the bit error rate (BER) between the recovered and the original watermark data sequence. The quality of the watermarked image is measured using the well-known indices Peak Signal to Noise Ratio (PSNR), Visual Information Fidelity (VIF) and Structural Similarity Index (SSIM). The color difference between the watermarked and original images is obtained using the Normalized Color Difference (NCD) measure. The experimental results show that the proposed method performs well in terms of imperceptibility and robustness. A comparison between the proposed method and previously reported methods based on different techniques is also provided.
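
The convolutional pre-coding step described in this record can be sketched as follows; the rate-1/2 code with octal generators (7, 5) and constraint length 3 is an assumption, since the abstract does not name the code parameters (decoding would use the Viterbi algorithm, as the paper states):

```python
def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder with constraint length k.

    The generators (7, 5) octal are hypothetical; the abstract does not
    specify the code actually used.
    """
    state = 0
    out = []
    for b in bits:
        # Shift the new input bit into the encoder register.
        state = ((state << 1) | b) & ((1 << k) - 1)
        # Each input bit yields two parity bits (forward error correction).
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out
```

The doubled bit count is the price paid in payload for the extra robustness the abstract mentions.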

  2. Invertible chaotic fragile watermarking for robust image authentication

    International Nuclear Information System (INIS)

    Sidiropoulos, Panagiotis; Nikolaidis, Nikos; Pitas, Ioannis

    2009-01-01

    Fragile watermarking is a popular method for image authentication. In such schemes, a fragile signal that is sensitive to manipulations is embedded in the image, so that it becomes undetectable after any modification of the original work. Most algorithms focus either on the ability to retrieve the original work after watermark detection (invertibility) or on detecting which image parts have been altered (localization). Furthermore, the majority of fragile watermarking schemes suffer from robustness flaws. We propose a new technique that combines localization and invertibility. Moreover, the watermark's dependency on the original image and the non-linear watermark embedding procedure guarantee that no malicious attack will manage to create information leaks.

  3. Image-adaptive and robust digital wavelet-domain watermarking for images

    Science.gov (United States)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new wavelet-based frequency-domain watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image, and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image is designed to be image-adaptive. The meaningful and complementary watermark images are embedded into the original (host) image by odd-even quantization of selected coefficients, chosen from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. Tests show good robustness against well-known attacks such as noise addition, image compression, median filtering and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
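
The odd-even quantization embedding this record describes can be sketched as follows; the fixed step size is a placeholder, whereas the paper derives per-coefficient thresholds from JND values:

```python
def embed_bit(coeff, bit, step=8.0):
    """Embed one bit by forcing the parity of the quantization index.

    The constant step is an illustrative assumption; the paper adapts it
    to the Just Noticeable Difference of each wavelet coefficient.
    """
    q = round(coeff / step)
    if q % 2 != bit:
        # Move to the nearest quantization level of the opposite parity.
        q += 1 if coeff / step >= q else -1
    return q * step

def extract_bit(coeff, step=8.0):
    """Recover the bit from the parity of the nearest quantization index."""
    return int(round(coeff / step)) % 2
```

Extraction is blind (no host image needed), and any distortion smaller than half the step size leaves the recovered bit intact.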

  4. A Robust Image Watermarking in the Joint Time-Frequency Domain

    Directory of Open Access Journals (Sweden)

    Yalçın Çekiç

    2010-01-01

    With the rapid development of computers and internet applications, copyright protection of multimedia data has become an important problem. Watermarking techniques have been proposed as a solution for the copyright protection of digital media files. In this paper, a new, robust, and high-capacity watermarking method based on a spatiofrequency (SF) representation is presented. We use the discrete evolutionary transform (DET), calculated by the Gabor expansion, to represent an image in the joint SF domain. The watermark is embedded onto selected coefficients in the joint SF domain. Hence, by combining the advantages of spatial- and spectral-domain watermarking methods, a robust, invisible, secure, and high-capacity watermarking method is obtained. A correlation-based detector is also proposed to detect and extract any possible watermarks in an image. The proposed watermarking method was tested on commonly used test images under different signal-processing attacks such as additive noise, Wiener and median filtering, JPEG compression, rotation, and cropping. Simulation results show that our method is robust against all of these attacks.
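
A minimal sketch of a correlation-based detector of the kind described above, assuming an additive keyed ±1 spreading sequence and a fixed strength `ALPHA` (both hypothetical; the paper embeds in selected DET coefficients rather than a flat coefficient list):

```python
import random

ALPHA = 2.0  # embedding strength (assumed; would be tuned per image)

def embed(coeffs, key):
    """Add a keyed pseudo-random ±1 sequence to the selected coefficients."""
    rng = random.Random(key)
    return [c + ALPHA * rng.choice((-1.0, 1.0)) for c in coeffs]

def detect(coeffs, key, threshold=0.5):
    """Normalized correlation with the keyed sequence: ~1 if marked, ~0 if not."""
    rng = random.Random(key)
    w = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    corr = sum(c * wi for c, wi in zip(coeffs, w)) / (ALPHA * len(coeffs))
    return corr > threshold
```

Only the key is needed at detection time, which is what makes correlation detectors attractive for blind ownership checks.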

  5. A robust color image watermarking algorithm against rotation attacks

    Science.gov (United States)

    Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min

    2018-01-01

    A robust digital watermarking algorithm is proposed based on the quaternion wavelet transform (QWT) and the discrete cosine transform (DCT) for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and the coefficients of the four low-frequency subbands are then transformed by the DCT. An original binary watermark, scrambled by an Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. To improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is applied before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image-processing attacks but also against arbitrary rotation attacks.
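
The Arnold-map scrambling applied to the binary watermark in this record can be sketched as follows (a toy version of the scrambling step only; the iterated sine chaotic system is omitted):

```python
def arnold(img, iterations=1):
    """Scramble an N×N block with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N.

    The map is a bijection on the pixel grid, so iterating it eventually
    returns the original image; the iteration count acts as a key.
    """
    n = len(img)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
        img = out
    return img
```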

  6. Image segmentation-based robust feature extraction for color image watermarking

    Science.gov (United States)

    Li, Mianjie; Deng, Zeyu; Yuan, Xiaochen

    2018-04-01

    This paper proposes a local digital image watermarking method based on robust feature extraction. Segmentation is performed by Simple Linear Iterative Clustering (SLIC), on which an Image Segmentation-based Robust Feature Extraction (ISRFE) method is built. The method adaptively extracts feature regions from the blocks segmented by SLIC and can extract the most robust feature region in every segmented image. Each feature region is decomposed into a low-frequency and a high-frequency domain by the Discrete Cosine Transform (DCT), and the watermark images are embedded into the coefficients of the low-frequency domain. The Distortion-Compensated Dither Modulation (DC-DM) algorithm is chosen as the quantization method for embedding. The experimental results indicate that the method performs well under various attacks and achieves a trade-off between high robustness and good image quality.

  7. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    The Internet has affected our everyday life drastically. Expansive volumes of information are exchanged over the Internet constantly, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership and copyright can be settled by digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, making the technique more robust to attacks. The method uses entropy analysis, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition to embed the watermark in a color image. Many experiments are performed using well-known signal-processing attacks such as histogram equalization, noise addition and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal-processing attacks.

  8. Computationally Efficient Robust Color Image Watermarking Using Fast Walsh Hadamard Transform

    Directory of Open Access Journals (Sweden)

    Suja Kalarikkal Pullayikodi

    2017-10-01

    A watermark is a copy-deterrence mechanism embedded in a multimedia signal, to be protected from hacking and piracy, in such a way that it can later be extracted from the watermarked signal by a decoder. Watermarking can be used in applications such as authentication, video indexing, copyright protection and access control. In this paper a new Code Division Multiple Access (CDMA)-based robust watermarking algorithm using a customized 8 × 8 Walsh Hadamard Transform is proposed for color images, and a detailed performance and robustness analysis is carried out. The paper studies in detail the effect of the spreading-code length, the number of spreading codes and the type of spreading codes on the performance of the watermarking system. Compared to existing techniques the proposed scheme is computationally more efficient and takes much less time to execute. Furthermore, the proposed scheme is robust and survives most common signal-processing and geometric attacks.
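
The Walsh-Hadamard step is attractive precisely because it needs only additions and subtractions; a minimal sketch of the unnormalized fast transform (the customized 8 × 8 variant and the CDMA spreading layer of the paper are not reproduced here):

```python
def fwht(vec):
    """Fast Walsh-Hadamard transform of a power-of-two-length sequence.

    Unnormalized in-place butterfly form: applying it twice returns the
    input scaled by the sequence length, and no multiplications are
    needed, which is the source of the computational efficiency.
    """
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired elements.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```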

  9. A robust image watermarking in contourlet transform domain

    Science.gov (United States)

    Sharma, Rajat; Gupta, Abhishek Kumar; Singh, Deepak; Verma, Vivek Singh; Bhardwaj, Anuj

    2017-10-01

    A lot of work has been done in the field of image watermarking to address problems such as rightful ownership and copyright protection. To provide a robust solution to such issues, the authors propose a hybrid approach that involves the contourlet, lifting wavelet, and discrete cosine transforms. The first-level coefficients of the original image, obtained using the contourlet transform, are further decomposed using a one-level lifting wavelet transform, and these coefficients are then modified using the discrete cosine transform. The second-level contourlet subband coefficients are used to obtain a block-wise modification parameter based on edge detection and entropy calculations. Watermark bits are embedded by quantizing the discrete cosine transform coefficient blocks obtained from the HL subband of the first-level lifting wavelet transform coefficients. The experimental results reveal that the proposed scheme has high robustness and imperceptibility.

  10. Robust Fourier Watermarking for ID Images on Smart Card Plastic Supports

    Directory of Open Access Journals (Sweden)

    RIAD, R.

    2016-11-01

    Security checking can be improved by watermarking identity (ID) images printed on the plastic supports of smart cards. The major challenge is resistance to the attacks involved: printing the image on the plastic card, durability and other attacks, and then scanning the image back from the card. In this work, a robust watermarking technique is presented in this context. It is composed of three main mechanisms. The first is a watermarking algorithm based on the Fourier transform, to cope with global geometric distortions. The second is a filter that reduces image blurring. The third attenuates color degradations. Experiments on 400 ID images show that the Wiener filter strongly improves the detection rate and outperforms competing algorithms (blind deconvolution and unsharp filtering). Color corrections also enhance the watermarking score. The whole scheme has high efficiency and a low computational cost, making it compatible with the desired industrial constraints: the watermark is to be invisible, the error rate must be lower than 1%, and detection of the mark should be fast and simple for the user.

  11. Robust Digital Image Watermarking Against Cropping Using Sudoku Puzzle in Spatial and Transform Domain

    Directory of Open Access Journals (Sweden)

    Shadi Saneie

    2016-10-01

    With the rapid development of digital technology, protecting information such as copyright and confirming content ownership have become more important. In image watermarking, information is inserted into the image such that the visual quality of the image is not reduced and the receiver is able to recover the required information. Some attacks, such as image cropping, destroy the watermark's information. In this article, a new watermarking scheme is proposed that is robust against severe cropping. The proposed scheme uses the classic 9×9 Sudoku table. One feature of the Sudoku table is that its constraints cause a uniform scattering of symbols or numbers throughout the table. The scheme combines the Sudoku table with watermarking in both the spatial domain and transform domains such as DCT and DWT. The innovation of this scheme is that extraction does not depend on the particular Sudoku solution used at embedding: finding any correct solution suffices to recover the watermark. Robustness against the cropping attack is up to 92%, which shows the good and effective performance of the proposed scheme.

  12. Digital Image Watermarking in Transform Domains

    International Nuclear Information System (INIS)

    EL-Shazly, E.H.M.

    2012-01-01

    The fast development of the Internet and the availability of huge amounts of digital content make it easy to create, modify and copy digital media such as audio, video and images. This creates a problem for the owners of that content, so a copyright-protection tool became essential. Encryption was proposed first, but it ensures protection during transmission only; once decryption has occurred, anyone can modify the data. Watermarking was then introduced as a solution to this problem. Watermarking is the process of inserting a low-energy signal into a high-energy one so that it does not affect the main signal's features. A good digital image watermarking technique should satisfy four requirements: 1) embedding a watermark should not degrade the host image's visual quality (imperceptibility); 2) the embedded watermark should stick to the host image so that it cannot be removed by common image-processing operations and can be extracted from the attacked watermarked image (robustness); 3) knowing the embedding and extraction procedures should not be enough to extract the watermark; extra keys should be needed (security); 4) the watermarking technique should allow embedding and extraction of more than one watermark, each independent of the others (capacity). This thesis presents a watermarking scheme that fulfills the four requirements by combining transform domains with the Fractional Fourier Transform (FracFT) domain. Further work cascades the Discrete Wavelet Transform (DWT) with the FracFT to develop a joint transform called the Fractional Wavelet Transform (FWT). The proposed schemes were tested with different image-processing attacks to verify their robustness. Finally, the watermarked image is transmitted over a simulated MC-CDMA channel to prove robustness under realistic transmission conditions.

  13. A Self-embedding Robust Digital Watermarking Algorithm with Blind Detection

    Directory of Open Access Journals (Sweden)

    Gong Yunfeng

    2014-08-01

    To achieve fully blind detection in robust watermarking, a novel self-embedding robust digital watermarking algorithm with blind detection is proposed in this paper. First the original image is divided into non-overlapping image blocks and the coefficients of each block are obtained by a lifting-based wavelet transform (LWT). Next the low-frequency coefficients of the blocks are selected and approximately represented as the product of a base matrix and a coefficient matrix using non-negative matrix factorization (NMF). A feature vector representing the original image is then obtained by quantizing the coefficient matrix, and finally an adaptive quantization of the robust watermark is embedded in the low-frequency LWT coefficients. Experimental results show that the scheme is robust against common signal-processing attacks while achieving fully blind detection.

  14. Embedding Color Watermarks in Color Images

    Directory of Open Access Journals (Sweden)

    Wu Tung-Lin

    2003-01-01

    Robust watermarking with oblivious detection is essential to practical copyright protection of digital images. Effective exploitation of the characteristics of human visual perception of color stimuli helps to develop a watermarking scheme that fulfills this requirement. In this paper, an oblivious watermarking scheme that embeds color watermarks in color images is proposed. Through color gamut analysis and quantizer design, color watermarks are embedded by modifying the quantization indices of color pixels without causing perceivable distortion. Only a small amount of information, including the specification of the color gamut, the quantizer step size, and the color tables, is required to extract the watermark. Experimental results show that the proposed watermarking scheme is computationally simple and quite robust in the face of various attacks such as cropping, low-pass filtering, white-noise addition, scaling, and JPEG compression with high compression ratios.

  15. Cryptanalysis and Improvement of the Robust and Blind Watermarking Scheme for Dual Color Image

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2015-01-01

    With more color images being widely used on the Internet, research on embedding a color watermark image into a color host image has been receiving more attention. Recently, Su et al. proposed a robust and blind watermarking scheme for dual color images whose main innovation is the use of a two-level DCT. However, this paper demonstrates that the original scheme in Su's study is not secure and can be attacked by our proposed method. In addition, some errors in the original scheme are pointed out, and an improvement is presented to enhance the security of the original watermarking scheme. The proposed method is confirmed by both theoretical analysis and experimental results.

  16. Ambiguity attacks on robust blind image watermarking scheme based on redundant discrete wavelet transform and singular value decomposition

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2017-12-01

    Among the emerging applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm which show that it fails when used for ownership applications such as owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform

  17. A Reliable Image Watermarking Scheme Based on Redistributed Image Normalization and SVD

    Directory of Open Access Journals (Sweden)

    Musrrat Ali

    2016-01-01

    Digital image watermarking is the process of concealing secret information in a digital image to protect its rightful ownership. Most existing block-based singular value decomposition (SVD) digital watermarking schemes are not robust to geometric distortions, such as rotation by an integer multiple of ninety degrees and image flipping, which change the locations of the pixels but do not change their intensities. These schemes also use a constant scaling factor, giving the same weight to coefficients of different magnitudes, which results in visible distortion in some regions of the watermarked image. To overcome these problems, this paper proposes a novel image watermarking scheme that incorporates redistributed image normalization and a variable scaling factor depending on the magnitude of the coefficient being embedded. Furthermore, to enhance security and robustness, the watermark is shuffled using a piecewise linear chaotic map before embedding. To investigate the robustness of the scheme, several attacks are applied that seriously distort the watermarked image. Empirical analysis of the results demonstrates the efficiency of the proposed scheme.
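
The piecewise linear chaotic map shuffling mentioned above can be sketched as follows; the control parameter `p` and seed `x0` play the role of the secret key (the values here are arbitrary examples, not the paper's):

```python
def pwlcm(x, p=0.3):
    """One iteration of the piecewise linear chaotic map on (0, 1)."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric about 0.5

def shuffle_indices(n, x0=0.37, p=0.3):
    """Key-dependent permutation: rank the values of the chaotic orbit."""
    orbit, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        orbit.append(x)
    return sorted(range(n), key=orbit.__getitem__)
```

Because the orbit depends sensitively on `x0` and `p`, an attacker without the key cannot reproduce the permutation used to shuffle the watermark bits.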

  18. The comparison between SVD-DCT and SVD-DWT digital image watermarking

    Science.gov (United States)

    Wira Handito, Kurniawan; Fauzi, Zulfikar; Aminy Ma’ruf, Firda; Widyaningrum, Tanti; Muslim Lhaksmana, Kemas

    2018-03-01

    With the Internet, anyone can publish their creations as digital data simply, inexpensively, and in a way that is easy for everyone to access. The problem appears when someone else claims the creation as their property or modifies part of it. This makes copyright protection necessary; one example is watermarking of digital images. Applying a watermarking technique to digital data, especially images, can make the watermark invisible once inserted in the carrier image, ideally without degrading the carrier image's quality and with the inserted image surviving attacks. In this paper, watermarking is implemented on digital images using Singular Value Decomposition (SVD) based on the Discrete Wavelet Transform (DWT) and the Discrete Cosine Transform (DCT), with the expectation of good watermarking performance. A trade-off occurs between the invisibility and the robustness of the image watermark. In the embedding process, the watermarked image has good quality for scaling factors < 0.1. The quality of the watermarked image at decomposition level 3 is better than at levels 2 and 1. Embedding the watermark in the low-frequency band is robust to Gaussian blur, rescaling, and JPEG compression, while embedding in the high-frequency band is robust to Gaussian noise.

  19. A Color Image Watermarking Scheme Resistant against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y. Xing

    2010-04-01

    Geometrical attacks are still a problem for many digital watermarking algorithms at present. In this paper, we propose a watermarking algorithm for color images resistant to geometrical distortions (rotation and scaling). The singular value decomposition is used for watermark embedding and extraction. Log-polar mapping (LPM) and the phase-correlation method are used to register the geometrical distortion suffered by the watermarked image. Experiments with different kinds of color images and watermarks demonstrate that the watermarking algorithm is robust to common image-processing attacks, especially geometrical attacks.

  20. A Novel Medical Image Watermarking in Three-dimensional Fourier Compressed Domain

    Directory of Open Access Journals (Sweden)

    Baoru Han

    2015-09-01

    Digital watermarking is a research hotspot in the field of image security, protecting digital image copyright. To ensure the security of medical image information, a novel medical image digital watermarking algorithm in the three-dimensional Fourier compressed domain is proposed. The algorithm takes advantage of the characteristics of the three-dimensional Fourier compressed domain, the encryption features of a Legendre chaotic neural network and the robustness of difference hashing; it is a robust zero-watermarking algorithm. On one hand, the original watermark image is encrypted to enhance security, implemented with a Legendre chaotic neural network. On the other hand, the zero-watermark is constructed by difference hashing in the three-dimensional Fourier compressed domain. The algorithm does not need to select a region of interest, so the medical image content is left unaffected. The specific implementation of the algorithm and the experimental results are given in the paper. The simulation results testify that the novel algorithm possesses desirable robustness to common and geometric attacks.
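
The difference hashing underlying the zero-watermark can be illustrated with a small 2-D sketch (the paper hashes three-dimensional Fourier compressed-domain data; this stand-in only shows why difference hashes are robust to uniform intensity changes):

```python
def dhash(img):
    """Difference hash: one bit per horizontally adjacent pixel pair.

    Only the *sign* of each difference is kept, so any monotone change
    such as a uniform brightness shift leaves the hash untouched.
    """
    return [1 if a > b else 0
            for row in img
            for a, b in zip(row, row[1:])]

def hamming(h1, h2):
    """Number of differing bits, used to judge robustness of the hash."""
    return sum(a != b for a, b in zip(h1, h2))
```

In a zero-watermarking scheme the host image is never modified; instead such a hash is combined (e.g. XORed) with the encrypted watermark and registered with a third party.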

  1. AN EFFICIENT ROBUST IMAGE WATERMARKING BASED ON AC PREDICTION TECHNIQUE USING DCT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Gaurav Gupta

    2015-08-01

    The expansion of technology has created several simple ways to manipulate original content, raising concern for the security of content that is easily available on open networks. Digital watermarking is the most suitable solution to this issue. Digital watermarking is the art of inserting a logo into a multimedia object to provide proof of ownership whenever it is required. The proposed algorithm is useful for authorized distribution and ownership verification. The algorithm uses AC prediction in the DCT domain to embed the watermark in the image. The algorithm has excellent robustness against all the tested attacks and outperforms similar work, with admirable performance in terms of Normalized Correlation (NC), Peak Signal to Noise Ratio (PSNR) and Tamper Assessment Function (TAF).

  2. An image adaptive, wavelet-based watermarking of digital images

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way: copied, modified and redistributed. Copyright protection, protection of the intellectual and material rights of authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm called WM2.0 for an invisible watermark: private, strong, wavelet-based and developed for the protection and authenticity of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system directives. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked images. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the algorithm to be resistant to geometric, filtering and StirMark attacks with a low false-alarm rate.

  3. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    Directory of Open Access Journals (Sweden)

    Mohamed M. Ibrahim

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. The technique is based on image interlacing: a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  4. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. The technique is based on image interlacing: a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  5. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files. This makes unauthorized use of digital media possible, and without adequate protection systems authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can include copyright data, access control, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.

  6. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms; designing a robust image watermarking scheme against local geometric distortions remains a challenging problem. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, robust feature points, which can survive various common image-processing operations and global affine transforms, are extracted using a multi-scale SIFT (scale-invariant feature transform) detector. Then, affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark to the affine covariant LFRs, watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image-processing operations such as sharpening, noise addition, and JPEG compression, but also robust against global affine transforms and local geometric distortions.

  7. A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map

    International Nuclear Information System (INIS)

    Xiao Di; Cai Hong-Kun; Zheng Hong-Ying

    2015-01-01

    In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform domain coefficients of the original image are scrambled by Arnold map firstly. Then the watermark is adhered to the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowing the original image; similarly, watermark extraction will not interfere with decryption. Due to the characteristics of CS, this algorithm features compressible cipher image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in the sense of security, watermark capacity, extraction accuracy, reconstruction, robustness, etc. (paper)
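
The Arnold-map scrambling used in the first step is a standard, exactly invertible permutation of an N x N image. A minimal sketch (not the paper's implementation; the iteration count is a key parameter chosen by the user):

```python
import numpy as np

def arnold(img, iterations=1):
    """Scramble an N x N image with the Arnold cat map (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scr = np.empty_like(out)
        scr[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scr
    return out

def arnold_inverse(img, iterations=1):
    """Undo arnold() with the inverse matrix: (u, v) -> (2u - v, v - u) mod N."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scr = np.empty_like(out)
        scr[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = scr
    return out
```

Since the map matrix has determinant 1, every pixel position maps to a unique new position, which is why the scrambling is lossless and reversible.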

  8. Dual-tree complex wavelet for medical image watermarking

    International Nuclear Information System (INIS)

    Mavudila, K.R.; Ndaye, B.M.; Masmoudi, L.; Hassanain, N.; Cherkaoui, M.

    2010-01-01

    In order to transmit medical data between hospitals, we insert into each image the patient's information and diagnosis; watermarking consists of inserting a message into the image and later recovering it with the maximum possible fidelity. This paper presents a blind watermarking scheme in the dual-tree wavelet transform domain, which increases robustness while preserving image quality. The system is transparent to the user and allows image integrity control. In addition, it provides information on the location of potential alterations and an evaluation of image modifications, which is of major importance in a medico-legal framework. An example using head magnetic resonance and mammography images illustrates the overall method. Wavelet techniques can be successfully applied in various image processing tasks, namely image denoising, segmentation, classification, watermarking and others. In this paper we discuss the application of the dual-tree complex wavelet transform (DT-CWT), which has significant advantages over the classic discrete wavelet transform (DWT) for certain image processing problems. The DT-CWT is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The main part of the paper is devoted to exploiting the exceptional quality of the DT-CWT, compared to the classical DWT, for blind medical image watermarking. Our schemes use bivariate shrinkage with local variance estimation, are robust to attacks and favourably preserve visual quality. Experimental results show that watermarks embedded using the DT-CWT give good image quality and are robust in comparison with the classical DWT.

  9. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Full Text Available Robust and reversible watermarking is a potential technique in many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain, shifting the histogram of these statistical values for data hiding. Firstly, the original audio is divided into non-overlapping equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
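
The frame feature described above (sum of prediction errors of each middle sample from its two neighbours) and the shifting-based embedding can be illustrated with a much-simplified, non-reversible sketch. The frame size, embedding strength `d` and decision threshold are illustrative assumptions, not the paper's values:

```python
import numpy as np

FRAME = 300            # samples per frame (a multiple of 3; illustrative)

def frame_feature(frame):
    """Sum of prediction errors of each middle sample from its two neighbours."""
    g = frame.reshape(-1, 3)
    return float(np.sum(2.0 * g[:, 1] - g[:, 0] - g[:, 2]))

def embed(audio, bits, d=0.02):
    """Shift the feature of bit-1 frames by nudging the middle sample of each group."""
    frames = audio.copy().reshape(-1, FRAME)
    for i, bit in enumerate(bits):
        if bit:
            frames[i].reshape(-1, 3)[:, 1] += d   # feature grows by 2 * d * (FRAME // 3)
    return frames.reshape(-1)

def extract(audio, nbits, thresh):
    """A frame whose feature exceeds the threshold decodes as bit 1."""
    frames = audio.reshape(-1, FRAME)
    return [int(frame_feature(f) > thresh) for f in frames[:nbits]]
```

The per-sample change `d` is tiny, but it accumulates over all groups in a frame, which is what makes a sum-of-errors feature robust to small, zero-mean distortions such as additive noise.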

  10. An Improved Method to Watermark Images Sensitive to Blocking Artifacts

    OpenAIRE

    Afzel Noore

    2007-01-01

    A new digital watermarking technique for images that are sensitive to blocking artifacts is presented. Experimental results show that the proposed MDCT based approach produces highly imperceptible watermarked images and is robust to attacks such as compression, noise, filtering and geometric transformations. The proposed MDCT watermarking technique is applied to fingerprints for ensuring security. The face image and demographic text data of an individual are used as multi...

  11. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    Science.gov (United States)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the cross-masking effect arising from the interaction between luminance and chrominance components, and the effect of the variance within the local region of the target coefficient, are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency of the watermarking scheme are obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.

  12. A new method for robust video watermarking resistant against key estimation attacks

    Science.gov (United States)

    Mitekin, Vitaly

    2015-12-01

    This paper presents a new method for high-capacity robust digital video watermarking, together with algorithms for watermark embedding and extraction based on this method. The proposed method uses password-based two-dimensional pseudonoise arrays for watermark embedding, making brute-force attacks aimed at steganographic key retrieval mostly impractical. The proposed algorithm for generating two-dimensional "noise-like" watermarking patterns also significantly decreases the watermark collision probability (i.e. the probability of correct watermark detection and extraction using an incorrect steganographic key or password). Experimental research in this work also shows that a simple correlation-based watermark detection procedure can be used, providing watermark robustness against lossy compression and watermark estimation attacks. At the same time, without decreasing the robustness of the embedded watermark, the average complexity of a brute-force key retrieval attack can be increased to 10^14 watermark extraction attempts (compared to 10^4-10^6 for known robust watermarking schemes). Experimental results also show that at the lowest embedding intensity the watermark preserves its robustness against lossy compression of the host video while preserving higher video quality (PSNR up to 51 dB) compared to known wavelet-based and DCT-based watermarking algorithms.
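
A password-derived pseudonoise pattern combined with correlation-based detection can be sketched as a toy additive spread-spectrum watermark. This is not the paper's algorithm; the SHA-256 seeding and the strength `alpha` are assumptions made for illustration:

```python
import hashlib
import numpy as np

def pn_pattern(password, shape):
    """Derive a deterministic +/-1 pseudonoise pattern from a password."""
    seed = int.from_bytes(hashlib.sha256(password.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=shape)

def embed(img, password, alpha=8.0):
    """Add the password's noise pattern to the image with strength alpha."""
    return img + alpha * pn_pattern(password, img.shape)

def detect(img, password):
    """Normalized correlation between the zero-mean image and the candidate pattern."""
    pn = pn_pattern(password, img.shape)
    r = img - img.mean()
    return float(np.sum(r * pn) / (np.linalg.norm(r) * np.linalg.norm(pn) + 1e-12))
```

With the correct password the correlation concentrates around `alpha / std(img)`, while a wrong password yields a near-zero value, which is why brute-force key search must test each candidate individually.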

  13. Digital watermark

    Directory of Open Access Journals (Sweden)

    Jasna Maver

    2000-01-01

    Full Text Available The huge amount of multimedia contents available on the World-Wide-Web is beginning to raise the question of their protection. Digital watermarking is a technique which can serve various purposes, including intellectual property protection, authentication and integrity verification, as well as visible or invisible content labelling of multimedia content. Due to the diversity of digital watermarking applicability, there are many different techniques, which can be categorised according to different criteria. A digital watermark can be categorised as visible or invisible and as robust or fragile. In contrast to the visible watermark where a visible pattern or image is embedded into the original image, the invisible watermark does not change the visual appearance of the image. The existence of such a watermark can be determined only through a watermark extraction or detection algorithm. The robust watermark is used for copyright protection, while the fragile watermark is designed for authentication and integrity verification of multimedia content. A watermark must be detectable or extractable to be useful. In some watermarking schemes, a watermark can be extracted in its exact form, in other cases, we can detect only whether a specific given watermarking signal is present in an image. Digital libraries, through which cultural institutions will make multimedia contents available, should support a wide range of service models for intellectual property protection, where digital watermarking may play an important role.

  14. Towards distortion-free robust image authentication

    International Nuclear Information System (INIS)

    Coltuc, D

    2007-01-01

    This paper investigates a general framework for distortion-free robust image authentication by multiple marking. First, a subsampled version of the image edges is embedded by robust watermarking. Then, the information needed to recover the original image is inserted by reversible watermarking. The hiding capacity of the reversible watermarking is the essential requirement for this approach. Thus, in the absence of attacks, not only is the image authenticated but the original is also exactly recovered. Under attacks, reversibility is lost, but the image can still be authenticated. Preliminary results providing very good robustness against JPEG compression are presented

  15. A dual adaptive watermarking scheme in contourlet domain for DICOM images

    Directory of Open Access Journals (Sweden)

    Rabbani Hossein

    2011-06-01

    Full Text Available Abstract Background Nowadays, medical imaging equipment produces medical images in digital form. In a modern health care environment, new systems such as PACS (picture archiving and communication systems) also use the digital form of medical images. The digital form of medical images has many advantages over the analog form, such as ease of storage and transmission. Medical images in digital form must be stored in a secured environment to preserve patient privacy. It is also important to detect modifications of the image. These objectives are achieved by watermarking the medical image. Methods In this paper, we present a dual and oblivious (blind) watermarking scheme in the contourlet domain. Because the ROI (region of interest) is more important for interpretation by medical doctors than the RONI (region of non-interest), we propose an adaptive dual watermarking scheme with different embedding strengths in the ROI and RONI. We embed watermark bits in the singular value vectors of the embedded blocks within the lowpass subband in the contourlet domain. Results The values of PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index) of the ROI for the proposed DICOM (digital imaging and communications in medicine) images in this paper are larger than 64 and 0.997, respectively. These values confirm that our algorithm has good transparency. Because of the different embedding strengths, the BER (bit error rate) values of the signature watermark are less than those of the caption watermark. Our results show that images watermarked in the contourlet domain have greater robustness against attacks than in the wavelet domain. In addition, the qualitative analysis of our method shows it has good invisibility. Conclusions The proposed contourlet-based watermarking algorithm uses an automatic selection of the ROI and embeds the watermark in the singular values of contourlet subbands, which makes the algorithm more efficient and robust against noise attacks than other transform

  16. Optical 3D watermark based digital image watermarking for telemedicine

    Science.gov (United States)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The proposed algorithm applies the watermarking technique to the non-ROI of the medical image, preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. In this paper, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data is embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data is badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  17. Robust Digital Speech Watermarking For Online Speaker Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2015-01-01

    Full Text Available A robust and blind digital speech watermarking technique is proposed for online speaker recognition systems, based on the Discrete Wavelet Packet Transform (DWPT) and multiplication to embed the watermark in the amplitudes of the wavelet subbands. In order to minimize the degradation effect of the watermark, the selected subbands are those carrying less speaker-specific information (500 Hz–3500 Hz and 6000 Hz–7000 Hz). Experimental results on the Texas Instruments Massachusetts Institute of Technology (TIMIT), Massachusetts Institute of Technology (MIT), and Mobile Biometry (MOBIO) corpora show that the degradation for speaker verification and identification is 1.16% and 2.52%, respectively. Furthermore, the proposed watermark technique provides sufficient robustness against different signal processing attacks.

  18. Robust and Secure Watermarking Using Sparse Information of Watermark for Biometric Data Protection

    Directory of Open Access Journals (Sweden)

    Rohit M Thanki

    2016-08-01

    Full Text Available Biometric-based human authentication systems are used for security purposes in many organizations in the present world. Such biometric authentication systems have several vulnerable points, two of which are the protection of biometric templates in the system database and the protection of biometric templates on the communication channel between two modules of the system. In this paper, a robust watermarking scheme using the sparse information of the watermark biometric is proposed to secure biometric templates on the communication channel of biometric authentication systems. A compressive sensing theory procedure is used to generate sparse information of the watermark biometric data from its detail wavelet coefficients. The sparse information of the watermark biometric data is then embedded into the DCT coefficients of the host biometric data. The proposed scheme is robust to common signal processing and geometric attacks such as JPEG compression, noise addition, filtering, cropping and histogram equalization, and offers more advantages and higher quality measures compared to existing schemes in the literature.

  19. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change; an illegally changed image could lead to a wrong medical decision. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
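
The LZW compression applied to the watermark is the classic dictionary coder. A compact reference implementation of the encode/decode pair (codes are left as a list of integers; packing them into a bitstream is omitted):

```python
def lzw_compress(data):
    """Classic LZW: grow a phrase dictionary, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = len(table)
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

def lzw_decompress(codes):
    """Rebuild the dictionary on the fly; handles the cScSc special case."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        entry = table.get(c, w + w[:1])
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

Because the decoder reconstructs the same dictionary from the code stream itself, no side information needs to be embedded alongside the compressed watermark, which is what makes LZW attractive for payload reduction here.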

  20. Robustness evaluation of transactional audio watermarking systems

    Science.gov (United States)

    Neubauer, Christian; Steinebach, Martin; Siebenhaar, Frank; Pickel, Joerg

    2003-06-01

    Distribution via the Internet is of increasing importance. Easy access, transmission and consumption of digitally represented music is very attractive to the consumer, but has also led directly to an increasing problem of illegal copying. To cope with this problem, watermarking is a promising concept since it provides a useful mechanism to track illicit copies by persistently attaching property-rights information to the material. Especially for online music distribution, the use of so-called transaction watermarking, also denoted by the term bitstream watermarking, is beneficial since it offers the opportunity to embed watermarks directly into perceptually encoded material without the need for full decompression/compression. Former publications presented the concept of bitstream watermarking along with its complexity, audio quality and detection performance. These results are now extended by an assessment of the robustness of such schemes. The detection performance before and after applying selected attacks is presented for MPEG-1/2 Layer 3 (MP3) and MPEG-2/4 AAC bitstream watermarking, contrasted with the performance of PCM spread-spectrum watermarking.

  1. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  2. A blind reversible robust watermarking scheme for relational databases.

    Science.gov (United States)

    Chang, Chin-Chen; Nguyen, Thai-Son; Lin, Chia-Chen

    2013-01-01

    Protecting the ownership and controlling the copies of digital data have become very important issues in Internet-based applications. Reversible watermark technology allows the distortion-free recovery of relational databases after the embedded watermark data are detected or verified. In this paper, we propose a new blind, reversible, robust watermarking scheme that can be used to provide proof of ownership for the owner of a relational database. In the proposed scheme, a reversible data-embedding algorithm referred to as "histogram shifting of adjacent pixel difference" (APD) is used to obtain reversibility. The proposed scheme can successfully detect 100% of the embedded watermark data, even if as much as 80% of the watermarked relational database is altered. Our extensive analysis and experimental results show that the proposed scheme is robust against a variety of data attacks, for example alteration attacks, deletion attacks, mix-match attacks, and sorting attacks.
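
Histogram shifting of adjacent differences, the reversibility mechanism named above, can be illustrated on a 1-D value sequence. This is a toy version operating on non-overlapping pairs, not the paper's database scheme; overflow handling for saturated values is omitted:

```python
def hs_embed(pixels, bits):
    """Embed bits in value pairs: d == 0 carries a bit, d > 0 is shifted up by one."""
    out, bi = list(pixels), 0
    for i in range(0, len(out) - 1, 2):
        d = out[i + 1] - out[i]
        if d > 0:
            out[i + 1] += 1              # shift the positive histogram bins to make room
        elif d == 0 and bi < len(bits):
            out[i + 1] += bits[bi]       # bit 1 -> d becomes 1, bit 0 -> d stays 0
            bi += 1
    return out

def hs_extract(pixels):
    """Recover the bits and restore the original values exactly."""
    orig, bits = list(pixels), []
    for i in range(0, len(orig) - 1, 2):
        d = orig[i + 1] - orig[i]
        if d == 1:
            bits.append(1)
            orig[i + 1] -= 1
        elif d == 0:
            bits.append(0)
        elif d > 1:
            orig[i + 1] -= 1             # undo the shift
    return orig, bits
```

The shift of all positive differences frees the `d == 1` bin, so after extraction every change can be undone deterministically; this is what gives the scheme its distortion-free (reversible) property.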

  3. Quantum color image watermarking based on Arnold transformation and LSB steganography

    Science.gov (United States)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng

    In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling with Arnold transformations and least significant bit (LSB) steganography. Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for carrier and watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through an image preprocessing technique that simultaneously exchanges the image pixel positions and alters the color information based on Arnold transforms. Then, the scrambled watermark with image size 2^(n-1)×2^(n-1) and 24-qubit grayscale is further expanded to an image of size 2^n×2^n with 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the LSB steganography scheme, and a key image of size 2^n×2^n carrying 3 qubits of information is generated at the same time; only with this key image can the original watermark be retrieved. The extraction of the watermark is the reverse process of embedding, achieved by applying a sequence of operations in the reverse order. Simulation-based experimental results involving different carrier and watermark images (i.e. conventional or non-quantum) are obtained with classical MATLAB 2014b software and illustrate that the present method performs well in terms of three criteria: visual quality, robustness and steganography capacity.
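
Classical LSB steganography, the embedding primitive that the quantum scheme mirrors, amounts to overwriting the lowest bit plane of the carrier. A classical (non-quantum) sketch:

```python
import numpy as np

def lsb_embed(carrier, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = carrier.reshape(-1).copy()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(carrier.shape)

def lsb_extract(image, nbits):
    """Read the payload back from the lowest bit plane."""
    return image.reshape(-1)[:nbits] & 1
```

Each pixel changes by at most 1 grey level, which is why LSB embedding is visually imperceptible while remaining trivially extractable by anyone who knows the embedding order, hence the scheme's reliance on prior Arnold scrambling and a key image.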

  4. Region of interest based robust watermarking scheme for adaptation in small displays

    Science.gov (United States)

    Vivekanandhan, Sapthagirivasan; K. B., Kishore Mohan; Vemula, Krishna Manohar

    2010-02-01

    Nowadays multimedia data can be easily replicated and its copyright is not legally protected. Cryptography does not allow the use of digital data in its original form, and once the data is decrypted it is no longer protected. Here we propose a new doubly protected digital image watermarking algorithm, which embeds the watermark image blocks into adjacent regions of the host image itself based on their block similarity coefficients. The scheme is robust to various noise effects such as Poisson noise, Gaussian noise and random noise, and thereby provides double security against noise and attackers. As instrumentation applications require very accurate data, the watermark image extracted back from the watermarked image must be immune to various noise effects. Our results provide a better extracted image compared to existing techniques, and in addition we have resized the watermarked image for various displays. Adaptive resizing for displays of various sizes is also demonstrated: we crop the required information in a frame and zoom it for a large display or shrink it for a small display using a threshold value; in either case the background is de-emphasized in favour of the foreground object of interest, which will surely be helpful in performing surgeries.

  5. Robust watermarking on copyright protection of digital originals

    Energy Technology Data Exchange (ETDEWEB)

    Gu, C; Hu, X Y, E-mail: guchong527@gmail.co, E-mail: emma_huxy@yahoo.com.c [College of Packaging and Printing Engineering, Tianjin University of Science and Technology, Tianjin, 300222 (China)

    2010-06-01

    The issues concerning the difference between digital vector originals and raster originals are discussed. A new algorithm based on displacing vertices is then proposed to realize the embedding and extraction of digital watermarks in vector data. The results show that the watermark produced by this method is resistant to translation, scaling, rotation and additive random noise; it is also resistant, to some extent, to cropping. This paper also modifies the DCT raster image watermarking algorithm, using a bitmap image as the watermark embedded into target images instead of meaningless serial numbers or simple symbols. The embedding and extraction parts of these two digital watermarking systems were implemented in software. Experiments prove that both algorithms are not only imperceptible, but also have strong resistance against common attacks, which can prove copyright more effectively.

  6. A wavelet domain adaptive image watermarking method based on chaotic encryption

    Science.gov (United States)

    Wei, Fang; Liu, Jian; Cao, Hanqiang; Yang, Jun

    2009-10-01

    Digital watermarking is a specific branch of steganography that can be used in various applications and provides a novel way to solve security problems for multimedia information. In this paper, we propose a wavelet-domain adaptive image watermarking method using chaotic stream encryption and properties of the human visual system. The secret information, which can be seen as a watermark, is hidden in a host image that can be publicly accessed, so the transportation of the secret information will not attract the attention of an illegal receiver. The experimental results show that the method is invisible and robust against some image processing operations.
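
Chaotic stream encryption of watermark data is typically realized by XOR-ing the data with a keystream generated from a chaotic map. A minimal logistic-map sketch (the map parameter `r` and seed `x0` are illustrative assumptions, not the paper's values):

```python
import numpy as np

def chaotic_keystream(x0, n, r=3.99):
    """Iterate the logistic map x -> r * x * (1 - x) and quantize each state to a byte."""
    x, ks = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) & 0xFF
    return ks

def chaotic_xor(data, x0=0.3141):
    """XOR cipher: applying it twice with the same seed restores the data."""
    return data ^ chaotic_keystream(x0, data.size)
```

The sensitivity of the logistic map to its initial condition means that even a tiny error in the seed produces an entirely different keystream, so the seed acts as the secret key.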

  7. Robust and Secure Watermarking Using Sparse Information of Watermark for Biometric Data Protection

    OpenAIRE

    Rohit M Thanki; Ved Vyas Dwivedi; Komal Borisagar

    2016-01-01

    Biometric based human authentication system is used for security purpose in many organizations in the present world. This biometric authentication system has several vulnerable points. Two of vulnerable points are protection of biometric templates at system database and protection of biometric templates at communication channel between two modules of biometric authentication systems. In this paper proposed a robust watermarking scheme using the sparse information of watermark biometric to sec...

  8. JPEG digital watermarking for copyright protection

    Directory of Open Access Journals (Sweden)

    Vitaliy G. Ivanenko

    2018-05-01

    Full Text Available With the rapid growth of multimedia technology, copyright protection has become a very important issue, especially for images. The advantages of easy photo distribution are undermined by possible theft and unauthorized usage on different websites. Therefore, there is a need to secure information with technical methods, for example digital watermarks. This paper reviews digital watermark embedding methods for image copyright protection; advantages and disadvantages of digital watermark usage are presented. Different watermarking algorithms are analyzed, and based on the analysis results the most effective algorithm is chosen: differential energy watermarking (DEW). It is noted that the method excels at providing image integrity. A digital watermark embedding system should prevent illegal access to the digital watermark and its container. Requirements for digital watermarks are produced and possible image attacks are reviewed. Modern modifications of embedding algorithms are studied, and the robustness of the differential energy watermark is investigated. Robustness is a specific measure whose formula is given later in the article. A modification of the DEW method is proposed and its advantages over the original algorithm are described. A digital watermark serves as an additional layer of defense which is in most cases unknown to the violator. The scope of studied image attacks includes compression, filtering and scaling. In conclusion, it is possible to use DEW watermarking for copyright protection; a violator can easily be detected if images with embedded information are exchanged.

  9. A new approach to pre-processing digital image for wavelet-based watermark

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to devise and develop methods and numerical algorithms, stable and of low computational cost, that offer a solution to these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with human visual system directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step with resizing techniques that adapt the original image's size for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant against geometric, filtering and StirMark attacks with a low false alarm rate.

  10. An Enhanced Data Integrity Model In Mobile Cloud Environment Using Digital Signature Algorithm And Robust Reversible Watermarking

    Directory of Open Access Journals (Sweden)

    Boukari Souley

    2017-10-01

    Full Text Available The use of hand-held devices such as smart phones to access multimedia content in the cloud is growing with the rise of information technology. Mobile cloud computing is increasingly used today because it allows users to access a variety of resources in the cloud, such as images, video, audio and software applications, with minimal usage of their devices' built-in resources such as storage memory. The major challenge faced by mobile cloud computing is security. Watermarking and digital signatures are techniques used to provide security and authentication for user data in the cloud. Watermarking is a technique used to embed digital data within multimedia content such as an image, video or audio in order to prevent unauthorized access to that content by intruders, whereas a digital signature is used to identify and verify user data when it is accessed. In this work we implement a digital signature and robust reversible image watermarking in order to enhance mobile cloud computing security and data integrity by providing two authentication techniques. The results obtained show the effectiveness of combining the two techniques, robust reversible watermarking and digital signature, by providing strong authentication that ensures data integrity and allows the original watermarked content to be extracted without changes.

  11. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. Principal component analysis is used to decorrelate the three color channels of the host image, improving the perceptual quality of the watermarked image, while a human-visual-system model and a fuzzy inference system improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that regions more tolerant of noise carry more information than less tolerant regions. For security, the watermark embedding locations are kept secret and used as the key at extraction time; for capacity, both singular values and singular vectors are involved in the embedding process. As the results suggest, four contradictory requirements (imperceptibility, robustness, security, and capacity) are thus achieved. Both subjective and objective methods are employed to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of imperceptibility, peak signal-to-noise ratio, structural similarity index, visual information fidelity, and normalized color difference are used; for robustness, normalized correlation, bit error rate, normalized Hamming distance, and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and, as the results suggest, performs better.

  12. A Novel Robust Audio Watermarking Algorithm by Modifying the Average Amplitude in Transform Domain

    Directory of Open Access Journals (Sweden)

    Qiuling Wu

    2018-05-01

    Full Text Available In order to improve robustness and imperceptibility in practical applications, a novel audio watermarking algorithm with strong robustness is proposed by exploiting the multi-resolution characteristic of the discrete wavelet transform (DWT) and the energy compaction capability of the discrete cosine transform (DCT). The human auditory system is insensitive to minor changes in the frequency components of an audio signal, so watermarks can be embedded by slightly modifying those components. Audio fragments segmented from the cover signal are decomposed by DWT into several groups of wavelet coefficients with different frequency bands; the fourth-level detail coefficients are selected and divided into a former packet and a latter packet, each of which undergoes DCT to yield a set of transform domain coefficients (TDC). Finally, the average amplitudes of the two sets of TDC are modified to embed the binary image watermark according to a special embedding rule. Watermark extraction is blind, requiring no access to the carrier audio signal. Experimental results confirm that the proposed algorithm has good imperceptibility, large payload capacity, and strong robustness against various attacks such as MP3 compression, low-pass filtering, re-sampling, re-quantization, amplitude scaling, echo addition, and noise corruption.
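    The core embedding rule above, splitting a coefficient block into a former and a latter packet and modifying their average amplitudes to encode one bit, can be sketched as follows. The packet split, margin `delta`, and sign-based decision rule are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def embed_bit(tdc, bit, delta=0.5):
    # Shift the means of the former/latter packets so that their difference
    # carries the bit: positive difference encodes 1, negative encodes 0
    half = tdc.size // 2
    former, latter = tdc[:half].astype(float), tdc[half:].astype(float)
    diff = former.mean() - latter.mean()
    target = delta if bit == 1 else -delta
    shift = (target - diff) / 2.0
    return np.concatenate([former + shift, latter - shift])

def extract_bit(tdc):
    # Blind extraction: only the sign of the mean difference is needed
    half = tdc.size // 2
    return 1 if tdc[:half].mean() > tdc[half:].mean() else 0
```

Because the decision depends only on the sign of the mean difference, the bit survives amplitude scaling, one of the attacks listed in the abstract.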

  13. REGION OF NON-INTEREST BASED DIGITAL IMAGE WATERMARKING USING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Bibi Isac

    2011-11-01

    Full Text Available Copyright protection of digital data has become inevitable in the current world. Digital watermarks have recently been proposed as a secure scheme for copyright protection, authentication, source tracking, and broadcast monitoring of video, audio, text data, and digital images. In this paper, a method to embed a watermark in the region of non-interest (RONI) and a method for adaptive calculation of the strength factor using a neural network are proposed. The embedding and extraction processes are carried out in the transform domain using the Discrete Wavelet Transform (DWT). Finally, the algorithm's robustness is tested against noise-addition and geometric-distortion attacks. The results confirm that the proposed watermarking algorithm does not degrade the quality of the cover image, as the watermark is inserted only in the region of non-interest, and that it is resistant to attacks.

  14. Improving Robustness of Biometric Identity Determination with Digital Watermarking

    Directory of Open Access Journals (Sweden)

    Juha Partala

    2016-01-01

    Full Text Available The determination of an identity from noisy biometric measurements is a continuing challenge. In many applications, such as identity-based encryption, the identity needs to be known with virtually 100% certainty. The determination of identities with such precision from face images taken under a wide range of natural situations is still an unsolved problem. We propose a digital watermarking based method to aid face recognizers to tackle this problem in applications. In particular, we suggest embedding multiple face dependent watermarks into an image to serve as expert knowledge on the corresponding identities to identity-based schemes. This knowledge could originate, for example, from the tagging of those people on a social network. In our proposal, a single payload consists of a correction vector that can be added to the extracted biometric template to compile a nearly noiseless identity. It also supports the removal of a person from the image. If a particular face is censored, the corresponding identity is also removed. Based on our experiments, our method is robust against JPEG compression, image filtering, and occlusion and enables a reliable determination of an identity without side information.

  15. Adaptive Watermarking Algorithm in DCT Domain Based on Chaos

    Directory of Open Access Journals (Sweden)

    Wenhao Wang

    2013-05-01

    Full Text Available In order to improve the security, robustness, and invisibility of digital watermarking, a new adaptive watermarking algorithm is proposed in this paper. First, the algorithm encrypts the watermark image with a chaotic sequence produced by the Logistic chaotic map. The original image is then divided into sub-blocks, each of which undergoes the discrete cosine transform (DCT), and the watermark information is embedded into the medium-frequency coefficients of the sub-blocks. With the features of the Human Visual System (HVS) and the image texture fully taken into account during embedding, the embedding intensity of the watermark is adjusted adaptively according to HVS and texture characteristics. Experimental results show that the proposed algorithm is robust against common image processing attacks, such as noise, cropping, filtering, and JPEG compression, and achieves a good trade-off between invisibility, robustness, and security.
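    The chaotic pre-encryption step, scrambling the watermark with a sequence produced by the logistic map, can be sketched as below. The map parameters and the thresholded XOR keystream construction are illustrative assumptions; the paper's exact scrambling rule may differ.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, mu=3.99):
    # Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k) and
    # threshold the orbit into a binary keystream
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def chaotic_encrypt(wm_bits, x0=0.7, mu=3.99):
    # XOR with the keystream; calling it again with the same key decrypts,
    # since XOR is an involution
    return wm_bits ^ logistic_keystream(wm_bits.size, x0, mu)
```

The sensitivity of the orbit to the initial value x0 and parameter mu is what provides the key space: a receiver without the exact key reconstructs a different keystream and recovers noise.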

  16. Visible digital watermarking system using perceptual models

    Science.gov (United States)

    Cheng, Qiang; Huang, Thomas S.

    2001-03-01

    This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image for the purposes of immediate claim of copyright, instantaneous recognition of the owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models so that it appears visually uniform, and the resulting watermarked image is visually pleasing and unobtrusive. The location, size, and strength of the watermark vary randomly with the underlying image. This randomization makes automatic removal of the watermark difficult even when the algorithm is publicly known, as long as the key to the random sequence generator is kept secret. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.

  17. A Non-blind Color Image Watermarking Scheme Resistant Against Geometric Attacks

    Directory of Open Access Journals (Sweden)

    A. Ghafoor

    2012-12-01

    Full Text Available A non-blind color image watermarking scheme using principal component analysis, the discrete wavelet transform, and singular value decomposition is proposed. The color components are decorrelated using principal component analysis. The watermark is embedded into the singular values of the discrete-wavelet-transformed sub-band associated with the principal component containing most of the color information. The scheme was tested against various attacks (including histogram equalization, rotation, Gaussian noise, scaling, cropping, Y-shearing, X-shearing, median filtering, affine transformation, translation, salt & pepper noise, and sharpening) to check robustness. The results of the proposed scheme are compared with state-of-the-art color watermarking schemes using the normalized correlation coefficient and peak signal-to-noise ratio. The simulation results show that the proposed scheme is robust and imperceptible.

  18. Wavelet packet transform-based robust video watermarking technique

    Indian Academy of Sciences (India)

    If any conflict happens to the copyright identification and authentication, ... the present work is concentrated on the robust digital video watermarking. .... the wavelet decomposition, resulting in a new family of orthonormal bases for function ...

  19. Color Image Secret Watermarking Erase and Write Algorithm Based on SIFT

    Science.gov (United States)

    Qu, Jubao

    Exploiting the adaptive characteristics of SIFT image features, the algorithm implements write and erase operations on, and extraction of, hidden watermarks in color images. The experimental results show that the algorithm has good imperceptibility and, at the same time, is robust against geometric attacks and common signal processing.

  20. Audio watermarking robust against D/A and A/D conversions

    Directory of Open Access Journals (Sweden)

    Xiang Shijun

    2011-01-01

    Full Text Available Abstract Digital audio watermarking robust against digital-to-analog (D/A) and analog-to-digital (A/D) conversions is an important issue, since D/A and A/D conversions are involved in a number of watermark application scenarios. In this article, we first investigate the degradation due to DA/AD conversions via sound cards, which can be decomposed into volume change, additional noise, and time-scale modification (TSM). Then, we propose a solution that considers all three effects. For the volume change, we introduce a relation-based watermarking method that modifies the energy relation of groups of three adjacent DWT coefficient sections. For the additional noise, we select the lowest-frequency coefficients for watermarking. For the TSM, a synchronization technique (with synchronization codes and an interpolation operation) is exploited. Simulation tests show that the proposed audio watermarking algorithm performs satisfactorily against DA/AD conversions and common audio processing manipulations.

  1. Image Watermarking Algorithm Based on Multiobjective Ant Colony Optimization and Singular Value Decomposition in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2013-01-01

    Full Text Available We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using singular value decomposition, and the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem, so multiobjective ant colony optimization is used to determine them. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from a high probability of false-positive detection of the watermarks.
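    Embedding into the singular values of a subband can be illustrated with a minimal additive SVD scheme. For simplicity a single scaling factor and a tiny matrix stand in for the MSF-optimized scheme of the abstract; the function names, the 3x3 "subband", and alpha are assumptions.

```python
import numpy as np

def svd_embed(subband, wm_bits, alpha=0.05):
    # Additive embedding in the singular values: S' = S + alpha * w
    U, S, Vt = np.linalg.svd(subband)
    Sw = S + alpha * np.asarray(wm_bits, dtype=float)
    marked = (U * Sw) @ Vt                 # same as U @ diag(Sw) @ Vt
    return marked, (U, S, Vt)              # side information (non-blind extraction)

def svd_extract(marked, side, alpha=0.05):
    U, S, Vt = side
    Sw = np.diag(U.T @ marked @ Vt.T)      # recover the perturbed singular values
    return np.rint((Sw - S) / alpha).astype(int)

subband = np.array([[9.0, 2.0, 1.0],
                    [2.0, 6.0, 0.0],
                    [1.0, 0.0, 4.0]])      # stand-in for a DWT detail subband
marked, side = svd_embed(subband, [1, 0, 1])
```

Schemes of this family keep the original singular values (or the factor matrices) as side information, which is also the source of the false-positive problem the abstract mentions when the stored factors come from the watermark itself.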

  2. Wavelet versus DCT-based spread spectrum watermarking of image databases

    Science.gov (United States)

    Mitrea, Mihai P.; Zaharia, Titus B.; Preteux, Francoise J.; Vlad, Adriana

    2004-05-01

    This paper addresses the issue of oblivious robust watermarking within the framework of colour still image database protection. We present an original method which complies with all the requirements nowadays imposed on watermarking applications: robustness (e.g. low-pass filtering, print & scan, StirMark), transparency (both quality and fidelity), low probability of false alarm, obliviousness, and multiple-bit recovery. The mark is generated from a 64-bit message (be it a logo, a serial number, etc.) by means of a Spread Spectrum technique and is embedded in the DWT (Discrete Wavelet Transform) domain, into certain low-frequency coefficients selected according to the hierarchy of their absolute values. The best results were provided by the (9,7) bi-orthogonal transform. The experiments were carried out on 1200 image sequences, each of 32 images. Note that these sequences represented several types of images (natural, synthetic, medical, etc.), and each time we obtained the same good results. These results are compared with those we already obtained for the DCT domain, the differences being pointed out and discussed.

  3. Comparison of DCT, SVD and BFOA based multimodal biometric watermarking system

    Directory of Open Access Journals (Sweden)

    S. Anu H. Nair

    2015-12-01

    Full Text Available Digital image watermarking is a major approach for hiding biometric information, in which the watermark data are concealed inside a host image with imperceptible change to the picture. With the advances in digital image watermarking, the majority of research aims at reliable improvements in robustness to withstand attacks. A reversible invisible watermarking scheme is used for a fingerprint-and-iris multimodal biometric system, and a novel approach is used for fusing the different biometric modalities. Unique modalities of the fingerprint and iris biometrics are extracted and fused using different fusion techniques; the performance of these techniques is evaluated, and the Discrete Wavelet Transform fusion method is identified as the best. The best fused biometric template is then watermarked into a cover image. Watermarking techniques based on the Discrete Cosine Transform (DCT), Singular Value Decomposition (SVD), and the Bacterial Foraging Optimization Algorithm (BFOA) are applied to the fused biometric feature image, and their performance is compared using different metrics. The watermarked images are found to be robust against different attacks, and the BFOA watermarking technique is able to recover the biometric template.
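    The DWT-based fusion of two biometric modalities can be sketched with a one-level Haar transform: the approximation coefficients of the two images are averaged, while at each detail position the stronger coefficient is kept. The Haar filter and the average/max-abs fusion rules are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform (average/difference form); returns the
    # LL (approximation) and LH/HL/HH (detail) quadrants
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row lowpass
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row highpass
    LL = (lo_r[0::2] + lo_r[1::2]) / 2.0
    LH = (lo_r[0::2] - lo_r[1::2]) / 2.0
    HL = (hi_r[0::2] + hi_r[1::2]) / 2.0
    HH = (hi_r[0::2] - hi_r[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d
    h, w = LL.shape
    lo_r = np.empty((2 * h, w)); hi_r = np.empty((2 * h, w))
    lo_r[0::2], lo_r[1::2] = LL + LH, LL - LH
    hi_r[0::2], hi_r[1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = lo_r + hi_r, lo_r - hi_r
    return img

def fuse(img_a, img_b):
    # Average the approximations, keep the stronger detail coefficient
    A, B = haar2d(img_a), haar2d(img_b)
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(A[1:], B[1:])]
    return ihaar2d(LL, *details)
```

The max-abs rule on details preserves the sharp ridge and texture structure that makes each modality discriminative, which is why wavelet fusion tends to outperform plain pixel averaging.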

  4. An optical color image watermarking scheme by using compressive sensing with human visual characteristics in gyrator domain

    Science.gov (United States)

    Liansheng, Sui; Bei, Zhou; Zhanmin, Wang; Ailing, Tian

    2017-05-01

    A novel optical color image watermarking scheme considering human visual characteristics is presented in the gyrator transform domain. Initially, an appropriate reference image is constructed from significant blocks chosen from the grayscale host image by evaluating visual characteristics such as visual entropy and edge entropy. The three components of the color watermark image are compressed using compressive sensing, and the corresponding results are combined to form a grayscale watermark. Then, the frequency coefficients of the watermark image are fused into the frequency data of the gyrator-transformed reference image. The fused result is inversely transformed and partitioned, and the watermarked image is finally obtained by mapping the resulting blocks back to their original positions. The scheme can reconstruct the watermark with high perceptual quality and offers enhanced security due to the high sensitivity of the secret keys. Importantly, the scheme can be implemented easily under the framework of double random phase encoding with the 4f optical system. To the best of our knowledge, this is the first report on embedding a color watermark into a grayscale host image, which will be outside an attacker's expectation. Simulation results are given to verify the feasibility of the scheme and its superior performance in terms of robustness to noise and occlusion.

  5. Securing Biometric Images using Reversible Watermarking

    OpenAIRE

    Thampi, Sabu M.; Jacob, Ann Jisma

    2011-01-01

    Biometric security is a fast-growing area. Protecting biometric data is very important, since it can be misused by attackers. Among the different methods for increasing the security of biometric data, watermarking is widely accepted. A more acceptable, important new development in this area is reversible watermarking, in which the original image can be completely restored and the watermark can be retrieved. But reversible watermarking in biometrics is an understudied area. Reversible ...

  6. A Novel Approach in Security Using Gyration Slab with Watermarking Technique

    Science.gov (United States)

    Rupa, Ch.

    2016-09-01

    In this paper, a novel security approach is proposed to improve the security and robustness of data. It uses three levels of security to protect sensitive data. In the first level, the data are protected by the Gyration slab encryption algorithm. In the second level, the result of the first level is embedded into an image using the PLSB concept from our earlier paper; the resulting image is treated as the watermark image. In the third level, the watermark image is embedded into the original image; here the watermark image and the original image are similar. The final output of the proposed security approach is a watermarked image which holds the stego image. This method provides more security and robustness than existing approaches. The main properties of the proposed approach are the Gyration slab operations and the similarity of the watermark and original images, which hinder brute-force attacks and improve the confusion and diffusion principles. The main strengths of this paper are the cryptanalysis, steganalysis, and watermark analysis reported.

  7. Digital Image Authentication Algorithm Based on Fragile Invisible Watermark and MD-5 Function in the DWT Domain

    Directory of Open Access Journals (Sweden)

    Nehad Hameed Hussein

    2015-04-01

    Full Text Available Watermarking techniques and digital signatures can better solve the problems of digital images transmitted over the Internet, such as forgery, tampering, and alteration. In this paper we propose an invisible-fragile-watermark and MD5-based algorithm for digital image authentication and tamper detection in the Discrete Wavelet Transform (DWT) domain. The digital image is decomposed using a 2-level DWT, and the middle- and high-frequency sub-bands are used for watermark and digital signature embedding. The authentication data are embedded in a number of the coefficients of these sub-bands according to an adaptive threshold based on the watermark length and the coefficients of each DWT level. These sub-bands are used because they are less sensitive to the Human Visual System (HVS) and preserve high image fidelity. The MD-5 and RSA algorithms are used for generating the digital signature from the watermark data, which is also embedded in the medical image. We apply the algorithm to a number of medical images, with the Electronic Patient Record (EPR) used as the watermark data. Experiments demonstrate the effectiveness of our algorithm in terms of robustness, invisibility, and fragility. The watermark and digital signature can be extracted without the need for the original image.
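    The signature-generation step, hashing the watermark (EPR) data before embedding, can be sketched with Python's standard hashlib. The RSA signing of the digest is omitted here, and the field format of the record string is a hypothetical example, not the paper's format.

```python
import hashlib
import numpy as np

def signature_bits(epr: str) -> np.ndarray:
    # MD5 digest of the Electronic Patient Record, unpacked into the
    # 128 bits that would be embedded alongside the watermark
    digest = hashlib.md5(epr.encode("utf-8")).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

sig = signature_bits("patient-id:1234|study:CT-head")   # hypothetical record
```

At verification time the same hash is recomputed from the extracted watermark data and compared bit-for-bit with the embedded digest; any mismatch flags tampering, which is exactly the fragile behaviour the scheme relies on.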

  8. A game-theoretic architecture for visible watermarking system of ACOCOA (adaptive content and contrast aware technique)

    Directory of Open Access Journals (Sweden)

    Tsai Min-Jen

    2011-01-01

    Full Text Available Abstract Digital watermarking techniques have been developed to protect intellectual property. A digital watermarking system is basically judged on two characteristics: security robustness and image quality. In order to obtain robust visible watermarking in practice, we present a novel watermarking algorithm named adaptive content and contrast aware (ACOCOA), which considers the host image content and watermark texture. In addition, we propose a powerful security architecture against attacks for visible watermarking systems based on a game-theoretic approach, which provides an equilibrium-condition solution for the decision maker by studying the effects of transmission power on intensity and perceptual efficiency. The experimental results demonstrate that the proposed approach not only provides effectiveness and robustness for the watermarked images, but also allows the watermark encoder to obtain the best adaptive watermarking strategy under attacks.

  9. Improvement of digital image watermarking techniques based on FPGA implementation

    International Nuclear Information System (INIS)

    EL-Hadedy, M.E

    2006-01-01

    Digital watermarking establishes the ownership of a piece of digital data by marking the data invisibly or visibly. This can be used to protect several types of multimedia objects such as audio, text, image, and video. This thesis demonstrates the different types of watermarking techniques, such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT), and their characteristics. It then classifies these techniques, stating their advantages and disadvantages. An improved technique with distinguished features, such as peak signal-to-noise ratio (PSNR) and similarity ratio (SR), is introduced. The modified technique is compared with the other techniques by measuring their robustness against different attacks. Finally, a field programmable gate array (FPGA) based implementation of the proposed watermarking technique, and a comparison, are presented and discussed.

  10. Dual plane multiple spatial watermarking with self-encryption

    Indian Academy of Sciences (India)

    Watermarking has established itself as a promising solution in the context of digital image copyright protection. Frequency domain watermarking is mainly preferred due to associated robustness and perceptual issues but requires a large amount of computation. On the other hand spatial domain watermarking is much faster ...

  11. Detect Image Tamper by Semi-Fragile Digital Watermarking

    Institute of Scientific and Technical Information of China (English)

    LIU Feilong; WANG Yangsheng

    2004-01-01

    To authenticate the integrity of an image while tolerating some valid image processing such as JPEG compression, a semi-fragile image watermarking scheme is described. The image name, one of the image features, is used as the key of a pseudo-random function to generate watermarks specific to each image. Watermarks are embedded by changing the relationship between the blocks' DCT DC coefficients, and image tampering is detected from the relationship of these DCT DC coefficients. Experimental results show that the proposed technique can survive JPEG compression while still detecting image tampering.
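    The embedding rule, encoding a bit in the order relation between two blocks' DCT DC coefficients, can be sketched as below. Since the DC coefficient is proportional to the block mean, the sketch works on block means directly; the block pairing strategy and the margin value are assumptions.

```python
import numpy as np

def embed_pair(block_a, block_b, bit, margin=2.0):
    # Enforce dc(a) > dc(b) for bit 1 and dc(a) < dc(b) for bit 0 by shifting
    # each block uniformly (a uniform shift only changes the DC term)
    a, b = block_a.astype(float).copy(), block_b.astype(float).copy()
    diff = a.mean() - b.mean()
    target = margin if bit == 1 else -margin
    shift = (target - diff) / 2.0
    return a + shift, b - shift

def extract_pair(block_a, block_b):
    # Read the bit back from the sign of the DC (mean) difference
    return 1 if block_a.mean() > block_b.mean() else 0
```

JPEG quantizes but largely preserves DC coefficients, which is why the order relation survives compression while localized pixel tampering, which changes a block's mean, breaks it.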

  12. A QR code based zero-watermarking scheme for authentication of medical images in teleradiology cloud.

    Science.gov (United States)

    Seenivasagam, V; Velumani, R

    2013-01-01

    Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to the patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with a Master Share constructed from invariant features of the medical image; Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.

  13. A QR Code Based Zero-Watermarking Scheme for Authentication of Medical Images in Teleradiology Cloud

    Directory of Open Access Journals (Sweden)

    V. Seenivasagam

    2013-01-01

    Full Text Available Healthcare institutions adopt cloud-based archiving of medical images and patient records to share them efficiently. Controlled access to these records and authentication of images must be enforced to mitigate fraudulent activities and medical errors. This paper presents a zero-watermarking scheme implemented in the composite Contourlet Transform (CT)-Singular Value Decomposition (SVD) domain for unambiguous authentication of medical images. Further, a framework is proposed for accessing patient records based on the watermarking scheme. The patient identification details and a link to the patient data, encoded into a Quick Response (QR) code, serve as the watermark. In the proposed scheme, the medical image is not subjected to degradations due to watermarking. Patient authentication and authorized access to patient data are realized by combining a Secret Share with a Master Share constructed from invariant features of the medical image; Hu's invariant image moments are exploited in creating the Master Share. The proposed system is evaluated with the Checkmark software and is found to be robust to both geometric and non-geometric attacks.

  14. Reversible and Embedded Watermarking of Medical Images for Telemedicine

    Directory of Open Access Journals (Sweden)

    Chung-Yen Su

    2015-08-01

    Full Text Available In this paper, we propose a new reversible watermarking of medical images for applications in telemedicine. Using a bit-stream insertion scheme, the patient's information is treated as a watermark and embedded into the bit-stream of a cover image for remote transmission. The proposed method simplifies the design of traditional image coding after reversible watermarking. Experimental results show that a compression ratio of up to 3.025 can be achieved, and the watermarking capacity exceeds 0.75 bpp for some common images. In addition, the watermark can be extracted exactly, and the cover image can be reconstructed either losslessly or lossily. The obtained results also show an improvement with respect to previous works.

  15. Imperceptible reversible watermarking of radiographic images based on quantum noise masking.

    Science.gov (United States)

    Pan, Wei; Bouslimi, Dalel; Karasad, Mohamed; Cozic, Michel; Coatrieux, Gouenou

    2018-07-01

    Advances in information and communication technologies boost the sharing and remote access of medical images. Along with this evolution, needs in terms of data security also increase. Watermarking can contribute to better protect images by dissimulating into their pixels some security attributes (e.g., digital signature, user identifier). But, to take full advantage of this technology in healthcare, one key problem to address is ensuring that the image distortion induced by the watermarking process does not endanger the image's diagnostic value. Reversible watermarking is one solution to this issue: it allows watermark removal with exact recovery of the image. Unfortunately, reversibility does not mean that imperceptibility constraints are relaxed; once the watermark is removed, the image is unprotected. It is thus important to ensure the invisibility of the reversible watermark in order to ensure permanent image protection. We propose a new fragile reversible watermarking scheme for digital radiographic images, the main originality of which lies in masking a reversible watermark within the image quantum noise (the dominant noise in radiographic images). More precisely, in order to ensure watermark imperceptibility, our scheme treats the image black background, where message embedding is conducted on pixel gray values with the well-known histogram shifting (HS) modulation, differently from the anatomical object, where HS is applied to wavelet detail coefficients, masking the watermark with the image quantum noise. In order to keep the watermark embedder and reader synchronized in terms of image partitioning and insertion domain, our scheme makes use of classification processes that are invariant to message embedding. We provide the theoretical performance limits of our scheme in the image quantum noise in terms of image distortion and message size (i.e., capacity). Experiments conducted on more than 800 12-bit radiographic images
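    The histogram shifting (HS) modulation named in the abstract can be sketched on a 1-D pixel array: the bins strictly between a peak bin and an empty (zero) bin are shifted by one to free the bin next to the peak, and payload bits are written into the pixels holding the peak value. This is a generic HS sketch with the peak/zero pair assumed given, not the paper's quantum-noise-masked variant.

```python
import numpy as np

def hs_embed(pixels, bits, peak, zero):
    # Shift the histogram range (peak, zero) right by one to free bin peak+1,
    # then embed bits into pixels equal to the peak value (assumes peak < zero
    # and that no pixel has the value `zero`)
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1
    idx = np.flatnonzero(pixels == peak)
    assert idx.size >= len(bits), "peak bin too small for the payload"
    for i, b in zip(idx, bits):
        out[i] += b                         # bit 1 -> peak+1, bit 0 -> peak
    return out

def hs_extract(marked, n_bits, peak, zero):
    # Read bits back from the {peak, peak+1} bins, then undo the shift to
    # recover the original image exactly (reversibility)
    bits = []
    for i in np.flatnonzero((marked == peak) | (marked == peak + 1)):
        if len(bits) < n_bits:
            bits.append(int(marked[i] - peak))
    restored = marked.copy()
    restored[(restored > peak) & (restored <= zero)] -= 1
    return bits, restored
```

Capacity equals the height of the peak bin, and distortion is at most one gray level per pixel, which is why HS lends itself to being hidden under the quantum noise floor of radiographic images.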

  16. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    Science.gov (United States)

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is better suited to the human visual system. A three-level discrete wavelet transform is then applied to the luminance component Y, generating four frequency sub-bands, on which singular value decomposition is performed. In the watermark embedding process, the discrete wavelet transform is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose the right scaling factors. Experimental results show that the proposed algorithm performs better in terms of invisibility and robustness.

  17. Combining Haar Wavelet and Karhunen Loeve Transforms for Medical Images Watermarking

    Directory of Open Access Journals (Sweden)

    Mohamed Ali Hajjaji

    2014-01-01

    Full Text Available This paper presents a novel watermarking method, applied to the medical imaging domain, used to embed the patient's data into the corresponding image or set of images used for diagnosis. The main objective of the proposed technique is to watermark medical images so that the three main attributes of the hidden information (imperceptibility, robustness, and integration rate) are jointly improved as much as possible. These attributes determine the effectiveness of the watermark, resistance to external attacks, and the integration rate. To improve robustness, a combination of the characteristics of the Discrete Wavelet and Karhunen-Loeve Transforms is proposed. The Karhunen-Loeve Transform is applied to the 8×8 sub-blocks of the different wavelet coefficients (in the HL2, LH2, and HH2 subbands). In this manner, the watermark is adapted according to the energy values of each of the Karhunen-Loeve components, with the aim of ensuring better watermark extraction under various types of attacks. For correct identification of the inserted data, an Error-Correcting Code (ECC) mechanism is required for checking and, if possible, correcting errors introduced into the inserted data. Concerning the imperceptibility factor, the main goal is to determine the optimal value of the visibility factor, which depends on several parameters of the DWT and KLT transforms. As a first step, a Fuzzy Inference System (FIS) is set up and applied to determine an initial visibility factor value; several features extracted from the co-occurrence matrix are used as input to the FIS to determine an initial visibility factor for each block, and these values are subsequently reweighted as a function of the eigenvalues extracted from each sub-block. Regarding the integration rate, previous works insert one bit per coefficient. In our

  18. Robustness Analysis of Dynamic Watermarks

    Directory of Open Access Journals (Sweden)

    Ivan V. Nechta

    2017-06-01

    In this paper we consider a previously known scheme for embedding dynamic watermarks (Radix-n) that is used to prevent illegal use of software. According to the scheme, a watermark is a dynamic linked data structure (a graph) created in memory during program execution. Hidden data, such as information about the author, can be represented in different types of graph structure. This data can be extracted and demonstrated in judicial proceedings. This paper shows that the above-mentioned scheme, previously considered one of the most reliable, has a number of features that allow an attacker to detect the watermark-construction stage of the program, so the watermark can be corrupted or deleted. The weakness of the Radix-n scheme is that a program's dynamic data structures can be revealed using information from an API-function hooker that catches calls to dynamic memory allocation functions. One of these data structures is the watermark. Pointers to dynamically created objects (arrays, variables, class items, etc.) of a program can be detected by content analysis of the computer's RAM. Dynamic objects in memory interconnected by pointers form the program's dynamic data structures, such as lists, stacks, trees and other graphs (including the watermark). Our experiment shows that in the vast majority of cases the number of data structures in programs is small, which increases the probability of a successful attack. We also present an algorithm for finding the connected components of a graph in linear time for graphs with about 10^6 nodes. On the basis of the experimental findings, a new watermarking scheme is presented that is resistant to the proposed attack. It uses a different graph representation of the watermark, in which edges are implemented using unique signatures. Our scheme uses content encryption of graph nodes (except signature
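    The linear-time connected-components step mentioned above can be sketched with a standard breadth-first search over an adjacency list, which runs in O(n + e). This is a generic implementation, not the authors' exact algorithm:

    ```python
    from collections import deque

    def connected_components(n, edges):
        """Label the connected components of an undirected graph with n nodes
        in O(n + e) time. Heap graphs recovered from memory analysis could be
        partitioned this way to isolate watermark candidates."""
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        label = [-1] * n
        count = 0
        for start in range(n):
            if label[start] != -1:
                continue
            label[start] = count
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if label[v] == -1:
                        label[v] = count
                        queue.append(v)
            count += 1
        return count, label
    ```

    Because each node and edge is visited once, graphs with around 10^6 nodes are handled comfortably, matching the scale discussed in the abstract.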

  19. A robust and secure watermarking scheme based on singular ...

    Indian Academy of Sciences (India)

    Dhirubhai Ambani Institute of Information and Communication Technology,. Gandhinagar 382 007 ... required. Watermarked image is subjected to various forms of manipulations on communication channel. ..... J. Image Graphics. 9(1): 506–512.

  20. A robust H.264/AVC video watermarking scheme with drift compensation.

    Science.gov (United States)

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information, in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the greatest extent. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  1. Watermarking Algorithms for 3D NURBS Graphic Data

    Directory of Open Access Journals (Sweden)

    Jae Jun Lee

    2004-10-01

    Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS) graphic data are proposed: one appropriate for steganography, the other for watermarking. Instead of directly embedding data into the parameters of the NURBS model, the proposed algorithms embed data into 2D virtual images extracted by parameter sampling of the 3D model. As a result, the proposed steganography algorithm can embed information into more places on the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. Experiments show that the watermarking algorithm is robust to attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.

  2. Watermarking techniques for electronic delivery of remote sensing images

    Science.gov (United States)

    Barni, Mauro; Bartolini, Franco; Magli, Enrico; Olmo, Gabriella

    2002-09-01

    Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification.

  3. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    Science.gov (United States)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. The proposed method introduces negligible visual distortion in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are designated as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. To enhance the recoverability of the information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It also withstands geometric transformation and vertex/face reordering attacks. Moreover, the algorithm is designed to be robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.

  4. A Visual Cryptography Based Watermark Technology for Individual and Group Images

    Directory of Open Access Journals (Sweden)

    Azzam Sleit

    2007-04-01

    The ease with which digital information can be duplicated and distributed has led to the need for effective copyright protection tools. Various techniques, including watermarking, have been introduced in an attempt to address these growing concerns. Most watermarking algorithms call for a piece of information to be hidden directly in the media content, in such a way that it is imperceptible to a human observer but detectable by a computer. This paper presents an improved cryptographic watermark method based on the Hwang and Naor-Shamir [1, 2] approaches. The technique does not require the watermark pattern to be embedded into the original digital image. Verification information is generated and used to validate the ownership of the image or a group of images. The watermark pattern can be any bitmap image. Experimental results show that the proposed method can recover the watermark pattern from the marked image (or group of images) even if major changes such as rotation, scaling and distortion are applied to the original digital image or any member of the image group.
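    The core idea behind visual-cryptography-based schemes like the one above can be illustrated with classic (2,2) Naor-Shamir secret sharing, where each secret pixel expands into a pair of subpixels per share and stacking the shares reveals the secret. This is a textbook sketch, not the paper's exact construction:

    ```python
    import random

    # Classic (2,2) visual secret sharing (Naor-Shamir). Each secret bit is
    # expanded into a pair of subpixels in each share; stacking the shares
    # (a pixel-wise OR) reveals the secret: black pixels become fully black,
    # white pixels keep one white subpixel.
    PAIRS = [(0, 1), (1, 0)]  # 1 = black subpixel, 0 = white

    def make_shares(secret_bits, seed=7):
        rng = random.Random(seed)
        share1, share2 = [], []
        for bit in secret_bits:
            pair = rng.choice(PAIRS)
            share1.append(pair)
            # white pixel: same pair in both shares; black: complementary pair
            share2.append(pair if bit == 0 else (pair[1], pair[0]))
        return share1, share2

    def reveal(share1, share2):
        secret = []
        for (a1, a2), (b1, b2) in zip(share1, share2):
            stacked = (a1 | b1, a2 | b2)
            secret.append(1 if stacked == (1, 1) else 0)
        return secret
    ```

    Either share alone is a uniformly random pattern and leaks nothing about the secret, which is why such shares can serve as ownership-verification information without embedding anything in the host image.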

  5. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    Science.gov (United States)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. Along with the greater business benefits and increased availability come major challenges of security and ownership of data. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in videos. Existing watermarking methods are less robust and imperceptible, and their computational complexity does not suit low power devices. In this paper, we propose a new method to address the problems of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is also more computationally efficient than previous approaches in practice. Hence, our method can easily be applied on low power devices.

  6. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    Directory of Open Access Journals (Sweden)

    Xinghao Jiang

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information, in order to keep visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark to the greatest extent. Besides, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, this scheme achieves excellent imperceptibility and a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
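    A minimal sketch of the transform step above: a 1-D DCT is applied to a group of motion vector residuals, and one mid-frequency coefficient is quantized to carry a bit (quantization index modulation). The DCT/IDCT pair below is standard; the embedding rule, coefficient index and step size are illustrative assumptions, not the paper's exact scheme:

    ```python
    import math

    def dct(x):
        # unnormalized 1-D DCT-II of a residual group
        N = len(x)
        return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                    for n in range(N)) for k in range(N)]

    def idct(X):
        # matching inverse (DCT-III with the 2/N normalization)
        N = len(X)
        return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                                for k in range(1, N))) * 2 / N for n in range(N)]

    def embed_bit(residuals, bit, k=1, step=8.0):
        X = dct(residuals)
        dither = step / 2 if bit else 0.0  # two interleaved quantizer lattices
        X[k] = round((X[k] - dither) / step) * step + dither
        return idct(X)

    def extract_bit(residuals, k=1, step=8.0):
        c = dct(residuals)[k]
        err0 = abs(c - round(c / step) * step)
        err1 = abs(c - (round((c - step / 2) / step) * step + step / 2))
        return 1 if err1 < err0 else 0
    ```

    Quantizing a DCT coefficient instead of a raw residual spreads the change over the whole group, which is what buys robustness against requantization.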

  7. Mobile Watermarking against Geometrical Distortions

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2015-08-01

    Mobile watermarking robust to geometrical distortions is still a great challenge. In mobile watermarking, efficient computation is necessary because mobile devices have very limited resources due to power constraints. In this paper, we propose a low-complexity geometrically resilient watermarking approach based on the optimal tradeoff circular harmonic function (OTCHF) correlation filter and the minimum average correlation energy Mellin radial harmonic (MACE-MRH) correlation filter. Thanks to the rotation, translation and scale tolerance properties of these two kinds of filter, the proposed watermark detector is robust to geometrical attacks. The embedded watermark is weighted by a perceptual mask which matches the properties of the human visual system very well. Before correlation, a whitening process is applied to improve watermark detection reliability. Experimental results demonstrate that the proposed watermarking approach is computationally efficient and robust to geometrical distortions.

  8. Reversible Integer Wavelet Transform for the Joint of Image Encryption and Watermarking

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2015-01-01

    In recent years, signal processing in the encrypted domain has attracted considerable research interest, especially embedding watermarks in encrypted images. In this work, a novel joint scheme of image encryption and watermarking based on the reversible integer wavelet transform is proposed. Firstly, the plain image is encrypted by chaotic maps and the reversible integer wavelet transform. Then a lossless watermark is embedded in the encrypted image by the reversible integer wavelet transform and histogram modification. Finally, an encrypted image containing the watermark is obtained by the inverse integer wavelet transform. Moreover, the original image and the watermark can be completely recovered by the inverse process. Numerical experiments and comparisons with previous works show that the proposed scheme possesses higher security and embedding capacity than previous works. It is suitable for protecting image information.
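    The reversible (integer-to-integer) wavelet step that both the encryption and the watermark embedding rely on can be illustrated with the one-level integer Haar transform (S-transform), which maps integers to integers and inverts exactly. This is a generic building block, not the paper's full scheme:

    ```python
    def s_transform(x):
        # one level of the integer Haar (S-) transform: floored averages and
        # differences; exactly invertible, so anything embedded in these
        # coefficients can be undone losslessly
        low = [(x[2 * i] + x[2 * i + 1]) >> 1 for i in range(len(x) // 2)]
        high = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
        return low, high

    def inverse_s_transform(low, high):
        x = []
        for l, h in zip(low, high):
            a = l + ((h + 1) >> 1)   # recover the first sample
            x.extend([a, a - h])     # and the second from the difference
        return x
    ```

    Exact invertibility is the property that makes "lossless watermarking" possible: after the watermark is removed, the original pixel values are restored bit for bit.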

  9. Sonic Watermarking

    Directory of Open Access Journals (Sweden)

    Ryuki Tachibana

    2004-10-01

    Audio watermarking has been used mainly for digital sound. In this paper, we extend the range of its applications to live performances with a new composition method for real-time audio watermarking. Sonic watermarking mixes the sound of the watermark signal and the host sound in the air, in order to detect illegal music recordings made in auditoriums. We propose an audio watermarking algorithm for sonic watermarking that increases the magnitudes of the host signal only in segmented areas pseudorandomly chosen in the time-frequency plane. The result of a MUSHRA subjective listening test places the acoustic quality of the method in the range of “excellent quality.” The robustness depends on the type of music sample. For popular and orchestral music, a watermark can be stably detected from music samples that have been sonic-watermarked and then compressed once into an MPEG-1 Layer 3 file.

  10. Optical asymmetric watermarking using modified wavelet fusion and diffractive imaging

    Science.gov (United States)

    Mehra, Isha; Nishchal, Naveen K.

    2015-05-01

    In most existing image encryption algorithms the generated keys take the form of a noise-like distribution with a uniform histogram. However, the noise-like distribution is an apparent sign indicating the presence of the keys. If the keys are to be transferred through some communication channel, this may lead to a security problem, because the noise-like features may easily catch people's attention and attract more attacks. To address this problem, the keys should be transferred into other, meaningful images to mislead attackers. Watermarking schemes are complementary to image encryption schemes. In most iterative encryption schemes, support constraints play the important role of the keys needed to decrypt the meaningful data. In this article, we have transferred the support constraints, which are generated by axial translation of a CCD camera using an amplitude- and phase-truncation approach, into different meaningful images. This has been done by developing a modified fusion technique in the wavelet transform domain. The second issue is, in case the meaningful images are caught by an attacker, how to resolve copyright protection. For this purpose, watermark detection plays a crucial role, and it is necessary to recover the original image using the retrieved watermarks/support constraints. To address this issue, four asymmetric keys have been generated corresponding to each watermarked image to retrieve the watermarks. For decryption, an iterative phase retrieval algorithm is applied to extract the plain-texts from the corresponding retrieved watermarks.

  11. An Efficient Semi-fragile Watermarking Scheme for Tamper Localization and Recovery

    Science.gov (United States)

    Hou, Xiang; Yang, Hui; Min, Lianquan

    2018-03-01

    To address the problem that remote sensing images are vulnerable to tampering, a semi-fragile watermarking scheme was proposed. A binary random matrix was used as the authentication watermark, which was embedded by quantizing the maximum absolute value of the directional sub-band coefficients. The average gray level of every non-overlapping 4×4 block was adopted as the recovery watermark, which was embedded in the least significant bit. Watermark detection can be performed directly, without resorting to the original images. Experimental results showed that our method is robust against incidental distortions to a certain extent, while remaining fragile to malicious manipulation, and achieves accurate localization and approximate recovery of the tampered regions. Therefore, this scheme can effectively protect the security of remote sensing images.
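    The recovery-watermark idea above (store each 4×4 block's average gray level in the least significant bits of another region) can be sketched as follows; the mapping of block means to carrier pixels is an illustrative simplification:

    ```python
    def block_means(img):
        # average gray level of every non-overlapping 4x4 block (row-major)
        h, w = len(img), len(img[0])
        return [sum(img[r + i][c + j] for i in range(4) for j in range(4)) // 16
                for r in range(0, h, 4) for c in range(0, w, 4)]

    def embed_mean(pixels, mean):
        # write one 8-bit block mean into the LSBs of eight carrier pixels
        return [(p & ~1) | ((mean >> (7 - k)) & 1) if k < 8 else p
                for k, p in enumerate(pixels)]

    def extract_mean(pixels):
        value = 0
        for k in range(8):
            value = (value << 1) | (pixels[k] & 1)
        return value
    ```

    If a block is later flagged as tampered by the authentication watermark, its stored mean gives a coarse but usable reconstruction of the lost content.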

  12. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.
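    Two ingredients of the scheme above, the key-seeded pseudo-random choice of algorithmic parameters and the constraint that marked features be quantized versions of the originals, can be sketched as follows. The feature model and step size are illustrative assumptions, not the authors' exact construction:

    ```python
    import random

    def select_feature_locations(num_coeffs, num_features, secret_key):
        # the secret key seeds the PRNG, so embedder and receiver derive the
        # same pseudo-random feature locations while the public cannot
        rng = random.Random(secret_key)
        return sorted(rng.sample(range(num_coeffs), num_features))

    def quantize_features(features, step=4.0):
        # the features of the marked video are quantized versions of the
        # features of the original video
        return [round(f / step) * step for f in features]
    ```

    Detection then checks whether the received video's features at the key-derived locations lie (approximately) on the quantization lattice.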

  13. Dual watermarking technique with multiple biometric watermarks

    Indian Academy of Sciences (India)

    of digital content. Digital watermarking is useful in DRM systems as it can hide information ... making an unauthorized use. It is the .... a watermark and a binary decision, whether the digital media is watermarked or not is done by ..... AC coefficients, which mainly reflect the texture features of image, are taken into account to.

  14. A Novel Texture-Quantization-Based Reversible Multiple Watermarking Scheme Applied to Health Information System.

    Science.gov (United States)

    Turuk, Mousami; Dhande, Ashwin

    2018-04-01

    The recent innovations in information and communication technologies have appreciably changed the panorama of health information systems (HIS). These advances provide new means to process, handle, and share medical images, but also augment medical image security issues in terms of confidentiality, reliability, and integrity. Digital watermarking has emerged as a new era that offers acceptable solutions to the security issues in HIS. Texture is a significant feature for detecting embedding sites in an image, which can lead to substantial improvements in robustness. However, from the perspective of digital watermarking, this feature has received meager attention in the reported literature. This paper exploits the texture property of an image and presents a novel hybrid texture-quantization-based approach for reversible multiple watermarking. The watermarked image quality has been assessed by peak signal to noise ratio (PSNR), structural similarity measure (SSIM), and universal image quality index (UIQI), and the obtained results are superior to the state-of-the-art methods. The algorithm has been evaluated on a variety of medical imaging modalities (CT, MRA, MRI, US), and robustness has been verified considering various image processing attacks, including JPEG compression. The proposed scheme offers additional security using repetitive embedding of BCH-encoded watermarks and an ADM-encrypted ECG signal. Experimental results achieved a maximum hiding capacity of 22,616 bits with a PSNR of 53.64 dB.

  15. COMPARATIVE ANALYSIS OF APPLICATION EFFICIENCY OF ORTHOGONAL TRANSFORMATIONS IN FREQUENCY ALGORITHMS FOR DIGITAL IMAGE WATERMARKING

    Directory of Open Access Journals (Sweden)

    Vladimir A. Batura

    2014-11-01

    The efficiency of applying orthogonal transformations in frequency algorithms for digital watermarking of still images is examined. The discrete Hadamard transform, discrete cosine transform and discrete Haar transform are selected. Their effectiveness is determined by the invisibility of the watermark embedded in the digital image and its resistance to the most common image processing operations: JPEG compression, noise, changes of brightness and image size, and histogram equalization. The digital watermarking algorithm and its embedding parameters remain unchanged across these orthogonal transformations. Imperceptibility of embedding is measured by the peak signal to noise ratio, and watermark stability by Pearson's correlation coefficient. Embedding is considered invisible if the peak signal to noise ratio is at least 43 dB. An embedded watermark is considered resistant to a specific attack if the Pearson correlation coefficient is at least 0.5. The Elham algorithm, based on image entropy, is chosen for the computing experiment, which proceeds as follows: a digital watermark is embedded in the low-frequency area of the image (container) by the Elham algorithm, the protected (cover) image is exposed to a harmful influence, and the digital watermark is extracted. These actions are followed by quality assessment of the cover image and the watermark, on the basis of which the efficiency of each orthogonal transformation is determined. The computing experiment showed that the choice among the specified orthogonal transformations, with identical algorithm and embedding parameters, does not influence the degree of imperceptibility of the watermark. The efficiency of the discrete Hadamard transform and the discrete cosine transform against the attacks chosen for the experiment was established based on the correlation indicators. Application of the discrete Hadamard transform increases
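    The two decision rules used above (embedding is invisible if PSNR ≥ 43 dB; a watermark survives an attack if the Pearson correlation is ≥ 0.5) can be computed directly:

    ```python
    import math

    def psnr(original, distorted, peak=255.0):
        # peak signal to noise ratio over flattened pixel sequences
        mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
        return float('inf') if mse == 0 else 10 * math.log10(peak * peak / mse)

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / math.sqrt(var_x * var_y)

    # the thresholds stated in the abstract
    def embedding_invisible(original, watermarked):
        return psnr(original, watermarked) >= 43.0

    def watermark_survived(extracted, embedded):
        return pearson(extracted, embedded) >= 0.5
    ```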

  16. Design and evaluation of sparse quantization index modulation watermarking schemes

    Science.gov (United States)

    Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter

    2008-08-01

    In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).

  17. Countermeasures for unintentional and intentional video watermarking attacks

    Science.gov (United States)

    Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry

    2000-05-01

    In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Digital audio, image and video watermarking has therefore become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks are lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame-rate, changes of display format, and geometrical distortions. More specific attacks are sequence edition and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark; it works well against schemes which embed random independent marks into frames. In the collusion attack, the watermark is estimated from single frames (based on image denoising) and averaged over different scenes for better accuracy; the estimated watermark is then subtracted from each frame. Collusion requires that the same mark be embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed randomly chosen marks, from a finite set of marks, into subsequences of video which are long enough to resist averaging attacks but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
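    The frame-averaging attack described above is easy to sketch: a temporal moving average leaves static content intact while attenuating marks that change from frame to frame. This is a generic illustration, not the paper's experimental setup:

    ```python
    def averaging_attack(frames, window=3):
        # replace each frame by the average of its temporal neighbourhood;
        # per-frame independent watermarks tend to cancel out, while the
        # (nearly identical) host content survives
        n = len(frames)
        out = []
        for i in range(n):
            lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
            out.append([sum(f[j] for f in frames[lo:hi]) / (hi - lo)
                        for j in range(len(frames[i]))])
        return out
    ```

    This is precisely why the countermeasure embeds the same mark over subsequences long enough that averaging neighbouring frames no longer cancels it.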

  18. Performance analysis of chaotic and white watermarks in the presence of common watermark attacks

    Energy Technology Data Exchange (ETDEWEB)

    Mooney, Aidan [Department of Computer Science, NUI Maynooth, Co. Kildare (Ireland)], E-mail: amooney@cs.nuim.ie; Keating, John G. [Department of Computer Science, NUI Maynooth, Co. Kildare (Ireland)], E-mail: john.keating@nuim.ie; Heffernan, Daniel M. [Department of Mathematical Physics, NUI Maynooth, Co. Kildare (Ireland); School of Theoretical Physics, Dublin Institute for Advanced Studies, Dublin 4 (Ireland)], E-mail: dmh@thphys.nuim.ie

    2009-10-15

    Digital watermarking is a technique that aims to embed a piece of information permanently into some digital media, which may be used at a later stage to prove owner authentication and attempt to provide protection to documents. The most common watermark types used to date are pseudorandom number sequences, which possess a white spectrum. Chaotic watermark sequences have been receiving increasing interest recently and have been shown to be an alternative to the pseudorandom watermark types. In this paper, the performance of pseudorandom watermarks and chaotic watermarks in the presence of common watermark attacks is analysed. The chaotic watermarks are generated by iterating the skew tent map, the Bernoulli map and the logistic map. The analysis focuses on the watermarked images after they have been subjected to common image distortion attacks. The capacities of each of these images are also calculated. It is shown that signals generated from lowpass chaotic signals outperform the other signal types analysed for the attacks studied.
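    The chaotic watermark sequences studied above are generated by iterating simple 1-D maps and thresholding the orbit. A sketch with the three maps named in the abstract (the map parameters and the bipolar thresholding rule are illustrative choices):

    ```python
    def logistic(x, r=3.99):
        return r * x * (1.0 - x)

    def skew_tent(x, p=0.6):
        return x / p if x < p else (1.0 - x) / (1.0 - p)

    def bernoulli(x):
        return (2.0 * x) % 1.0

    def chaotic_watermark(length, seed=0.3456, step=logistic):
        # iterate the map from a secret seed and threshold the orbit into a
        # bipolar (+1/-1) watermark sequence
        bits, x = [], seed
        for _ in range(length):
            x = step(x)
            bits.append(1 if x >= 0.5 else -1)
        return bits
    ```

    The seed acts as the secret key: the same seed regenerates the identical sequence at the detector, while sensitivity to initial conditions makes the sequence hard to guess.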

  19. Content Preserving Watermarking for Medical Images Using Shearlet Transform and SVD

    Science.gov (United States)

    Favorskaya, M. N.; Savchina, E. I.

    2017-05-01

    Medical Image Watermarking (MIW) is a special field of watermarking due to the requirements of the Digital Imaging and COmmunications in Medicine (DICOM) standard since 1993. All 20 parts of the DICOM standard are revised periodically. The main idea of MIW is to embed various types of information, including the doctor's digital signature, a fragile watermark, the electronic patient record, and a main watermark in the form of a region of interest for the doctor, into the host medical image. These four types of information are represented in different forms; some of them are encrypted according to the DICOM requirements. However, all types of information have to be combined into a generalized binary stream for embedding. The generalized binary stream may have a huge volume, so not all watermarking methods can be applied successfully. Recently, the digital shearlet transform was introduced as a rigorous mathematical framework for the geometric representation of multi-dimensional data. Some modifications of the shearlet transform, particularly the non-subsampled shearlet transform, can be associated with a multi-resolution analysis that provides a fully shift-invariant, multi-scale, and multi-directional expansion. During experiments, the quality of the extracted watermarks under JPEG compression and typical internet attacks was estimated using several metrics, including the peak signal to noise ratio, structural similarity index measure, and bit error rate.

  20. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital watermarking is the process of embedding into the protected object a digital watermark that is invisible to the human eye. However, there is a rather large number of harmful influences capable of destroying a watermark embedded into a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. This article presents a new algorithm, a modification of the Elham algorithm. The algorithm for digital watermarking of still images embeds a watermark into the frequency coefficients of the discrete Hadamard transform of selected image blocks. Image blocks for embedding are chosen on the basis of a preset threshold on the entropy of their pixels. Low-frequency coefficients for embedding are chosen by comparing the values of the discrete cosine transform coefficients with a predetermined threshold, depending on the product of the embedded watermark coefficient and the change coefficient. The resistance of the new algorithm to JPEG compression, noise, filtering, color change, resizing and histogram equalization is analysed in detail. The study compares the watermark extracted from the damaged image with the embedded logo. The algorithm's ability to embed a watermark with a minimal level of image distortion is additionally analysed. It is established that the new algorithm, in comparison with the original Elham algorithm, shows full resistance to JPEG compression, as well as improved resistance to noise, brightness change and histogram equalization. The developed algorithm can be used for copyright protection of still images. Further studies will be used to study the
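    The two building blocks above, entropy-based block selection and the discrete Hadamard transform, can be sketched as follows; the entropy threshold value is an illustrative assumption:

    ```python
    import math
    from collections import Counter

    def entropy(block):
        # Shannon entropy of the pixel values in a block
        flat = [p for row in block for p in row]
        n = len(flat)
        return -sum((c / n) * math.log2(c / n) for c in Counter(flat).values())

    def hadamard(n):
        # Sylvester construction of the Hadamard matrix; n must be a power of two
        H = [[1]]
        while len(H) < n:
            H = [row + row for row in H] + [row + [-v for v in row] for row in H]
        return H

    def hadamard_2d(block):
        # 2-D transform H * B * H (the Sylvester Hadamard matrix is symmetric)
        n = len(block)
        H = hadamard(n)
        HB = [[sum(H[i][k] * block[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        return [[sum(HB[i][k] * H[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def select_blocks(blocks, threshold=3.0):
        # embed only in sufficiently "busy" blocks, as in the entropy test above
        return [b for b in blocks if entropy(b) >= threshold]
    ```

    Since applying the transform twice multiplies a block by n², the inverse is just the forward transform followed by a division by n², which keeps the embedding pipeline cheap and exactly invertible in integer arithmetic.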

  1. Frequency Hopping Method for Audio Watermarking

    Directory of Open Access Journals (Sweden)

    A. Anastasijević

    2012-11-01

    Full Text Available This paper evaluates the degradation of audio content for a perceptible removable watermark. Two different approaches to embedding the watermark in the spectral domain were investigated. The frequencies for watermark embedding are chosen according to a pseudorandom sequence, making the methods robust. Consequently, the lower-quality audio can be used for promotional purposes; for a fee, the watermark can be removed with a secret watermarking key. Objective and subjective testing was conducted in order to measure the degradation level of the watermarked music samples and to examine the residual distortion for different parameters of the watermarking algorithm and different music genres.
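
The key-driven "frequency hopping" idea described in this record can be sketched as follows: a secret key seeds a PRNG that picks the embedding bins, the watermark attenuates those bins (producing the degraded promotional version), and the same key restores them. This is an illustrative sketch on a precomputed magnitude spectrum; the function names, bin counts and attenuation factor are our assumptions, not the authors' implementation.

```python
import random

def select_bins(key, n_bins, n_marked, lo=20):
    """Pseudorandomly choose which frequency bins carry the watermark."""
    rng = random.Random(key)              # the secret key seeds the hop sequence
    return sorted(rng.sample(range(lo, n_bins), n_marked))

def embed(spectrum, key, n_marked=8, atten=0.3):
    """Attenuate the key-selected bins -> audibly degraded 'promo' version."""
    marked = list(spectrum)
    for i in select_bins(key, len(spectrum), n_marked):
        marked[i] *= atten
    return marked

def remove(spectrum, key, n_marked=8, atten=0.3):
    """With the secret key, undo the attenuation and restore full quality."""
    restored = list(spectrum)
    for i in select_bins(key, len(spectrum), n_marked):
        restored[i] /= atten
    return restored
```

Without the key an attacker does not know which bins were modified, so the distortion cannot be cleanly reversed; with the key, restoration is exact up to floating-point error.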

  2. A proposed security technique based on watermarking and encryption for digital imaging and communications in medicine

    Directory of Open Access Journals (Sweden)

    Mohamed M. Abd-Eldayem

    2013-03-01

    Full Text Available Nowadays, modern Hospital Data Management Systems (HDMSs) are applied in computer networks, and medical equipment produces medical images in digital form. An HDMS must store and exchange these images in a secured environment to provide image integrity and patient privacy. Reversible watermarking techniques can be used to provide the integrity and the privacy. In this paper, a security technique based on watermarking and encryption is proposed for Digital Imaging and Communications in Medicine (DICOM). It provides patient authentication, information confidentiality and integrity based on a reversible watermark. To achieve the integrity service at the sender side, a hash value based on encrypted MD5 is determined from the image. To satisfy the reversible feature, an R–S-Vector is determined from the image and compressed with a Huffman compression algorithm. Then, to provide the confidentiality and authentication services, the compressed R–S-Vector, the hash value and the patient ID are concatenated to form a watermark; this watermark is encrypted using the AES encryption technique, and finally the watermark is embedded inside the medical image. Experimental results prove that the proposed technique can provide patient authentication, image integrity and information confidentiality services with excellent efficiency. Results for all tested DICOM medical images and natural images show the following: the BER equals 0; both SNR and PSNR are consistent and have large values; and the MSE has a low value. The average values of SNR, PSNR and MSE are 52 dB, 57 dB and 0.12, respectively. Therefore, the watermarked images have high imperceptibility, invisibility and transparency. In addition, the watermark extracted from the image at the receiver side is identical to the watermark embedded into the image at the sender side; as a result, the proposed technique is totally reversible, and the embedded watermark does not …

  3. Robust image authentication in the presence of noise

    CERN Document Server

    2015-01-01

    This book addresses the problems that hinder image authentication in the presence of noise. It considers the advantages and disadvantages of existing algorithms for image authentication and shows new approaches and solutions for robust image authentication. State-of-the-art algorithms are compared and, furthermore, innovative approaches and algorithms are introduced. The introduced algorithms are applied to improve image authentication, watermarking and biometry. Aside from presenting new directions and algorithms for robust image authentication in the presence of noise, as well as image correction, this book also: provides an overview of the state-of-the-art algorithms for image authentication in the presence of noise and modifications, as well as a comparison of these algorithms; presents novel algorithms for robust image authentication, in which correction and authentication of the image are attempted; examines different views on the solution of problems connected to image authentication in the pre...

  4. A new Watermarking System based on Discrete Cosine Transform (DCT) in color biometric images.

    Science.gov (United States)

    Dogan, Sengul; Tuncer, Turker; Avci, Engin; Gulten, Arif

    2012-08-01

    This paper recommends a biometric color image hiding approach, a watermarking system based on the Discrete Cosine Transform (DCT), which is used to protect the security and integrity of transmitted biometric color images. Watermarking is a very important technique for hiding information (audio, video, color images, gray images). It has been commonly applied to digital objects alongside developing technology in the last few years. One of the common methods used for hiding information in image files is the DCT method, which operates in the frequency domain. In this study, DCT methods are used to embed watermark data into face images without corrupting their features.
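
Frequency-domain embedding of this kind is usually done per block: transform an 8×8 block with the DCT, quantize one mid-frequency coefficient to carry a bit, and invert the transform. The sketch below is a generic illustration of that pattern (the coefficient position, quantization step and even/odd QIM rule are our choices, not this paper's exact method):

```python
import math

def dct2(b):
    """Orthonormal 2-D DCT-II of an NxN block (naive O(N^4); fine for N=8)."""
    N = len(b)
    c = lambda k: math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return [[c(u) * c(v) * sum(
        b[x][y] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(F):
    """Inverse of dct2 (the transform is orthonormal)."""
    N = len(F)
    c = lambda k: math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return [[sum(
        c(u) * c(v) * F[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

def embed_bit(block, bit, pos=(2, 1), q=24):
    """Quantize one mid-frequency coefficient to an even/odd multiple of q."""
    F = dct2(block)
    u, v = pos
    k = round(F[u][v] / q)
    if k % 2 != bit:          # nudge to the parity that encodes the bit
        k += 1
    F[u][v] = k * q
    return [[round(p) for p in row] for row in idct2(F)]

def extract_bit(block, pos=(2, 1), q=24):
    u, v = pos
    return round(dct2(block)[u][v] / q) % 2
```

Because the DCT is orthonormal, rounding the pixels back to integers perturbs the coefficient by far less than q/2, so the bit survives the round trip.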

  5. Practical Challenges for Digital Watermarking Applications

    Directory of Open Access Journals (Sweden)

    Sharma Ravi K

    2002-01-01

    Full Text Available The field of digital watermarking has recently seen numerous articles covering novel techniques, theoretical studies, attacks, and analysis. In this paper, we focus on an emerging application to highlight practical challenges for digital watermarking applications. Challenges include design considerations, requirements analysis, choice of watermarking techniques, speed, robustness, and the tradeoffs involved. We describe common attributes of watermarking systems and discuss the challenges in developing real world applications. Our application uses digital watermarking to connect ordinary toys to the digital world. The application captures important aspects of watermarking systems and illustrates some of the design issues faced.

  6. Adaptive Digital Watermarking Scheme Based on Support Vector Machines and Optimized Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyi Zhou

    2018-01-01

    Full Text Available Digital watermarking is an effective solution to the problem of copyright protection, thus maintaining the security of digital products in the network. An improved scheme to increase the robustness of embedded information on the basis of the discrete cosine transform (DCT) domain is proposed in this study. The embedding process consists of two main procedures. Firstly, the embedding intensity is adaptively strengthened with support vector machines (SVMs) trained on 1600 image blocks of different texture and luminance. Secondly, the embedding position is selected with an optimized genetic algorithm (GA). To optimize the GA, the best individual of each generation goes directly into the next generation, and the second-best individual participates in the crossover and the mutation process. The transparency reaches 40.5 when the GA's generation number is 200. A case study was conducted on a 256 × 256 standard Lena image with the proposed method. After various attacks on the watermarked image (such as cropping, JPEG compression, Gaussian low-pass filtering (3, 0.5), histogram equalization, and contrast increasing (0.5, 0.6)), the extracted watermark was compared with the original one. Results demonstrate that the watermark can be effectively recovered after these attacks. Even though the algorithm is weak against rotation attacks, it provides high quality in imperceptibility and robustness, and hence it is a successful candidate for implementing a novel image watermarking scheme meeting real-time requirements.
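
The GA modification described here (the best individual passes unchanged into the next generation, while the second-best takes part in crossover and mutation) can be sketched on a toy bit-string problem. The fitness function, rates and population size below are placeholders, not the paper's settings:

```python
import random

def crossover(a, b, rng):
    """One-point crossover between two bit-string individuals."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rng, rate=0.05):
    """Flip each bit with a small probability."""
    return [g ^ 1 if rng.random() < rate else g for g in ind]

def evolve(pop, fitness, rng, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best, second = pop[0], pop[1]
        nxt = [best]                       # elitism: best survives unchanged
        while len(nxt) < len(pop):
            mate = rng.choice(pop)         # second-best joins every crossover
            nxt.append(mutate(crossover(second, mate, rng), rng))
        pop = nxt
    return max(pop, key=fitness)

# Toy problem: maximize the number of 1-bits ("OneMax").
rng = random.Random(42)
population = [[rng.randint(0, 1) for _ in range(16)] for _ in range(20)]
start_best = max(sum(ind) for ind in population)
winner = evolve(population, fitness=sum, rng=rng)
```

Because the elite individual is copied verbatim each generation, the best fitness is monotonically non-decreasing, which is the practical benefit of this variant.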

  7. Novel Iris Biometric Watermarking Based on Singular Value Decomposition and Discrete Cosine Transform

    Directory of Open Access Journals (Sweden)

    Jinyu Lu

    2014-01-01

    Full Text Available A novel iris biometric watermarking scheme is proposed, focusing on iris recognition instead of a traditional watermark to increase the security of digital products. Preprocessing of the iris image is done first, generating the iris biometric template from a person's eye images. The templates are then subjected to the discrete cosine transform, and the DCT values are encoded with BCH error control coding. The host image is correspondingly divided into four equal areas. The BCH codes are embedded in the singular values of each area's coefficients, which are obtained through the discrete cosine transform (DCT). Numerical results reveal that the proposed method can extract the watermark effectively and illustrate its security and robustness.

  8. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped, and the larger the watermark, the more pixels must be flipped. We propose a new pixel flipping method for invoice images with large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  9. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped, and the larger the watermark, the more pixels must be flipped. We propose a new pixel flipping method for invoice images with large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  10. Dual Level Digital Watermarking for Images

    Science.gov (United States)

    Singh, V. K.; Singh, A. K.

    2010-11-01

    More than 700 years ago, watermarks were used in Italy to indicate the paper brand and the mill that produced it. By the 18th century watermarks began to be used as anti-counterfeiting measures on money and other documents. The term watermark was introduced near the end of the 18th century, probably because the marks resemble the effects of water on paper. The first example of a technology similar to digital watermarking is a patent filed in 1954 by Emil Hembrooke for identifying music works. In 1988, Komatsu and Tominaga appear to have been the first to use the term "digital watermarking". Consider the following hypothetical situations. You go to a shop, buy some goods, and at the counter you are given a currency note you have never come across before. How do you verify that it is not counterfeit? Or say you go to a stationery shop and ask for a ream of bond paper. How do you verify that you have actually been given what you asked for? How does a philatelist verify the authenticity of a stamp? In all these cases, the watermark is used to authenticate. Watermarks have existed almost as long as paper has been in use. The impression created by the mesh moulds on the slurry of fibre and water remains on the paper. It serves to identify the manufacturer and thus authenticate the product without degrading the aesthetics and utility of the stock, and it also makes forgery significantly tougher. Even today, important government and legal documents are watermarked. But what is watermarking when it comes to digital data? Information is no longer present on a physical material but is represented as a series of zeros and ones, and duplication is achieved easily by just reproducing that combination of zeros and ones. How then can one protect ownership rights and authenticate data? The purpose of the digital watermark is the same as that of conventional watermarks.

  11. Using digital watermarking to enhance security in wireless medical image transmission.

    Science.gov (United States)

    Giakoumaki, Aggeliki; Perakis, Konstantinos; Banitsas, Konstantinos; Giokas, Konstantinos; Tachakra, Sapal; Koutsouris, Dimitris

    2010-04-01

    During the last few years, wireless networks have been increasingly used both inside hospitals and in patients' homes to transmit medical information. In general, wireless networks suffer from decreased security. However, digital watermarking can be used to secure medical information. In this study, we focused on combining wireless transmission and digital watermarking technologies to better secure the transmission of medical images within and outside the hospital. We utilized an integrated system comprising the wireless network and the digital watermarking module to conduct a series of tests. The test results were evaluated by medical consultants. They concluded that the images suffered no visible quality degradation and maintained their diagnostic integrity. The proposed integrated system presented reasonable stability, and its performance was comparable to that of a fixed network. This system can enhance security during the transmission of medical images through a wireless channel.

  12. Image Watermarking Scheme for Specifying False Positive Probability and Bit-pattern Embedding

    Science.gov (United States)

    Sayama, Kohei; Nakamoto, Masayoshi; Muneyasu, Mitsuji; Ohno, Shuichi

    This paper treats discrete wavelet transform (DWT)-based image watermarking with consideration of the false positive probability and bit-pattern embedding. We propose an iterative embedding algorithm for watermark signals, which are K sets of pseudo-random numbers generated by a secret key. In the detection, K correlations between the watermarked DWT coefficients and the watermark signals are computed using the secret key. L correlations are used to judge the presence of the watermark with a specified false positive probability, and the other K-L correlations correspond to the bit-pattern signal. In the experiment, we show detection results with the specified false positive probability and the bit-pattern recovery, and evaluate the proposed method against JPEG compression, downscaling and cropping.
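
The correlation detector underlying schemes of this kind can be sketched as follows: K key-derived ±1 sequences are correlated against the (possibly watermarked) coefficients, and a threshold on the correlation controls the false positive probability. This is a generic illustration; the additive embedding rule, threshold value and parameters are our assumptions, not this paper's exact algorithm.

```python
import random

def key_sequences(key, K, n):
    """K pseudo-random +/-1 sequences derived from the secret key."""
    rng = random.Random(key)
    return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(K)]

def embed(coeffs, seq, alpha=5.0):
    """Additively embed one watermark sequence into the coefficients."""
    return [c + alpha * s for c, s in zip(coeffs, seq)]

def correlations(coeffs, seqs):
    """Normalized correlation of the coefficients with each key sequence."""
    n = len(coeffs)
    return [sum(c * s for c, s in zip(coeffs, w)) / n for w in seqs]

# Demo: synthetic host 'coefficients', embed sequence #3, detect by threshold.
rng = random.Random(0)
host = [rng.gauss(0.0, 1.0) for _ in range(512)]
seqs = key_sequences(key=99, K=8, n=512)
marked = embed(host, seqs[3])
corr = correlations(marked, seqs)
threshold = 2.5   # chosen from the desired false positive probability
detected = [j for j, c in enumerate(corr) if c > threshold]
```

For an unmarked sequence the correlation is approximately Gaussian with standard deviation shrinking as 1/sqrt(n), which is what lets a threshold be mapped to a specified false positive probability.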

  13. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on an analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  14. Watermarking on 3D mesh based on spherical wavelet transform.

    Science.gov (United States)

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D mesh. The algorithm is based on spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales by using spherical wavelet transform; the watermark is then embedded into the different levels of details. The embedding process includes: global sphere parameterization, spherical uniform sampling, spherical wavelet forward transform, embedding watermark, spherical wavelet inverse transform, and at last resampling the mesh watermarked to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of watermarking against attacks.

  15. Quantum Watermarking Scheme Based on INEQR

    Science.gov (United States)

    Zhou, Ri-Gui; Zhou, Yang; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-04-01

    Quantum watermarking technology protects copyright by embedding an invisible quantum signal in quantum multimedia data. In this paper, a watermarking scheme based on INEQR is presented. Firstly, the watermark image is extended to meet the size requirement of the carrier image. Secondly, swap and XOR operations are applied to the processed pixels; since there is only one bit per pixel, the XOR operation achieves the effect of a simple encryption. Thirdly, both the watermark embedding and extraction operations are described, using the key image, the swap operation and the LSB algorithm. When the embedding is made, the binary key image is changed, which indicates that the watermark has been embedded. Conversely, before the watermark image is extracted, the key's state must be detected: the extraction operation is carried out only when the key's state is |1>. Finally, to validate the proposed scheme, both the peak signal-to-noise ratio (PSNR) and the security of the scheme are analysed.

  16. Imperceptible watermarking for security of fundus images in tele-ophthalmology applications and computer-aided diagnosis of retina diseases.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore

    2017-12-01

    The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services, and accurate identification of medical images and patient verification is essential to prevent errors in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. Patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter, which maintains perceptual transparency for a variety of fundus images, whether healthy or disease-affected. In the proposed method, insertion of the watermark does not affect the automatic image-processing diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system is tested on a comprehensive database of fundus images, and the results are convincing: the proposed method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed system applicable to authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with the creation and study of a method for increasing the stability under JPEG compression of digital watermarks embedded in still images. Method. A new digital watermarking algorithm for still images is presented, which embeds the digital watermark into a still image by modifying frequency coefficients of the discrete Hadamard transform. The frequency coefficients for embedding are chosen on the basis of a sharp change in their values after modification at maximum JPEG compression. The blocks of pixels for embedding are chosen by the value of their entropy. The new algorithm was subjected to an analysis of resistance to image compression, noise, filtering, resizing, color change and histogram equalization. The Elham algorithm, which possesses good resistance to JPEG compression, was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. Imperceptibility of the introduced distortions was defined on the basis of the peak signal-to-noise ratio, which should be not lower than 43 dB. Robustness of the embedded watermark was determined by the Pearson correlation coefficient, whose value should not fall below 0.5 for the minimum allowed stability. The computing experiment comprises: embedding the watermark into each test image with the new algorithm and the Elham algorithm; introducing distortions into the object of protection; extracting the embedded information and comparing it with the original. Parameters of the algorithms were chosen so as to provide approximately the same level of distortion introduced into the images. Main Results. The method of preliminary processing of the digital watermark presented in the paper makes it possible to reduce significantly the volume of information embedded in the still image. The results of the numerical experiment have shown that the …
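
Selecting embedding blocks by pixel entropy, as this record describes, amounts to computing the Shannon entropy of each block's gray-level distribution and keeping the blocks above a threshold. A minimal sketch (block size and threshold are illustrative, not the paper's values):

```python
import math
from collections import Counter

def entropy(block):
    """Shannon entropy (bits) of the pixel-value distribution in a block."""
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_blocks(pixels, width, block=8, threshold=3.0):
    """Return top-left coords of blocks whose entropy exceeds the threshold.

    `pixels` is a flat row-major list of gray levels for a width x height image.
    """
    height = len(pixels) // width
    chosen = []
    for by in range(0, height - block + 1, block):
        for bx in range(0, width - block + 1, block):
            vals = [pixels[(by + y) * width + (bx + x)]
                    for y in range(block) for x in range(block)]
            if entropy(vals) >= threshold:
                chosen.append((bx, by))
    return chosen
```

High-entropy (textured) blocks hide the embedding distortion better than flat ones, which is the rationale for the threshold.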

  18. Watermarking-based protection of remote sensing images: requirements and possible solutions

    Science.gov (United States)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella

    2001-12-01

    Earth observation missions have recently attracted a growing interest from the scientific and industrial communities, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase in market potential, the need arises for protection of image products from non-authorized use, a crucial need especially because the Internet and other public/private networks have become preferred means of data exchange. A crucial issue arising when dealing with digital image distribution is copyright protection. This problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: i) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; ii) analysis of the state of the art, and performance evaluation of existing algorithms in terms of the requirements at the previous point.

  19. Authentication and recovery of medical diagnostic image using dual reversible digital watermarking.

    Science.gov (United States)

    Deng, Xiaohong; Chen, Zhigang; Zeng, Feng; Zhang, Yaoping; Mao, Yimin

    2013-03-01

    This paper proposes a new region-based tampering detection and recovery method that utilizes both reversible digital watermarking and quad-tree decomposition for medical diagnostic image authentication. Firstly, quad-tree decomposition is used to divide the original image into blocks with high homogeneity, and each block's recovery feature is computed as the linear interpolation of its pixels. Secondly, these recovery features are embedded as the first-layer watermark using a simple invertible integer transformation. To enhance the method's security, a logistic chaotic map is exploited to choose each block's reference pixel. The second-layer watermark comprises the quad-tree information and the parameters essential for extraction, and is embedded by LSB replacement. In the authentication phase, the embedded watermark is extracted and the source image is recovered; the same linear interpolation technique is used to recompute each block's feature. Tampering detection and localization are therefore achieved by comparing the extracted feature with the recomputed one, and the extracted feature can be used to recover tampered regions with high similarity to their original state. Experimental results show that, compared with a previous similar scheme, the proposed method not only achieves high embedding capacity and good visual quality of the marked and restored images, but is also more accurate in tampering detection.
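
The quad-tree decomposition step described above (split a block into four quadrants until each block is homogeneous) can be sketched as follows; the homogeneity test here is a simple max-min range against a tolerance, an illustrative choice rather than the paper's exact criterion:

```python
def quadtree(img, x=0, y=0, size=None, tol=8):
    """Split a square grayscale image into homogeneous blocks.

    A block is a leaf when its max-min pixel range is within `tol` (or it is
    a single pixel); otherwise it is split into four quadrants recursively.
    Returns a list of (x, y, size) leaves.
    """
    if size is None:
        size = len(img)
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return [(x, y, size)]
    h = size // 2
    return (quadtree(img, x, y, h, tol) +
            quadtree(img, x + h, y, h, tol) +
            quadtree(img, x, y + h, h, tol) +
            quadtree(img, x + h, y + h, h, tol))
```

Homogeneous regions become large leaves whose pixels are well predicted by interpolation, which is exactly why the recovery feature per block stays compact.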

  20. A Spatial Domain Quantum Watermarking Scheme

    International Nuclear Information System (INIS)

    Wei Zhan-Hong; Chen Xiu-Bo; Niu Xin-Xin; Yang Yi-Xian; Xu Shu-Jiang

    2016-01-01

    This paper presents a spatial domain quantum watermarking scheme. For a quantum watermarking scheme, a feasible quantum circuit is key to achieving it, and this paper gives a feasible quantum circuit for the presented scheme. In order to build the circuit, a new quantum multi-control rotation gate, which can be realized with basic quantum gates, is designed. With this circuit, our scheme can arbitrarily control the embedding position of watermark images on carrier images with the aid of auxiliary qubits. Besides running the given quantum circuit in reverse, the paper gives another watermark extraction algorithm based on quantum measurements. Moreover, this paper also gives a new quantum image scrambling method and its quantum circuit. Unlike other quantum watermarking schemes, all given quantum circuits can be implemented with basic quantum gates. Moreover, the scheme is a spatial domain watermarking scheme and is not based on any transform algorithm on quantum images. Meanwhile, it keeps the watermark secure even if the watermark has been found. With the given quantum circuit, this paper implements simulation experiments for the presented scheme. The experimental results show that the scheme does well in visual quality and embedding capacity.

  1. KEAMANAN CITRA DENGAN WATERMARKING MENGGUNAKAN PENGEMBANGAN ALGORITMA LEAST SIGNIFICANT BIT

    Directory of Open Access Journals (Sweden)

    Kurniawan Kurniawan

    2015-01-01

    Full Text Available Image security is the process of protecting digital images. One method of securing a digital image is watermarking using the Least Significant Bit (LSB) algorithm. The main concept of image security using the LSB algorithm is to replace bit values of the image at specific locations so that a pattern is created; the pattern that results from replacing the bit values is called the watermark. Embedding a watermark in a digital image using the plain LSB algorithm is conceptually simple, so the embedded information is easily lost when attacked, for example by noise or compression. A modification such as a development of the LSB algorithm is therefore needed to reduce the distortion of the watermark information under those attacks. This research is divided into six processes: color extraction of the cover image, busy-area search, watermark embedding, measuring the accuracy of the embedding, watermark extraction, and measuring the accuracy of the extraction. Color extraction obtains the blue color component from the cover image. The watermark information is embedded in a busy area, found by searching for the area with the greatest number of elements in the cover image. The watermark image is then embedded into the cover image to produce the watermarked image using a development of the LSB algorithm, and the accuracy of the embedding is measured by the Peak Signal to Noise Ratio value. Before the watermarked image is extracted, it is tested by adding noise and compressing it into jpg format. The accuracy of the extraction result is measured by the Bit Error Rate value.
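
The basic LSB embedding in the blue channel, together with the two metrics this record reports (PSNR for the embedding, BER for the extraction), can be sketched as follows; the busy-area search and the algorithm's development steps are omitted, and all names here are illustrative:

```python
import math

def embed_lsb(blue, bits):
    """Replace the least significant bit of the first len(bits) blue values."""
    out = list(blue)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(blue, n_bits):
    """Read the watermark bits back out of the LSB plane."""
    return [p & 1 for p in blue[:n_bits]]

def psnr(orig, marked):
    """Peak Signal to Noise Ratio between original and watermarked channel."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, marked)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

def ber(sent, received):
    """Bit Error Rate of the extracted watermark."""
    return sum(a != b for a, b in zip(sent, received)) / len(sent)
```

Since LSB replacement changes each pixel by at most 1 gray level, the PSNR stays very high, which is why plain LSB is imperceptible yet fragile under noise and JPEG compression.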

  2. Chrominance watermark for mobile applications

    Science.gov (United States)

    Reed, Alastair; Rogers, Eliot; James, Dan

    2010-01-01

    Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.

  3. A new approach of watermarking technique by means multichannel wavelet functions

    Science.gov (United States)

    Agreste, Santa; Puccio, Luigia

    2012-12-01

    The digital piracy involving images, music, movies, books, and so on, is a legal problem that has not found a solution. Therefore it becomes crucial to create and to develop methods and numerical algorithms in order to solve the copyright problems. In this paper we focus the attention on a new approach of watermarking technique applied to digital color images. Our aim is to describe the realized watermarking algorithm based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report a large experimentation and some important numerical results in order to show the robustness of the proposed algorithm to geometrical attacks.

  4. Illustration Watermarking for Digital Images: An Investigation of Hierarchical Signal Inheritances for Nested Object-based Embedding

    Science.gov (United States)

    2007-02-23

    Proposes an approach for signal-level watermark inheritance. Subject terms: EOARD, steganography, image fusion, data mining, image watermarking. Related publication: "Block-Luminance and Blue Channel LSB Wet Paper Code Image Watermarking", accepted for publication in: Proceedings of SPIE Electronic Imaging, Security, Steganography, and Watermarking of Multimedia Contents IX, 2007.

  5. Novel Variants of a Histogram Shift-Based Reversible Watermarking Technique for Medical Images to Improve Hiding Capacity

    Directory of Open Access Journals (Sweden)

    Vishakha Kelkar

    2017-01-01

    Full Text Available In telemedicine systems, critical medical data is shared on a public communication channel. This increases the risk of unauthorised access to a patient’s information. This underlines the importance of secrecy and authentication for the medical data. This paper presents two innovative variations of classical histogram shift methods to increase the hiding capacity. The first technique divides the image into nonoverlapping blocks and embeds the watermark individually using the histogram method. The second method separates the region of interest and embeds the watermark only in the region of noninterest. This approach preserves the medical information intact. This method finds its use in critical medical cases. The high PSNR (above 45 dB) obtained for both techniques indicates imperceptibility of the approaches. Experimental results illustrate superiority of the proposed approaches when compared with other methods based on histogram shifting techniques. These techniques improve embedding capacity by 5–15% depending on the image type, without affecting the quality of the watermarked image. Both techniques also enable lossless reconstruction of the watermark and the host medical image. A higher embedding capacity makes the proposed approaches attractive for medical image watermarking applications without compromising the quality of the image.
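    The classical histogram-shift step that both variants build on can be sketched as follows. This is a minimal single-peak version in the style of the original histogram-shifting scheme; the block partitioning and region-of-interest handling described above, and the treatment of images with no empty histogram bin, are omitted.

```python
from collections import Counter

def hs_embed(pixels, bits):
    """Embed bits at the histogram peak (minimal sketch).
    Assumes an empty histogram bin exists above the peak value."""
    hist = Counter(pixels)
    peak = max(range(256), key=lambda v: hist[v])           # most frequent value
    zero = min(v for v in range(256) if hist[v] == 0 and v > peak)
    it = iter(bits)
    marked = []
    for px in pixels:
        if peak < px < zero:
            marked.append(px + 1)                # shift right to free up peak+1
        elif px == peak:
            marked.append(px + next(it, 0))      # bit 1 -> peak+1, bit 0 -> peak
        else:
            marked.append(px)
    return marked, peak, zero

def hs_extract(marked, peak, zero):
    """Recover the bits and losslessly restore the original pixels."""
    bits, restored = [], []
    for px in marked:
        if px == peak:
            bits.append(0); restored.append(peak)
        elif px == peak + 1:
            bits.append(1); restored.append(peak)   # undo the embedded bit
        elif peak + 1 < px <= zero:
            restored.append(px - 1)                 # undo the shift
        else:
            restored.append(px)
    return bits, restored
```

    The hiding capacity of this basic step equals the peak bin's count, which is exactly why the block-wise variant above (one peak per block) raises capacity.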

  6. Novel Variants of a Histogram Shift-Based Reversible Watermarking Technique for Medical Images to Improve Hiding Capacity

    Science.gov (United States)

    Tuckley, Kushal

    2017-01-01

    In telemedicine systems, critical medical data is shared on a public communication channel. This increases the risk of unauthorised access to patient's information. This underlines the importance of secrecy and authentication for the medical data. This paper presents two innovative variations of classical histogram shift methods to increase the hiding capacity. The first technique divides the image into nonoverlapping blocks and embeds the watermark individually using the histogram method. The second method separates the region of interest and embeds the watermark only in the region of noninterest. This approach preserves the medical information intact. This method finds its use in critical medical cases. The high PSNR (above 45 dB) obtained for both techniques indicates imperceptibility of the approaches. Experimental results illustrate superiority of the proposed approaches when compared with other methods based on histogram shifting techniques. These techniques improve embedding capacity by 5–15% depending on the image type, without affecting the quality of the watermarked image. Both techniques also enable lossless reconstruction of the watermark and the host medical image. A higher embedding capacity makes the proposed approaches attractive for medical image watermarking applications without compromising the quality of the image. PMID:29104744

  7. Watermarking Techniques Using Least Significant Bit Algorithm for Digital Image Security Standard Solution- Based Android

    Directory of Open Access Journals (Sweden)

    Ari Muzakir

    2017-05-01

    Full Text Available The ease of distributing digital images over the internet has positive and negative sides, especially for the owner of the original digital image. The positive side is that the owner can rapidly deploy digital image files to sites all over the world. The downside is that, if there is no copyright mark serving to protect the image, its ownership can very easily be claimed by other parties. Watermarking is one solution for protecting the copyright and provenance of a digital image. With digital image watermarking, the copyright of the resulting digital image is protected through the insertion of additional information such as owner information and the authenticity of the digital image. The least significant bit (LSB) algorithm is simple and easy to understand. The results of simulations carried out on an Android smartphone show that the LSB watermark cannot be seen by the naked human eye, meaning there is no significant difference between the original image files and the images into which a watermark has been inserted. The resulting image has dimensions of 640x480 with a bit depth of 32 bits. In addition, black box testing was used to determine the ability of the device (smartphone) to process images using this application.
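    LSB embedding amounts to overwriting the lowest bit of each carrier pixel, which changes a pixel value by at most 1 and is therefore imperceptible. A minimal sketch on a grayscale pixel list (the paper's Android and 32-bit color handling are omitted):

```python
def lsb_embed(pixels, bits):
    # clear each carrier pixel's least significant bit, then OR in a watermark bit
    marked = [(p & ~1) | b for p, b in zip(pixels, bits)]
    return marked + pixels[len(bits):]   # pixels beyond the payload are untouched

def lsb_extract(pixels, n_bits):
    # the watermark is simply the parity of the first n_bits pixels
    return [p & 1 for p in pixels[:n_bits]]
```

    Note that this is a fragile scheme: any lossy recompression destroys the LSB plane, which is why LSB watermarks are used for authentication rather than robust copyright marking.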

  8. Adaptive Watermarking Scheme Using Biased Shift of Quantization Index

    Directory of Open Access Journals (Sweden)

    Young-Ho Seo

    2010-01-01

    Full Text Available We propose a watermark embedding and extracting method for blind watermarking. It uses the characteristics of a scalar quantizer to comply with the recommendations in JPEG, the MPEG series, or JPEG2000. Our method embeds a watermark bit by shifting the corresponding frequency transform coefficient (the watermark position) to a quantization index according to the value of the watermark bit, which prevents the watermark information from being lost during the data compression process. The watermark can be embedded during the quantization process without an additional watermarking step, which means embedding can be performed at the same speed as the compression process. In the embedding process, a Linear Feedback Shift Register (LFSR) is used to hide the watermark information and the watermark positions. The experimental results showed that the proposed method provides sufficient robustness and imperceptibility, the major requirements for watermarking.
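    Shifting a coefficient to a quantization index whose parity encodes the bit is, in essence, odd-even quantization index modulation (QIM). A minimal sketch with a hypothetical step size of 8 (the paper's biased shift and LFSR-based position hiding are omitted):

```python
def qim_embed(coeff, bit, step=8.0):
    # quantize, then move to the nearest index whose parity matches the bit
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1 if coeff >= q * step else -1
    return q * step

def qim_extract(coeff, step=8.0):
    # the bit is the parity of the nearest quantization index;
    # it survives any perturbation smaller than step/2
    return int(round(coeff / step)) % 2
```

    Because the embedded coefficient already sits on a quantizer reproduction level, requantizing it during compression with the same step does not move it, which is the property the abstract exploits.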

  9. Improving digital image watermarking by means of optimal channel selection

    NARCIS (Netherlands)

    Huynh-The, Thien; Banos Legran, Oresti; Lee, Sungyoung; Yoon, Yongik; Le-Tien, Thuong

    2016-01-01

    Supporting safe and resilient authentication and integrity of digital images is of critical importance in a time of enormous creation and sharing of these contents. This paper presents an improved digital image watermarking model based on a coefficient quantization technique that intelligently

  10. Enhancing security of fingerprints through contextual biometric watermarking.

    Science.gov (United States)

    Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M

    2007-07-04

    This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images, and the extracted images from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.

  11. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and yields imperceptibly watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications.   ·         Features new methods of audio watermarking for copyright protection and ownership protection ·         Outl...

  12. Lossless Authentication Watermarking Based on Adaptive Modular Arithmetic

    Directory of Open Access Journals (Sweden)

    H. Yang

    2010-04-01

    Full Text Available Reversible watermarking schemes based on modulo-256 addition may cause annoying salt-and-pepper noise. To avoid the salt-and-pepper noise, a reversible watermarking scheme using human visual perception characteristics and adaptive modular arithmetic is proposed. First, a high-bit residual image is obtained by extracting the most significant bits (MSB) of the original image, and a new spatial visual perception model is built according to the high-bit residual image features. Second, the watermark strength and the adaptive divisor of the modulo operation for each pixel are determined by the visual perception model. Finally, the watermark is embedded into different least significant bits (LSB) of the original image with adaptive modulo addition. The original image can be losslessly recovered if the stego-image has not been altered. Extensive experiments show that the proposed algorithm eliminates the salt-and-pepper noise effectively, and the visual quality of the stego-image with the proposed algorithm is dramatically improved over some existing reversible watermarking algorithms. In particular, the stego-image of this algorithm has a PSNR about 9.9864 dB higher than that of the modulo-256 addition based reversible watermarking scheme.
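    The salt-and-pepper problem with plain modulo-256 addition is easy to see in a two-line sketch: embedding wraps bright pixels around to near-black (and vice versa), even though the operation remains perfectly reversible. The adaptive-divisor scheme above avoids this wraparound by choosing a smaller modulus per pixel; the sketch below shows only the classical fixed-modulus case it improves on.

```python
def mod256_embed(pixel, w):
    # reversible, but a bright pixel can wrap to near-black ("pepper")
    return (pixel + w) % 256

def mod256_recover(stego, w):
    # exact inverse: the original pixel is always recovered
    return (stego - w) % 256
```

    For example, embedding w = 10 into a bright pixel of value 250 wraps it to 4, a visually jarring black dot, while recovery still returns 250 exactly.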

  13. Quantum watermarking scheme through Arnold scrambling and LSB steganography

    Science.gov (United States)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping

    2017-09-01

    Based on the NEQR of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n× 2n and n× n, respectively. Firstly, a classical n× n sized watermark image with 8-bit gray scale is expanded to a 2n× 2n sized image with 2-bit gray scale. Secondly, through the module of PA-MOD N, the expanded watermark image is scrambled to a meaningless image by the Arnold transform. Then, the expanded scrambled image is embedded into the carrier image by the steganography method of LSB. Finally, the time complexity analysis is given. The simulation experiment results show that our quantum circuit has lower time complexity, and the proposed watermarking scheme is superior to others.
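    The Arnold (cat map) scrambling step used above works on any N×N image via the invertible index map (x, y) → (x + y mod N, x + 2y mod N). A classical (non-quantum) sketch of the scramble and its exact inverse, leaving aside the NEQR quantum-circuit realization:

```python
def arnold(img):
    """One iteration of the Arnold cat map on an N x N image (list of rows):
    destination index is (x + y mod N, x + 2y mod N)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def arnold_inv(img):
    """Exact inverse map: (x, y) -> (2x - y mod N, y - x mod N)."""
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(2 * x - y) % n][(y - x) % n] = img[x][y]
    return out
```

    Iterating the map turns the expanded watermark into a noise-like image before the LSB step; the map is periodic, so either repeated application or the explicit inverse above descrambles it.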

  14. A New Quantum Watermarking Based on Quantum Wavelet Transforms

    International Nuclear Information System (INIS)

    Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Farouk, Ahmed

    2017-01-01

    Quantum watermarking is a technique for embedding specific information, usually the owner’s identification, into quantum cover data, for purposes such as copyright protection. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed which includes scrambling, embedding and extracting procedures. The invisibility and robustness performances of the proposed watermarking method are confirmed by simulation. The invisibility of the scheme is examined by the peak-signal-to-noise ratio (PSNR) and the histogram calculation. Furthermore, the robustness of the scheme is analyzed by the Bit Error Rate (BER) and the two-dimensional correlation (Corr 2-D) calculation. The simulation results indicate that the proposed watermarking scheme provides not only acceptable visual quality but also good resistance against different types of attack. (paper)

  15. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    Full Text Available In this paper we propose a visible watermarking algorithm, in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw image representation for images captured by the single sensor digital cameras equipped in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This method thus enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible yet not obtrusive in colour and grey-scale images. Unlike other Bayer CFA domain visible watermarking algorithms, which support only binary watermark patterns, the proposed algorithm allows grey-scale and colour images as watermark patterns. Besides copyright protection, it is suitable for advertisement purposes, such as digital libraries and e-commerce.

  16. A joint FED watermarking system using spatial fusion for verifying the security issues of teleradiology.

    Science.gov (United States)

    Viswanathan, P; Krishna, P Venkata

    2014-05-01

    Teleradiology allows transmission of medical images for clinical data interpretation to provide improved e-health care access, delivery, and standards. The remote transmission raises various ethical and legal issues such as image retention, fraud, privacy, and malpractice liability. A joint FED watermarking system, that is, a joint fingerprint/encryption/dual watermarking system, is proposed for addressing these issues. The system combines a region-based substitution dual watermarking algorithm using spatial fusion, a stream cipher algorithm using a symmetric key, and a fingerprint verification algorithm using invariants. This paper aims to give access to the outcomes of medical images with confidentiality, availability, integrity, and provenance. The watermarking, encryption, and fingerprint enrollment are conducted jointly at the protection stage, so that the extraction, decryption, and verification can be applied independently. The dual watermarking system, introducing two different embedding schemes, one used for patient data and the other for fingerprint features, reduces the difficulty of maintaining multiple documents such as authentication data, personnel and diagnosis data, and medical images. The spatial fusion algorithm, which determines the region of embedding using a threshold from the image to embed the encrypted patient data, follows the exact rules of fusion, resulting in better quality than other fusion techniques. The four-step stream cipher algorithm using a symmetric key for encrypting the patient data, together with the fingerprint verification system using algebraic invariants, improves the robustness of the medical information. The proposed scheme was evaluated for security and quality on DICOM medical images and performed well in terms of resistance to attacks, quality index, and imperceptibility.

  17. A Robust Blind Quantum Copyright Protection Method for Colored Images Based on Owner's Signature

    Science.gov (United States)

    Heidari, Shahrokh; Gheibi, Reza; Houshmand, Monireh; Nagata, Koji

    2017-08-01

    Watermarking is the imperceptible embedding of watermark bits into multimedia data for use in different applications. Among all its applications, copyright protection is the most prominent: it conceals information about the owner in the carrier so as to prohibit others from asserting copyright. This application requires a high level of robustness. In this paper, a new blind quantum copyright protection method based on the owner's signature in RGB images is proposed. The method utilizes one of the RGB channels as an indicator, and the two remaining channels are used for embedding information about the owner. In our contribution the owner's signature is treated as text. Therefore, in order to embed it in a colored image as a watermark, a new quantum representation of text based on the ASCII character set is offered. Experimental results, analyzed in the MATLAB environment, show that the presented scheme performs well against attacks and can be used to find out who the real owner is. Finally, the discussed quantum copyright protection method is compared with a related work; our analysis confirms that the presented scheme is more secure and applicable than the previous ones currently found in the literature.

  18. Further attacks on Yeung-Mintzer fragile watermarking scheme

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav; Memon, Nasir D.

    2000-05-01

    In this paper, we describe new and improved attacks on the authentication scheme previously proposed by Yeung and Mintzer. Previous attacks assumed that the binary watermark logo inserted in an image for the purposes of authentication was known. Here we remove that assumption and show how the scheme is still vulnerable, even if the binary logo is not known but the attacker has access to multiple images that have been watermarked with the same secret key and contain the same (but unknown) logo. We present two attacks. The first attack infers the secret watermark insertion function and the binary logo, given multiple images authenticated with the same key and containing the same logo. We show that a very good approximation to the logo and watermark insertion function can be constructed using as few as two images. With color images, one needs many more images, nevertheless the attack is still feasible. The second attack we present, which we call the 'collage-attack' is a variation of the Holliman-Memon counterfeiting attack. The proposed variation does not require knowledge of the watermark logo and produces counterfeits of superior quality by means of a suitable dithering process that we develop.

  19. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness. Additional attacks are added to it continuously. To enable application based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms to be included in the test environment. We discuss the effect of different lossy compression algorithms like MPEG-2 audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes regarding the basic characteristics of the audio data like spectrum or average power and on removal of embedded watermarks. Furthermore we compare results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psycho acoustic model has to be applied. Our model can be used for audio watermarking evaluation of numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  20. Lossless Data Embedding—New Paradigm in Digital Watermarking

    Directory of Open Access Journals (Sweden)

    Jessica Fridrich

    2002-02-01

    Full Text Available One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted by the data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.

  1. Hiding correlation-based Watermark templates using secret modulation

    NARCIS (Netherlands)

    Lichtenauer, J.; Setyawan, I.; Lagendijk, R.

    2004-01-01

    A possible solution to the difficult problem of geometrical distortion of watermarked images in a blind watermarking scenario is to use a template grid in the autocorrelation function. However, the important drawback of this method is that the watermark itself can be estimated and subtracted, or the

  2. A joint asymmetric watermarking and image encryption scheme

    Science.gov (United States)

    Boato, G.; Conotter, V.; De Natale, F. G. B.; Fontanari, C.

    2008-02-01

    Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing both to cipher watermarked data and to mark encrypted data without interfering with the detection process. In order to demonstrate the effectiveness of the above principles, we present an explicit example where the watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random permutation, are integrated in a commutative scheme.

  3. A Secure Watermarking Scheme for Buyer-Seller Identification and Copyright Protection

    Science.gov (United States)

    Ahmed, Fawad; Sattar, Farook; Siyal, Mohammed Yakoob; Yu, Dan

    2006-12-01

    We propose a secure watermarking scheme that integrates watermarking with cryptography for addressing some important issues in copyright protection. We address three copyright protection issues—buyer-seller identification, copyright infringement, and ownership verification. By buyer-seller identification, we mean that a successful watermark extraction at the buyer's end will reveal the identities of the buyer and seller of the watermarked image. For copyright infringement, our proposed scheme enables the seller to identify the specific buyer from whom an illegal copy of the watermarked image has originated, and further prove this fact to a third party. For multiple ownership claims, our scheme enables a legal seller to claim his/her ownership in the court of law. We will show that the combination of cryptography with watermarking not only increases the security of the overall scheme, but it also enables to associate identities of buyer/seller with their respective watermarked images.

  4. Improved Bit Rate Control for Real-Time MPEG Watermarking

    Directory of Open Access Journals (Sweden)

    Pranata Sugiri

    2004-01-01

    Full Text Available The alteration of compressed video bitstream due to embedding of digital watermark tends to produce unpredictable video bit rate variations which may in turn lead to video playback buffer overflow/underflow or transmission bandwidth violation problems. This paper presents a novel bit rate control technique for real-time MPEG watermarking applications. In our experiments, spread spectrum watermarks are embedded in the quantized DCT domain without requantization and motion reestimation to achieve fast watermarking. The proposed bit rate control scheme evaluates the combined bit lengths of a set of multiple watermarked VLC codewords, and successively replaces watermarked VLC codewords having the largest increase in bit length with their corresponding unmarked VLC codewords until a target bit length is achieved. The proposed method offers flexibility and scalability, which are neglected by similar works reported in the literature. Experimental results show that the proposed bit rate control scheme is effective in meeting the bit rate targets and capable of improving the watermark detection robustness for different video contents compressed at different bit rates.

  5. A Lightweight Buyer-Seller Watermarking Protocol

    Directory of Open Access Journals (Sweden)

    Yongdong Wu

    2008-01-01

    Full Text Available The buyer-seller watermarking protocol enables a seller to successfully identify a traitor from a pirated copy, while preventing the seller from framing an innocent buyer. Based on finite field theory and the homomorphic property of public key cryptosystems such as RSA, several buyer-seller watermarking protocols (N. Memon and P. W. Wong (2001) and C.-L. Lei et al. (2004)) have been proposed previously. However, those protocols require not only large computational power but also substantial network bandwidth. In this paper, we introduce a new buyer-seller protocol that overcomes those weaknesses by managing the watermarks. Compared with the earlier protocols, ours is n times faster in terms of computation, where n is the number of watermark elements, while incurring only O(1/lN) times the communication overhead given the finite field parameter lN. In addition, the quality of the watermarked image generated with our method is better, using the same watermark strength.

  6. Digital watermarking techniques and trends

    CERN Document Server

    Nematollahi, Mohammad Ali; Rosales, Hamurabi Gamboa

    2017-01-01

    This book presents state-of-the-art applications of digital watermarking in audio, speech, image, video, 3D mesh graph, text, software, natural language, ontology, network stream, relational database, XML, and hardware IPs. It also presents new and recent algorithms in digital watermarking for copyright protection and discusses future trends in the field. Today, the illegal manipulation of genuine digital objects and products represents a considerable problem in the digital world. Offering an effective solution, digital watermarking can be applied to protect intellectual property, as well as to fingerprint content and to enhance security and proof of authentication over unsecured channels.

  7. Design of an H.264/SVC resilient watermarking scheme

    Science.gov (United States)

    Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter

    2010-01-01

    The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.

  8. A comparative study of chaotic and white noise signals in digital watermarking

    International Nuclear Information System (INIS)

    Mooney, Aidan; Keating, John G.; Pitas, Ioannis

    2008-01-01

    Digital watermarking is an increasingly important discipline, especially in the modern electronically driven world. Watermarking aims to embed a piece of information into digital documents which their owner can use to prove, at a later stage, that the document is theirs. In this paper, performance analysis of watermarking schemes is carried out on white noise sequences and chaotic sequences used for watermark generation. Pseudorandom sequences are compared with chaotic sequences generated from the chaotic skew tent map. In particular, analysis is performed on highpass signals generated from both these watermark generation schemes, along with analysis of lowpass watermarks and white noise watermarks. This analysis focuses on the watermarked images after they have been subjected to common image distortion attacks. It is shown that watermarks generated from highpass chaotic signals perform better than highpass noise signals in the presence of such attacks. It is also shown that watermarks generated from lowpass chaotic signals perform better than the other signal types analysed.
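    A chaotic watermark sequence of the kind compared here can be generated from the skew tent map, x_{n+1} = x_n / a for x_n < a and (1 - x_n) / (1 - a) otherwise, thresholded into a bipolar sequence. A minimal sketch with a hypothetical skew parameter and seed; the highpass/lowpass filtering analysed in the paper is a separate step not shown.

```python
def skew_tent(x, a):
    # piecewise-linear chaotic map on (0, 1) with skew parameter a
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def chaotic_watermark(seed, a, n):
    # iterate the map from a secret seed and threshold at 0.5
    # to obtain a key-reproducible bipolar (+/-1) watermark sequence
    seq, x = [], seed
    for _ in range(n):
        x = skew_tent(x, a)
        seq.append(1 if x > 0.5 else -1)
    return seq
```

    The seed and skew parameter act as the watermark key: the owner can regenerate the identical sequence for correlation-based detection, while the map's sensitivity to initial conditions makes the sequence hard to guess.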

  9. Dual watermarking technique with multiple biometric watermarks

    Indian Academy of Sciences (India)

    affect the visual quality of the original art. On the contrary, removable visible watermarking .... Significant motivation for using biometric features such as face, voice and signature as a watermark is that face and ... These are the major reasons which motivated us to propose multimodal biometric watermarking. When the ...

  10. A blind video watermarking scheme resistant to rotation and collusion attacks

    Directory of Open Access Journals (Sweden)

    Amlan Karmakar

    2016-04-01

    Full Text Available In this paper, a Discrete Cosine Transform (DCT) based blind video watermarking algorithm is proposed, which is perceptually invisible and robust against rotation and collusion attacks. To make the scheme resistant against rotation, the watermark is embedded within square blocks placed in the middle of every luminance channel. Then Zernike moments of those square blocks are calculated. The rotation invariance property of the complex Zernike moments is exploited to predict the rotation angle of the video at the time of extraction of the watermark bits. To make the scheme robust against collusion, the scheme is designed so that the embedding blocks vary between successive frames of the video. A Pseudo Random Number (PRN) generator and a permutation vector are used to achieve this goal. The experimental results show that the scheme is robust against conventional video attacks, rotation attacks and collusion attacks.

  11. Efficiently Synchronized Spread-Spectrum Audio Watermarking with Improved Psychoacoustic Model

    Directory of Open Access Journals (Sweden)

    Xing He

    2008-01-01

    Full Text Available This paper presents an audio watermarking scheme which is based on an efficiently synchronized spread-spectrum technique and a new psychoacoustic model computed using the discrete wavelet packet transform. The psychoacoustic model takes advantage of the multiresolution analysis of a wavelet transform, which closely approximates the standard critical band partition. The goal of this model is to include an accurate time-frequency analysis and to calculate both the frequency and temporal masking thresholds directly in the wavelet domain. Experimental results show that this watermarking scheme can successfully embed watermarks into digital audio without introducing audible distortion. Several common watermark attacks were applied and the results indicate that the method is very robust to those attacks.
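    The spread-spectrum core of such a scheme, a key-seeded pseudo-noise sequence added to the host and detected by correlation, can be sketched as follows. The psychoacoustic shaping and efficient synchronization that the paper actually contributes are omitted, and the strength `alpha` is an illustrative value, not a perceptually derived one.

```python
import math
import random

def pn_sequence(key, n):
    # key-seeded +/-1 sequence; the detector regenerates it from the same key
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def ss_embed(host, bit, key, alpha=0.5):
    pn = pn_sequence(key, len(host))
    s = 1.0 if bit else -1.0
    return [x + alpha * s * p for x, p in zip(host, pn)]

def ss_detect(signal, key):
    pn = pn_sequence(key, len(signal))
    # the host is nearly uncorrelated with the PN sequence, so the
    # sign of the normalized correlation carries the embedded bit
    corr = sum(x * p for x, p in zip(signal, pn)) / len(signal)
    return 1 if corr > 0 else 0
```

    In a real system `alpha` would be shaped per frequency band by the masking thresholds of the psychoacoustic model, keeping the added noise below audibility while preserving the correlation margin.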

  12. Encryption and watermark-treated medical image against hacking disease-An immune convention in spatial and frequency domains.

    Science.gov (United States)

    Lakshmi, C; Thenmozhi, K; Rayappan, John Bosco Balaguru; Amirtharajan, Rengarajan

    2018-06-01

    Digital Imaging and Communications in Medicine (DICOM) is one of the significant formats used worldwide for the representation of medical images. Undoubtedly, medical-image security plays a crucial role in telemedicine applications. Merging encryption and watermarking in medical-image protection paves the way for enhanced authentication and safer transmission over open channels. In this context, the present work on DICOM image encryption has employed a fuzzy chaotic map for encryption and the Discrete Wavelet Transform (DWT) for watermarking. The proposed approach overcomes the limitation of the Arnold transform, one of the most utilised confusion mechanisms in image ciphering. Various metrics have substantiated the effectiveness of the proposed medical-image encryption algorithm. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Attacks, applications, and evaluation of known watermarking algorithms with Checkmark

    Science.gov (United States)

    Meerwald, Peter; Pereira, Shelby

    2002-04-01

    The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not already considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging and watermarking of logos. Video frame watermarking includes low compression attacks and distortions which warp the edges of the video, as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered, and furthermore it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small sizes and particularly compression, scaling, aspect ratio and other small distortions.

  14. Two-layer fragile watermarking method secured with chaotic map for authentication of digital Holy Quran.

    Science.gov (United States)

    Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M

    2014-01-01

    This paper presents a novel watermarking method to facilitate the authentication and detection of image forgery on Quran images. Two layers of embedding, in the wavelet and spatial domains, are introduced to enhance the sensitivity of the fragile watermark and to defend against attacks. The discrete wavelet transform is applied to decompose the host image into wavelet coefficients prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficients are inverted back to the spatial domain, where the least significant bits are utilized to hide another watermark. A chaotic map is utilized to blur the watermark to make it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experiment results confirm that the proposed method is fragile and has superior tampering detection even when the tampered area is very small.
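A minimal sketch of the spatial-domain layer, with a logistic-map permutation standing in for the paper's chaotic scrambling; the key parameter `key_x0`, the map constant and the flat pixel layout are assumptions made for illustration:

```python
import numpy as np

def logistic_perm(n, x0, r=3.99):
    """Chaotic permutation: iterate the logistic map, then argsort the orbit."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return np.argsort(xs)

def embed_lsb(pixels, bits, key_x0):
    """Scramble the watermark bits with the chaotic permutation, then hide
    them in the least significant bits of the first len(bits) pixels."""
    perm = logistic_perm(len(bits), key_x0)
    out = pixels.copy()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | bits[perm]
    return out

def extract_lsb(pixels, n_bits, key_x0):
    perm = logistic_perm(n_bits, key_x0)
    bits = np.empty(n_bits, dtype=np.uint8)
    bits[perm] = pixels[:n_bits] & 1      # invert the secret permutation
    return bits

pixels = np.arange(64, dtype=np.uint8)
wm = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
marked = embed_lsb(pixels, wm, key_x0=0.37)
```

Without the key `x0`, an attacker who reads the LSB plane sees only the scrambled bit order.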

  15. Object-Oriented Wavelet-Layered Digital Watermarking Technique

    Institute of Scientific and Technical Information of China (English)

    LIU Xiao-yun; YU Jue-bang; LI Ming-yu

    2005-01-01

    In this paper, an object-oriented digital watermarking technique for still images is proposed in the wavelet domain. Because the human eye perceives different regions of an image with different acuity, the image is divided into regions that are of interest to human vision and regions that are not. Using the positional relativity and the differing visual sensitivity of the multiresolution wavelet subbands, the image is watermarked with a layered embedding technique. Experimental results show that the proposed technique successfully survives image processing operations, additive noise and JPEG compression.

  16. Watermarking security

    CERN Document Server

    Bas, Patrick; Cayre, François; Doërr, Gwenaël; Mathon, Benjamin

    2016-01-01

    This book explains how to measure the security of a watermarking scheme, how to design secure schemes but also how to attack popular watermarking schemes. This book gathers the most recent achievements in the field of watermarking security by considering both parts of this cat and mouse game. This book is useful to industrial practitioners who would like to increase the security of their watermarking applications and for academics to quickly master this fascinating domain.

  17. Digital watermarking opportunities enabled by mobile media proliferation

    Science.gov (United States)

    Modro, Sierra; Sharma, Ravi K.

    2009-02-01

    Consumer usages of mobile devices and electronic media are changing. Mobile devices now include increased computational capabilities, mobile broadband access, better integrated sensors, and higher resolution screens. These enhanced features are driving increased consumption of media such as images, maps, e-books, audio, video, and games. As users become more accustomed to using mobile devices for media, opportunities arise for new digital watermarking usage models. For example, transient media, like images being displayed on screens, could be watermarked to provide a link between mobile devices. Applications based on these emerging usage models utilizing watermarking can provide richer user experiences and drive increased media consumption. We describe the enabling factors and highlight a few of the usage models and new opportunities. We also outline how the new opportunities are driving further innovation in watermarking technologies. We discuss challenges in market adoption of applications based on these usage models.

  18. Information hiding techniques for steganography and digital watermarking

    CERN Document Server

    Katzenbeisser, Stefan

    2000-01-01

    Steganography, a means by which two or more parties may communicate using "invisible" or "subliminal" communication, and watermarking, a means of hiding copyright data in images, are becoming necessary components of commercial multimedia applications that are subject to illegal use. This new book is the first comprehensive survey of steganography and watermarking and their application to modern communications and multimedia. Handbook of Information Hiding: Steganography and Watermarking helps you understand steganography and the history of this previously neglected element of cryptography.

  19. Spread spectrum image data hiding in the encrypted discrete cosine transform coefficients

    Science.gov (United States)

    Zhang, Xiaoqiang; Wang, Z. Jane

    2013-10-01

    Digital watermarking and data hiding are important tools for digital rights protection of media data. Spread spectrum (SS)-based watermarking and data-hiding approaches are popular due to their outstanding robustness, but their security might not be sufficient. To improve the security of SS, an SS-based image data-hiding approach is proposed by encrypting the discrete cosine transform coefficients of the host image with the piecewise linear chaotic map before the operation of watermark embedding. To evaluate the performance of the proposed approach, simulations and analyses of its robustness and security are carried out. The average bit-error-rate values on 100 real images from the Berkeley segmentation dataset under JPEG compression, additive Gaussian noise, salt-and-pepper noise, and cropping attacks are reported. Experimental results show that the proposed approach maintains the high robustness of traditional SS schemes and, at the same time, improves the security. The proposed approach can greatly extend the key space of traditional SS schemes and thus resist brute-force attacks and unauthorized watermark detection.
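The piecewise linear chaotic map (PWLCM) at the heart of the encryption step can be sketched as below; using its keystream for keyed sign flipping of DCT coefficients is a simplified illustration (the paper's actual coefficient encryption may differ), and the key values `x` and `p` are arbitrary:

```python
import numpy as np

def pwlcm_stream(n, x=0.23, p=0.31):
    """Piecewise linear chaotic map keystream in (0, 1); (x, p) is the key."""
    out = np.empty(n)
    for i in range(n):
        if x < p:
            x = x / p
        elif x < 0.5:
            x = (x - p) / (0.5 - p)
        else:
            x = 1.0 - x                   # the map is symmetric about 0.5
            x = x / p if x < p else (x - p) / (0.5 - p)
        out[i] = x
    return out

# encrypt DCT coefficients by chaotic sign flipping; the same keyed flip
# applied twice restores the coefficients (it is an involution)
coeffs = np.array([12.5, -3.2, 7.1, 0.4, -9.9, 2.2])
signs = np.where(pwlcm_stream(len(coeffs)) < 0.5, -1.0, 1.0)
encrypted = coeffs * signs
decrypted = encrypted * signs
```

A correct key reproduces the same sign sequence, so SS embedding and extraction can operate on the encrypted coefficients transparently.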

  20. Copyright protection of remote sensing imagery by means of digital watermarking

    Science.gov (United States)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella; Zanini, R.

    2001-12-01

    The demand for remote sensing data has increased dramatically, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. As in many other fields, along with the increase of market potential and product diffusion, the need arises for some sort of protection of the image products from unauthorized use. Such a need is a crucial one, especially because the Internet and other public/private networks have become preferred and effective means of data exchange. An important issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. Before applying watermarking techniques developed for multimedia applications to remote sensing applications, it is important that the requirements imposed by remote sensing imagery are carefully analyzed to investigate whether they are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: (1) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; (2) discussion of a case study where the performance of two popular, state-of-the-art watermarking techniques is evaluated in light of the requirements at the previous point.

  1. Dual watermarking scheme for secure buyer-seller watermarking protocol

    Science.gov (United States)

    Mehra, Neelesh; Shandilya, Madhu

    2012-04-01

    A buyer-seller watermarking protocol utilizes watermarking along with cryptography for copyright and copy protection for the seller, while also preserving the buyer's right to privacy. It enables a seller to successfully identify a malicious buyer from a pirated copy, while preventing the seller from framing an innocent buyer and providing anonymity to the buyer. Up to now many buyer-seller watermarking protocols have been proposed which utilize more and more cryptographic schemes to solve common problems such as the customer's rights, the unbinding problem, the buyer's anonymity and the buyer's participation in dispute resolution. But most of them are infeasible since the buyer may not have knowledge of cryptography. Another issue is that the number of steps required to complete the protocols is large; a buyer needs to interact with different parties many times in these protocols, which is very inconvenient. To overcome these drawbacks, in this paper we propose a dual watermarking scheme in the encrypted domain. Since neither watermark is generated by the buyer, even a lay buyer can use the protocol.

  2. Watermarking spot colors in packaging

    Science.gov (United States)

    Reed, Alastair; Filler, Tomáš; Falkenstern, Kristyn; Bai, Yang

    2015-03-01

    In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve the check-out efficiency and customer experience for retailers. Digimarc Barcode is a machine readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.

  3. Digital watermarks in electronic document circulation

    Directory of Open Access Journals (Sweden)

    Vitaliy Grigorievich Ivanenko

    2017-07-01

    Full Text Available This paper reviews different protection methods for electronic documents and their strengths and weaknesses. Common attacks on electronic documents are analyzed. Digital signatures and ways of eliminating their flaws are studied. Different digital watermark embedding methods are described; they are divided into two types. The solution to the protection of electronic documents is based on embedding digital watermarks. A comparative analysis of these methods is given. As a result, the most convenient method is suggested: reversible data hiding. It is remarked that this technique excels at securing the integrity of the container and its digital watermark. A digital watermark embedding system should prevent illegal access to the digital watermark and its container. Digital watermark requirements for electronic document protection are produced. The legal aspect of copyright protection is reviewed. The advantages of embedding digital watermarks in electronic documents are presented. Modern reversible data hiding techniques are studied. Distinctive features of digital watermark use in Russia are highlighted. A digital watermark serves as an additional layer of defense that is in most cases unknown to the violator. With an embedded digital watermark, it is impossible to misappropriate the authorship of the document, even if the intruder signs his name on it. Therefore, digital watermarks can act as an effective additional tool to protect electronic documents.

  4. On the pinned field image binarization for signature generation in image ownership verification method

    Directory of Open Access Journals (Sweden)

    Chang Hsuan

    2011-01-01

    Full Text Available Abstract The issue of pinned field image binarization for signature generation in the ownership verification of the protected image is investigated. The pinned field explores the texture information of the protected image and can be employed to enhance the watermark robustness. In the proposed method, four optimization schemes are utilized to determine the threshold values for transforming the pinned field into a binary feature image, which is then utilized to generate an effective signature image. Experimental results show that the utilization of optimization schemes can significantly improve the signature robustness over the previous method (Lee and Chang, Opt. Eng. 49(9), 097005, 2010). While considering both the watermark retrieval rate and the computation speed, the genetic algorithm is strongly recommended. In addition, compared with Chang and Lin's scheme (J. Syst. Softw. 81(7), 1118-1129, 2008), the proposed scheme also has better performance.

  5. Digital Watermarks Using Discrete Wavelet Transformation and Spectrum Spreading

    Directory of Open Access Journals (Sweden)

    Ryousuke Takai

    2003-12-01

    Full Text Available In recent years, digital media has made rapid progress through the development of digital technology. Digital media normally assures fairly high quality, yet can be easily reproduced in perfect form. This perfect reproducibility is an advantage from a certain point of view, while it also produces an essential disadvantage, since digital media is frequently copied illegally. Thus the problem of copyright protection becomes a very important issue. A solution to this problem is to embed digital watermarks that are not clearly perceived by ordinary viewers but represent the proper rights of the original product. In our method, the image data are transformed into the frequency domain by the Discrete Wavelet Transform and analyzed by the multiresolution approximation [1]. Further, spectrum spreading is executed by using PN-sequences. Choi and Aizawa [7] embed watermarks by using block correlation of DCT coefficients. Thus, we apply the Discrete Cosine Transformation, abbreviated to DCT, instead of the Fourier transformation in order to embed watermarks. If the variance of a block is high, we decide that the block has a larger capacity for visual fluctuations. Hence, we may embed stronger watermarks there, which gives resistance to image processing such as attacks and/or compression.

  6. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).

  7. A compressive sensing based secure watermark detection and privacy preserving storage framework.

    Science.gov (United States)

    Qia Wang; Wenjun Zeng; Jun Tian

    2014-03-01

    Privacy is a critical issue when the data owners outsource data storage or processing to a third party computing service, such as the cloud. In this paper, we identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy preserving multimedia data storage. We then propose a compressive sensing (CS)-based framework using secure multiparty computation (MPC) protocols to address such a requirement. In our framework, the multimedia data and secret watermark pattern are presented to the cloud for secure watermark detection in a CS domain to protect the privacy. During CS transformation, the privacy of the CS matrix and the watermark pattern is protected by the MPC protocols under the semi-honest security model. We derive the expected watermark detection performance in the CS domain, given the target image, watermark pattern, and the size of the CS matrix (but without the CS matrix itself). The correctness of the derived performance has been validated by our experiments. Our theoretical analysis and experimental results show that secure watermark detection in the CS domain is feasible. Our framework can also be extended to other collaborative secure signal processing and data-mining applications in the cloud.
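The detection side can be illustrated with a plain (non-MPC) sketch: the watermark is detected on compressive measurements y = Φx without access to x itself. The MPC protocols that protect Φ and the watermark pattern are omitted, and the sizes, strength `alpha` and Gaussian Φ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                     # signal length, number of CS measurements
x = rng.normal(0, 1, n)              # host image, vectorized (illustrative)
w = rng.choice([-1.0, 1.0], n)       # secret watermark pattern
alpha = 0.1                          # embedding strength
phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # CS measurement matrix

y_host = phi @ x                     # measurements of the unmarked image
y_marked = phi @ (x + alpha * w)     # measurements of the marked image

# detector correlates the measurements against the projected watermark
pw = phi @ w
t_host = float(y_host @ pw) / float(pw @ pw)
t_marked = float(y_marked @ pw) / float(pw @ pw)
```

By linearity, the marked statistic exceeds the unmarked one by exactly `alpha`, which is why thresholding the statistic works in the CS domain.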

  8. A text zero-watermarking method based on keyword dense interval

    Science.gov (United States)

    Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin

    2017-07-01

    Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, former methods rarely focused on the key content of the digital carrier. The idea of protecting the key content is more targeted and can be applied to different kinds of digital information, including text, image and video. In this paper, we use text as the research object and propose a text zero-watermarking method which uses the keyword dense interval (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of KDI and giving the method of KDI extraction. Second, we design the detection model, which includes secondary generation of the zero-watermark and the similarity computing method for keyword distribution. Experiments are carried out, and the results show that the proposed method gives better performance than other available methods, especially against sentence transformation and synonym substitution attacks.
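A toy sketch of what extracting keyword dense intervals might look like; the grouping rule, the `max_gap` parameter and the keyword set are invented for illustration and are not the paper's actual KDI definition:

```python
def keyword_dense_intervals(words, keywords, max_gap=3):
    """Group keyword occurrences whose positions are at most `max_gap`
    words apart into dense intervals (start, end) -- a simplified sketch
    of isolating the key content a zero-watermark could be built from."""
    hits = [i for i, w in enumerate(words) if w.lower() in keywords]
    intervals = []
    for i in hits:
        if intervals and i - intervals[-1][1] <= max_gap:
            intervals[-1][1] = i          # extend the current interval
        else:
            intervals.append([i, i])      # open a new interval
    return [tuple(iv) for iv in intervals]

text = ("the secure watermark scheme embeds a robust watermark "
        "while other text follows and later a watermark appears")
words = text.split()
ivs = keyword_dense_intervals(words, {"watermark", "robust", "secure"})
```

A zero-watermark built from such intervals leaves the text itself unmodified, which is the defining property of zero-watermarking.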

  9. Digital watermarking and steganography fundamentals and techniques

    CERN Document Server

    Shih, Frank Y

    2007-01-01

    Introduction Digital Watermarking Digital Steganography Differences between Watermarking and Steganography A Brief History Appendix: Selected List of Books on Watermarking and Steganography Classification in Digital Watermarking Classification Based on Characteristics Classification Based on Applications Mathematical Preliminaries  Least-Significant-Bit Substitution Discrete Fourier Transform (DFT) Discrete Cosine Transform Discrete Wavelet Transform Random Sequence Generation  The Chaotic M

  10. Reversible Watermarking Using Prediction-Error Expansion and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Guangyong Gao

    2015-01-01

    Full Text Available Currently, research on reversible watermarking focuses on decreasing image distortion. Aiming at this issue, this paper presents an improved method to lower the embedding distortion based on the prediction-error expansion (PE) technique. Firstly, the extreme learning machine (ELM), with its good generalization ability, is utilized to enhance the prediction accuracy for image pixel values during watermark embedding, and the lower prediction error results in reduced image distortion. Moreover, an optimization operation for strengthening the performance of the ELM is taken to further lessen the embedding distortion. With two popular predictors, that is, the median edge detector (MED) predictor and the gradient-adjusted predictor (GAP), the experimental results for the classical images and the Kodak image set indicate that the proposed scheme achieves lower image distortion than the classical PE scheme proposed by Thodi et al. and outperforms the improvement method presented by Coltuc and other existing approaches.
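The core prediction-error expansion step (e' = 2e + b) can be sketched on a 1-D signal; the previous sample stands in for the MED/GAP/ELM predictors, and the overflow handling and location map of practical schemes are omitted:

```python
import numpy as np

def pe_embed(pixels, payload):
    """e' = 2*e + b: double the prediction error and put the payload bit
    in its LSB.  Predictor: the already-marked previous sample, so the
    decoder can recompute the same prediction."""
    out = np.asarray(pixels, dtype=np.int64).copy()
    for i, b in enumerate(payload, start=1):
        e = out[i] - out[i - 1]
        out[i] = out[i - 1] + 2 * e + b
    return out

def pe_extract(marked):
    """Recover both the payload and the exact original samples."""
    m = np.asarray(marked, dtype=np.int64)
    rec = m.copy()
    payload = []
    for i in range(1, len(m)):
        e2 = int(m[i] - m[i - 1])
        b = e2 & 1                        # payload bit is the error's LSB
        payload.append(b)
        rec[i] = m[i - 1] + (e2 - b) // 2 # undo the expansion exactly
    return rec, payload

pixels = np.array([100, 101, 99, 100])
marked = pe_embed(pixels, [1, 0, 1])
recovered, extracted = pe_extract(marked)
```

The smaller the prediction error `e`, the smaller the change `2e + b`, which is exactly why a better predictor (such as the ELM above) lowers embedding distortion.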

  11. Dual plane multiple spatial watermarking with self-encryption

    Indian Academy of Sciences (India)

    media are serious challenges. That is why ... ficient to represent the identity of owner is embedded into image and at the .... tion is dependent on user preference for ex-general social networking user may require watermarking but with less.

  12. Security protection of DICOM medical images using dual-layer reversible watermarking with tamper detection capability.

    Science.gov (United States)

    Tan, Chun Kiat; Ng, Jason Changwei; Xu, Xiaotian; Poh, Chueh Loo; Guan, Yong Liang; Sheah, Kenneth

    2011-06-01

    Teleradiology applications and universal availability of patient records using web-based technology are rapidly gaining importance. Consequently, digital medical image security has become an important issue when images and their pertinent patient information are transmitted across public networks, such as the Internet. Health mandates such as the Health Insurance Portability and Accountability Act require healthcare providers to adhere to security measures in order to protect sensitive patient information. This paper presents a fully reversible, dual-layer watermarking scheme with tamper detection capability for medical images. The scheme utilizes concepts of public-key cryptography and reversible data-hiding technique. The scheme was tested using medical images in DICOM format. The results show that the scheme is able to ensure image authenticity and integrity, and to locate tampered regions in the images.

  13. Digital audio watermarking fundamentals, techniques and challenges

    CERN Document Server

    Xiang, Yong; Yan, Bin

    2017-01-01

    This book offers comprehensive coverage on the most important aspects of audio watermarking, from classic techniques to the latest advances, from commonly investigated topics to emerging research subdomains, and from the research and development achievements to date, to current limitations, challenges, and future directions. It also addresses key topics such as reversible audio watermarking, audio watermarking with encryption, and imperceptibility control methods. The book sets itself apart from the existing literature in three main ways. Firstly, it not only reviews classical categories of audio watermarking techniques, but also provides detailed descriptions, analysis and experimental results of the latest work in each category. Secondly, it highlights the emerging research topic of reversible audio watermarking, including recent research trends, unique features, and the potentials of this subdomain. Lastly, the joint consideration of audio watermarking and encryption is also reviewed. With the help of this...

  14. Watermark: An Application and Methodology for Interactive and Intelligent Decision Support for Groundwater Systems

    Science.gov (United States)

    Pierce, S. A.; Wagner, K.; Schwartz, S.; Gentle, J. N., Jr.

    2016-12-01

    As critical water resources face the effects of historic drought, increased demand, and potential contamination, the need has never been greater to develop resources to effectively communicate conservation and protection across a broad audience and geographical area. The Watermark application and macro-analysis methodology merges topical analysis of a context-rich corpus of policy texts with multi-attributed solution sets from integrated models of water resources and other subsystems, such as mineral, food, energy, or environmental systems, to construct a scalable, robust, and reproducible approach for identifying links between policy and science knowledge bases. The Watermark application is an open-source, interactive workspace to support science-based visualization and decision making. Designed with generalization in mind, Watermark is a flexible platform that allows for data analysis and inclusion of large datasets, with an interactive front-end capable of connecting with other applications as well as advanced computing resources. In addition, the Watermark analysis methodology offers functionality that streamlines communication with non-technical users for policy, education, or engagement with groups around scientific topics of societal relevance. The technology stack for Watermark was selected with the goal of creating a robust and dynamic modular codebase that can be adjusted to fit many use cases and scale to support usage loads that range from simple data display to complex scientific simulation-based modelling and analytics. The methodology uses topical analysis and simulation-optimization to systematically analyze the policy and management realities of resource systems and explicitly connect the social and problem contexts with science-based and engineering knowledge from models. A case example demonstrates use in a complex groundwater resources management study, highlighting multi-criteria spatial decision making and uncertainty comparisons.

  15. Drift-free MPEG-4 AVC semi-fragile watermarking

    Science.gov (United States)

    Hasnaoui, M.; Mitrea, M.

    2014-02-01

    While intra frame drifting is a concern for all types of MPEG-4 AVC compressed-domain video processing applications, it has a particularly negative impact in watermarking. In order to avoid the drift drawbacks, two classes of solutions are currently considered in the literature. They try either to compensate for the drift distortions at the expense of complex decoding/estimation algorithms, or to restrict the insertion to the blocks which are not involved in prediction, thus reducing the data payload. The present study follows a different approach. First, it algebraically models the drift distortion spread problem by considering the analytic expressions of the MPEG-4 AVC encoding operations. Secondly, it solves the underlying algebraic system under drift-free constraints. Finally, the advanced solution is adapted to take into account the watermarking peculiarities. The experiments consider an m-QIM semi-fragile watermarking method and a video surveillance corpus of 80 minutes. For prescribed data payload (100 bit/s), robustness (BER < 0.1 against transcoding at 50% in stream size), fragility (frame modification detection with accuracies of 1/81 of the frame size and 3 s) and complexity constraints, the modified insertion results in gains in transparency of 2 dB in PSNR, of 0.4 in AAD, of 0.002 in IF, of 0.03 in SC, of 0.017 in NCC and of 22 in DVQ.

  16. DNA watermarks in non-coding regulatory sequences

    Directory of Open Access Journals (Sweden)

    Pyka Martin

    2009-07-01

    Full Text Available Abstract Background DNA watermarks can be applied to identify the unauthorized use of genetically modified organisms. It has been shown that coding regions can be used to encrypt information into living organisms by using the DNA-Crypt algorithm. Yet, if the sequence of interest presents a non-coding DNA sequence, either the function of a resulting functional RNA molecule or a regulatory sequence, such as a promoter, could be affected. For our studies we used the small cytoplasmic RNA 1 in yeast and the lac promoter region of Escherichia coli. Findings The lac promoter was deactivated by the integrated watermark. In addition, the RNA molecules displayed altered configurations after introducing a watermark, but surprisingly were functionally intact, which has been verified by analyzing the growth characteristics of both wild type and watermarked scR1 transformed yeast cells. In a third approach we introduced a second overlapping watermark into the lac promoter, which did not affect the promoter activity. Conclusion Even though the watermarked RNA and one of the watermarked promoters did not show any significant differences compared to the wild type RNA and wild type promoter region, respectively, it cannot be generalized that other RNA molecules or regulatory sequences behave accordingly. Therefore, we do not recommend integrating watermark sequences into regulatory regions.

  17. QIM blind video watermarking scheme based on Wavelet transform and principal component analysis

    Directory of Open Access Journals (Sweden)

    Nisreen I. Yassin

    2014-12-01

    Full Text Available In this paper, a blind scheme for digital video watermarking is proposed. The security of the scheme is established by using one secret key in the retrieval of the watermark. Discrete Wavelet Transform (DWT) is applied on each video frame, decomposing it into a number of sub-bands. Maximum entropy blocks are selected and transformed using Principal Component Analysis (PCA). Quantization Index Modulation (QIM) is used to quantize the maximum coefficient of the PCA blocks of each sub-band. Then, the watermark is embedded into the selected suitable quantizer values. The proposed scheme is tested using a number of video sequences. Experimental results show high imperceptibility. The computed average PSNR exceeds 45 dB. Finally, the scheme is applied on two medical videos. The proposed scheme shows high robustness against several attacks such as JPEG coding, Gaussian noise addition, histogram equalization, gamma correction, and contrast adjustment in both cases of regular videos and medical videos.
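    The Quantization Index Modulation step used in schemes like this one can be sketched as a scalar quantizer whose lattice is shifted by ±Δ/4 depending on the watermark bit (a minimal illustration; the step size `delta` and the coefficient values are illustrative, not the paper's parameters):

```python
def qim_embed(x, bit, delta=8.0):
    """Scalar QIM: quantize x onto a lattice shifted by +/- delta/4
    according to the watermark bit (illustrative step size)."""
    d = delta / 4 if bit else -delta / 4
    return delta * round((x - d) / delta) + d

def qim_extract(y, delta=8.0):
    """Decode blindly by finding which shifted lattice lies closer to y."""
    y0 = delta * round((y + delta / 4) / delta) - delta / 4  # nearest bit-0 point
    y1 = delta * round((y - delta / 4) / delta) + delta / 4  # nearest bit-1 point
    return 1 if abs(y - y1) < abs(y - y0) else 0
```

    As long as an attack perturbs the coefficient by less than Δ/4, the embedded bit survives, which is what gives QIM its blind-detection robustness.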

  18. DNA-based watermarks using the DNA-Crypt algorithm

    Directory of Open Access Journals (Sweden)

    Barnekow Angelika

    2007-05-01

    Full Text Available Abstract Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.
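    The "least significant base" principle mentioned above can be illustrated with a toy sketch. This is not the published DNA-Crypt implementation; it only shows the idea that the third ("wobble") base of many codons can be swapped between synonymous alternatives without changing the encoded protein, here restricted by assumption to C/T wobble pairs (e.g. GAC/GAT both encode aspartate, AAC/AAT both encode asparagine):

```python
# Illustrative sketch only -- not the published DNA-Crypt algorithm.
# Assumption: only codons whose third base is C or T (a synonymous
# C/T wobble pair) carry watermark bits; all other codons are skipped.
WOBBLE = {"C": 0, "T": 1}   # read a bit from the wobble base
SET = {0: "C", 1: "T"}      # write a bit into the wobble base

def embed_bits(codons, bits):
    """Write one watermark bit into the wobble base of each eligible codon."""
    out, i = [], 0
    for c in codons:
        if i < len(bits) and c[2] in WOBBLE:
            c = c[:2] + SET[bits[i]]  # swap to the synonymous codon
            i += 1
        out.append(c)
    return out

def extract_bits(codons, n):
    """Read back n bits from the eligible codons, in order."""
    bits = []
    for c in codons:
        if len(bits) < n and c[2] in WOBBLE:
            bits.append(WOBBLE[c[2]])
    return bits
```

    Because every substitution is synonymous, the amino-acid sequence, and hence the translated protein, is unchanged, mirroring the paper's finding that the watermark does not alter mRNA translation.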

  19. DNA-based watermarks using the DNA-Crypt algorithm.

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  20. DNA-based watermarks using the DNA-Crypt algorithm

    Science.gov (United States)

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  1. Selectively Encrypted Pull-Up Based Watermarking of Biometric data

    Science.gov (United States)

    Shinde, S. A.; Patel, Kushal S.

    2012-10-01

    Biometric authentication systems are becoming increasingly popular due to their potential usage in information security. However, digital biometric data (e.g. a thumb impression) are themselves vulnerable to security attacks. Various methods are available to secure biometric data. In biometric watermarking the data are embedded in an image container and are only retrieved if the secret key is available. This container image is encrypted for more security against attack. As wireless devices are equipped with batteries as their power supply, they have limited computational capabilities; therefore, to reduce energy consumption we use the method of selective encryption of the container image. The bit pull-up-based biometric watermarking scheme is based on amplitude modulation and bit priority, which reduces the retrieval error rate to a great extent. By using a selective encryption mechanism we expect greater time efficiency in both encryption and decryption. A significant reduction in error rate is expected to be achieved by the bit pull-up method.

  2. Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    Science.gov (United States)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.

  3. MATLAB Algorithms for Rapid Detection and Embedding of Palindrome and Emordnilap Electronic Watermarks in Simulated Chemical and Biological Image Data

    National Research Council Canada - National Science Library

    Robbins, Ronny C

    2004-01-01

    .... This is similar to words such as STOP which when flipped left right gives the new word POTS. Emordnilap is palindrome spelled backwards. This paper explores the use of MATLAB algorithms in the rapid detection and embedding of palindrome and emordnilap electronic watermarks in simulated chemical and biological Image Data.

  4. Removable Watermarking as a Control Against Cyber Crime in Digital Audio

    Directory of Open Access Journals (Sweden)

    Reyhani Lian Putri

    2017-08-01

    Full Text Available The rapid development of information technology demands that its users be more careful as cyber crime continues to increase. Many parties have developed various digital data protection techniques, one of which is watermarking. Watermarking technology serves to identify, protect, or mark digital data, whether audio, image, or video, that they own. However, such techniques can still be defeated by irresponsible parties. In this study, watermarking is applied to digital audio by inserting a watermark that is clearly audible to the human ear (perceptible) into the host audio. The goal is to protect the audio data: any other party who wants to obtain the audio must hold a "key" to remove the watermark. This removable watermarking process is performed on watermark data whose insertion method is known, so that the watermark can be removed and the audio quality improved. Using this method, the audio watermarking performance at the highest distortion level yields an average SNR of 7.834 dB and an average ODG of -3.77. The audio quality improves after the watermark is removed, with the average SNR rising to 24.986 dB, the average ODG to -1.064, and an MOS score of 4.40.

  5. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection

    Directory of Open Access Journals (Sweden)

    Guo-Jheng Yang

    2013-08-01

    Full Text Available The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with the maximum entropy 1. This approach increases the security and rigor of the protection. The main research target of information hiding is determining how to hide confidential data so that the naked eye cannot see the difference. Next, we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold’s cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on controlling Arnold’s cat map and the initial value of the chaos system to undergo small changes and generate different chaos sequences. Thus, the current time is used to not only make encryption more stringent but also to enhance the security of the digital media.
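    The chaotic keystream generation described here, a logistic map with parameter u = 4, can be sketched as follows (the burn-in length and the thresholding rule are illustrative choices, not the paper's exact construction):

```python
def logistic_stream(x0, n, mu=4.0, burn_in=100):
    """Iterate x -> mu*x*(1-x); with mu = 4 the orbit is chaotic, so after
    discarding a transient, thresholding the orbit at 0.5 yields a bit stream
    that is extremely sensitive to the initial value (the secret key)."""
    x = x0
    for _ in range(burn_in):       # discard the transient
        x = mu * x * (1 - x)
    bits = []
    for _ in range(n):
        x = mu * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

    Sensitivity to the initial condition is exactly what the abstract exploits: a tiny change to the seed (e.g. one derived from the current time) produces a completely different chaos sequence and hence a different scrambling.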

  6. Statistical amplitude scale estimation for quantization-based watermarking

    NARCIS (Netherlands)

    Shterev, I.D.; Lagendijk, I.L.; Heusdens, R.

    2004-01-01

    Quantization-based watermarking schemes are vulnerable to amplitude scaling. Therefore the scaling factor has to be accounted for either at the encoder, or at the decoder, prior to watermark decoding. In this paper we derive the marginal probability density model for the watermarked and attacked

  7. Video Watermarking Implementation Based on FPGA

    International Nuclear Information System (INIS)

    EL-ARABY, W.S.M.S.

    2012-01-01

    The sudden increase in watermarking interest is most likely due to the increase in concern over copyright protection of content. With the rapid growth of the Internet and multimedia systems in distributed environments, it is now easier for digital data owners to transfer multimedia documents across the Internet. However, current technology does not protect their copyrights properly. This has led to wide interest in multimedia security and multimedia copyright protection, which has become a great concern to the public in recent years. In the early days, encryption and access control techniques were used to protect the ownership of media. Recently, watermarking techniques have been utilized to safeguard copyrights. In this thesis, a fast and secure invisible video watermarking technique is introduced. The technique is based mainly on the DCT low-frequency coefficients, using a pseudo-random number (PN) sequence generator for the embedding algorithm. The system has been realized using VHDL and the results have been verified using MATLAB. The implementation of the introduced watermarking system was done using a Xilinx chip (XCV800). The implementation results show that the total area of the watermarking technique is 45% of the total FPGA area, with a maximum delay equal to 16.393 ns. The experimental results show that the two techniques have a mean square error (MSE) equal to 0.0133 and a peak signal-to-noise ratio (PSNR) equal to 66.8984 dB. The results have been demonstrated and compared with a conventional watermarking technique using DCT.

  8. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT) has been an international standard in the Joint Photographic Experts Group (JPEG) format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT) algorithm that utilizes the energy compactness and matrix sparseness properties in the frequency domain to achieve higher computational performance. For a JPEG image of 8×8 block size in the spatial domain, the algorithm decomposes the two-dimensional (2D) DCT into one pair of one-dimensional (1D) DCTs, with transform computation in only 24 multiplications. The 2D spatial data is a linear combination of the base images obtained by the outer products of the column and row vectors of cosine functions, so that the inverse DCT is equally efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32×32 block pixel size in a 256×256 digital image can be completed in only 0.24 seconds, and the extraction of the watermark by inverse transform takes under 0.21 seconds. The proposed FDCT algorithm is shown to be more computationally efficient than many previous works.
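    The separability the paper exploits, a 2D DCT computed as a pair of 1D transforms, can be sketched with an orthonormal DCT-II matrix. This is a plain reference implementation for clarity; it omits the sparseness optimizations that give the paper its 24-multiplication count:

```python
import math

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix D (row k, column i)."""
    d = []
    for k in range(n):
        s = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        d.append([s * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return d

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(r) for r in zip(*m)]

def dct2(block):
    """2D DCT as a pair of 1D transforms: Y = D X D^T (separability)."""
    d = dct_matrix(len(block))
    return matmul(matmul(d, block), transpose(d))

def idct2(coeffs):
    """Inverse transform: X = D^T Y D, since D is orthonormal."""
    d = dct_matrix(len(coeffs))
    return matmul(matmul(transpose(d), coeffs), d)
```

    For a constant 8×8 block all the energy lands in the DC coefficient, which is the energy-compactness property the FDCT exploits.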

  9. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point will be then changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than the conventional works, and the accuracy of the proposed algorithm is much higher than previous methods.
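    Embedding a bit by adjusting a feature point's vector length in the XY plane relative to a reference length can be sketched as follows (the function names, the step parameter, and the even/odd quantization rule are illustrative assumptions, not the paper's exact algorithm):

```python
import math

def embed_bit_in_length(x, y, ref_len, bit, delta=0.1):
    """Quantize the XY vector length to an even or odd multiple of a step
    derived from the reference length, encoding one bit; the direction of
    the point is preserved, only x and y are rescaled.
    (Illustrative sketch -- names and parameters are assumptions.)"""
    step = delta * ref_len
    length = math.hypot(x, y)
    q = round(length / step)
    if q % 2 != bit:            # force the parity of the quantized length
        q += 1
    scale = (q * step) / length
    return x * scale, y * scale

def extract_bit_from_length(x, y, ref_len, delta=0.1):
    """Read the bit back from the parity of the quantized vector length."""
    step = delta * ref_len
    return round(math.hypot(x, y) / step) % 2
```

    Because the length is quantized relative to a reference, the parity, and hence the bit, survives uniform scaling of the whole model, which is consistent with the robustness to geometric attacks claimed in the abstract.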

  10. A New Reversible Database Watermarking Approach with Firefly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Mustafa Bilgehan Imamoglu

    2017-01-01

    Full Text Available Up-to-date information is crucial in many fields, such as medicine, science, and the stock market, where data should be distributed to clients from a centralized database. Shared databases are usually stored in data centers where they are distributed over an insecure public access network, the Internet. Sharing may result in a number of problems such as unauthorized copies, alteration of data, and distribution to unauthorized people for reuse. Researchers have proposed using watermarking to prevent these problems and claim digital rights. Many methods have been proposed recently to watermark databases to protect the digital rights of owners. In particular, optimization-based watermarking techniques draw attention, as they result in lower distortion and improved watermark capacity. In this work, difference expansion watermarking (DEW) with the Firefly Algorithm (FFA), a bio-inspired optimization technique, is proposed to embed watermarks into relational databases. The best attribute values to yield lower distortion and increased watermark capacity are selected efficiently by the FFA. Experimental results indicate that the FFA has reduced complexity and results in less distortion and improved watermark capacity compared to similar works reported in the literature.
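    The difference expansion primitive underlying DEW (in the Tian style) is reversible: extracting the bit restores the original value pair exactly. A minimal sketch, independent of the FFA attribute-selection step:

```python
def de_embed(x, y, bit):
    """Difference expansion: expand the pair difference h = x - y to
    2h + bit, keeping the integer average l invariant (reversible)."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the bit from the LSB of the expanded difference and
    restore the original pair exactly."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return (l + (h + 1) // 2, l - h // 2), bit
```

    The cost of reversibility is distortion proportional to the expanded difference, which is why an optimizer such as the FFA is used to pick attribute values whose differences are small.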

  11. Watermarking in E-commerce

    OpenAIRE

    Peyman Rahmati; Andy Adler; Thomas Tran

    2013-01-01

    A major challenge for E-commerce and content-based businesses is the possibility of altering identity documents or other digital data. This paper shows a watermark-based approach to protect digital identity documents against a Print-Scan (PS) attack. We propose a secure ID card authentication system based on watermarking. For authentication purposes, a user/customer is asked to upload a scanned picture of a passport or ID card through the internet to fulfill a transaction online. To provide s...

  12. A Novel Application for Text Watermarking in Digital Reading

    Science.gov (United States)

    Zhang, Jin; Li, Qing-Cheng; Wang, Cong; Fang, Ji

    Although watermarking research has made great theoretical strides, its lack of business application remains evident, largely because few people pay attention to how the information carried by a watermark is used. This paper proposes a new watermarking application method. After the digital document is reorganized together with an advertisement, the watermark is designed to carry the structure of the new document, and it releases the advertisement as interference information under attack. On the one hand, reducing the quality of digital works can inhibit unauthorized distribution. On the other hand, the advertisement can benefit copyright holders as compensation. Implementation details, attack evaluation, and watermarking algorithm correlation are also discussed through an experiment based on a txt file.

  13. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    Directory of Open Access Journals (Sweden)

    Kuo-Kun Tseng

    2014-02-01

    Full Text Available In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new idea of integrating electrocardiogram watermarking and compression approaches, which has never been researched before. ECG watermarking can ensure the confidentiality and reliability of a user’s data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible.
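    The evaluation measures listed here are standard; a minimal sketch of two of them, bit error rate and SNR, on illustrative signals (not ECG data):

```python
import math

def ber(sent, received):
    """Bit error rate: fraction of differing bits between two sequences."""
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

def snr_db(original, processed):
    """Signal-to-noise ratio in dB between a signal and its watermarked
    (or compressed) version; infinite when the two are identical."""
    sig = sum(x * x for x in original)
    err = sum((x - y) ** 2 for x, y in zip(original, processed))
    return float("inf") if err == 0 else 10 * math.log10(sig / err)
```

    CR and CNR follow the same pattern (compressed size over original size, and SNR computed against the compressed signal, respectively).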

  14. Digital Watermark Tracking using Intelligent Multi-Agents System

    Directory of Open Access Journals (Sweden)

    Nagaraj V. DHARWADKAR

    2010-01-01

    Full Text Available E-commerce has become a huge business and a driving factor in the development of the Internet. Online shopping services are well established. Due to the evolution of 2G and 3G mobile networks, online shopping services will soon be complemented by their wireless counterparts. Furthermore, in recent years online delivery of digital media, such as MP3 audio, video, or images, has become very popular and will become an increasingly important part of E-commerce. The advantage of the Internet is the sharing of valuable digital data, which also leads to misuse of digital data. To resolve the problem of misuse of digital data on the Internet we need a strong digital rights monitoring system. Digital Rights Management (DRM) is a fairly young discipline, while some of its underlying technologies have been known for many years. The use of DRM for managing and protecting intellectual property rights is a comparatively new field. In this paper we propose a model for online digital image library copyright protection based on a watermark tracking system. In our proposed model the tracking of watermarks on remote host nodes is done using active mobile agents. The multi-agent system architecture is used in watermark tracking, which supports the coordination of several component tasks across distributed and flexible networks of information sources. Whereas a centralized system is susceptible to system-wide failures and processing bottlenecks, multi-agent systems are more reliable, especially given the likelihood of individual component failures.

  15. A novel perceptually adaptive image watermarking scheme by ...

    African Journals Online (AJOL)

    Threshold and modification values were selected adaptively for each image block, which improved robustness and transparency. The proposed algorithm was able to withstand a variety of attacks and image processing operations such as rotation, cropping, noise addition, resizing, and lossy compression. The experimental ...

  16. Clinical Data Warehouse Watermarking: Impact on Syndromic Measure.

    Science.gov (United States)

    Bouzille, Guillaume; Pan, Wei; Franco-Contreras, Javier; Cuggia, Marc; Coatrieux, Gouenou

    2017-01-01

    Watermarking appears as a promising tool for the traceability of shared medical databases, as it allows hiding the traceability information in the database itself. However, it is necessary to ensure that the distortion resulting from this process does not hinder subsequent data analysis. In this paper, we present the preliminary results of a study on the impact of watermarking on the estimation of flu activity. These results show that flu epidemic periods can be estimated without significant perturbation, even under moderate watermark distortion.

  17. Digital Watermarks - Resonance - September

    Indian Academy of Sciences (India)

    That depends on the type of security required. Visible watermarks ... the locations of the words within text lines, thus watermarking the document uniquely. ... serious attack made possible by powerful word processors. The easiest way to beat ...

  18. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    Full Text Available There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequencies (LSFs) are less sensitive to watermarking than LPCs, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermarked signal destroys the watermark bits, since manipulation changes them to a random stream of bits.
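    Least significant bit replacement on quantized values can be sketched as follows (a generic illustration; the quantization of LSFs to integer values and the paper's checking loop are omitted):

```python
def lsb_embed(values, bits):
    """Replace the least significant bit of each quantized value with a
    watermark bit (one bit per value)."""
    return [(v & ~1) | b for v, b in zip(values, bits)]

def lsb_extract(values, n):
    """Read the first n watermark bits back from the LSBs."""
    return [v & 1 for v in values[:n]]
```

    Because the bits sit in the least significant position, any re-quantization or manipulation of the signal randomizes them, which is precisely the semifragile behavior the abstract describes.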

  19. Individually Watermarked Information Distributed Scalable by Modified Transforms

    Science.gov (United States)

    2009-10-01

    inverse of the secret transform is needed. Each trusted recipient has a unique inverse transform that is similar to the inverse of the original transform. The elements of this individual inverse transform are given by the individual descrambling key. After applying the individual inverse transform, the retrieved image is embedded with a recipient-individual watermark.

  20. Hamming Code Based Watermarking Scheme for 3D Model Verification

    Directory of Open Access Journals (Sweden)

    Jen-Tse Wang

    2014-01-01

    Full Text Available Due to the explosive growth of the Internet and the maturing of 3D hardware techniques, protecting 3D objects has become a more and more important issue. In this paper, a public Hamming code based fragile watermarking technique is proposed for 3D object verification. An adaptive watermark is generated from each cover model by using the Hamming code technique. A simple least significant bit (LSB) substitution technique is employed for watermark embedding. In the extraction stage, the Hamming code based watermark can be verified by using Hamming code checking, without embedding any verification information. Experimental results show that 100% of the vertices of the cover model can be watermarked, extracted, and verified. They also show that the proposed method can improve security and achieve low distortion of the stego object.
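    The Hamming code checking used for verification can be sketched with the classic Hamming(7,4) code, whose syndrome is zero for an intact codeword and otherwise names the flipped bit position:

```python
def hamming74_encode(d):
    """Hamming(7,4): three parity bits protect four data bits.
    Codeword layout (1-based): p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_check(c):
    """Return the syndrome: 0 means the codeword verifies; otherwise it is
    the 1-based position of a single flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 + 2 * s2 + 4 * s3
```

    In a fragile watermarking setting, a nonzero syndrome on extraction signals that the corresponding vertex data have been tampered with, with no separate verification information needing to be embedded.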

  1. Multimedia watermarking techniques and applications

    CERN Document Server

    Kirovski, Darko

    2006-01-01

    Intellectual property owners must continually exploit new ways of reproducing, distributing, and marketing their products. However, the threat of piracy looms as a major problem with digital distribution and storage technologies. Multimedia Watermarking Techniques and Applications covers all current and future trends in the design of modern systems that use watermarking to protect multimedia content. Containing the works of contributing authors who are worldwide experts in the field, this volume is intended for researchers and practitioners, as well as for those who want a broad understanding

  2. Digital Watermarking of Autonomous Vehicles Imagery and Video Communication

    Science.gov (United States)

    2005-10-01

    Watermarking of Autonomous Vehicles Imagery and Video Communications. Executive Summary: We have developed, implemented and tested a known-host-state methodology... Final technical report, 01-06-2004 to 31-08-2005. Villanova University, College of Engineering, Center for Advanced Communications.

  3. Facilitating Watermark Insertion by Preprocessing Media

    Directory of Open Access Journals (Sweden)

    Matt L. Miller

    2004-10-01

    Full Text Available There are several watermarking applications that require the deployment of a very large number of watermark embedders. These applications often have severe budgetary constraints that limit the computation resources that are available. Under these circumstances, only simple embedding algorithms can be deployed, which have limited performance. In order to improve performance, we propose preprocessing the original media. It is envisaged that this preprocessing occurs during content creation and has no budgetary or computational constraints. Preprocessing combined with simple embedding creates a watermarked Work, the performance of which exceeds that of simple embedding alone. However, this performance improvement is obtained without any increase in the computational complexity of the embedder. Rather, the additional computational burden is shifted to the preprocessing stage. A simple example of this procedure is described and experimental results confirm our assertions.

  4. Multimedia security watermarking, steganography, and forensics

    CERN Document Server

    Shih, Frank Y

    2012-01-01

    Multimedia Security: Watermarking, Steganography, and Forensics outlines essential principles, technical information, and expert insights on multimedia security technology used to prove that content is authentic and has not been altered. Illustrating the need for improved content security as the Internet and digital multimedia applications rapidly evolve, this book presents a wealth of everyday protection application examples in fields including multimedia mining and classification, digital watermarking, steganography, and digital forensics. Giving readers an in-depth overview of different asp

  5. A Hybrid Digital-Signature and Zero-Watermarking Approach for Authentication and Protection of Sensitive Electronic Documents

    Science.gov (United States)

    Kabir, Muhammad N.; Alginahi, Yasser M.

    2014-01-01

    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints. PMID:25254247

  6. A Hybrid Digital-Signature and Zero-Watermarking Approach for Authentication and Protection of Sensitive Electronic Documents

    Directory of Open Access Journals (Sweden)

    Omar Tayan

    2014-01-01

    Full Text Available This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
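
    The zero-watermarking idea (nothing is embedded, so the cover text is never modified) can be illustrated with a minimal sketch. The SHA-256 digest and the key-binding scheme below are illustrative assumptions, not the paper's actual algorithm:

```python
import hashlib

def zero_watermark(text: str, owner_key: str) -> str:
    # Bind the owner's secret key to the exact content; the cover text is untouched.
    # (Hypothetical construction for illustration only.)
    return hashlib.sha256((owner_key + "\x00" + text).encode("utf-8")).hexdigest()

def verify(text: str, owner_key: str, registered: str) -> bool:
    # Any modification of the text changes the digest, so tampering is detected.
    return zero_watermark(text, owner_key) == registered
```

    In a real scheme of this kind, the digest would be registered with a trusted third party at document-creation time and checked at verification time.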

  7. DNA watermarks: A proof of concept

    Directory of Open Access Journals (Sweden)

    Barnekow Angelika

    2008-04-01

    Full Text Available Abstract Background DNA-based watermarks are helpful tools to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. In silico analyses showed that in coding regions synonymous codons can be used to insert encrypted information into the genome of living organisms by using the DNA-Crypt algorithm. Results We integrated an authenticating watermark in the Vam7 sequence. For our investigations we used a mutant Saccharomyces cerevisiae strain, called CG783, which has an amber mutation within the Vam7 sequence. The CG783 cells are unable to sporulate and in addition display an abnormal vacuolar morphology. Transformation of CG783 with pRS314 Vam7 leads to a phenotype very similar to the wildtype yeast strain CG781. The integrated watermark did not influence the function of Vam7, and the resulting phenotype of the CG783 cells transformed with pRS314 Vam7-TB shows no significant differences compared to the CG783 cells transformed with pRS314 Vam7. Conclusion From our experiments we conclude that the DNA watermarks produced by DNA-Crypt do not influence the translation from mRNA into protein. By analyzing the vacuolar morphology, growth rate and ability to sporulate we confirmed that the resulting Vam7 protein was functionally active.

  8. Light Weight MP3 Watermarking Method for Mobile Terminals

    Science.gov (United States)

    Takagi, Koichi; Sakazawa, Shigeyuki; Takishima, Yasuhiro

    This paper proposes a novel MP3 watermarking method which is applicable to a mobile terminal with limited computational resources. Considering that in most cases the embedded information is copyright information or metadata, which should be extracted before playing back audio contents, the watermark detection process should be executed at high speed. However, when conventional methods are used with a mobile terminal, it takes a considerable amount of time to detect a digital watermark. This paper focuses on scalefactor manipulation to enable high speed watermark embedding/detection for MP3 audio and also proposes the manipulation method which minimizes audio quality degradation adaptively. Evaluation tests showed that the proposed method is capable of embedding 3 bits/frame information without degrading audio quality and detecting it at very high speed. Finally, this paper describes application examples for authentication with a digital signature.
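
    The scalefactor-manipulation idea can be sketched as parity-based embedding: force each scalefactor's least significant bit to carry one payload bit. This one-bit-per-scalefactor parity rule is a simplifying assumption for illustration, not the paper's exact manipulation method:

```python
def embed_bits(scalefactors, bits):
    # Force the parity of each scalefactor to match the payload bit
    # (hypothetical simplification of scalefactor manipulation).
    out = list(scalefactors)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += 1  # minimal +1 change toward the desired parity
    return out

def extract_bits(scalefactors, n):
    # Detection is a single parity read per scalefactor, hence very fast.
    return [sf % 2 for sf in scalefactors[:n]]
```

    Because detection needs only the scalefactors from the MP3 side information, the payload can be read without a full audio decode, which is what makes the approach attractive for resource-limited mobile terminals.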

  9. A Bernoulli Gaussian Watermark for Detecting Integrity Attacks in Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Weerakkody, Sean [Carnegie Mellon Univ., Pittsburgh, PA (United States); Ozel, Omur [Carnegie Mellon Univ., Pittsburgh, PA (United States); Sinopoli, Bruno [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2017-11-02

    We examine the merit of Bernoulli packet drops in actively detecting integrity attacks on control systems. The aim is to detect an adversary who delivers fake sensor measurements to a system operator in order to conceal their effect on the plant. Physical watermarks, or noisy additive Gaussian inputs, have been previously used to detect several classes of integrity attacks in control systems. In this paper, we consider the analysis and design of Gaussian physical watermarks in the presence of packet drops at the control input. On one hand, this enables analysis in a more general network setting. On the other hand, we observe that in certain cases, Bernoulli packet drops can improve detection performance relative to a purely Gaussian watermark. This motivates the joint design of a Bernoulli-Gaussian watermark which incorporates both an additive Gaussian input and a Bernoulli drop process. We characterize the effect of such a watermark on system performance as well as attack detectability in two separate design scenarios. Here, we consider a correlation detector for attack recognition. We then propose efficiently solvable optimization problems to intelligently select parameters of the Gaussian input and the Bernoulli drop process while addressing security and performance trade-offs. Finally, we provide numerical results which illustrate that a watermark with packet drops can indeed outperform a Gaussian watermark.
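
    The correlation-detector idea behind physical watermarking can be simulated for a toy scalar plant; the first-order model, noise levels, and replay-style attack below are illustrative assumptions, not the paper's system:

```python
import random

def simulate(replay_attack, n=5000, a=0.9, seed=1):
    """Toy scalar plant x' = a*x + u: the operator adds a Gaussian watermark
    to the input and correlates the received output with the injected watermark."""
    rng = random.Random(seed)
    recorded = [rng.gauss(0, 1) for _ in range(n)]  # outputs an attacker replays
    x, stat = 0.0, 0.0
    for k in range(n):
        wm = rng.gauss(0, 1)              # Gaussian physical watermark on the input
        x = a * x + wm                    # plant driven by the watermarked input
        y = recorded[k] if replay_attack else x + rng.gauss(0, 0.1)
        stat += wm * y                    # correlation statistic for attack detection
    return stat / n
```

    Under genuine operation the statistic concentrates near the watermark variance, while fake (replayed) measurements are independent of the current watermark and drive it toward zero, which is the detection principle the paper builds on.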

  10. Optimized Watermarking for Light Field Rendering based Free-View TV

    DEFF Research Database (Denmark)

    Apostolidis, Evlampios; Kounalakis, Tsampikos; Manifavas, Charalampos

    2013-01-01

    In Free-View Television the viewers select freely the viewing position and angle of the transmitted multiview video. It is apparent that copyright and copy protection problems exist, since a video of this arbitrarily selected view can be recorded and then misused. In this context, the watermark...... introduced by the watermark’s insertion-extraction scheme. Therefore, we ended up with the best five Mathematical Distributions, and we concluded that the watermark’s robustness in the FTV case does not depend only on the FTV image’s characteristics, but it also relies on the characteristics of the Mathematical...

  11. A Novel Image Authentication with Tamper Localization and Self-Recovery in Encrypted Domain Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2018-01-01

    Full Text Available This paper proposes a novel tamper detection, localization, and recovery scheme for encrypted images with Discrete Wavelet Transformation (DWT) and Compressive Sensing (CS). The original image is first transformed into the DWT domain and divided into an important part, the low-frequency subband, and an unimportant part, the high-frequency subband. Since the low-frequency part contains the main information of the image, traditional chaotic encryption is employed for it. Then, the high-frequency part is encrypted with CS to vacate space for the watermark. The scheme takes the processed original image content as the watermark, from which the characteristic digest values are generated. Compared with existing image authentication algorithms, the proposed scheme can realize not only tamper detection and localization but also tamper recovery. Moreover, tamper recovery is based on block division, and the recovery accuracy varies with the content that is tampered. If either the watermark or the low-frequency part is tampered, the recovery accuracy is 100%. The experimental results show that the scheme can not only distinguish the type of tamper and find the tampered blocks but also recover the main information of the original image. With great robustness and security, the scheme can adequately meet the need of secure image transmission under unreliable conditions.

  12. The First 50 Years of Electronic Watermarking

    Directory of Open Access Journals (Sweden)

    Ingemar J. Cox

    2002-02-01

    Full Text Available Electronic watermarking can be traced back as far as 1954. The last 10 years has seen considerable interest in digital watermarking, due, in large part, to concerns about illegal piracy of copyrighted content. In this paper, we consider the following questions: is the interest warranted? What are the commercial applications of the technology? What scientific progress has been made in the last 10 years? What are the most exciting areas for research? And where might the next 10 years take us? In our opinion, the interest in watermarking is appropriate. However, we expect that copyright applications will be overshadowed by applications such as broadcast monitoring, authentication, and tracking content distributed within corporations. We further see a variety of applications emerging that add value to media, such as annotation and linking content to the Web. These latter applications may turn out to be the most compelling. Considerable progress has been made toward enabling these applications—perceptual modelling, security threats and countermeasures, and the development of a bag of tricks for efficient implementations. Further progress is needed in methods for handling geometric and temporal distortions. We expect other exciting developments to arise from research in informed watermarking.

  13. Forensic Analysis of Digital Image Tampering

    Science.gov (United States)

    2004-12-01

    analysis of when each method fails, which Chapter 4 discusses. Finally, a test image containing an invisible watermark embedded using LSB steganography is... The software used to embed the hidden watermark is Steganography Software F5 version 11+, discussed in Section 2.2. [Figure captions: Figure 2.2 – Example of invisible watermark using Steganography Software F5; Figure 2.3 – Example of copy-move image forgery [12...; Original JPEG Image – 580 x 435 – 17.4]

  14. A Cloud-User Protocol Based on Ciphertext Watermarking Technology

    Directory of Open Access Journals (Sweden)

    Keyang Liu

    2017-01-01

    Full Text Available With the growth of cloud computing technology, more and more Cloud Service Providers (CSPs) provide cloud computing services to users and ask for users’ permission to use their data to improve the quality of service (QoS). Since these data are stored in the form of plain text, they raise users’ concern about the risk of privacy leakage. However, the existing watermark embedding and encryption technology is not suitable for protecting the Right to Be Forgotten. Hence, we propose a new Cloud-User protocol as a solution to the plain-text outsourcing problem. We only allow users and CSPs to embed the ciphertext watermark, which is generated and embedded by a Trusted Third Party (TTP), into the ciphertext data for transfer. Then, the receiver decrypts it and obtains the watermarked data in plain text. In the arbitration stage, feature extraction and the identity of the user are used to identify the data. A fixed Hamming-distance code helps raise the system’s watermark capacity as much as possible. The extracted watermark can locate the unauthorized distributor and protect the rights of an honest CSP. The results of experiments demonstrate the security and validity of our protocol.

  15. Location-Aware Cross-Layer Design Using Overlay Watermarks

    Directory of Open Access Journals (Sweden)

    Paul Ho

    2007-04-01

    Full Text Available A new orthogonal frequency division multiplexing (OFDM) system embedded with overlay watermarks for location-aware cross-layer design is proposed in this paper. One major advantage of the proposed system is the multiple functionalities the overlay watermark provides, which include a cross-layer signaling interface, a transceiver identification for position-aware routing, as well as its basic role as a training sequence for channel estimation. Wireless terminals are typically battery powered and have limited wireless communication bandwidth. Therefore, efficient collaborative signal processing algorithms that consume less energy for computation and less bandwidth for communication are needed. A transceiver aware of its location can also improve routing efficiency by selectively flooding or forwarding data only in the desired direction, since in most cases the location of a wireless host is otherwise unknown. In the proposed OFDM system, location information of a mobile for efficient routing can be easily derived when a unique watermark is associated with each individual transceiver. In addition, cross-layer signaling and other interlayer interactive information can be exchanged over a new data pipe created by modulating the overlay watermarks. We also study the channel estimation and watermark removal techniques at the physical layer for the proposed overlay OFDM. Our channel estimator iteratively estimates the channel impulse response and the combined signal vector from the overlay OFDM signal. Cross-layer design that leads to low power consumption and more efficient routing is investigated.

  16. Blind Compressed Image Watermarking for Noisy Communication Channels

    Science.gov (United States)

    2015-10-26

    Lenna test image [11] for our simulations, and gradient projection for sparse reconstruction (GPSR) [12] to solve the convex optimization problem... E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE... Images - Requirements and Guidelines,” ITU-T Recommendation T.81, 1992. [6] M. Gkizeli, D. Pados, and M. Medley, “Optimal signature design for

  17. Histogram Modification and Wavelet Transform for High Performance Watermarking

    Directory of Open Access Journals (Sweden)

    Ying-Shen Juang

    2012-01-01

    Full Text Available This paper proposes a reversible watermarking technique for natural images. Owing to the similarity of neighboring coefficients’ values in the wavelet domain, most differences between two adjacent pixels are close to zero. The histogram is built from these difference statistics. As more peak points can be used for secret data hiding, the hiding capacity is improved compared with conventional methods. Moreover, as the concentration of differences around zero is increased, the transparency of the host image can be improved. Experimental results and comparison show that the proposed method has advantages in both hiding capacity and transparency.
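
    The histogram-shifting principle on adjacent differences can be made concrete with a 1-D sketch (peak bin at difference 0, right half of the histogram shifted by one to free a bin). This simplified version ignores pixel-range overflow, which a real scheme must handle, e.g. with a location map:

```python
def embed(pixels, bits):
    # Reversible histogram-shifting embed on adjacent-pixel differences.
    y = [pixels[0]]                             # first pixel is kept as anchor
    bit_iter = iter(bits)
    for i in range(1, len(pixels)):
        d = pixels[i] - pixels[i - 1]           # difference of *original* neighbors
        if d > 0:
            shift = 1                           # shift right half to free bin 1
        elif d == 0:
            shift = next(bit_iter, 0)           # peak bin carries one payload bit
        else:
            shift = 0
        y.append(pixels[i] + shift)
    return y

def extract(y):
    # Recover pixels sequentially and read payload bits from bins {0, 1}.
    x = [y[0]]
    bits = []
    for i in range(1, len(y)):
        d = y[i] - x[i - 1]
        if d > 1:
            x.append(x[i - 1] + d - 1)          # undo the shift
        elif d in (0, 1):
            bits.append(d)                      # original difference was 0
            x.append(x[i - 1])
        else:
            x.append(x[i - 1] + d)
    return x, bits
```

    Each pixel moves by at most one gray level, which is why difference-histogram schemes keep the host transparent, and extraction returns the original signal exactly.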

  18. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    Directory of Open Access Journals (Sweden)

    C. Wu

    2011-06-01

    Full Text Available In Very Large Scale Integrated Circuit (VLSI) design, the existing Design-for-Test (DFT)-based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security, and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to address these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudorandom test vector generation. All of the test vectors are shifted into the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead and probability of coincidence, and a higher coverage rate of watermark detection, compared with existing methods.

  19. A detailed study of the generation of optically detectable watermarks using the logistic map

    International Nuclear Information System (INIS)

    Mooney, Aidan; Keating, John G.; Heffernan, Daniel M.

    2006-01-01

    A digital watermark is a visible, or preferably invisible, identification code that is permanently embedded in digital media to prove ownership and provide protection for documents. Given the interest in watermark generation using chaotic functions, a detailed study of one chaotic function for this purpose is performed. In this paper, we present an approach for the generation of watermarks using the logistic map. Using this function, in conjunction with seed management, it is possible to generate chaotic sequences that may be used to create highpass or lowpass digital watermarks. We provide a detailed study on the generation of optically detectable watermarks, give some guidelines on successful chaotic watermark generation using the logistic map, and show, using a recently published scheme, how care must be taken in the selection of the function seed.
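
    A minimal sketch of chaotic sequence generation with the logistic map x_{n+1} = r·x_n·(1 − x_n); the parameter r = 3.99 (well inside the chaotic regime), the burn-in length, and the 0.5 threshold are illustrative choices, and the seed plays the role of the watermark key:

```python
def logistic_sequence(seed, n, r=3.99, burn_in=100):
    # Iterate x_{k+1} = r * x_k * (1 - x_k); discard transients first.
    x = seed
    for _ in range(burn_in):
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def watermark_bits(seed, n):
    # Threshold the chaotic orbit to a binary watermark sequence.
    return [1 if v >= 0.5 else 0 for v in logistic_sequence(seed, n)]
```

    The same seed always regenerates the same watermark, while even nearby seeds diverge after the burn-in, which is why seed management matters: the seed must be kept secret and chosen away from the map's periodic windows.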

  20. Distortion-Free Watermarking Approach for Relational Database Integrity Checking

    Directory of Open Access Journals (Sweden)

    Lancine Camara

    2014-01-01

    Full Text Available Nowadays, the Internet is becoming a common way of accessing databases. Such data are exposed to various types of attacks that aim to confuse ownership proofing or content protection. In this paper, we propose a new approach based on fragile zero-watermarking for the authentication of numeric relational data. Contrary to some previous database watermarking techniques, which cause distortions in the original database and may not preserve the data usability constraints, our approach simply generates the watermark from the original database. First, the adopted method partitions the database relation into independent square matrix groups. Then, group-based watermarks are securely generated and registered with a trusted third party. Integrity verification is performed by computing the determinant and the diagonal’s minors for each group. As a result, tampering can be localized down to the attribute-group level. Theoretical and experimental results demonstrate that the proposed technique is resilient against tuple insertion, tuple deletion, and attribute-value modification attacks. Furthermore, comparison with a recent related effort shows that our scheme performs better in detecting multifaceted attacks.
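
    The determinant check can be sketched for 2×2 groups of a two-attribute numeric relation. The group size, attribute count, and determinant-only check (the paper also registers diagonal minors for finer localization) are simplifying assumptions:

```python
def group_watermark(rows, k=2):
    # Zero-watermark: one determinant per k-row group, registered with a
    # trusted third party; the database itself is never modified.
    marks = []
    for g in range(0, len(rows) - len(rows) % k, k):
        a, b = rows[g][:2], rows[g + 1][:2]
        det = a[0] * b[1] - a[1] * b[0]   # 2x2 determinant of the group
        marks.append(det)
    return marks

def verify(rows, registered, k=2):
    # Return indices of tampered groups (determinant mismatch).
    current = group_watermark(rows, k)
    return [i for i, (c, r) in enumerate(zip(current, registered)) if c != r]
```

    Because each group is checked independently, a single attribute-value modification flags only its own group, giving the attribute-group-level tamper localization described above.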

  1. Multimodal biometric digital watermarking on immigrant visas for homeland security

    Science.gov (United States)

    Sasi, Sreela; Tamhane, Kirti C.; Rajappa, Mahesh B.

    2004-08-01

    Passengers with immigrant Visas are a major concern at international airports due to the various fraud operations identified. To curb tampering with genuine Visas, Visas should contain human identification information. Biometric characteristics are a common and reliable way to authenticate the identity of an individual [1]. A Multimodal Biometric Human Identification System (MBHIS) that integrates iris code, DNA fingerprint, and the passport number into the Visa photograph using a digital watermarking scheme is presented. The digital watermarking technique is well suited for any system requiring high security [2]. Ophthalmologists [3], [4], [5] have suggested that an iris scan is an accurate and nonintrusive optical fingerprint. A DNA sequence can be used as a genetic barcode [6], [7]. While issuing a Visa at the US consulates, the DNA sequence isolated from saliva, the iris code, and the passport number shall be digitally watermarked into the Visa photograph. This information is also recorded in the 'immigrant database'. A 'forward watermarking phase' combines a 2-D DWT-transformed digital photograph with the personal identification information. A 'detection phase' extracts the watermarked information from this Visa photograph at the port of entry, from which the iris code can be used for identification and the DNA biometric for authentication, if an anomaly arises.

  2. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    Science.gov (United States)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting some pre-defined watermarking bits into the original LDPC code, we can obtain a more accurate estimate of the noise level in the fiber channel. We then use them to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code had better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at the cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
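
    The role of the known watermarking bits can be sketched as pilot-based noise estimation used to seed the BP decoder's channel values. The BPSK mapping (bit 0 → +1, bit 1 → −1) and the AWGN log-likelihood-ratio formula 2r/σ² below are textbook assumptions, not the paper's exact PDF modification:

```python
import math

def estimate_sigma(received, known_bits):
    # Estimate channel noise from the pre-defined watermark positions:
    # compare received samples against the known transmitted symbols.
    errs = [r - (1.0 - 2.0 * b) for r, b in zip(received, known_bits)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def initial_llrs(received, sigma):
    # Channel LLRs (AWGN assumption) used to seed belief-propagation decoding.
    return [2.0 * r / (sigma ** 2) for r in received]
```

    A better σ estimate sharpens the initial LLRs, which is the mechanism by which the inserted watermarking bits improve BP decoding in the scheme above.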

  3. Smart security and securing data through watermarking

    Science.gov (United States)

    Singh, Ritesh; Kumar, Lalit; Banik, Debraj; Sundar, S.

    2017-11-01

    The growth of image processing in embedded systems has provided the boon of enhancing security in various sectors. This has led to the development of various protective strategies that private or public sectors will need for cyber-security purposes. We have therefore developed a method that uses digital watermarking and a locking mechanism for the protection of any closed premises. This paper describes a contemporary system based on user name, user id, password, and an encryption technique, which can be placed in banks and protected offices to strengthen security. Burglary can be abated substantially by using a proactive safety structure. In the proposed framework, we use watermarking in the spatial domain to encode and decode the image, and a PIR (passive infrared) sensor to detect the presence of a person in any closed area.

  4. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, F. N.; Hu, H. S., E-mail: huasi-hu@mail.xjtu.edu.cn; Wang, D. M.; Jia, J. [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Zhang, T. K. [Laser Fusion Research Center, CAEP, Mianyang, 621900 Sichuan (China); Jia, Q. G. [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China)

    2015-12-15

    Aperture alignment is crucial for the diagnosis of neutron imaging because it has a significant impact on the coded imaging and the understanding of the neutron source. In our previous studies on the neutron imaging system with coded aperture for large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered if peak normalization is employed in genetic algorithms (GA) to reconstruct the source image. Some studies on the basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements on the reconstruction algorithm for its emergence. A gamma coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for the aperture misalignment has been studied. A multiple linear regression model of the position of the coded aperture axis, the position of the residual watermark center, and the gray barycenter of the neutron source has been set up with twenty training samples. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct the coded aperture misalignment for fast neutron coded imaging.

  5. Parameterization of LSB in Self-Recovery Speech Watermarking Framework in Big Data Mining

    Directory of Open Access Journals (Sweden)

    Shuo Li

    2017-01-01

    Full Text Available Privacy is a major concern in big data mining. In this paper, we propose a novel self-recovery speech watermarking framework with consideration of trustable communication in big data mining. In the framework, the watermark is a compressed version of the original speech. The watermark is embedded into the least significant bit (LSB) layers. At the receiver end, the watermark is used to detect the tampered area and recover the tampered speech. To fit the complexity of the scenes in big data infrastructures, the LSB is treated as a parameter. This work discusses the relationship between the LSB and the other parameters in terms of explicit mathematical formulations. Once the LSB layer has been chosen, the best choices of the other parameters are deduced using the exclusive method. Additionally, we observed that six LSB layers are the limit for watermark embedding when the total number of bit layers equals sixteen. Experimental results indicated that when the number of LSB layers changed from six to three, the imperceptibility of the watermark increased, while the quality of the recovered signal decreased accordingly. This is a trade-off, and different LSB layers should be chosen according to different application conditions in big data infrastructures.
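
    The parameterized-LSB embedding can be sketched as replacing the lowest `layers` bits of each sample with payload bits. This is plain LSB substitution for illustration; the framework's actual payload, a compressed copy of the speech used for self-recovery, is omitted:

```python
def embed(samples, payload, layers=3):
    # Replace the lowest `layers` bits of each sample with payload bits
    # (LSB-first within each sample); `layers` is the tunable parameter.
    mask = (1 << layers) - 1
    out = []
    bit_iter = iter(payload)
    for s in samples:
        chunk = 0
        for j in range(layers):
            chunk |= next(bit_iter, 0) << j
        out.append((s & ~mask) | chunk)
    return out

def extract(samples, layers=3):
    # Read back `layers` bits per sample, LSB first.
    bits = []
    for s in samples:
        for j in range(layers):
            bits.append((s >> j) & 1)
    return bits
```

    Raising `layers` increases capacity but perturbs each sample by up to 2^layers − 1 quantization steps, which is exactly the imperceptibility/recovery-quality trade-off the abstract reports.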

  6. Invisible watermarking optical camera communication and compatibility issues of IEEE 802.15.7r1 specification

    Science.gov (United States)

    Le, Nam-Tuan

    2017-05-01

    Copyright protection and information security are two of the most important issues for digital data, following the development of the Internet and computer networks. As an important solution for protection, watermarking technology has become one of the most challenging topics in industry and academic research. Watermarking technology can be classified into two categories: visible watermarking and invisible watermarking. The invisible technique has an advantage in user interaction because the watermark is not visible. Applying watermarking to communication is a challenge and a new direction for communication technology. In this paper we propose new research on communication technology using invisible watermarking over optical camera communication (OCC). Besides the analysis of the performance of the proposed system, we also suggest the frame structure of the PHY and MAC layers for the IEEE 802.15.7r1 specification, which is a revision of the visible light communication (VLC) standardization.

  7. The study of watermark bar code recognition with light transmission theory

    Science.gov (United States)

    Zhang, Fan; Liu, Tiegen; Zhang, Lianxiang; Zhang, Xiaojun

    2004-10-01

    The watermark bar code is one of the latest anti-counterfeiting technologies, applicable to a series of security documents, especially banknotes. Taking euro banknotes with embedded watermark bar codes as an example, a system is designed for watermark bar code detection and recognition based on light transmission theory. We obtain light transmission curves of different denominations along different sampling lines parallel to the latitudinal axis of the banknote. By calculating the correlation coefficient between different light transmission curves, the system can not only distinguish the reference banknote from counterfeit ones and from other denominations, but it also demonstrates high consistency and repeatability.
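
    The recognition step reduces to a correlation coefficient between sampled transmission curves; the Pearson form and the acceptance threshold below are illustrative assumptions:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two sampled transmission curves.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def matches_reference(curve, reference, threshold=0.95):
    # Accept the note as genuine when its curve tracks the reference curve.
    return pearson(curve, reference) >= threshold
```

    Because the Pearson coefficient is invariant to gain and offset, a curve measured under a brighter lamp still matches its reference, while a counterfeit or a different denomination produces a differently shaped curve and a low coefficient.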

  8. Wavelet Based Hilbert Transform with Digital Design and Application to QCM-SS Watermarking

    Directory of Open Access Journals (Sweden)

    S. P. Maity

    2008-04-01

    Full Text Available In recent times, wavelet transforms have been used extensively for efficient storage, transmission and representation of multimedia signals. Hilbert transform pairs of wavelets are the basic unit of many wavelet theories such as complex filter banks, complex wavelets and phaselets. Moreover, the Hilbert transform finds various applications in communications and signal processing, such as the generation of single sideband (SSB) modulation, quadrature carrier multiplexing (QCM) and the bandpass representation of a signal. Thus wavelet-based discrete Hilbert transform design has drawn much attention from researchers for a couple of years. This paper proposes (i) an algorithm for the generation of low-computation-cost Hilbert transform pairs of symmetric filter coefficients using biorthogonal wavelets, (ii) an approximation to its rational-coefficient form for efficient hardware realization without much loss in signal representation, and finally (iii) the development of a QCM-SS (spread spectrum) image watermarking scheme for doubling the payload capacity. Simulation results show the novelty of the proposed Hilbert transform design and its application to watermarking compared to existing algorithms.

  9. A Privacy-Preserving Outsourcing Data Storage Scheme with Fragile Digital Watermarking-Based Data Auditing

    Directory of Open Access Journals (Sweden)

    Xinyue Cao

    2016-01-01

    Full Text Available Cloud storage has become a popular solution to the rising storage costs faced by IT enterprises and users. However, outsourcing data to cloud service providers (CSPs) may leak sensitive private information, since the data passes out of the user's control. Ensuring the integrity and privacy of outsourced data is therefore a major challenge, to which encryption and data auditing provide a solution. In this paper, we propose a privacy-preserving, auditing-supporting outsourced data storage scheme using encryption and digital watermarking. A logistic-map-based chaotic cryptography algorithm, which offers fast operation and a good encryption effect, is used to preserve the privacy of the outsourced data. A local histogram-shifting digital watermark algorithm with high payload is used to protect data integrity; it allows the original image to be restored losslessly once the data is verified to be intact. Experiments show that our scheme is secure and feasible.
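    The logistic-map stream cipher mentioned above can be sketched as follows: the chaotic map x ← r·x·(1−x) is iterated to produce a byte keystream that is XOR-ed with the data. The parameters (x0, r, burn-in length) are illustrative choices, not the paper's; a production scheme would need a careful key schedule.

    ```python
    def logistic_keystream(x0, r, n, burn_in=100):
        """Generate n pseudo-random bytes by iterating the logistic map
        x -> r*x*(1-x) and quantizing each chaotic state to 8 bits."""
        x = x0
        for _ in range(burn_in):        # discard the transient
            x = r * x * (1.0 - x)
        stream = []
        for _ in range(n):
            x = r * x * (1.0 - x)
            stream.append(int(x * 256) % 256)
        return stream

    def xor_cipher(data, keystream):
        """XOR encryption/decryption (XOR is its own inverse)."""
        return bytes(b ^ k for b, k in zip(data, keystream))

    plain = b"outsourced block"
    ks = logistic_keystream(x0=0.3579, r=3.99, n=len(plain))  # hypothetical key
    cipher = xor_cipher(plain, ks)
    restored = xor_cipher(cipher, ks)
    ```

    The same keystream decrypts the ciphertext, which is what lets the auditor's watermark verification and lossless restoration operate on recovered plaintext.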

  10. A model for the distribution of watermarked digital content on mobile networks

    Science.gov (United States)

    Frattolillo, Franco; D'Onofrio, Salvatore

    2006-10-01

    Although digital watermarking can be considered one of the key technologies for implementing copyright protection of digital contents distributed on the Internet, most content distribution models based on watermarking protocols proposed in the literature have been purposely designed for fixed networks and cannot easily be adapted to mobile networks. On the contrary, the use of mobile devices currently enables new types of services and business models, which makes the development of new content distribution models for mobile environments strategic in the current Internet scenario. This paper presents and discusses a distribution model of watermarked digital contents for such environments that achieves a trade-off between the needs of efficiency and security.

  11. Video watermarking for mobile phone applications

    Science.gov (United States)

    Mitrea, M.; Duta, S.; Petrescu, M.; Preteux, F.

    2005-08-01

    Nowadays, alongside the traditional voice signal, music, video, and 3D characters tend to become common data to be run, stored and/or processed on mobile phones. Hence, protecting the related intellectual property rights also becomes a crucial issue. The video sequences involved in such applications are generally coded at very low bit rates. The present paper starts by presenting an accurate statistical investigation of such video as well as of a very dangerous attack (the StirMark attack). The obtained results are put into practice by adapting a spread spectrum watermarking method to such applications. The informed watermarking approach was also considered: an outstanding method belonging to this paradigm has been adapted and re-evaluated under the low-rate video constraint. The experiments were conducted in collaboration with the SFR mobile services provider in France. They also allow a comparison between the spread spectrum and informed embedding techniques.

  12. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.
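    The core of SLSC imaging, before the paper's M-weighting and RPCA extensions, is the average normalized correlation between receive-element signals at short element separations (lags). A minimal sketch of that coherence computation, with toy channel data invented for illustration:

    ```python
    import math

    def ncc(a, b):
        """Normalized correlation between two receive-element signals."""
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
        return num / den if den else 0.0

    def slsc_value(channels, max_lag):
        """Average pairwise coherence over element separations 1..max_lag."""
        n = len(channels)
        total, count = 0.0, 0
        for lag in range(1, max_lag + 1):
            for i in range(n - lag):
                total += ncc(channels[i], channels[i + lag])
                count += 1
        return total / count

    # A coherent echo (same waveform on every element) gives coherence ~1;
    # uncorrelated clutter gives a much lower value.
    echo = [[1.0, -0.5, 0.3, 0.8]] * 4
    clutter = [[1.0, 0.2, -0.7, 0.1],
               [-0.3, 0.9, 0.4, -0.8],
               [0.5, -0.6, 0.1, 0.7],
               [-0.9, 0.3, 0.6, -0.2]]
    c_echo = slsc_value(echo, 3)
    c_clutter = slsc_value(clutter, 3)
    ```

    The R-SLSC method then denoises the per-lag coherence images via RPCA before the weighted sum across lags; that step is not reproduced here.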

  13. Physical Watermarking for Securing Cyber-Physical Systems via Packet Drop Injections

    Energy Technology Data Exchange (ETDEWEB)

    Ozel, Omur [Carnegie Mellon Univ., Pittsburgh, PA (United States); Weekrakkody, Sean [Carnegie Mellon Univ., Pittsburgh, PA (United States); Sinopoli, Bruno [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    2017-10-23

    Physical watermarking is a well known solution for detecting integrity attacks on Cyber-Physical Systems (CPSs) such as the smart grid. Here, a random control input is injected into the system in order to authenticate physical dynamics and sensors which may have been corrupted by adversaries. Packet drops may naturally occur in a CPS due to network imperfections. To our knowledge, previous work has not considered the role of packet drops in detecting integrity attacks. In this paper, we investigate the merit of injecting Bernoulli packet drops into the control inputs sent to actuators as a new physical watermarking scheme. With the classical linear quadratic objective function and an independent and identically distributed packet drop injection sequence, we study the effect of packet drops on meeting security and control objectives. Our results indicate that the packet drops could act as a potential physical watermark for attack detection in CPSs.
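    The injection mechanism described above can be sketched very simply: each control input is dropped (replaced by zero actuation) according to an i.i.d. Bernoulli sequence known to the detector. The drop probability and seed below are illustrative only.

    ```python
    import random

    def apply_packet_drop_watermark(inputs, p, seed=1):
        """Drop each control input with probability p. The i.i.d. Bernoulli
        drop pattern, known to the detector, serves as the watermark."""
        rng = random.Random(seed)
        drops = [rng.random() < p for _ in inputs]
        applied = [0.0 if d else u for u, d in zip(inputs, drops)]
        return applied, drops

    inputs = [1.0] * 10000
    applied, drops = apply_packet_drop_watermark(inputs, p=0.2)
    rate = sum(drops) / len(drops)   # empirical drop rate ~ p
    ```

    A detector would then check whether the sensor measurements respond to the known drop pattern; an attacker replaying recorded dynamics cannot track the random drops.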

  14. Secure Oblivious Hiding, Authentication, Tamper Proofing, and Verification Techniques

    National Research Council Canada - National Science Library

    Fridrich, Jessica

    2002-01-01

    In this report, we describe an algorithm for robust visual hash functions with applications to digital image watermarking for authentication and integrity verification of video data and still images...

  15. Wavelet based mobile video watermarking: spread spectrum vs. informed embedding

    Science.gov (United States)

    Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.

    2005-11-01

    The cell phone expansion provides an additional direction for digital video content distribution: music clips, news and sport events are more and more transmitted toward mobile users. Consequently, from the watermarking point of view, a new challenge must be taken up: very low bitrate contents (e.g. as low as 64 kbit/s) are now to be protected. Within this framework, the paper approaches for the first time the mathematical models of two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face: the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Rho, Fisher and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and does not apply at all to the latter. As these results can a priori determine the performance of several watermarking methods, both of spread spectrum and informed embedding types, they should be considered at the design stage.

  16. Robust Steganography Using LSB-XOR and Image Sharing

    OpenAIRE

    Adak, Chandranath

    2013-01-01

    Hiding and securing secret digital information transmitted over the Internet is of widespread and most challenging interest. This paper presents a new idea for robust steganography using a bitwise XOR operation between the stego-key-image pixel LSB (Least Significant Bit) value and the secret message character's ASCII binary value (or the secret image pixel value). The stego-key-image is shared in dual layers using the odd-even position of each pixel to make the system robust. Due to image s...
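    The XOR embedding described above can be sketched in a few lines: each secret bit is XOR-ed with the LSB of the corresponding key-image pixel, and extraction repeats the XOR. The key LSBs and message below are invented for the sketch; the paper's dual-layer key sharing is not reproduced.

    ```python
    def char_to_bits(c):
        """ASCII character -> 8 bits, most significant first."""
        return [(ord(c) >> i) & 1 for i in range(7, -1, -1)]

    def bits_to_char(bits):
        v = 0
        for b in bits:
            v = (v << 1) | b
        return chr(v)

    key_lsbs = [1, 0, 1, 1, 0, 0, 1, 0]   # LSBs of 8 key-image pixels (toy values)
    msg_bits = char_to_bits("A")          # secret character

    # Embedding: stego bit = key LSB XOR message bit.
    stego = [k ^ m for k, m in zip(key_lsbs, msg_bits)]

    # Extraction: XOR with the same key LSBs recovers the message.
    recovered = bits_to_char([s ^ k for s, k in zip(stego, key_lsbs)])
    ```

    Without the key image, the stego bits alone reveal nothing about the message, which is the point of keying the XOR.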

  17. A modified non-binary LDPC scheme based on watermark symbols in high speed optical transmission systems

    Science.gov (United States)

    Wang, Liming; Qiao, Yaojun; Yu, Qian; Zhang, Wenbo

    2016-04-01

    We introduce a watermark non-binary low-density parity-check (NB-LDPC) code scheme that can estimate the time-varying noise variance by using prior information from watermark symbols, thereby improving the performance of NB-LDPC codes. Compared with the prior-art counterpart, the watermark scheme brings about a 0.25 dB improvement in net coding gain (NCG) at a bit error rate (BER) of 10^-6 and a 36.8-81% reduction in the number of decoding iterations. The proposed scheme thus shows great potential in terms of error-correction performance and decoding efficiency.
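    The key idea, estimating the channel noise variance from known (watermark/pilot) symbols, can be sketched as follows. The pilot pattern, noise level and sample count are invented for the illustration; the paper's NB-LDPC decoder itself is not reproduced.

    ```python
    import random

    def estimate_noise_variance(received, transmitted):
        """Estimate channel noise variance from known watermark symbols
        by taking the sample variance of the residuals."""
        errs = [r - t for r, t in zip(received, transmitted)]
        mean = sum(errs) / len(errs)
        return sum((e - mean) ** 2 for e in errs) / len(errs)

    rng = random.Random(7)
    pilots = [1.0 if rng.random() < 0.5 else -1.0 for _ in range(5000)]  # known symbols
    sigma = 0.3                                      # true noise std (unknown to receiver)
    rx = [p + rng.gauss(0.0, sigma) for p in pilots]

    var_hat = estimate_noise_variance(rx, pilots)    # should be close to sigma**2
    ```

    The decoder would then use this variance estimate to compute channel likelihoods for the NB-LDPC iterations, tracking the noise as it varies over time.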

  18. Color Image Authentication and Recovery via Adaptive Encoding

    Directory of Open Access Journals (Sweden)

    Chun-Hung Chen

    2014-01-01

    Full Text Available We describe an authentication and recovery scheme for color image protection based on adaptive encoding. The image blocks are categorized based on their contents, and different encoding schemes are applied according to their types. Such adaptive encoding results in better image quality and more robust image authentication. The approximations of the luminance and chromatic channels are carefully calculated, and, to reduce the data size, differential coding is used to encode the channels with variable size according to the characteristics of the block. The recovery data, which represents the approximation and the detail of the image, is embedded for data protection. The necessary data is well protected using error-correcting coding and duplication. The experimental results demonstrate that our technique is able to identify and localize image tampering while preserving high quality for both watermarked and recovered images.

  19. Protection of ownership rights in digital images: implementation of a watermarking algorithm based on wavelet functions

    Directory of Open Access Journals (Sweden)

    Prestipino, D

    2004-01-01

    Full Text Available Protection of the copyright of digital images is a critical element of multimedia Web applications, e-books, and virtual picture galleries. This problem is today receiving growing attention due to the pervasive diffusion of Internet technology. This work presents watermarking as a solution to this problem and describes a new wavelet-based algorithm, called WM1.0, which is invisible, private, and strong. WM1.0 watermarks a subset of the digital images making up an ecclesiastical on-line art collection. The owner of the images and related information is the Italian Episcopal Conference, whereas the publisher is I.D.S., an ICT company located in Messina.

  20. LDPC and SHA based iris recognition for image authentication

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2012-11-01

    Full Text Available We introduce a novel way to authenticate an image using a Low-Density Parity-Check (LDPC) and Secure Hash Algorithm (SHA) based iris recognition method with a reversible watermarking scheme, which is based on the Integer Wavelet Transform (IWT) and a threshold embedding technique. The parity checks and parity matrix of the LDPC encoding and a cancellable biometric, i.e., the hash string of the unique iris code from SHA-512, are embedded into an image for authentication purposes using the reversible watermarking scheme based on IWT and threshold embedding. Simply by reversing the embedding process, the original image, parity checks, parity matrix and SHA-512 hash are extracted back from the watermarked image. For authentication, the new hash string produced by applying SHA-512 to the error-corrected iris code from a live person is compared with the hash string extracted from the watermarked image. The LDPC code reduces the Hamming distance for genuine comparisons by a larger amount than for impostor comparisons, giving better separation between genuine and impostor users and improving authentication performance. The security of this scheme is very high due to the security complexity of SHA-512, which is 2^256 under a birthday attack. Experimental results show that this approach assures more accurate authentication with low false rejection and false acceptance rates and outperforms the prior arts in terms of PSNR.
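    The final comparison step, hashing the error-corrected iris code with SHA-512 and matching against the stored digest, can be sketched as below. The iris bit pattern is invented for the illustration; the LDPC error correction and IWT watermark extraction are assumed to have already run.

    ```python
    import hashlib

    def iris_hash(iris_code_bits):
        """Cancellable template: SHA-512 digest of the binary iris code."""
        code = "".join(str(b) for b in iris_code_bits)
        return hashlib.sha512(code.encode()).hexdigest()

    enrolled = [1, 0, 1, 1, 0, 1, 0, 0] * 32      # hypothetical stored iris code
    live_ok  = list(enrolled)                     # live code after LDPC correction
    live_bad = [1 - b for b in enrolled]          # impostor's code

    authentic = iris_hash(live_ok) == iris_hash(enrolled)
    impostor  = iris_hash(live_bad) == iris_hash(enrolled)
    ```

    Because SHA-512 is compared exactly, the LDPC step is essential: it must correct the natural bit noise in a genuine live capture so the digests match bit-for-bit.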

  1. ONLINE CERTIFIED DIPLOMA SCAN SYSTEM BASED ON QR CODE AND WATERMARKING

    Directory of Open Access Journals (Sweden)

    Erwin Yudi Hidayat

    2015-05-01

    Full Text Available A diploma document is important to its owner as proof that a stage of study has been completed. A diploma is also a primary requirement when applying for a job. Universitas Dian Nuswantoro (UDINUS) needs a reliable system to manage certified diploma copies digitally and online. Although superior for storage, a diploma in digital form can easily be modified and misused. Protection of certified digital diplomas is therefore essential to prevent misuse by unauthorized parties. The first verification method is the Quick Response (QR) Code. The second method is watermarking. The results obtained show that these methods can be applied to certified diplomas within UDINUS to simplify data retrieval and minimize the possibility of modification of digital diploma documents. Keywords: certified copy, diploma, QR Code, watermarking.

  2. Detection and isolation of routing attacks through sensor watermarking

    NARCIS (Netherlands)

    Ferrari, R.; Herdeiro Teixeira, A.M.; Sun, J; Jiang, Z-P

    2017-01-01

    In networked control systems, leveraging the peculiarities of the cyber-physical domains and their interactions may lead to novel detection and defense mechanisms against malicious cyber-attacks. In this paper, we propose a multiplicative sensor watermarking scheme, where each sensor's output is

  3. Digital Watermarks Enabling E-Commerce Strategies: Conditional and User Specific Access to Services and Resources

    Science.gov (United States)

    Dittmann, Jana; Steinebach, Martin; Wohlmacher, Petra; Ackermann, Ralf

    2002-12-01

    Digital watermarking is well known as an enabling technology for proving ownership of copyrighted material, detecting originators of illegally made copies, monitoring the usage of copyrighted multimedia data, and analyzing the spread of that data over networks and servers. Research has shown that data hiding techniques can be applied successfully to other application areas, such as manipulation recognition. In this paper, we show our innovative approach for integrating watermarking and cryptography-based methods within a framework of new application scenarios spanning a wide range from dedicated and user-specific services and "Try&Buy" mechanisms to general means for long-term customer relationships. The tremendous recent efforts to develop and deploy ubiquitous mobile communication possibilities are changing the demands, but also the possibilities, for establishing new business and commerce relationships. In particular, we motivate annotation watermarks and aspects of M-Commerce to show important scenarios for access control. Based on a description of the challenges of the application domain and our latest work, we discuss which methods can be used to establish conditional-access services in a fast, convenient and secure way based on digital watermarking combined with cryptographic techniques. We introduce an example scenario for digital audio and an overview of the steps needed to establish these concepts practically.

  4. Fingerprinting with Wow

    Science.gov (United States)

    Yu, Eugene; Craver, Scott

    2006-02-01

    Wow, or time warping caused by speed fluctuations in analog audio equipment, provides a wealth of applications in watermarking. Very subtle temporal distortion has been used to defeat watermarks and as a component in watermarking systems. In the image domain, the analogous warping of an image's canvas has been used to defeat watermarks and has also been proposed to prevent collusion attacks on fingerprinting systems. In this paper, we explore how subliminal levels of wow can be used for steganography and fingerprinting. We present both a low-bitrate robust solution and a higher-bitrate solution intended for steganographic communication. As already observed, such a fingerprinting algorithm naturally discourages collusion by averaging, owing to flanging effects when misaligned audio is averaged. Another advantage of warping is that even when imperceptible, it can be beyond the reach of compression algorithms. We use this opportunity to debunk the common misconception that steganography is impossible under "perfect compression."

  5. Multi-focus Image Fusion Using Epifluorescence Microscopy for Robust Vascular Segmentation

    OpenAIRE

    Pelapur, Rengarajan; Prasath, Surya; Palaniappan, Kannappan

    2014-01-01

    We are building a computerized image analysis system for the Dura Mater vascular network from fluorescence microscopy images. We propose a system that couples a multi-focus image fusion module with a robust adaptive filtering based segmentation. The robust adaptive filtering scheme handles noise without destroying small structures, and the multi-focus image fusion considerably improves the overall segmentation quality by integrating information from multiple images. Based on the segmenta...

  6. Fast Watermarking of MPEG-1/2 Streams Using Compressed-Domain Perceptual Embedding and a Generalized Correlator Detector

    Directory of Open Access Journals (Sweden)

    Briassouli Alexia

    2004-01-01

    Full Text Available A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.

  7. From watermarking to in-band enrichment: future trends

    Science.gov (United States)

    Mitrea, M.; Prêteux, F.

    2009-02-01

    With the emergence of the Knowledge Society, enriched video is nowadays a hot research topic, from both academic and industrial perspectives. The principle consists in associating with the video stream metadata of various types (textual, audio, video, executable code, ...). This new content can then be exploited in a large variety of applications, such as interactive DTV, games, e-learning, and data mining. This paper brings into evidence the potential of watermarking techniques for such applications. By inserting the enrichment data into the very video to be enriched, three main advantages are ensured. First, no additional complexity is required from the terminal or the representation format. Secondly, no backward compatibility issue is encountered, thus allowing a unique system to accommodate services from several generations. Finally, the network adaptation constraints are alleviated. The discussion covers both theoretical aspects (the accurate evaluation of the watermarking capacity in several real-life scenarios) and applications developed under the framework of the R&D contracts conducted at the ARTEMIS Department.

  8. Robust Image Hashing Using Radon Transform and Invariant Features

    Directory of Open Access Journals (Sweden)

    Y.L. Liu

    2016-09-01

    Full Text Available A robust image hashing method based on the Radon transform and invariant features is proposed for image authentication, image retrieval, and image detection. Specifically, an input image is first converted into a counterpart with a normalized size. Then the invariant centroid algorithm is applied to obtain the invariant feature point and the surrounding circular area, and the Radon transform is employed to acquire the mapping coefficient matrix of the area. Finally, the hashing sequence is generated by combining the feature vectors and the invariant moments calculated from the coefficient matrix. Experimental results show that this method can resist not only normal image processing operations but also some geometric distortions. Comparisons of receiver operating characteristic (ROC) curves indicate that the proposed method outperforms some existing methods in the trade-off between perceptual robustness and discrimination.
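    The authentication decision for such a hash typically thresholds a normalized distance between hash sequences: a small distance means the same image (possibly after benign processing), a large one means a different image. A minimal sketch with invented hash bits and an invented threshold, standing in for the paper's Radon-based hash:

    ```python
    def hash_distance(h1, h2):
        """Normalized Hamming distance between two binary hash sequences."""
        diff = sum(a != b for a, b in zip(h1, h2))
        return diff / len(h1)

    original  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hash of the original image
    processed = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # after e.g. JPEG compression
    distinct  = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]   # hash of an unrelated image

    tau = 0.3                                    # decision threshold (illustrative)
    is_similar  = hash_distance(original, processed) <= tau   # robustness
    is_distinct = hash_distance(original, distinct) > tau     # discrimination
    ```

    The ROC comparison in the abstract is exactly the study of how the choice of tau trades false matches against false rejections.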

  9. A robust nonlinear filter for image restoration.

    Science.gov (United States)

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for the desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed, since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, robustness is considered in a broader sense, since outliers may also be due to an inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple, since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
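    The least trimmed squares (LTS) idea behind these filters can be illustrated for the simplest case, a location estimate over a filter window: pick the value that minimizes the sum of the h smallest squared residuals, so outliers in the trimmed-away tail cannot bias the estimate. This toy version searches only over the window samples themselves, which is a simplification, not the paper's algorithm.

    ```python
    def lts_location(window, trim_fraction=0.5):
        """Least-trimmed-squares location estimate for a filter window:
        the candidate minimizing the sum of the h smallest squared residuals."""
        h = max(1, int(len(window) * trim_fraction))
        best_val, best_cost = None, float("inf")
        for cand in window:                       # candidate estimates
            resid = sorted((x - cand) ** 2 for x in window)
            cost = sum(resid[:h])                 # only the h best residuals count
            if cost < best_cost:
                best_val, best_cost = cand, cost
        return best_val

    # An impulsive outlier (255) in a nearly flat region is simply trimmed away.
    window = [10, 11, 9, 10, 255, 10, 11, 9, 10]
    estimate = lts_location(window)
    ```

    A plain mean over this window would be pulled to about 37 by the impulse; the LTS estimate stays at the background level.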

  10. Digimarc MediaBridge: the birth of a consumer product from concept to commercial application

    Science.gov (United States)

    Perry, Burt; MacIntosh, Brian; Cushman, David

    2002-04-01

    This paper examines the issues encountered in the development and commercial deployment of a system based on digital watermarking technology. The paper provides an overview of the development of digital watermarking technology and the first applications to use the technology. It also looks at how we took the concept of digital watermarking as a communications channel within a digital environment and applied it to the physical print world to produce the Digimarc MediaBridge product. We describe the engineering tradeoffs that were made to balance competing requirements of watermark robustness, image quality, embedding process, detection speed and end user ease of use. Today, the Digimarc MediaBridge product links printed materials to auxiliary information about the content, via the Internet, to provide enhanced informational marketing, promotion, advertising and commerce opportunities.

  11. Accurate and robust brain image alignment using boundary-based registration.

    Science.gov (United States)

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower-quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.

  12. Digital watermarking for secure and adaptive teleconferencing

    Science.gov (United States)

    Vorbrueggen, Jan C.; Thorwirth, Niels

    2002-04-01

    The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means allowing the network's customers to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among actions they can take is controlling execution of other proxylets or redirection of network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. A way to control a teleconference's data streams is to use digital watermarking of the video, audio and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on security classifications (possibly time-varying) at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.

  13. Enhanced echolocation via robust statistics and super-resolution of sonar images

    Science.gov (United States)

    Kim, Kio

    Echolocation is a process in which an animal uses acoustic signals to exchange information with its environment. In a recent study, Neretti et al. have shown that the use of robust statistics can significantly improve the resiliency of echolocation against noise and enhance its accuracy by suppressing the development of sidelobes in the processing of an echo signal. In this research, the use of robust statistics is extended to problems in underwater exploration. The dissertation consists of two parts. Part I describes how robust statistics can enhance the identification of target objects, which in this case are cylindrical containers filled with four different liquids. In particular, this work employs a variation of an existing robust estimator called an L-estimator, first suggested by Koenker and Bassett. As pointed out by Au et al., a 'highlight interval' is an important feature, closely related to many other features known to be crucial for dolphin echolocation. The varied L-estimator described in this text is used to enhance the detection of highlight intervals, which eventually leads to a successful classification of echo signals. Part II extends the problem into two dimensions. Thanks to advances in material and computer technology, various sonar imaging modalities are available on the market. By registering acoustic images from such video sequences, one can extract more information on the region of interest. Computer vision and image processing allowed the application of robust statistics to the acoustic images produced by forward-looking sonar systems, such as Dual-frequency Identification Sonar and ProViewer. The first use of robust statistics for sonar image enhancement in this text is in image registration. Random Sample Consensus (RANSAC) is widely used for image registration. The registration algorithm using RANSAC is optimized for sonar image registration, and its performance is studied. The second use of robust

  14. An Interactive Concert Program Based on Infrared Watermark and Audio Synthesis

    Science.gov (United States)

    Wang, Hsi-Chun; Lee, Wen-Pin Hope; Liang, Feng-Ju

    The objective of this research is to propose a video/audio system that allows the user to listen to the typical music notes in a concert program under infrared detection. The system synthesizes audio with different pitches and tempi in accordance with the data encoded in a 2-D barcode embedded in the infrared watermark. A digital halftoning technique has been used to fabricate the infrared watermark, composed of halftone dots, by both amplitude modulation (AM) and frequency modulation (FM). The results show that this interactive system successfully recognizes the barcode and synthesizes audio under infrared detection of a concert program that also remains valid for human observation of its contents. This interactive video/audio system greatly expands the capability of printed paper to audio display and has many potential value-added applications.

  15. A Novel Video Data-Source Authentication Model Based on Digital Watermarking and MAC in Multicast

    Institute of Scientific and Technical Information of China (English)

    ZHAO Anjun; LU Xiangli; GUO Lei

    2006-01-01

    A novel video data authentication model based on digital video watermarking and MACs (message authentication codes) in a multicast protocol is proposed in this paper. The digital watermark, which is composed of the MAC of the significant video content, the key and instant authentication data, is embedded into the insignificant video component by MLUT (modified look-up table) video watermarking technology. We explain a method that does not require storing each data packet for a time, thus making the receiver not vulnerable to DoS (denial of service) attacks; the video packets can therefore be authenticated instantly without a large buffer at the receivers. TESLA (timed efficient stream loss-tolerant authentication) does not explain how to select a suitable value for d, an important parameter in multicast source authentication, so we give a method to calculate the key disclosure delay (number of intervals). Simulation results show that the proposed algorithms improve the performance of data-source authentication in multicast.

  16. Associated diacritical watermarking approach to protect sensitive arabic digital texts

    Science.gov (United States)

    Kamaruddin, Nurul Shamimi; Kamsin, Amirrudin; Hakak, Saqib

    2017-10-01

    Among multimedia content, one of the most predominant media is text. There have been many efforts to protect and secure text information over the Internet. The limitations of existing works have been identified in terms of watermark capacity, time complexity and memory complexity. In this work, an invisible digital watermarking approach is proposed to protect and secure one of the most sensitive texts, the Digital Holy Quran. The proposed approach works by XOR-ing only those Quranic letters that have certain diacritics associated with them. Due to the sensitive nature of the Holy Quran, diacritics play a vital role in the meaning of a particular verse. Hence, securing letters with certain diacritics preserves the original meaning of Quranic verses in case of an alteration attempt. Initial results show that the proposed approach is promising, with lower memory complexity and time complexity than existing approaches.
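
    The letter-selection plus XOR step the abstract describes can be sketched as follows; the diacritic set, the key, and the tag construction are illustrative assumptions, not the paper's exact embedding:

```python
# fatha, damma, kasra -- an assumed subset of the harakat the scheme
# keys on; the paper's exact diacritic set and embedding are not
# reproduced here, only the selection-plus-XOR idea from the abstract.
DIACRITICS = {"\u064E", "\u064F", "\u0650"}

def diacritic_tag(text: str, key: int) -> int:
    # XOR-fold the code points of letters that carry a chosen diacritic.
    tag = key
    for letter, mark in zip(text, text[1:]):
        if mark in DIACRITICS:
            tag ^= ord(letter)
    return tag

verse = "\u0628\u064E\u0633\u0652\u0645\u064F"   # letters with mixed marks
tag = diacritic_tag(verse, key=0x5A)
assert diacritic_tag(verse, 0x5A) == tag                    # intact text verifies
assert diacritic_tag(verse.replace("\u0628", "\u062A"), 0x5A) != tag  # tamper detected
```

    Because only letters carrying the selected diacritics contribute, changing any such letter (or its mark) breaks the tag, which is the alteration-detection property the abstract targets.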

  17. Implementation of Digital Watermarking Using MATLAB Software

    OpenAIRE

    Karnpriya Vyas; Kirti Sethiya; Sonu Jain

    2012-01-01

    Digital watermarking holds significant promise as one of the keys to protecting proprietary digital content in the coming years. It focuses on embedding information inside a digital object such that the embedded information is inseparably bound to the object. The proposed scheme has been implemented in MATLAB, as it is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numerical computation. We w...

  18. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies, cross-modality fusion, and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis aligned bounding boxes. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  19. A Review of Digital Watermarking and Copyright Control Technology for Cultural Relics

    Science.gov (United States)

    Liu, H.; Hou, M.; Hu, Y.

    2018-04-01

    With the rapid growth in the application and sharing of 3-D model data for the protection of cultural relics, the problems of shared security and copyright control of 3-D models of cultural relics are becoming increasingly prominent. Digital watermarking for copyright control has consequently become a frontier technology and an effective means of protecting 3-D models of cultural relics, and related research and applications have developed further in recent years. This paper introduces the research background of and demand for digital watermarking and copyright control technology for 3-D models of cultural relics, describes its unique characteristics, discusses the development and application of its algorithms, and considers future development trends along with some open problems and their solutions.

  20. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  1. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis among medical staff and to make the patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions on medical images. Structural texture information is obtained from the medical image using the rotation-invariant local binary pattern (LBPROT) to make the keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). The method detects tampered regions by matching the keypoints. It improves keypoint-based passive image authentication (which fails to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions on medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
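
    The LBPROT preprocessing step can be sketched in NumPy. This minimal version uses the 8-connected square neighbourhood rather than a circularly interpolated one, which is an assumption:

```python
import numpy as np

def lbp_rot_invariant(img: np.ndarray) -> np.ndarray:
    # Threshold the 8 square-neighbourhood pixels against each interior
    # centre pixel, pack them into a byte in circular order, then take
    # the minimum over all 8 circular bit rotations, which makes the
    # code rotation invariant (the "ROT" in LBPROT).
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint16)
    for k, (dy, dx) in enumerate(offs):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (n >= c).astype(np.uint16) << k
    best = code.copy()
    for r in range(1, 8):
        best = np.minimum(best, ((code >> r) | (code << (8 - r))) & 0xFF)
    return best.astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
# rotating the image only rotates each code's bit pattern, so the
# rotation-invariant codes move with the image unchanged
assert np.array_equal(lbp_rot_invariant(np.rot90(img)),
                      np.rot90(lbp_rot_invariant(img)))
```

    Running SIFT on this texture image rather than on raw intensities is what lets the method find keypoints even in visually smooth regions.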

  2. Microscopy image segmentation tool: Robust image data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Valmianski, Ilya, E-mail: ivalmian@ucsd.edu; Monton, Carlos; Schuller, Ivan K. [Department of Physics and Center for Advanced Nanoscience, University of California San Diego, 9500 Gilman Drive, La Jolla, California 92093 (United States)

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  3. Microscopy image segmentation tool: Robust image data analysis

    Science.gov (United States)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  4. Microscopy image segmentation tool: Robust image data analysis

    International Nuclear Information System (INIS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-01-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy

  5. Robust algebraic image enhancement for intelligent control systems

    Science.gov (United States)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

    Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front-end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.

  6. Robust simultaneous detection of coronary borders in complex images

    International Nuclear Information System (INIS)

    Sonka, M.; Winniford, M.D.; Collins, S.M.

    1995-01-01

    Visual estimation of coronary obstruction severity from angiograms suffers from poor inter- and intraobserver reproducibility and is often inaccurate. In spite of the widely recognized limitations of visual analysis, automated methods have not found widespread clinical use, in part because they too frequently fail to accurately identify vessel borders. The authors have developed a robust method for simultaneous detection of left and right coronary borders that is suitable for analysis of complex images with poor contrast, nearby or overlapping structures, or branching vessels. The reliability of the simultaneous border detection method and that of their previously reported conventional border detection method were tested in 130 complex images, selected because conventional automated border detection might be expected to fail. Conventional analysis failed to yield acceptable borders in 65/130 or 50% of images. Simultaneous border detection was much more robust (p < .001) and failed in only 15/130 or 12% of complex images. Simultaneous border detection identified stenosis diameters that correlated significantly better with observer-derived stenosis diameters than did diameters obtained with conventional border detection (p < 0.001). Simultaneous detection of left and right coronary borders is highly robust and has substantial promise for enhancing the utility of quantitative coronary angiography in the clinical setting

  7. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Full Text Available Low-altitude Unmanned Aerial Vehicle (UAV) images, which include distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.

  8. The analysis of image feature robustness using cometcloud

    Directory of Open Access Journals (Sweden)

    Xin Qi

    2012-01-01

    Full Text Available The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including defocus, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, the local binary pattern, and textons. Due to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than the local binary pattern and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval.
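
    The first of the five descriptors, the grey-level co-occurrence matrix, can be sketched in NumPy; the displacement (one pixel to the right) and the contrast feature shown are illustrative choices, not necessarily the paper's configuration:

```python
import numpy as np

def cooccurrence_matrix(img, levels, dx=1, dy=0):
    # Count pairs (img[y, x], img[y + dy, x + dx]) over the whole image
    # and normalise to a joint probability table: one displacement of
    # the classic grey-level co-occurrence matrix (GLCM).
    h, w = img.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1.0)
    return glcm / glcm.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = cooccurrence_matrix(img, levels=4)          # horizontal neighbours
i, j = np.indices(P.shape)
contrast = float((P * (i - j) ** 2).sum())      # Haralick contrast feature
assert abs(contrast - 7 / 12) < 1e-12
```

    Scalar features such as contrast, energy, or homogeneity are then computed from `P` and fed to the classifier, which is the form in which robustness is evaluated.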

  9. Robust image registration for multiple exposure high dynamic range image synthesis

    Science.gov (United States)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images captured with different exposure times. Illumination changes and photometric distortion between two images would otherwise result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapping regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high quality HDR images.
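
    The phase cross correlation step can be sketched with NumPy's FFT. This covers only integer-pixel translation recovery; the paper's phase-congruency preprocessing and the evolutionary search for sub-pixel accuracy are not reproduced here:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    # Normalised cross-power spectrum: only the phase difference is
    # kept, so a pure translation becomes a single sharp peak in the
    # inverse FFT, largely insensitive to global brightness changes.
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy, dx = int(dy), int(dx)
    if dy > h // 2:   # wrap large positive indices to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx     # moved == np.roll(ref, (dy, dx), axis=(0, 1))

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
moved = np.roll(ref, (3, -5), axis=(0, 1))
assert phase_correlation_shift(ref, moved) == (3, -5)
```

    Because the spectrum is normalised to unit magnitude, a global exposure change rescales both FFTs identically and cancels out, which is why this detector suits multi-exposure LDR stacks.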

  10. Robust image obfuscation for privacy protection in Web 2.0 applications

    Science.gov (United States)

    Poller, Andreas; Steinebach, Martin; Liu, Huajian

    2012-03-01

    We present two approaches to robust image obfuscation based on permutation of image regions and channel intensity modulation. The proposed concept of robust image obfuscation is a step towards end-to-end security in Web 2.0 applications. It helps to protect the privacy of the users against threats caused by internet bots and web applications that extract biometric and other features from images for data-linkage purposes. The approaches described in this paper consider that images uploaded to Web 2.0 applications pass several transformations, such as scaling and JPEG compression, until the receiver downloads them. In contrast to existing approaches, our focus is on usability, therefore the primary goal is not a maximum of security but an acceptable trade-off between security and resulting image quality.
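
    The block-permutation primitive can be sketched as a keyed, invertible shuffle; the block size and PRNG are assumptions for illustration:

```python
import numpy as np

def permute_blocks(img, key, block=8, inverse=False):
    # Shuffle (or un-shuffle) non-overlapping blocks with a permutation
    # drawn from a PRNG seeded by the shared key; only key holders can
    # undo the obfuscation.
    h, w = img.shape[:2]
    assert h % block == 0 and w % block == 0
    ny, nx = h // block, w // block
    perm = np.random.default_rng(key).permutation(ny * nx)
    out = np.empty_like(img)
    for i, p in enumerate(perm):
        s, d = (p, i) if inverse else (i, p)
        sy, sx = divmod(s, nx)
        dy, dx = divmod(d, nx)
        out[dy * block:(dy + 1) * block, dx * block:(dx + 1) * block] = \
            img[sy * block:(sy + 1) * block, sx * block:(sx + 1) * block]
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
obfuscated = permute_blocks(img, key=1234)
restored = permute_blocks(obfuscated, key=1234, inverse=True)
assert np.array_equal(restored, img)
# obfuscation only rearranges pixels, it never changes their values
assert np.array_equal(np.sort(obfuscated.ravel()), np.sort(img.ravel()))
```

    Keeping the permutation at block rather than pixel granularity is what preserves enough local structure to survive the scaling and JPEG recompression the paper anticipates.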

  11. Robust T1-weighted structural brain imaging and morphometry at 7T using MP2RAGE.

    Directory of Open Access Journals (Sweden)

    Kieran R O'Brien

    Full Text Available PURPOSE: To suppress the noise, by sacrificing some of the signal homogeneity for numerical stability, in uniform T1-weighted (T1w) images obtained with the magnetization-prepared 2 rapid gradient echoes sequence (MP2RAGE), and to compare the clinical utility of these robust T1w images against the uniform T1w images. MATERIALS AND METHODS: 8 healthy subjects (29.0 ± 4.1 years; 6 male), who provided written consent, underwent two scan sessions within a 24 hour period on a 7T head-only scanner. The uniform and robust T1w image volumes were calculated inline on the scanner. Two experienced radiologists qualitatively rated the images for general image quality, 7T-specific artefacts, and local structure definition. Voxel-based and volume-based morphometry packages were used to compare the segmentation quality between the uniform and robust images. Statistical differences were evaluated using a positive-sided Wilcoxon rank test. RESULTS: The robust image suppresses background noise inside and outside the skull. The inhomogeneity introduced was rated as mild. The robust image was ranked significantly higher than the uniform image by both observers (observer 1/2, p-value = 0.0006/0.0004). In particular, improved delineation of the pituitary gland and cerebellar lobes was observed in the robust versus the uniform T1w image. The reproducibility of the segmentation results between repeat scans improved (p-value = 0.0004) from an average volumetric difference across structures of ≈ 6.6% for the uniform image to ≈ 2.4% for the robust T1w image. CONCLUSIONS: The robust T1w image enables MP2RAGE to produce clinically familiar T1w images, in addition to T1 maps, which can be readily used in morphometry packages.

  12. A Joint Watermarking and ROI Coding Scheme for Annotating Traffic Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Su Po-Chyi

    2010-01-01

    Full Text Available We propose a new application of information hiding by employing the digital watermarking techniques to facilitate the data annotation in traffic surveillance videos. There are two parts in the proposed scheme. The first part is the object-based watermarking, in which the information of each vehicle collected by the intelligent transportation system will be conveyed/stored along with the visual data via information hiding. The scheme is integrated with H.264/AVC, which is assumed to be adopted by the surveillance system, to achieve an efficient implementation. The second part is a Region of Interest (ROI) rate control mechanism for encoding traffic surveillance videos, which helps to improve the overall performance. The quality of vehicles in the video will be better preserved and a good rate-distortion performance can be attained. Experimental results show that this potential scheme works well in traffic surveillance videos.

  13. Fractal Image Coding with Digital Watermarks

    Directory of Open Access Journals (Sweden)

    Z. Klenovicova

    2000-12-01

    Full Text Available In this paper some results of the implementation of digital watermarking methods in image coding based on fractal principles are presented. The paper focuses on two possible approaches to embedding digital watermarks into the fractal code of images: embedding digital watermarks into the parameters for the position of similar blocks and into the coefficients of block similarity. Both algorithms were analyzed and verified on grayscale static images.

  14. A survey of passive technology for digital image forensics

    Institute of Scientific and Technical Information of China (English)

    LUO Weiqi; QU Zhenhua; PAN Feng; HUANG Jiwu

    2007-01-01

    Over the past years, digital images have been widely used in the Internet and other applications. Whilst image processing techniques are developing at a rapid speed, tampering with digital images without leaving any obvious traces becomes easier and easier. This may give rise to problems such as image authentication. A new passive technology for image forensics has evolved quickly during the last few years. Unlike the signature-based or watermark-based methods, the new technology does not need any signature generated or watermark embedded in advance; it assumes that different imaging devices or processing would introduce different inherent patterns into the output images. These underlying patterns are consistent in the original untampered images and would be altered after some kind of manipulation. Thus, they can be used as evidence for image source identification and alteration detection. In this paper, we discuss this new forensics technology and give an overview of the prior literature. Some concluding remarks are made about the state of the art and the challenges in this novel technology.

  15. Robust Methods for Image Processing in Anthropology and Biomedicine

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    -, č. 86 (2011), s. 53-53 ISSN 0926-4981 Institutional research plan: CEZ:AV0Z10300504 Keywords : image analysis * robust estimation * forensic anthropology Subject RIV: BB - Applied Statistics, Operational Research

  16. Speech watermarking: an approach for the forensic analysis of digital telephonic recordings.

    Science.gov (United States)

    Faundez-Zanuy, Marcos; Lucena-Molina, Jose J; Hagmüller, Martin

    2010-07-01

    In this article, the authors discuss the problem of forensic authentication of digital audio recordings. Although forensic audio has been addressed in several articles, the existing approaches are focused on analog magnetic recordings, which are less prevalent because of the large number of digital recorders available on the market (optical, solid state, hard disks, etc.). An approach based on digital signal processing that consists of spread spectrum techniques for speech watermarking is presented. This approach presents the advantage that the authentication is based on the signal itself rather than the recording format. Thus, it is valid for the usual recording devices in police-controlled telephone intercepts. In addition, our proposal allows for the introduction of relevant information such as the recording date and time and all the relevant data (this is not always possible with classical systems). Our experimental results reveal that the speech watermarking procedure does not significantly interfere with subsequent forensic speaker identification.
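
    The spread-spectrum embedding/detection idea can be sketched in NumPy. Chip length, embedding strength, and blind correlation detection are illustrative assumptions, not the paper's parameters (the strength is exaggerated so the demo is reliable):

```python
import numpy as np

CHIP = 1024  # samples per watermark bit (assumption)

def embed_bits(signal, bits, key, alpha=0.1):
    # Each bit modulates a key-seeded +/-1 pseudo-noise chip that is
    # added to the corresponding signal frame.
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=(len(bits), CHIP))
    out = signal.astype(np.float64).copy()
    for i, b in enumerate(bits):
        out[i * CHIP:(i + 1) * CHIP] += alpha * (1.0 if b else -1.0) * pn[i]
    return out

def extract_bits(signal, n_bits, key):
    # Blind detection: correlate each frame with the regenerated chips;
    # the host signal is roughly uncorrelated with the pseudo-noise.
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=(n_bits, CHIP))
    return [int(signal[i * CHIP:(i + 1) * CHIP] @ pn[i] > 0)
            for i in range(n_bits)]

rng = np.random.default_rng(7)
host = 0.1 * rng.standard_normal(8 * CHIP)   # stand-in for a speech signal
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract_bits(embed_bits(host, bits, key=99), 8, key=99) == bits
```

    In this framework the payload (date, time, case metadata) is serialized to bits, and only holders of the key can regenerate the chips needed for detection.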

  17. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    Directory of Open Access Journals (Sweden)

    Yuanshen Zhao

    2016-01-01

    Full Text Available Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances in the background of the image. The bottleneck for robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize tomatoes in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, combining the feature information of the two source images. Thirdly, in order to segment the target tomatoes from the background, an adaptive threshold algorithm was used to obtain the optimal threshold. The final segmentation result was processed by morphology operations to remove a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in uncontrolled environments at low cost.

  18. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion.

    Science.gov (United States)

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-29

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing influence from two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by morphology operation to reduce a small amount of noise. In the detection tests, 93% of the target tomatoes were recognized out of 200 overall samples. This indicates that the proposed tomato recognition method is suitable for robotic tomato harvesting in uncontrolled environments at low cost.
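
    The adaptive thresholding step can be sketched with Otsu's method, which maximises between-class variance; the abstract does not name its threshold selection rule, so Otsu is an assumption here:

```python
import numpy as np

def otsu_threshold(gray):
    # Pick the threshold that maximises between-class variance of the
    # 8-bit histogram; NaNs from empty classes are zeroed out.
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    omega = np.cumsum(p)                  # class-0 probability mass
    mu = np.cumsum(p * np.arange(256))    # class-0 cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

# synthetic bimodal "fused feature image": dark background, bright fruit
img = np.zeros((10, 10), dtype=np.uint8)
img[:, :5] = 20
img[:, 5:] = 180
t = otsu_threshold(img)
assert 20 <= t < 180
assert np.array_equal(img > t, img == 180)   # clean fruit/background split
```

    Applied to the wavelet-fused a*/I feature image, a global threshold like this separates red fruit pixels from foliage before the morphological cleanup.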

  19. LVTTL Based Energy Efficient Watermark Generator Design and Implementation on FPGA

    DEFF Research Database (Denmark)

    Pandey, Bishwajeet; Kaur, Amanpreet; Kumar, Tanesh

    2014-01-01

    -transistor logic (LVTTL) IO standard is used in this design to make it power optimized. This design is implemented on Kintex-7 FPGA, Device XC7K70T and -3 speed grades. When we are scaling the device operating frequency from 100GHz to 5GHz, there is 94.93% saving in total power of the watermark generator...

  20. Performance evaluation of TDT soil water content and watermark soil water potential sensors

    Science.gov (United States)

    This study evaluated the performance of digitized Time Domain Transmissometry (TDT) soil water content sensors (Acclima, Inc., Meridian, ID) and resistance-based soil water potential sensors (Watermark 200, Irrometer Company, Inc., Riverside, CA) in two soils. The evaluation was performed by compar...

  1. An Algorithm for Data Hiding in Radiographic Images and ePHI/R Application

    Directory of Open Access Journals (Sweden)

    Aqsa Rashid

    2018-01-01

    Full Text Available Telemedicine is the use of Information and Communication Technology (ICT) for clinical health care at a distance. The exchange of radiographic images and electronic patient health information/records (ePHI/R) for diagnostic purposes carries risks to confidentiality, ownership identity, and authenticity. In this paper, a data hiding technique for ePHI/R is proposed. The color information in the cover image is used for key generation, and stego-images are produced in the ideal case; as a result, the whole stego-system is perfectly secure. This method combines features of watermarking and steganography techniques. The method is applied to radiographic images, where it resembles watermarking, within an ePHI/R data system. Experiments show promising results for the application of this method to radiographic images in ePHI/R for both transmission and storage purposes.
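
    A minimal sketch of keyed LSB data hiding in the spirit of the abstract: the key is derived from the cover's colour information and selects the embedding positions. The key-derivation formula and position selection are illustrative assumptions, not the paper's construction:

```python
import hashlib
import numpy as np

def cover_key(cover):
    # The abstract derives the key from the cover's colour information;
    # hashing the per-channel means is an illustrative stand-in.
    means = cover.reshape(-1, cover.shape[-1]).mean(axis=0).round(4)
    return int.from_bytes(hashlib.sha256(means.tobytes()).digest()[:8], "big")

def embed(cover, payload, key):
    # Write payload bits into the LSBs of key-selected pixel positions.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.copy().ravel()
    pos = np.random.default_rng(key).choice(flat.size, bits.size, replace=False)
    flat[pos] = (flat[pos] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract(stego, n_bytes, key):
    pos = np.random.default_rng(key).choice(stego.size, 8 * n_bytes, replace=False)
    return np.packbits(stego.ravel()[pos] & 1).tobytes()

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, (16, 16, 3), dtype=np.uint8)
key = cover_key(cover)               # sender derives it, shares it securely
stego = embed(cover, b"PHI", key)
assert extract(stego, 3, key) == b"PHI"
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 1
```

    Note that embedding perturbs the pixel statistics, so the key must be derived from the original cover and shared with the receiver rather than recomputed from the stego-image.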

  2. A New Digital Watermarking Method for Data Integrity Protection in the Perception Layer of IoT

    Directory of Open Access Journals (Sweden)

    Guoyin Zhang

    2017-01-01

    Full Text Available Since its introduction, the IoT (Internet of Things) has enjoyed vigorous support from governments and research institutions around the world, and remarkable achievements have been obtained. The perception layer of the IoT plays an important role as the link between the IoT and the real world, and its security has become a bottleneck restricting the further development of the IoT. The perception layer is a self-organizing network system consisting of various resource-constrained sensor nodes communicating wirelessly. Accordingly, costly encryption mechanisms cannot be applied in the perception layer. In this paper, a novel lightweight data integrity protection scheme based on a fragile watermark is proposed to resolve the contradiction between security and the restricted resources of the perception layer. To improve security, we design a position random watermark (PRW) strategy that calculates the embedding position from the temporal dynamics of the sensing data. The digital watermark is generated by the one-way hash function SHA-1 before being embedded at the dynamically computed position. In this way, the security vulnerabilities introduced by a fixed embedding position are effectively eliminated, with zero disturbance to the data. The security analysis and simulation results show that the proposed scheme can effectively ensure the integrity of the data at low cost.
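
    The PRW idea can be sketched as follows; the SHA-1 watermark generation follows the abstract, while the exact position formula and tag length are assumptions:

```python
import hashlib

def prw_tag(readings, key):
    # Embedding position derived from the temporal dynamics of the
    # sensing data (sum of successive differences); watermark bits come
    # from a truncated SHA-1 digest over the data group and key.
    deltas = [abs(b - a) for a, b in zip(readings, readings[1:])]
    position = sum(deltas) % len(readings)        # data-driven, not fixed
    mark = hashlib.sha1(key + repr(readings).encode()).digest()[:4]
    return position, mark

def verify(readings, key, position, mark):
    # Fragile check: any change to the data moves the position and/or
    # changes the digest.
    return (position, mark) == prw_tag(readings, key)

data = [215, 217, 220, 219]          # e.g. temperature in 0.1 degC units
pos, mark = prw_tag(data, b"group-key")
assert verify(data, b"group-key", pos, mark)
assert not verify([215, 217, 220, 224], b"group-key", pos, mark)
```

    Deriving the position from the data itself is what removes the fixed-position vulnerability: an attacker cannot predict where the watermark sits without knowing the readings and the key.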

  3. STEGO TRANSFORMATION OF SPATIAL DOMAIN OF COVER IMAGE ROBUST AGAINST ATTACKS ON EMBEDDED MESSAGE

    Directory of Open Access Journals (Sweden)

    Kobozeva A.

    2014-04-01

    Full Text Available One of the main requirements for a steganographic algorithm is robustness against disturbing influences, that is, against attacks on the embedded message. It was shown that guaranteeing stego algorithm robustness does not depend on whether the additional information is embedded into the spatial or the transform domain of the cover image. Given the existing advantages of the spatial domain of the cover image for organizing the embedding and extraction processes, a sufficient condition for ensuring the robustness of such a stego transformation was obtained in this work. It was shown that, for an embedding of additional data that is robust against attacks on the embedded message, the amount of brightness correction applied to the pixels of a cover image block corresponds to the amount of correction of the maximum singular value of the block's matrix. Recommendations were obtained for selecting the size of the cover image block used in the stego transformation as one of the parameters determining the calculation error of the stego message. Given the inverse relation between the capacity of the stego channel being organized and the size of the cover image block, a block size of l=8 was recommended.
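    The link between pixel brightness and the maximum singular value can be illustrated directly: for an entrywise nonnegative block, a uniform brightness correction raises the block matrix's largest singular value. The following is a minimal numpy sketch of that observation, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # 8x8 cover block (l = 8)

def max_singular_value(b):
    # np.linalg.svd returns singular values in descending order
    return np.linalg.svd(b, compute_uv=False)[0]

s0 = max_singular_value(block)
s1 = max_singular_value(block + 5.0)  # uniform brightness correction of +5
print(s0, s1)
```

    For nonnegative matrices the spectral norm is monotone under entrywise increase, so the +5 correction is guaranteed to raise the maximum singular value here.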

  4. Robust linearized image reconstruction for multifrequency EIT of the breast.

    Science.gov (United States)

    Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C

    2008-10-01

    Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.

  5. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    Science.gov (United States)

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is shuffled and further divided into blocks of the same size, and then singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of each block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
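    A minimal sketch of the share-generation step described above, under the assumption that the compared element is the second entry of the first columns of U and V (the abstract does not fix which element is used, so this choice is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def share_bit(block):
    """One share bit per block: compare an element of the first column of
    the left orthogonal matrix U with the corresponding element of the
    right orthogonal matrix V (element choice is illustrative)."""
    U, _, Vt = np.linalg.svd(block)
    return 1 if abs(U[1, 0]) > abs(Vt.T[1, 0]) else 0

blocks = [rng.random((4, 4)) for _ in range(16)]
share = [share_bit(b) for b in blocks]
print(share)
# repeating the comparison on unmodified blocks reproduces the share exactly
assert share == [share_bit(b) for b in blocks]
```

    Because the comparison depends only on relative magnitudes of orthogonal-matrix entries, the share survives distortions that perturb the singular values without flipping these comparisons, which is the intuition behind the scheme's robustness.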

  6. Robust Image Analysis of Faces for Genetic Applications

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2010-01-01

    Roč. 6, č. 2 (2010), s. 95-102 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : object localization * template matching * eye or mouth detection * robust correlation analysis * image denoising Subject RIV: BB - Applied Statistics, Operational Research http://www.ejbi.cz/articles/201012/47/1.html

  7. A robust state-space kinetics-guided framework for dynamic PET image reconstruction

    International Nuclear Information System (INIS)

    Tong, S; Alessio, A M; Kinahan, P E; Liu, H; Shi, P

    2011-01-01

    Dynamic PET image reconstruction is a challenging issue due to the low SNR and the large quantity of spatio-temporal data. We propose a robust state-space image reconstruction (SSIR) framework for activity reconstruction in dynamic PET. Unlike statistically-based frame-by-frame methods, tracer kinetic modeling is incorporated to provide physiological guidance for the reconstruction, harnessing the temporal information of the dynamic data. Dynamic reconstruction is formulated in a state-space representation, where a compartmental model describes the kinetic processes in a continuous-time system equation, and the imaging data are expressed in a discrete measurement equation. Tracer activity concentrations are treated as the state variables, and are estimated from the dynamic data. Sampled-data H∞ filtering is adopted for robust estimation. H∞ filtering makes no assumptions on the system and measurement statistics, and guarantees bounded estimation error for finite-energy disturbances, leading to robust performance for dynamic data with low SNR and/or errors. This alternative reconstruction approach could help us to deal with unpredictable situations in imaging (e.g. data corruption from failed detector blocks) or inaccurate noise models. Experiments on synthetic phantom and patient PET data are performed to demonstrate the feasibility of the SSIR framework, and to explore its potential advantages over frame-by-frame statistical reconstruction approaches.

  8. Dynamic QoS Evaluation of Multimedia Contents in Wireless Networks by “Double-Boomerang” Watermarking

    Directory of Open Access Journals (Sweden)

    Gaetano Giunta

    2010-03-01

    Full Text Available This work presents a cooperative network-aware processing of multimedia content for dynamic quality of service management in wireless IP networks. Our technique can also be used for quality control in UMTS environments, exploiting the tracing watermarking recently introduced in the literature. In this work, we use the transmitted video sequences to monitor the QoS in a videoconference call. The video sequence of every active user travels on the communication link once as video (transparent mode) and once as a watermark (hidden mode), describing a boomerang trajectory. The results obtained through our simulation trials confirm the validity of the approach. In fact, the advantages of distributing the management process are (i) an easier and more precise localization of the cause of QoS problems, (ii) a better knowledge of local situations, (iii) a lower complexity for a single QoS agent, and (iv) an increase in the possible actions.

  9. Recent Advances in Information Hiding and Applications

    CERN Document Server

    Huang, Hsiang-Cheh; Jain, Lakhmi; Zhao, Yao

    2013-01-01

    This research book presents a sample of recent advances in information hiding techniques and their applications, including: an image data hiding scheme based on vector quantization and image graph coloring; a copyright protection system for the Android platform; reversible data hiding; ICA-based image and video watermarking; content-based invariant image watermarking; single bitmap block truncation coding of color images using cat swarm optimization; genetic-based wavelet packet watermarking for copyright protection; lossless text steganography in compression coding; a fast and low-distortion capacity synchronized acoustic-to-acoustic steganography scheme; and video watermarking with shot detection.

  10. Watermarking patient data in encrypted medical images

    Indian Academy of Sciences (India)

    Due to the advancement of technology, internet has become an ... area including important information and must be stored without any distortion. .... Although someone with the knowledge of encryption key can obtain a decrypted image and ... ical image management, in: Engineering in Medicine and Biology Society.

  11. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
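    The robust Hausdorff metric itself is not spelled out in the abstract; a common robust variant replaces the max over nearest-neighbour distances with a quantile (the partial Hausdorff distance), which suppresses cluttered-background outliers. A sketch under that assumption:

```python
import numpy as np

def robust_hausdorff(A, B, q=0.8):
    """Directed partial (rank-based) Hausdorff distance from point set A
    to point set B: the q-th quantile of nearest-neighbour distances
    instead of the maximum. (A common robust variant; the paper's exact
    formulation may differ.)"""
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))  # pairwise
    nearest = d.min(axis=1)        # each point in A to its closest point in B
    return float(np.quantile(nearest, q))

A = np.array([[0., 0.], [1., 0.], [0., 1.], [10., 10.]])  # one outlier
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
print(robust_hausdorff(A, B, q=1.0))  # classical: dominated by the outlier
print(robust_hausdorff(A, B, q=0.7))  # robust: outlier suppressed
```

    With q = 1.0 the measure reduces to the classical directed Hausdorff distance; lowering q trades sensitivity for robustness to unmatched background features.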

  12. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
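    As a rough illustration of robust trend removal, the sketch below levels a tilted image with an iteratively reweighted least-squares plane fit, a crude global stand-in for the paper's local-regression trend estimate, so that a bright feature is automatically downweighted rather than distorting the fit. The weighting rule and parameters are illustrative assumptions.

```python
import numpy as np

def robust_level(img, iters=5):
    """Remove a planar tilt trend via iteratively reweighted least squares:
    pixels with large residuals (features, spikes) get small weights."""
    h, w_ = img.shape
    yy, xx = np.mgrid[0:h, 0:w_]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w_)])
    z = img.ravel()
    w = np.ones_like(z)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
        r = z - A @ coef
        s = np.median(np.abs(r)) + 1e-12                 # robust scale (MAD-like)
        w = 1.0 / np.maximum(1.0, np.abs(r) / (3 * s))   # Huber-style weights
    return (z - A @ coef).reshape(h, w_)

rng = np.random.default_rng(4)
h, w_ = 32, 32
yy, xx = np.mgrid[0:h, 0:w_]
tilt = 0.05 * xx + 0.02 * yy                 # sample-tilt trend
img = tilt + 0.01 * rng.standard_normal((h, w_))
img[10:12, 10:12] += 5.0                     # bright feature / outlier
leveled = robust_level(img)
print(np.abs(np.median(leveled)))            # residual trend near zero
```

    After leveling, the background is flat around zero while the bright feature survives intact, which is the behaviour the paper requires of its trend estimator.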

  13. Strict integrity control of biomedical images

    Science.gov (United States)

    Coatrieux, Gouenou; Maitre, Henri; Sankur, Bulent

    2001-08-01

    The control of the integrity and authentication of medical images is becoming ever more important within the Medical Information Systems (MIS). The intra- and interhospital exchange of images, such as in the PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulation and distribution of images have brought forth the security aspects. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.

  14. Research on improving image recognition robustness by combining multiple features with associative memory

    Science.gov (United States)

    Guo, Dongwei; Wang, Zhe

    2018-05-01

    Convolutional neural networks (CNNs) achieve great success in computer vision: they can learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easy to fool, in that it is possible to produce images totally unrecognizable to human eyes that CNNs believe with near certainty are familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE, and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the digital features extracted by t-SNE are associated by the EBAM, so that recognition robustness is ensured by a comprehensive assessment of the two features. With our model, the error rate on fraudulent data is only 8%. In systems that require a high safety factor, or in other critical areas, strong robustness is extremely important: if image recognition robustness can be ensured, network security and production efficiency will be greatly improved.

  15. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    Science.gov (United States)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  16. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, the different kinds of modification are treated as classes, and a classification algorithm is used to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  17. Robust rooftop extraction from visible band images using higher order CRF

    KAUST Repository

    Li, Er; Femiani, John; Xu, Shibiao; Zhang, Xiaopeng; Wonka, Peter

    2015-01-01

    In this paper, we propose a robust framework for building extraction in visible band images. We first get an initial classification of the pixels based on an unsupervised presegmentation. Then, we develop a novel conditional random field (CRF

  18. Robust reflective ghost imaging against different partially polarized thermal light

    Science.gov (United States)

    Li, Hong-Guo; Wang, Yan; Zhang, Rui-Xue; Zhang, De-Jian; Liu, Hong-Chao; Li, Zong-Guo; Xiong, Jun

    2018-03-01

    We theoretically study the influence of degree of polarization (DOP) of thermal light on the contrast-to-noise ratio (CNR) of the reflective ghost imaging (RGI), which is a novel and indirect imaging modality. An expression for the CNR of RGI with partially polarized thermal light is carefully derived, which suggests a weak dependence of CNR on the DOP, especially when the ratio of the object size to the speckle size of thermal light has a large value. Different from conventional imaging approaches, our work reveals that RGI is much more robust against the DOP of the light source, which thereby has advantages in practical applications, such as remote sensing.

  19. Multiscale registration of remote sensing image using robust SIFT features in Steerable-Domain

    Directory of Open Access Journals (Sweden)

    Xiangzeng Liu

    2011-12-01

    Full Text Available This paper proposes a multiscale registration technique using robust Scale Invariant Feature Transform (SIFT) features in the Steerable-Domain, which can deal with large variations of scale, rotation, and illumination between images. First, a new robust SIFT descriptor that is invariant under affine transformation is presented. Then, an adaptive similarity measure is developed based on the robust SIFT descriptor and the adaptive normalized cross correlation of each feature point's neighborhood. Finally, the corresponding feature points are determined by the adaptive similarity measure in the Steerable-Domain of the two input images, and the final transformation parameters, refined by gradual optimization, are used to produce the registration results. Quantitative comparisons of our algorithm with related methods show a significant improvement in the presence of large scale and rotation changes and illumination contrast. The effectiveness of the proposed method is demonstrated by the experimental results.

  20. Instantaneous, Simple, and Reversible Revealing of Invisible Patterns Encrypted in Robust Hollow Sphere Colloidal Photonic Crystals.

    Science.gov (United States)

    Zhong, Kuo; Li, Jiaqi; Liu, Liwang; Van Cleuvenbergen, Stijn; Song, Kai; Clays, Koen

    2018-05-04

    The colors of photonic crystals are based on their periodic crystalline structure. They show clear advantages over conventional chromophores for many applications, mainly due to their anti-photobleaching and responsiveness to stimuli. More specifically, combining colloidal photonic crystals and invisible patterns is important in steganography and watermarking for anticounterfeiting applications. Here a convenient way to imprint robust invisible patterns in colloidal crystals of hollow silica spheres is presented. While these patterns remain invisible under static environmental humidity, even up to near 100% relative humidity, they are unveiled immediately (≈100 ms) and fully reversibly by dynamic humid flow, e.g., human breath. They reveal themselves due to the extreme wettability of the patterned (etched) regions, as confirmed by contact angle measurements. The liquid surface tension threshold to induce wetting (revealing the imprinted invisible images) is evaluated by thermodynamic predictions and subsequently verified by exposure to various vapors with different surface tension. The color of the patterned regions is furthermore independently tuned by vapors with different refractive indices. Such a system can play a key role in applications such as anticounterfeiting, identification, and vapor sensing. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Comparison of Video Steganography Methods for Watermark Embedding

    Directory of Open Access Journals (Sweden)

    Griberman David

    2016-05-01

    Full Text Available The paper focuses on the comparison of video steganography methods for the purpose of digital watermarking in the context of copyright protection. Four embedding methods that use Discrete Cosine and Discrete Wavelet Transforms have been researched and compared based on their embedding efficiency and fidelity. A video steganography program has been developed in the Java programming language with all of the researched methods implemented for experiments. The experiments used 3 video containers with different amounts of movement. The impact of the movement has been addressed in the paper as well as the ways of potential improvement of embedding efficiency using adaptive embedding based on the movement amount. Results of the research have been verified using a survey with 17 participants.

  2. Information hiding techniques for infrared images: exploring the state-of-the art and challenges

    Science.gov (United States)

    Pomponiu, Victor; Cavagnino, Davide; Botta, Marco; Nejati, Hossein

    2015-10-01

    The proliferation of infrared technology and imaging systems enables a different perspective for tackling many computer vision problems in defense and security applications. Infrared images are widely used by law enforcement, Homeland Security, and military organizations to achieve a significant advantage or situational awareness, and thus it is vital to protect these data against malicious attacks. Concurrently, sophisticated malware is being developed that is able to disrupt the security and integrity of these digital media. For instance, illegal distribution and manipulation are possible malicious attacks on digital objects. In this paper we explore the use of a new layer of defense for the integrity of infrared images through information hiding techniques such as watermarking. In this context, we analyze the efficiency of several optimal decoding schemes for a watermark inserted into the Singular Value Decomposition (SVD) domain of IR images using an additive spread spectrum (SS) embedding framework. In order to use the singular values (SVs) of the IR images with SS embedding, we adopt several restrictions that ensure that the statistics of the SVs are maintained. For both the optimal maximum likelihood decoder and the sub-optimal decoders, we assume that the PDF of the SVs can be modeled by the Weibull distribution. Furthermore, we investigate the challenges involved in protecting and assuring the integrity of IR images, such as data complexity and the error probability behavior, i.e., the probability of detection and the probability of false detection, for the applied optimal decoders. Taking into account the efficiency and the auxiliary information necessary for decoding the watermark, we discuss the suitable decoder for various operating situations. Experimental results on a large dataset of IR images show the imperceptibility and efficiency of the proposed scheme against various attack scenarios.
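    A minimal sketch of spread-spectrum embedding in the SVD domain with a correlation detector. The embedding strength, the scaled-additive rule, and the non-blind detection (original SVs assumed available) are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))                    # stand-in for an IR image
U, s, Vt = np.linalg.svd(img, full_matrices=False)

w = rng.choice([-1.0, 1.0], size=s.shape)     # bipolar SS chip sequence
alpha = 0.02                                  # embedding strength (assumed)
s_marked = s + alpha * s * w                  # scaled additive embedding in SVs
marked = (U * s_marked) @ Vt                  # watermarked image

# non-blind correlation detector: the original SVs are assumed available;
# a full decoder would re-extract the SVs from `marked` itself
diff = (s_marked - s) / (alpha * s)
corr = float(np.corrcoef(diff, w)[0, 1])
print(corr)
```

    Scaling the perturbation by each singular value keeps the modified SVs positive and keeps the relative distortion uniform, which is one simple way to respect the SV statistics the paper insists on preserving.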

  3. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Robust bladder image registration by redefining data-term in total variational approach

    Science.gov (United States)

    Ali, Sharib; Daul, Christian; Galbrun, Ernest; Amouroux, Marine; Guillemin, François; Blondel, Walter

    2015-03-01

    Cystoscopy is the standard procedure for clinical diagnosis of bladder cancer. Bladder carcinomas in situ are often multifocal and spread over large areas, so in vivo localization and follow-up of these tumors and their nearby sites is necessary. However, due to the small field of view (FOV) of cystoscopic video images, urologists cannot easily interpret the scene. Bladder mosaicing using image registration facilitates this interpretation through the visualization of entire lesions with respect to anatomical landmarks. The reference white light (WL) modality is affected by strong variability in texture, illumination conditions, and motion blur. Moreover, in the complementary fluorescence light (FL) modality, the texture is visually different from that of WL. Existing algorithms were developed for a particular modality and scene conditions. This paper proposes a more general, on-the-fly image registration approach for dealing with these variability issues in cystoscopy. To do so, we present a novel, robust, and accurate image registration scheme obtained by redefining the data-term of the classical total variational (TV) approach. Quantitative results on realistic bladder phantom images are used to verify the accuracy and robustness of the proposed model. The method is also qualitatively assessed by mosaicing patient data for both WL and FL modalities.

  5. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with the traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces some inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration, and uses the iterative closest point (ICP) algorithm in the last step to obtain the image-to-patient registration. Both phantom and clinical data were used to execute automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In clinical experiments, the mean TREs of two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
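    The ICP refinement step can be sketched in a few lines for the rigid 2-D case. This is purely illustrative: the paper's pipeline works on 3D surfaces and first obtains a coarse registration from 3D feature matching before refining with ICP.

```python
import numpy as np

def icp(src, dst, iters=15):
    """Minimal rigid 2-D ICP: alternate closest-point matching with a
    closed-form SVD (Procrustes) update of the rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None] - dst[None, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]            # closest-point pairing
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against reflection
            Vt[-1] *= -1.0
            R = Vt.T @ U.T
        cur = (cur - mu_c) @ R.T + mu_m            # apply the rigid update
    return cur

# synthetic "patient surface": a regular grid of points
gx, gy = np.meshgrid(np.arange(8.0), np.arange(8.0))
dst = np.column_stack([gx.ravel(), gy.ravel()])
theta = 0.05                                       # small residual misalignment
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = (dst - dst.mean(0)) @ R_true.T + dst.mean(0) + np.array([0.2, -0.1])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())                 # essentially zero
```

    ICP only converges from a good starting pose, which is exactly why the paper automates the coarse step: its 3D feature matches supply the initial alignment that a user would otherwise have to provide manually.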

  6. Automated robust registration of grossly misregistered whole-slide images with varying stains

    Science.gov (United States)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slides to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.

  7. Human visual system-based color image steganography using the contourlet transform

    Science.gov (United States)

    Abdul, W.; Carré, P.; Gaborit, P.

    2010-01-01

    We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system (HVS), which is very important because steganographic schemes must be undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The imperceptibility of the steganographic scheme with respect to the color perception of the HVS is evaluated using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.

  8. TU-G-303-02: Robust Radiomics Methods for PET and CT Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Aerts, H. [Brigham and Women’s Hospital and Dana-Farber Cancer Institute (United States)

    2015-06-15

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding

  9. A Robust Identification of the Protein Standard Bands in Two-Dimensional Electrophoresis Gel Images

    Directory of Open Access Journals (Sweden)

    Serackis Artūras

    2017-12-01

    Full Text Available The aim of the investigation presented in this paper was to develop a software-based assistant for the protein analysis workflow. The prior characterization of the unknown protein in two-dimensional electrophoresis gel images is performed according to the molecular weight and isoelectric point of each protein spot estimated from the gel image before further sequence analysis by mass spectrometry. The paper presents a method for automatic and robust identification of the protein standard band in a two-dimensional gel image. In addition, the method introduces the identification of the positions of the markers, prepared by using pre-selected proteins with known molecular mass. The robustness of the method was achieved by using special validation rules in the proposed original algorithms. In addition, a self-organizing map-based decision support algorithm is proposed, which takes Gabor coefficients as image features and searches for the differences in preselected vertical image bars. The experimental investigation proved the good performance of the new algorithms included into the proposed method. The detection of the protein standard markers works without modification of algorithm parameters on two-dimensional gel images obtained by using different staining and destaining procedures, which results in different average levels of intensity in the images.
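    The abstract above uses Gabor coefficients as image features for its self-organizing-map decision support. As a minimal sketch of what such features look like, the following pure-Python function builds a 2-D Gabor kernel (a Gaussian envelope modulated by a cosine carrier); all parameter values are illustrative defaults, not the paper's.

```python
import math

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0, gamma=0.5, psi=0.0):
    """Return a size x size 2-D Gabor kernel as a list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates by the filter orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope times cosine carrier along xr.
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel()  # with psi=0, the kernel peaks at 1.0 in the center
```

    Convolving an image bar with a small bank of such kernels (several orientations and wavelengths) yields the kind of coefficient vector a SOM can cluster.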

  10. A robust method for processing scanning probe microscopy images and determining nanoobject position and dimensions

    NARCIS (Netherlands)

    Silly, F.

    2009-01-01

    Processing of scanning probe microscopy (SPM) images is essential to explore nanoscale phenomena. Image processing and pattern recognition techniques are developed to improve the accuracy and consistency of nanoobject and surface characterization. We present a robust and versatile method to

  11. Damaged Watermarks Detection in Frequency Domain as a Primary Method for Video Concealment

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2011-01-01

    Full Text Available This paper deals with video transmission over lossy communication networks. The main idea is to develop a video concealment method for the correction of information losses and errors. At the beginning, the three main groups of video concealment methods, divided by encoder/decoder collaboration, are briefly described. A modified algorithm based on the detection and filtration of damaged watermark blocks encapsulated in the transmitted video was developed. Finally, the efficiency of the developed algorithm is presented in the experimental part of this paper.

  12. Robust Imaging Methodology for Challenging Environments: Wave Equation Dispersion Inversion of Surface Waves

    KAUST Repository

    Li, Jing; Schuster, Gerard T.; Zeng, Zhaofa

    2017-01-01

    A robust imaging technology is reviewed that provides subsurface information in challenging environments: wave-equation dispersion inversion (WD) of surface waves for the shear velocity model. We demonstrate the benefits and liabilities of the method

  13. Robust and efficient method for matching features in omnidirectional images

    Science.gov (United States)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion invariant descriptor. TPBRIEF directly enables keypoint detection and feature description on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
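    The projection of keypoints to a unit sphere can be sketched for the common equirectangular omnidirectional image layout (width spans longitude, height spans latitude); TPBRIEF's actual camera model may differ, so treat this as an assumed mapping.

```python
import math

def pixel_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a direction on the unit sphere."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)

# Image center of a 1024x512 panorama maps to the forward axis (1, 0, 0).
p = pixel_to_sphere(512, 256, 1024, 512)
```

    Descriptor tests laid out on the tangent plane at such a point can then be backprojected through the same mapping, which is what makes the sampling pattern distortion-aware.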

  14. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    In the era of digital information, there are multiple danger zones, such as copyright and integrity violations of digital objects. In case of a dispute over a rights violation, the content creator can prove ownership by recovering the watermark. The two most important prerequisites for an efficient watermarking scheme are robustness and ...

  15. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    Science.gov (United States)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
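    The core of any projective band-to-band transform is mapping pixels of one band through a 3x3 homography into the geometry of a reference band. A minimal sketch, with a hypothetical transform matrix (the paper's MPT estimates such matrices per lens, which this example does not do):

```python
def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 projective transformation (homography)."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)  # divide out the homogeneous coordinate

# Hypothetical band-to-band correction: slight scale plus a small shift,
# the kind of residual a multi-lens rig introduces between bands.
H = [[1.01, 0.0, 2.5],
     [0.0, 1.01, -1.0],
     [0.0, 0.0, 1.0]]
p = apply_homography(H, (100.0, 50.0))
```

    Resampling every pixel of a band through its estimated homography yields the co-registered band stack.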

  16. Towards an efficient and robust foot classification from pedobarographic images.

    Science.gov (United States)

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2012-01-01

    This paper presents a new computational framework for automatic foot classification from digital plantar pressure images. It classifies the foot as left or right and simultaneously calculates two well-known footprint indices: the Cavanagh's arch index (AI) and the modified AI. The accuracy of the framework was evaluated using a set of plantar pressure images from two common pedobarographic devices. The results were outstanding, as all feet under analysis were correctly classified as left or right and no significant differences were observed between the footprint indices calculated using the computational solution and the traditional manual method. The robustness of the proposed framework to arbitrary foot orientations and to the acquisition device was also tested and confirmed.
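    Cavanagh's arch index divides the toeless footprint into three equal-length regions along the foot axis and takes the ratio of the middle-third contact area to the total contact area. A toy pure-Python sketch on a binary footprint mask (the mask values and the simple equal-rows split are illustrative assumptions, not the paper's pipeline):

```python
def arch_index(mask):
    """Cavanagh's arch index: middle-third contact area / total contact area.

    `mask` is a binary footprint (toes excluded) with rows along the foot axis.
    """
    n = len(mask)
    third = n // 3
    areas = [
        sum(sum(row) for row in mask[0:third]),          # forefoot
        sum(sum(row) for row in mask[third:2 * third]),  # midfoot
        sum(sum(row) for row in mask[2 * third:]),       # rearfoot
    ]
    return areas[1] / float(sum(areas))

# Toy 9-row footprint: wide forefoot/rearfoot, narrow midfoot.
foot = (
    [[1, 1, 1, 1]] * 3 +   # forefoot: 12 contact pixels
    [[0, 1, 1, 0]] * 3 +   # midfoot:   6 contact pixels
    [[1, 1, 1, 0]] * 3     # rearfoot:  9 contact pixels
)
ai = arch_index(foot)  # 6 / 27
```

    A flatter foot fills in more of the midfoot region and so raises the index; the modified AI mentioned in the abstract alters how the regions are delimited.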

  17. Seismic image watermarking using optimized wavelets

    International Nuclear Information System (INIS)

    Mufti, M.

    2010-01-01

    Geotechnical processes and technologies are becoming more and more sophisticated through the use of computer and information technology. This has made the availability, authenticity and security of geotechnical data even more important. One of the most common methods of storing and sharing seismic data images is through the standardized SEG-Y file format. The geotechnical industry is now primarily data centric. The analytic and detection capability of a seismic processing tool is heavily dependent on the correctness of the contents of the SEG-Y data file. This paper describes a method, based on an optimized wavelet transform technique, which prevents unauthorized alteration and/or use of seismic data. (author)
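    The abstract does not specify the optimized wavelet, so as a stand-in the sketch below uses a one-level orthonormal Haar transform on a 1-D trace: the watermark is added to the detail coefficients and recovered non-blindly by comparison with the original (the embedding strength `alpha` is a hypothetical parameter).

```python
import math

S = math.sqrt(2.0)

def haar_fwd(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / S for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / S for i in range(len(x) // 2)]
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd (perfect reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / S)
        x.append((ai - di) / S)
    return x

def embed(trace, wm_bits, alpha=0.5):
    """Additively embed +-alpha per watermark bit into the detail coefficients."""
    a, d = haar_fwd(trace)
    d2 = [di + alpha * (1 if b else -1) for di, b in zip(d, wm_bits)]
    return haar_inv(a, d2)

def extract(watermarked, original, alpha=0.5):
    """Non-blind extraction: sign of the detail-coefficient difference."""
    _, d_w = haar_fwd(watermarked)
    _, d_o = haar_fwd(original)
    return [1 if dw - do > 0 else 0 for dw, do in zip(d_w, d_o)]

trace = [4.0, 2.0, 1.0, 3.0, 5.0, 5.0, 0.0, 2.0]   # toy seismic trace
bits = [1, 0, 1, 1]
wm = embed(trace, bits)
recovered = extract(wm, trace)
```

    A tampered file would fail this comparison, which is the authentication idea; a production scheme would of course use 2-D transforms over the SEG-Y trace grid and a blind detector.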

  18. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Sadhana, Volume 37, Issue 4, August 2012, pages 425-537. pp 425-440: A robust and secure watermarking scheme based on singular values replacement. Akshya Kumar Gupta, Mehul S Raval. Digital watermarking is an ...

  19. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    Science.gov (United States)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
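    The RANSAC step named above can be illustrated in miniature: repeatedly fit a model to a random minimal sample, count inliers, and keep the consensus winner. The sketch below fits a 2-D line (the real method applies the same loop to feature correspondences; point values and thresholds are illustrative).

```python
import random

def ransac_line(points, n_iters=200, tol=0.1, seed=0):
    """Fit y = m*x + c by RANSAC: sample 2 points, count inliers, keep best."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    best_inliers = -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical sample: skip this hypothesis
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + c)) < tol)
        if inliers > best_inliers:
            best_inliers = inliers
            best = (m, c)
    return best, best_inliers

# Eight points on y = 2x + 1 plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(8)] + [(1.0, 9.0), (5.0, -3.0)]
(m, c), n_in = ransac_line(pts)
```

    The voting scheme in the abstract plays a similar role: hypotheses supported by the largest consensus set win, so a few bad A-SIFT matches cannot corrupt the registration.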

  20. Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise

    OpenAIRE

    Kamaldeep Joshi; Rajkumar Yadav; Sachin Allwadhi

    2016-01-01

    Image steganography is a key aspect of information hiding: the information is hidden within an image, and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular approaches to image steganography. In this method, an information bit is hidden in the LSB of an image pixel. In one-bit LSB steganography, the total number of pixels and the total number of message bits are equal. In this paper, the LSB method of image ...
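    One-bit LSB embedding as described above is simple enough to state exactly: overwrite the least significant bit of each pixel with a message bit, which changes each pixel value by at most 1. A minimal sketch with toy pixel values:

```python
def embed_lsb(pixels, bits):
    """Hide one message bit in the LSB of each pixel value (one-bit LSB)."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back from the LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [137, 200, 54, 19, 255, 0, 88, 61]   # toy 8-bit pixel values
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))
```

    The at-most-1 change per pixel is why plain LSB is imperceptible but fragile: salt-and-pepper noise, the attack studied in this paper, rewrites whole pixel values and destroys the hidden bits it touches.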

  1. Robustness of phase retrieval methods in x-ray phase contrast imaging: A comparison

    International Nuclear Information System (INIS)

    Yan, Aimin; Wu, Xizeng; Liu, Hong

    2011-01-01

    Purpose: The robustness of the phase retrieval methods is of critical importance for limiting and reducing radiation doses involved in x-ray phase contrast imaging. This work compares the robustness of two phase retrieval methods by analyzing the phase maps retrieved from the experimental images of a phantom. Methods: Two phase retrieval methods were compared. One method is based on the transport of intensity equation (TIE) for phase contrast projections, and the TIE-based method is the most commonly used method for phase retrieval in the literature. The other is the recently developed attenuation-partition based (AP-based) phase retrieval method. The authors applied these two methods to experimental projection images of an air-bubble wrap phantom for retrieving the phase map of the bubble wrap. The retrieved phase maps obtained by using the two methods are compared. Results: In the wrap's phase map retrieved by using the TIE-based method, no bubble is recognizable; hence, this method failed completely for phase retrieval from these bubble wrap images. Even with the help of the Tikhonov regularization, the bubbles are still hardly visible and buried in the cluttered background in the retrieved phase map. The retrieved phase values with this method are grossly erroneous. In contrast, in the wrap's phase map retrieved by using the AP-based method, the bubbles are clearly recovered. The retrieved phase values with the AP-based method are reasonably close to the estimate based on the thickness-based measurement. The authors traced these stark performance differences of the two methods to their different techniques employed to deal with the singularity problem involved in the phase retrievals. Conclusions: This comparison shows that the conventional TIE-based phase retrieval method, regardless of whether Tikhonov regularization is used, is unstable against the noise in the wrap's projection images, while the AP-based phase retrieval method is shown in these
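    For reference, the equation the TIE-based method inverts is the paraxial transport of intensity equation, stated here in one common form (sign and normalization conventions vary between authors):

```latex
% Transport of intensity equation (TIE), paraxial form.
% k = 2\pi/\lambda is the wavenumber, I the measured intensity,
% \phi the phase to retrieve, z the propagation axis,
% \nabla_{\perp} the gradient in the transverse (x, y) plane.
k \,\frac{\partial I(x,y;z)}{\partial z}
  = -\,\nabla_{\perp} \cdot \bigl( I(x,y;z)\, \nabla_{\perp}\phi(x,y;z) \bigr)
```

    Inverting this for \(\phi\) requires dividing by \(I\) (or its Fourier-domain analogue), which is the singularity the abstract refers to: wherever \(I\) is small, noise is amplified, and Tikhonov regularization only partially suppresses it.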

  2. A Synchronisation Method For Informed Spread-Spectrum Audiowatermarking

    Directory of Open Access Journals (Sweden)

    Pierre-Yves Fulchiron

    2003-12-01

    Full Text Available Under perfect synchronisation conditions, watermarking schemes employing asymmetric spread-spectrum techniques are suitable for copy-protection of audio signals. This paper proposes to combine the use of a robust psychoacoustic projection for the extraction of a watermark feature vector along with non-linear detection functions optimised with side-information. The new proposed scheme benefits from an increased level of security through the use of asymmetric detectors. We apply this scheme to real audio signals and experimental results show an increased robustness to desynchronisation attacks such as random cropping.
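    The spread-spectrum embedding and correlation detection underlying such schemes can be sketched in a few lines: a key-seeded pseudorandom +-1 sequence is added to a feature vector, and detection is a normalized correlation against the same sequence. All values (`alpha`, the key, the host features) are illustrative, and the detector here is the plain linear one rather than the paper's optimized non-linear functions.

```python
import random

def pn_sequence(n, key):
    """Key-seeded pseudorandom +-1 spreading sequence."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_ss(features, key, alpha=0.05):
    """Add the scaled spreading sequence to the host feature vector."""
    w = pn_sequence(len(features), key)
    return [f + alpha * wi for f, wi in zip(features, w)]

def detect_ss(features, key):
    """Normalized correlation between the features and the PN sequence."""
    w = pn_sequence(len(features), key)
    return sum(f * wi for f, wi in zip(features, w)) / len(features)

# Toy host feature vector (e.g. a psychoacoustic projection of an audio frame).
host = [0.01 * ((i * 37) % 11 - 5) for i in range(1024)]
marked = embed_ss(host, key=42)
stat_marked = detect_ss(marked, key=42)
stat_clean = detect_ss(host, key=42)
```

    The marked statistic exceeds the clean one by exactly `alpha` here; desynchronization attacks such as random cropping break the sample alignment between the features and the PN sequence, which is why the paper's synchronisation step matters.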

  3. Multimedia forensics and security foundations, innovations, and applications

    CERN Document Server

    Fouad, Mohamed; Manaf, Azizah; Zamani, Mazdak; Ahmad, Rabiah; Kacprzyk, Janusz

    2017-01-01

    This book presents recent applications and approaches as well as challenges in digital forensic science. One of the evolving challenges that is covered in the book is the cloud forensic analysis which applies the digital forensic science over the cloud computing paradigm for conducting either live or static investigations within the cloud environment. The book also covers the theme of multimedia forensics and watermarking in the area of information security. That includes highlights on intelligence techniques designed for detecting significant changes in image and video sequences. Moreover, the theme proposes recent robust and computationally efficient digital watermarking techniques. The last part of the book provides several digital forensics related applications, including areas such as evidence acquisition enhancement, evidence evaluation, cryptography, and finally, live investigation through the importance of reconstructing the botnet attack scenario to show the malicious activities and files as evidence...

  4. Robust feature estimation by non-rigid hierarchical image registration and its application in disparity measurement

    Science.gov (United States)

    Badshah, Amir; Choudhry, Aadil Jaleel; Ullah, Shan

    2017-03-01

    Industries are moving towards automation in order to increase productivity and ensure quality. A variety of electronic and electromagnetic systems are being employed to assist the human operator in fast and accurate quality inspection of products. The majority of these systems are equipped with cameras and rely on diverse image processing algorithms. Information is lost in a 2D image; therefore, acquiring accurate 3D data from 2D images is an open issue. FAST, SURF and SIFT are well-known spatial domain techniques for feature extraction and hence image registration to find correspondence between images. The efficiency of these methods is measured in terms of the number of perfect matches found. A novel fast and robust technique for stereo-image processing is proposed. It is based on non-rigid registration using modified normalized phase correlation. The proposed method registers two images in a hierarchical fashion using a quad-tree structure. The registration process works from global to local level, resulting in robust matches even in the presence of blur and noise. The computed matches can further be utilized to determine disparity and depth for industrial product inspection. The same can be used in driver assistance systems. The preliminary tests on the Middlebury dataset produced satisfactory results. The execution time for a 413 x 370 stereo pair is approximately 500 ms on a low-cost DSP.
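    Plain (unmodified) phase correlation, the building block behind the method above, estimates a shift from the normalized cross-power spectrum. A 1-D toy with a naive O(n^2) DFT shows the idea; the paper's hierarchical, non-rigid, modified variant goes well beyond this sketch.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(a, b):
    """Estimate the circular shift s such that b[n] = a[n - s]."""
    A, B = dft(a), dft(b)
    R = []
    for Ak, Bk in zip(A, B):
        num = Ak.conjugate() * Bk          # cross-power spectrum
        mag = abs(num)
        R.append(num / mag if mag > 1e-12 else 0.0)  # normalize to pure phase
    r = idft(R)                            # ideally an impulse at n = s
    return max(range(len(r)), key=lambda i: r[i].real)

a = [9.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
b = a[-3:] + a[:-3]                        # a circularly shifted by 3 samples
shift = phase_correlation_shift(a, b)
```

    In 2-D the impulse location gives the (dx, dy) block displacement; the magnitude normalization is what makes the estimate robust to blur and illumination changes.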

  5. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except for the location at control points, RPM cannot estimate the consistent correspondence between two images because RPM is a unidirectional image matching approach. Therefore, it is an important issue to make an improvement in image registration based on RPM. Methods In our work, a consistent image registration approach based on the point sets matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point sets and the target point sets in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced to the cost function of RPM and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets, medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than RPM. Especially, the topology of transformations is preserved well for our algorithm for the large deformation between point sets. Moreover, the distance errors of our algorithm are similar to that of RPM, and they maintain a downward trend as whole, which demonstrates the convergence of our algorithm. The registration errors for image registrations are evaluated also. 
Again, our algorithm achieves lower registration errors for the same number of iterations.

  6. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Yuan Xu

    2015-01-01

    Full Text Available Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change features and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., to what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes occurred), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. The sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique.

  7. Robust digital image inpainting algorithm in the wireless environment

    Science.gov (United States)

    Karapetyan, G.; Sarukhanyan, H. G.; Agaian, S. S.

    2014-05-01

    Image or video inpainting is the process/art of retrieving missing portions of an image without introducing undesirable artifacts that are undetectable by an ordinary observer. An image/video can be damaged due to a variety of factors, such as deterioration due to scratches, laser dazzling effects, wear and tear, dust spots, loss of data when transmitted through a channel, etc. Applications of inpainting include image restoration (removing laser dazzling effects, dust spots, date, text, time, etc.), image synthesis (texture synthesis), completing panoramas, image coding, wireless transmission (recovery of the missing blocks), digital culture protection, image de-noising, fingerprint recognition, and film special effects and production. Most inpainting methods can be classified in two key groups: global and local methods. Global methods are used for generating large image regions from samples while local methods are used for filling in small image gaps. Each method has its own advantages and limitations. For example, the global inpainting methods perform well on textured image retrieval, whereas the classical local methods perform poorly. In addition, some of the techniques are computationally intensive; exceeding the capabilities of most currently used mobile devices. In general, the inpainting algorithms are not suitable for the wireless environment. This paper presents a new and efficient scheme that combines the advantages of both local and global methods into a single algorithm. Particularly, it introduces a blind inpainting model to solve the above problems by adaptively selecting support area for the inpainting scheme. The proposed method is applied to various challenging image restoration tasks, including recovering old photos, recovering missing data on real and synthetic images, and recovering the specular reflections in endoscopic images. 
A number of computer simulations demonstrate the effectiveness of our scheme and also illustrate the main properties

  8. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
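    The GTM correction reduces to solving a small linear system: the matrix G holds the fraction of each region's regional spread function observed in every other region, and inverting G t = m recovers the true regional uptakes t from the blurred measurements m. A hypothetical two-region example (the spill-over fractions and uptakes are invented for illustration):

```python
def solve_2x2(G, m):
    """Solve G t = m for a 2-region geometric transfer matrix by Cramer's rule."""
    (a, b), (c, d) = G
    det = a * d - b * c
    t0 = (d * m[0] - b * m[1]) / det
    t1 = (-c * m[0] + a * m[1]) / det
    return [t0, t1]

# Row i: fraction of each region's RSF that lands inside region i.
G = [[0.9, 0.1],
     [0.2, 0.8]]
true_uptake = [3.0, 1.0]
# Measured regional means are partial-volume-blurred mixtures of the truth.
observed = [G[0][0] * true_uptake[0] + G[0][1] * true_uptake[1],   # 2.8
            G[1][0] * true_uptake[0] + G[1][1] * true_uptake[1]]   # 1.4
corrected = solve_2x2(G, observed)
```

    The sGTM variant of the paper changes how the matrix entries are computed (overlap between RSFs rather than between regions), but the final inversion step has the same shape.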

  9. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  10. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    International Nuclear Information System (INIS)

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Siewerdsen, Jeffrey H; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L

    2013-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust
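    The multi-start strategy that gives this framework its robustness can be illustrated on a toy problem: run a cheap local optimizer from several initializations and keep the best local minimum found. The sketch below uses plain gradient descent on a 1-D cost with two basins (the paper uses CMA-ES with local restarts on a 6-DoF pose, which this does not reproduce).

```python
def f(x):
    """Toy 1-D cost with two local minima; the global one is near x = -2."""
    return (x * x - 4.0) ** 2 + x

def grad(x):
    return 4.0 * x * (x * x - 4.0) + 1.0

def local_descent(x, lr=0.01, n_steps=500):
    """Fixed-step gradient descent: converges to the nearest basin's minimum."""
    for _ in range(n_steps):
        x -= lr * grad(x)
    return x

def multi_start(starts):
    """Run the local optimizer from several starts and keep the best result."""
    candidates = [local_descent(x0) for x0 in starts]
    return min(candidates, key=f)

best = multi_start([-3.0, -1.0, 0.0, 1.0, 3.0])
```

    A single start near x = 3 would be trapped in the shallow basin near x = 2; the multi-start sweep finds the deeper basin, which is exactly the failure mode (poor initialization, content mismatch) the abstract's global search guards against.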

  11. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    Energy Technology Data Exchange (ETDEWEB)

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Siewerdsen, Jeffrey H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD (United States); Uneri, Ali [Department of Computer Science, Johns Hopkins University, Baltimore MD (United States); Kleinszig, Gerhard; Vogt, Sebastian [Siemens Healthcare, Erlangen (Germany); Khanna, A Jay [Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore MD (United States); Gokaslan, Ziya L, E-mail: jeff.siewerdsen@jhu.edu [Department of Neurosurgery, Johns Hopkins University, Baltimore MD (United States)

    2013-12-07

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust

  12. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are one of the most destructive geo-hazards that can bring about great threats to both human lives and infrastructures. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research to obtain critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, the complex imaging conditions such as rainfall, mass movement, illumination, and ponding will reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images for generation of scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. 
In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric
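The Fundamental-Matrix filtering step described above is typically implemented by thresholding a per-match epipolar residual. A minimal sketch using the Sampson distance; the F below is a toy matrix for a pure horizontal-translation stereo pair, not the paper's estimate:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order (Sampson) residual of the epipolar constraint
    x2^T F x1 = 0 for matched points x1, x2 of shape (N, 2)."""
    x1h = np.hstack([x1, np.ones((len(x1), 1))])
    x2h = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = x1h @ F.T              # epipolar lines in image 2
    Ftx2 = x2h @ F               # epipolar lines in image 1
    num = np.sum(x2h * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def filter_matches(F, x1, x2, thresh=1.0):
    """Keep only the matches consistent with the epipolar geometry."""
    return sampson_distance(F, x1, x2) < thresh
```

For a camera translating along the image x-axis, inlying matches keep the same row, so a vertical offset between matched points is flagged as an outlier.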

  13. Short-circuit current density imaging of crystalline silicon solar cells via lock-in thermography: Robustness and simplifications

    International Nuclear Information System (INIS)

    Fertig, Fabian; Greulich, Johannes; Rein, Stefan

    2014-01-01

    Spatially resolved determination of solar cell parameters is beneficial for loss analysis and optimization of conversion efficiency. One key parameter that has been challenging to access by an imaging technique on solar cell level is short-circuit current density. This work discusses the robustness of a recently suggested approach to determine short-circuit current density spatially resolved based on a series of lock-in thermography images and options for a simplified image acquisition procedure. For an accurate result, one or two emissivity-corrected illuminated lock-in thermography images and one dark lock-in thermography image have to be recorded. The dark lock-in thermography image can be omitted if local shunts are negligible. Furthermore, it is shown that omitting the correction of lock-in thermography images for local emissivity variations only leads to minor distortions for standard silicon solar cells. Hence, adequate acquisition of one image only is sufficient to generate a meaningful map of short-circuit current density. Beyond that, this work illustrates the underlying physics of the recently proposed method and demonstrates its robustness concerning varying excitation conditions and locally increased series resistance. Experimentally gained short-circuit current density images are validated for monochromatic illumination in comparison to the reference method of light-beam induced current

  14. Robust generative asymmetric GMM for brain MR image segmentation.

    Science.gov (United States)

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the neighboring priors of within-cluster and between-cluster and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function to overcome the intensity inhomogeneity in the image and segment the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution of the proposed algorithm. The next group of experiments was carried out on clinical 3T-weighted brain MR images, which contain quite serious intensity inhomogeneity and noise. Then we quantitatively compared our algorithm to state-of-the-art segmentation approaches by using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In this paper, the RGAGMM
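The EM core that RGAGMM extends is the standard Gaussian-mixture fit. A stripped-down 1D, two-component version (without the asymmetric distributions, spatial priors, or bias-field terms of the paper) looks like:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

# Two well-separated tissue-like intensity clusters (synthetic data)
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(8, 1, 500)])
```

On the synthetic data the estimated means converge near the true cluster centers; the paper's contribution is precisely to keep this fit stable when noise and a bias field violate these idealized assumptions.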

  15. Evaluation of the robustness of estimating five components from a skin spectral image

    Science.gov (United States)

    Akaho, Rina; Hirose, Misa; Tsumura, Norimichi

    2018-04-01

    We evaluated the robustness of a method used to estimate five components (i.e., melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths against noise and a change in epidermis thickness. We also estimated the five components from recorded images of age spots and circles under the eyes using the method. We found that noise in the image must be no more than 0.1% to accurately estimate the five components and that the thickness of the epidermis affects the estimation. We acquired the distribution of major causes of age spots and circles under the eyes by applying the method to recorded spectral images.
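Component estimation from a handful of wavelengths is commonly posed as linear unmixing of the log-reflectance (a modified Beer-Lambert model). The sketch below uses illustrative extinction coefficients, not measured skin spectra, and folds shading into a constant column; surface reflectance is left out of this toy model:

```python
import numpy as np

# Illustrative (NOT measured) extinction coefficients at five wavelengths;
# columns: melanin, oxy-hemoglobin, deoxy-hemoglobin, constant shading term.
E = np.array([
    [1.5, 0.3, 0.4, 1.0],
    [1.2, 0.9, 0.5, 1.0],
    [1.0, 0.4, 1.1, 1.0],
    [0.8, 1.2, 0.6, 1.0],
    [0.6, 0.5, 0.9, 1.0],
])

def estimate_components(reflectance):
    """Least-squares unmixing of -log(reflectance) into chromophore
    densities plus shading."""
    absorbance = -np.log(reflectance)
    x, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return x
```

The noise sensitivity reported in the record can be probed by perturbing the synthetic reflectance before unmixing.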

  16. Glioblastoma cells labeled by robust Raman tags for enhancing imaging contrast.

    Science.gov (United States)

    Huang, Li-Ching; Chang, Yung-Ching; Wu, Yi-Syuan; Sun, Wei-Lun; Liu, Chan-Chuan; Sze, Chun-I; Chen, Shiuan-Yeh

    2018-05-01

    Complete removal of a glioblastoma multiforme (GBM), a highly malignant brain tumor, is challenging due to its infiltrative characteristics. Therefore, utilizing imaging agents such as fluorophores to increase the contrast between GBM and normal cells can help neurosurgeons to locate residual cancer cells during image guided surgery. In this work, Raman tag based labeling and imaging for GBM cells in vitro is described and evaluated. The cell membrane of a GBM adsorbs a substantial amount of functionalized Raman tags through overexpression of the epidermal growth factor receptor (EGFR) and "broadcasts" stronger pre-defined Raman signals than normal cells. The average ratio between Raman signals from a GBM cell and autofluorescence from a normal cell can be up to 15. In addition, the intensity of these images is stable under laser illuminations without suffering from the severe photo-bleaching that usually occurs in fluorescent imaging. Our results show that labeling and imaging GBM cells via robust Raman tags is a viable alternative method to distinguish them from normal cells. This Raman tag based method can be used solely or integrated into an existing fluorescence system to improve the identification of infiltrative glial tumor cells around the boundary, which will further reduce GBM recurrence. In addition, it can also be applied/extended to other types of cancer to improve the effectiveness of image guided surgery.

  17. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Enhancing security of DICOM images during storage and transmission in distributed environment · A Lavanya V Natarajan · More Details Abstract Fulltext PDF. Digital watermarking is proposed as a method to enhance medical data security. Medical image watermarking requires extreme care when embedding additional ...

  18. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminarily rejected in the early image processing step are recognized by running a second segmentation step. We
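The locally adaptive thresholding used for sample detection amounts to comparing each pixel against a statistic of its neighbourhood. A crude block-wise stand-in (the paper operates on the isotropic wavelet transform instead, so this is only the general idea):

```python
import numpy as np

def adaptive_threshold(img, block=16, offset=0.0):
    """Block-wise locally adaptive threshold: a pixel is foreground when it
    exceeds the mean of its block minus an offset."""
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > tile.mean() - offset
    return out
```

A small bright patch on a dark background is picked out because it sits above its own block's mean, while uniform blocks stay background.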

  19. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    International Nuclear Information System (INIS)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin

    2014-01-01

    Image information in disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Thus, a robust vision system for structures and facilities in blurred smoking environments, such as the scenes of fires and detonations, is essential in remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog, or dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred-imaging and low-light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra

  20. Image Acquisition of Robust Vision Systems to Monitor Blurred Objects in Hazy Smoking Environments

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Yongjin; Park, Seungkyu; Baik, Sunghoon; Kim, Donglyul; Nam, Sungmo; Jeong, Kyungmin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Image information in disaster areas or radiation areas of the nuclear industry is important data for safety inspection and for preparing appropriate damage control plans. Thus, a robust vision system for structures and facilities in blurred smoking environments, such as the scenes of fires and detonations, is essential in remote monitoring. Vision systems cannot acquire an image when the illumination light is blocked by disturbance materials such as smoke, fog, or dust. A vision system based on wavefront correction can be applied to blurred imaging environments, and a range-gated imaging system can be applied to both blurred-imaging and low-light environments. Wavefront control is a widely used technique to improve the performance of optical systems by actively correcting wavefront distortions, such as atmospheric turbulence, thermally-induced distortions, and laser or laser device aberrations, which can reduce the peak intensity and smear an acquired image. The principal applications of wavefront control are improving the image quality in optical imaging systems such as infrared astronomical telescopes, imaging and tracking rapidly moving space objects, and compensating for laser beam distortion through the atmosphere. A conventional wavefront correction system consists of a wavefront sensor, a deformable mirror and a control computer. The control computer measures the wavefront distortions using the wavefront sensor and corrects them using the deformable mirror in a closed loop. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by ultra
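The range-gating arithmetic in the two records above is straightforward: a gate delay corresponds to a round-trip time of flight, and the output image sums only the time slices inside the gate window. A minimal sketch with hypothetical slice data:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def gate_delay(target_range_m):
    """Round-trip time of flight for a gate centred on a given range."""
    return 2.0 * target_range_m / C

def gated_image(slices, delays, t_open, t_close):
    """Sum only the time-sliced frames whose delay falls inside the gate
    window -- the 'summing time-sliced vision images' step."""
    return sum(s for s, d in zip(slices, delays) if t_open <= d <= t_close)
```

Slices outside the gate (e.g., backscatter from nearby smoke) simply never enter the sum, which is why RGI tolerates hazy scenes.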

  1. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  2. Robust Single Image Super-Resolution via Deep Networks With Sparse Prior.

    Science.gov (United States)

    Liu, Ding; Wang, Zhaowen; Wen, Bihan; Yang, Jianchao; Han, Wei; Huang, Thomas S

    2016-07-01

    Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.
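The conventional sparse coding model that the network in this record unfolds is usually solved by iterative shrinkage-thresholding (ISTA). A minimal dense-matrix sketch of that baseline (not the paper's learned network):

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """ISTA for sparse coding: argmin_z 0.5*||y - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L                          # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z
```

Each ISTA iteration is a linear map followed by a pointwise nonlinearity, which is exactly why the recursion can be "incarnated" as layers of a neural network and trained end to end.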

  3. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    Science.gov (United States)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric
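Of the four metrics compared, gradient correlation (GC) is the easiest to sketch: the normalized cross-correlation of the two images' gradient fields. A minimal numpy version (a plain NCC formulation; the study's exact implementation may differ):

```python
import numpy as np

def gradient_correlation(a, b):
    """Gradient correlation (GC): mean normalized cross-correlation of the
    row- and column-gradient images."""
    def ncc(p, q):
        p = p - p.mean()
        q = q - q.mean()
        return (p * q).sum() / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    gar, gac = np.gradient(a.astype(float))
    gbr, gbc = np.gradient(b.astype(float))
    return 0.5 * (ncc(gar, gbr) + ncc(gac, gbc))
```

Because only gradients are compared, slowly varying intensity offsets between CT-derived and measured radiographs cancel out, which is the appeal of this family of metrics.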

  4. Forming and detection of digital watermarks in the System for Automatic Identification of VHF Transmissions

    Directory of Open Access Journals (Sweden)

    О. В. Шишкін

    2013-07-01

    Full Text Available Forming and detection algorithms for digital watermarks are designed for automatic identification of VHF radiotelephone transmissions in the maritime and aeronautical mobile services. Audible imperceptibility and interference resistance of the embedded digital data are provided by means of OFDM technology together with normalized distortion distribution and data-packet detection by a hash function. Experiments were carried out using the ship's radio station RT-2048 Sailor and a USB ADC-DAC module of type Е14-140M L-CARD in off-line processing mode in the Matlab environment

  5. RNA Imaging with Multiplexed Error Robust Fluorescence in situ Hybridization

    Science.gov (United States)

    Moffitt, Jeffrey R.; Zhuang, Xiaowei

    2016-01-01

    Quantitative measurements of both the copy number and spatial distribution of large fractions of the transcriptome in single cells could revolutionize our understanding of a variety of cellular and tissue behaviors in both healthy and diseased states. Single-molecule Fluorescence In Situ Hybridization (smFISH)—an approach where individual RNAs are labeled with fluorescent probes and imaged in their native cellular and tissue context—provides both the copy number and spatial context of RNAs but has been limited in the number of RNA species that can be measured simultaneously. Here we describe Multiplexed Error Robust Fluorescence In Situ Hybridization (MERFISH), a massively parallelized form of smFISH that can image and identify hundreds to thousands of different RNA species simultaneously with high accuracy in individual cells in their native spatial context. We provide detailed protocols on all aspects of MERFISH, including probe design, data collection, and data analysis to allow interested laboratories to perform MERFISH measurements themselves. PMID:27241748

  6. A Synchronisation Method For Informed Spread-Spectrum Audiowatermarking

    OpenAIRE

    Pierre-Yves Fulchiron; Barry O'Donovan; Guenole Silvestre; Neil Hurley

    2003-01-01

    Under perfect synchronisation conditions, watermarking schemes employing asymmetric spread-spectrum techniques are suitable for copy-protection of audio signals. This paper proposes to combine the use of a robust psychoacoustic projection for the extraction of a watermark feature vector along with non-linear detection functions optimised with side-information. The new proposed scheme benefits from an increased level of security through the use of asymmetric detectors. We apply this scheme to ...
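The classical symmetric baseline behind such schemes is additive spread-spectrum embedding with a linear correlation detector. A minimal sketch (the textbook scheme, not the paper's asymmetric detector or psychoacoustic projection):

```python
import numpy as np

def ss_embed(host, pn, alpha=0.05):
    """Additive spread-spectrum embedding of a single '+1' bit."""
    return host + alpha * pn

def ss_detect(signal, pn):
    """Linear correlation detector; large positive values indicate the mark."""
    return float(signal @ pn) / len(pn)

rng = np.random.default_rng(1)
host = rng.normal(0.0, 1.0, 10_000)    # stand-in for an audio frame
pn = rng.choice([-1.0, 1.0], 10_000)   # pseudo-noise carrier (the secret key)
marked = ss_embed(host, pn)
```

Because pn·pn/N = 1, embedding raises the detector statistic by exactly alpha; the symmetry of this detector (the key both embeds and detects) is the weakness the asymmetric schemes in this record set out to remove.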

  7. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    International Nuclear Information System (INIS)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all
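The final rigid-body point-based registration step mentioned above has a closed-form least-squares solution via SVD (the Kabsch/Procrustes method). A minimal sketch of that step alone, not of the full ARM pipeline:

```python
import numpy as np

def rigid_register(P, Q):
    """Closed-form least-squares rigid transform (R, t) with Q ≈ P @ R.T + t,
    via the SVD-based Kabsch method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given at least three non-collinear marker points in both frames, the recovered transform is exact up to numerical precision.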

  8. Combination of surface and borehole seismic data for robust target-oriented imaging

    Science.gov (United States)

    Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees

    2016-05-01

    A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.

  9. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    International Nuclear Information System (INIS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi

    2012-01-01

    Seismic tomography has recently become an important tool for imaging complex subsurface structures. It is well known that imaging a complex, fault-rich zone is difficult. This paper presents the application of time-domain inverse scattering wave tomography to image complex fault zones, in particular an efficient time-domain inverse scattering tomography and its implementation on a cluster parallel computer. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of this algorithm is shown especially in avoiding inversion trapped in local minima on the way to the global minimum. Large data sets are handled by windowing and blocking of memory as well as computation. The windowing parameters are based on each shot gather's aperture. This windowing technique reduces memory and computation significantly. The parallel algorithm runs on a cluster system of 120 processors in 20 nodes of AMD Phenom II machines. The algorithm is benchmarked on the Marmousi model, which is representative of complex, fault-rich areas. The proposed method images the fault-rich, complex zones in the Marmousi model clearly even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  10. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    Energy Technology Data Exchange (ETDEWEB)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi [Wave Inversion and Subsurface Fluid Imaging Research (WISFIR) Lab., Complex System Research Division, Physics Department, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung. and Rock Fluid Imaging Lab., Rock Physics and Cluster C (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia); Physics Department of Institut Teknologi Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung, Indonesia and Institut Teknologi Telkom, Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia)

    2012-06-20

    Seismic tomography has recently become an important tool for imaging complex subsurface structures. It is well known that imaging a complex, fault-rich zone is difficult. This paper presents the application of time-domain inverse scattering wave tomography to image complex fault zones, in particular an efficient time-domain inverse scattering tomography and its implementation on a cluster parallel computer. The algorithm is based purely on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of this algorithm is shown especially in avoiding inversion trapped in local minima on the way to the global minimum. Large data sets are handled by windowing and blocking of memory as well as computation. The windowing parameters are based on each shot gather's aperture. This windowing technique reduces memory and computation significantly. The parallel algorithm runs on a cluster system of 120 processors in 20 nodes of AMD Phenom II machines. The algorithm is benchmarked on the Marmousi model, which is representative of complex, fault-rich areas. The proposed method images the fault-rich, complex zones in the Marmousi model clearly even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  11. Implicitly Weighted Methods in Robust Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 44, č. 3 (2012), s. 449-462 ISSN 0924-9907 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robustness * high breakdown point * outlier detection * robust correlation analysis * template matching * face recognition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.767, year: 2012

  12. Secure public cloud platform for medical images sharing.

    Science.gov (United States)

    Pan, Wei; Coatrieux, Gouenou; Bouslimi, Dalel; Prigent, Nicolas

    2015-01-01

    Cloud computing promises medical imaging services offering large storage and computing capabilities for limited costs. In this data outsourcing framework, one of the greatest issues to deal with is data security. To do so, we propose to secure a public cloud platform devoted to medical image sharing by defining and deploying a security policy so as to control various security mechanisms. This policy stands on a risk assessment we conducted so as to identify security objectives with a special interest for digital content protection. These objectives are addressed by means of different security mechanisms like access and usage control policy, partial-encryption and watermarking.

  13. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M D; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P

    2016-01-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved
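
    Of the four metrics above, gradient correlation (GC) is the easiest to sketch: it is the normalized cross-correlation of the image gradients rather than of the intensities, averaged over the gradient components. The following is a minimal illustration of that idea, not the paper's implementation; the function name and the use of `numpy.gradient` are choices made here:

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation: mean normalized cross-correlation of the
    axis-wise image gradients (a sketch, not the paper's implementation)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    gy_f, gx_f = np.gradient(fixed.astype(float))
    gy_m, gx_m = np.gradient(moving.astype(float))
    return 0.5 * (ncc(gy_f, gy_m) + ncc(gx_f, gx_m))
```

    For identical images the metric is 1; content mismatch (e.g. instrumentation present in only one image) lowers the gradient agreement and thus the score.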

  14. Robust Manhattan Frame Estimation From a Single RGB-D Image

    KAUST Repository

    Bernard Ghanem; Heilbron, Fabian Caba; Niebles, Juan Carlos; Thabet, Ali Kassem

    2015-01-01

    This paper proposes a new framework for estimating the Manhattan Frame (MF) of an indoor scene from a single RGB-D image. Our technique formulates this problem as the estimation of a rotation matrix that best aligns the normals of the captured scene to the canonical world axes. By introducing sparsity constraints, our method can simultaneously estimate the scene MF, the surfaces in the scene that are best aligned to one of three coordinate axes, and the outlier surfaces that do not align with any of the axes. To test our approach, we contribute a new set of annotations to determine ground truth MFs in each image of the popular NYUv2 dataset. We use this new benchmark to experimentally demonstrate that our method is more accurate, faster, more reliable and more robust than the methods used in the literature. We further motivate our technique by showing how it can be used to address the RGB-D SLAM problem in indoor scenes by incorporating it into and improving the performance of a popular RGB-D SLAM method.

  15. Robust Manhattan Frame Estimation From a Single RGB-D Image

    KAUST Repository

    Bernard Ghanem

    2015-06-02

    This paper proposes a new framework for estimating the Manhattan Frame (MF) of an indoor scene from a single RGB-D image. Our technique formulates this problem as the estimation of a rotation matrix that best aligns the normals of the captured scene to the canonical world axes. By introducing sparsity constraints, our method can simultaneously estimate the scene MF, the surfaces in the scene that are best aligned to one of three coordinate axes, and the outlier surfaces that do not align with any of the axes. To test our approach, we contribute a new set of annotations to determine ground truth MFs in each image of the popular NYUv2 dataset. We use this new benchmark to experimentally demonstrate that our method is more accurate, faster, more reliable and more robust than the methods used in the literature. We further motivate our technique by showing how it can be used to address the RGB-D SLAM problem in indoor scenes by incorporating it into and improving the performance of a popular RGB-D SLAM method.
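
    The geometric core described in the two records above, finding a rotation that aligns scene normals with the canonical axes, can be sketched with a simple alternating scheme: snap each rotated normal to its nearest signed axis, then solve an orthogonal Procrustes problem by SVD. This is a simplified stand-in for the paper's sparsity-constrained formulation, with all names chosen here:

```python
import numpy as np

def estimate_manhattan_frame(normals, iters=10):
    """Alternate (1) snapping each rotated normal to its nearest signed
    canonical axis and (2) solving the orthogonal Procrustes problem by
    SVD for the rotation R that maps the normals onto those axes."""
    axes = np.vstack([np.eye(3), -np.eye(3)])  # +/- x, y, z
    R = np.eye(3)
    for _ in range(iters):
        targets = axes[np.argmax(normals @ R.T @ axes.T, axis=1)]
        # R = argmin_R sum ||R n_i - t_i||^2 over rotations
        U, _, Vt = np.linalg.svd(targets.T @ normals)
        R = U @ np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))]) @ Vt
    return R
```

    The sign correction on the last singular vector keeps the result a proper rotation (determinant +1) rather than a reflection.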

  16. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection.

    Science.gov (United States)

    Wei, Pan; Ball, John E; Anderson, Derek T

    2018-03-17

    A significant challenge in object detection is accurate identification of an object's position in image space, where one algorithm with one set of parameters is usually not enough; the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (rather than training-only) image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications.

  17. Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection

    Directory of Open Access Journals (Sweden)

    Pan Wei

    2018-03-01

    Full Text Available A significant challenge in object detection is accurate identification of an object’s position in image space, where one algorithm with one set of parameters is usually not enough; the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (rather than training-only) image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications.

  18. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    Science.gov (United States)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  19. ROBUST AND ACCURATE IMAGE-BASED GEOREFERENCING EXPLOITING RELATIVE ORIENTATION CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    S. Cavegn

    2018-05-01

    Full Text Available Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2–3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  20. Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization

    Science.gov (United States)

    Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija

    2017-07-01

    Recently, spectral imaging techniques such as Multispectral (MSI) and Hyperspectral Imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This will provide a better understanding of a technique that can provide a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired at different measurement conditions and acquisition parameters. Preliminary results showed the potential of the evaluation methodology to demonstrate that measurement parameters such as the use of different lenses and different scanning backgrounds may not have a great influence on the quantitative results. Moreover, the evaluation methodology allowed for the selection of the best pre-treatment method to be applied to the data.

  1. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    Science.gov (United States)

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of the k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space positions that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation

  3. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or {gamma}-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.
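
    The key geometric kernel mentioned in both records, the path length travelled by a ray through an object, reduces in the simplest case of an axis-aligned box to the classic slab intersection test. This generic sketch (not the adapted computer-graphics algorithms of the paper) assumes a unit-length direction vector:

```python
def ray_box_path_length(origin, direction, box_min, box_max):
    """Length of the ray segment inside an axis-aligned box (slab test).
    `direction` is assumed to be a unit vector so the parametric interval
    [t_near, t_far] is a geometric length."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:  # parallel to this slab and outside it
                return 0.0
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return max(0.0, t_far - t_near)
```

    In an attenuation simulation, this path length multiplies the material's linear attenuation coefficient in Beer-Lambert exponentiation; complex objects are handled by summing the segments through their constituent primitives.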

  4. Robust medical image segmentation for hyperthermia treatment planning

    International Nuclear Information System (INIS)

    Neufeld, E.; Chavannes, N.; Kuster, N.; Samaras, T.

    2005-01-01

    Full text: This work is part of an ongoing effort to develop a comprehensive hyperthermia treatment planning (HTP) tool. The goal is to unify all the steps necessary to perform treatment planning - from image segmentation to optimization of the energy deposition pattern - in a single tool. The basis of the HTP software is the routines and know-how developed in our TRINTY project, which resulted in the commercial EM platform SEMCAD-X. It incorporates the non-uniform finite-difference time-domain (FDTD) method, permitting the simulation of highly detailed models. Consequently, in order to create highly resolved patient models, a powerful and robust segmentation tool is needed. A toolbox has been created that allows the flexible combination of various segmentation methods as well as several pre- and postprocessing functions. It works primarily with CT and MRI images, which it can read in various formats. A wide variety of segmentation methods has been implemented. This includes thresholding techniques (k-means classification, expectation maximization and modal histogram analysis for automatic threshold detection, multi-dimensional if required), region growing methods (with hysteretic behavior and simultaneous competitive growing), an interactive marker based watershed transformation, level-set methods (homogeneity and edge based, fast-marching), a flexible live-wire implementation as well as fuzzy connectedness. Due to the large number of tissues that need to be segmented for HTP, no methods that rely on prior knowledge have been implemented. Various edge extraction routines, distance transforms, smoothing techniques (convolutions, anisotropic diffusion, sigma filter...), connected component analysis, topologically flexible interpolation, image algebra and morphological operations are available. Moreover, contours or surfaces can be extracted, simplified and exported.
Using these different techniques on several samples, the following conclusions have been drawn: Due to the

  5. Motion robust high resolution 3D free-breathing pulmonary MRI using dynamic 3D image self-navigator.

    Science.gov (United States)

    Jiang, Wenwen; Ong, Frank; Johnson, Kevin M; Nagle, Scott K; Hope, Thomas A; Lustig, Michael; Larson, Peder E Z

    2018-06-01

    To achieve motion robust high resolution 3D free-breathing pulmonary MRI utilizing a novel dynamic 3D image navigator derived directly from imaging data. Five-minute free-breathing scans were acquired with a 3D ultrashort echo time (UTE) sequence with 1.25 mm isotropic resolution. From this data, dynamic 3D self-navigating images were reconstructed under locally low rank (LLR) constraints and used for motion compensation with one of two methods: a soft-gating technique to penalize the respiratory-motion-induced data inconsistency, and a respiratory motion-resolved technique to provide images of all respiratory motion states. Respiratory motion estimation derived from the proposed dynamic 3D self-navigator, with 7.5 mm isotropic reconstruction resolution and a temporal resolution of 300 ms, was successful for estimating complex respiratory motion patterns. This estimation improved image quality compared to respiratory belt and DC-based navigators. Respiratory motion compensation with soft-gating and respiratory motion-resolved techniques provided good image quality from highly undersampled data in volunteers and clinical patients. An optimized 3D UTE sequence combined with the proposed reconstruction methods can provide high-resolution motion robust pulmonary MRI. Feasibility was shown in patients who had irregular breathing patterns in which our approach could depict clinically relevant pulmonary pathologies. Magn Reson Med 79:2954-2967, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    Science.gov (United States)

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  7. Resolution and robustness to noise of the sensitivity-based method for microwave imaging with data acquired on cylindrical surfaces

    International Nuclear Information System (INIS)

    Zhang, Yifan; Tu, Sheng; Amineh, Reza K; Nikolova, Natalia K

    2012-01-01

    The spatial resolution limit of a Jacobian-based microwave imaging algorithm and its robustness to noise are evaluated. The focus here is on tomographic systems where the wideband data are acquired with a vertically scanned circular sensor array and at each scanning step a 2D image is reconstructed in the plane of the sensor array. The theoretical resolution is obtained as one-half of the maximum-frequency wavelength with far-zone data and about two-thirds of the array radius with near-zone data. Validation examples are given using analytical electromagnetic models. The algorithm is shown to be robust to noise when the response data are corrupted by Gaussian white noise. (paper)
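
    As a numerical illustration of the far-zone limit quoted above (one-half of the wavelength at the maximum frequency), with an assumed 10 GHz upper band edge (an illustrative value, not taken from the paper):

```python
# Far-zone resolution limit quoted above: half the wavelength at the
# highest frequency. The 10 GHz band edge is an illustrative value.
c = 299_792_458.0   # speed of light in vacuum, m/s
f_max = 10e9        # maximum frequency, Hz
wavelength = c / f_max
resolution = 0.5 * wavelength
print(f"{resolution * 1000:.1f} mm")  # ≈ 15.0 mm
```

    The near-zone bound of about two-thirds of the array radius is geometric rather than spectral, so it depends on the sensor layout instead of the bandwidth.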

  8. Robust image alignment for cryogenic transmission electron microscopy.

    Science.gov (United States)

    McLeod, Robert A; Kowal, Julia; Ringler, Philippe; Stahlberg, Henning

    2017-03-01

    Cryo-electron microscopy recently experienced great improvements in structure resolution due to direct electron detectors with improved contrast and fast read-out leading to single electron counting. High frame rates enabled dose fractionation, where a long exposure is broken into a movie, permitting specimen drift to be registered and corrected. The typical approach for image registration, with high shot noise and low contrast, is multi-reference (MR) cross-correlation. Here we present the software package Zorro, which provides robust drift correction for dose fractionation by use of an intensity-normalized cross-correlation and a logistic noise model to weight each cross-correlation in the MR model and filter each cross-correlation optimally. Frames are reliably registered by Zorro even with low dose and defocus. Methods to evaluate performance are presented, by use of independently evaluated even- and odd-frame stacks, by trajectory comparison and Fourier ring correlation. Alignment of tiled sub-frames is also introduced, and demonstrated on an example dataset. Zorro source code is available at github.com/CINA/zorro. Copyright © 2016 Elsevier Inc. All rights reserved.
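
    The core operation, locating the drift between two noisy frames by intensity-normalized, FFT-based cross-correlation, can be sketched as follows. This is a generic single-pair sketch, not Zorro's weighted multi-reference model; the function name is chosen here:

```python
import numpy as np

def register_translation(ref, frame):
    """Estimate the integer (dy, dx) circular drift of `frame` relative
    to `ref` from the peak of the FFT-based cross-correlation of
    intensity-normalized (zero-mean, unit-std) images."""
    a = (ref - ref.mean()) / (ref.std() + 1e-12)
    b = (frame - frame.mean()) / (frame.std() + 1e-12)
    xc = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # shifts beyond half the image wrap around to negative offsets
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))
```

    In a dose-fractionation movie, each frame's estimated drift is then used to shift it back before summation, recovering contrast lost to specimen motion.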

  9. Understanding robustness as an image of sustainable agriculture

    NARCIS (Netherlands)

    Goede, de D.M.

    2014-01-01

    The general aim of the research described in this thesis is to contribute to a better understanding of the conceptualisation of robustness in agricultural science as well as its relevance to sustainability. Robustness rapidly gained attention as a potential solution for a variety of

  10. A robust anomaly based change detection method for time-series remote sensing images

    Science.gov (United States)

    Shoujing, Yin; Qiao, Wang; Chuanqing, Wu; Xiaoling, Chen; Wandong, Ma; Huiqin, Mao

    2014-03-01

    Time-series remote sensing images record changes happening on the earth surface, which include not only abnormal changes like human activities and emergencies (e.g. fire, drought, insect pests etc.), but also changes caused by vegetation phenology and climate change. Yet challenges remain in analyzing global environmental changes and their internal driving forces. This paper proposes a robust Anomaly Based Change Detection method (ABCD) for time-series image analysis that detects abnormal points in data sets, which do not need to follow a normal distribution. With ABCD we can detect when and where changes occur, which is a prerequisite of global change studies. ABCD was tested initially on a 10-day SPOT VGT NDVI (Normalized Difference Vegetation Index) time series tracking land cover type changes, seasonality and noise, and then validated on real data over a large area in Jiangxi, in the south of China. Initial results show that ABCD can rapidly and precisely detect spatial and temporal changes from long time-series images.
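
    The record does not spell out ABCD's detector, so as a hedged illustration of the general idea, flagging abnormal points without assuming a normal distribution, one can use the modified z-score built from the median and the median absolute deviation (MAD):

```python
import numpy as np

def mad_anomalies(series, thresh=3.5):
    """Flag abnormal points using the modified z-score based on the
    median and the median absolute deviation (MAD); unlike a mean/std
    rule, this makes no normality assumption and resists outliers."""
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return np.zeros(x.shape, dtype=bool)  # no spread: nothing to flag
    z = 0.6745 * (x - med) / mad  # 0.6745 scales MAD to ~std under normality
    return np.abs(z) > thresh
```

    For NDVI series, such a detector would normally be applied to seasonally detrended values so that phenological cycles are not flagged as anomalies.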

  11. SU-F-J-19: Robust Region-Of-Interest (ROI) for Consistent Registration On Deteriorated Surface Images

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H; Malin, M; Chmura, S; Hasan, Y; Al-Hallaq, H [The Department of Radiation and Cellular Oncology, The University of Chicago Medicine, Chicago, IL (United States)

    2016-06-15

    Purpose: For African-American patients receiving breast radiotherapy with a bolus, skin darkening can affect the surface visualization when using optical imaging for daily positioning and gating at deep-inspiration breath holds (DIBH). Our goal is to identify a region-of-interest (ROI) that is robust against deteriorating surface image quality due to skin darkening. Methods: We study four patients whose post-mastectomy surfaces are imaged daily with AlignRT (VisionRT, UK) for DIBH radiotherapy and whose surface image quality is degraded toward the end of treatment. To simulate the effects of skin darkening, surfaces from the first ten fractions of each patient are systematically degraded by 25–35%, 40–50% and 65–75% of the total area of the clinically used ROI-ipsilateral-chestwall. The degraded surfaces are registered to the reference surface in six degrees-of-freedom. To identify a robust ROI, three additional reference ROIs (ROI-chest+abdomen, ROI-bilateral-chest and ROI-extended-ipsilateral-chestwall) are created and registered to the degraded surfaces. Differences in registration using these ROIs are compared to that using ROI-ipsilateral-chestwall. Results: For three patients, the deviations in the registrations to ROI-ipsilateral-chestwall are > 2.0, 3.1 and 7.9mm on average for 25–35%, 40–50% and 65–75% degraded surfaces, respectively. Rotational deviations reach 11.1° in pitch. For the last patient, registration is consistent to within 2.6mm even on the 65–75% degraded surfaces, possibly because the surface topography has more distinct features. For ROI-bilateral-chest and ROI-extended-ipsilateral-chestwall, registrations deviate in a similar pattern. However, registration on ROI-chest+abdomen is robust to deteriorating image quality, to within 4.2mm for all four patients. Conclusion: Registration deviations using ROI-ipsilateral-chestwall can reach 9.8mm on the 40–50% degraded surfaces. Caution is required when using AlignRT for patients

  12. Using Image Gradients to Improve Robustness of Digital Image Correlation to Non-uniform Illumination: Effects of Weighting and Normalization Choices

    KAUST Repository

    Xu, Jiangping

    2015-03-05

    Changes in lighting conditions affect the solution of intensity-based digital image correlation algorithms. One natural way to decrease the influence of illumination is to consider the gradients of the image rather than the image itself when building the objective function. In this work, a weighted normalized gradient-based algorithm is proposed. This algorithm optimizes the sum-of-squared difference between the weighted normalized gradients of the reference and deformed images. Due to the lower sensitivity of the gradient to illumination variation, this algorithm is more robust and accurate than the intensity-based algorithm in the case of illumination variations. Yet, it comes with a higher sensitivity to noise, which can be mitigated by designing the relevant weighting and normalization of the image gradient. Numerical results demonstrate that the proposed algorithm gives better results in the case of linear/non-linear space-based and non-linear gray-value-based illumination variation. The proposed algorithm still performs better than the intensity-based algorithm in the case of illumination variations and noisy data, provided the images are pre-smoothed with a Gaussian low-pass filter, in numerical and experimental examples.
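
    The objective described above, a sum-of-squared difference between normalized gradient fields, can be sketched as follows (weighting omitted; the `eps` regularizer and function name are choices made here):

```python
import numpy as np

def normalized_gradient_ssd(ref, deformed, eps=1e-8):
    """Sum-of-squared differences between the unit-normalized gradient
    fields of two images; scaling each gradient vector to unit length
    makes the measure insensitive to smooth illumination changes."""
    def unit_grad(img):
        gy, gx = np.gradient(img.astype(float))
        mag = np.sqrt(gx * gx + gy * gy) + eps  # eps avoids division by zero
        return gx / mag, gy / mag

    rx, ry = unit_grad(ref)
    dx, dy = unit_grad(deformed)
    return ((rx - dx) ** 2 + (ry - dy) ** 2).sum()
```

    Because normalization discards gradient magnitude, a globally brightened or contrast-scaled copy of an image scores (nearly) zero against the original, which is the illumination robustness the abstract describes.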

  13. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    Directory of Open Access Journals (Sweden)

    Dongming Li

    2017-04-01

    Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
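
    The Poisson maximum-likelihood principle behind the paper's restoration step can be sketched in its simplest, single-frame, known-PSF form: the classical Richardson-Lucy fixed-point iteration. The multi-frame joint likelihood, frame selection and PSF estimation of the paper are omitted here; the synthetic two-point-source test image is an assumption for illustration.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100, eps=1e-12):
    # Multiplicative fixed-point iteration maximizing the Poisson
    # log-likelihood for a known, circularly centred PSF:
    #   f <- f * K^T( g / (K f) )
    K = np.fft.fft2(psf)
    f = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(n_iter):
        Kf = np.real(np.fft.ifft2(np.fft.fft2(f) * K)) + eps
        ratio = observed / Kf
        f = f * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(K)))
    return f

# Synthetic test: two point sources blurred by a circular Gaussian PSF.
n = 16
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
d2 = np.minimum(y, n - y) ** 2 + np.minimum(x, n - x) ** 2
psf = np.exp(-d2 / 2.0)
psf /= psf.sum()

truth = np.zeros((n, n))
truth[4, 4] = 5.0
truth[10, 12] = 3.0
observed = np.maximum(np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf))), 0.0)
restored = richardson_lucy(observed, psf, n_iter=200)
```

    The multiplicative update keeps the estimate non-negative, a property inherited from the Poisson model; on noiseless data the iterate sharpens the blurred sources back toward the true image.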

  14. Effect of using different cover image quality to obtain robust selective embedding in steganography

    Science.gov (United States)

    Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer

    2014-05-01

    One of the common types of steganography is to conceal an image as a secret message in another image, which is normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma correction, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition. Second, it nominates the most useful blocks for embedding based on their entropy and average. Third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego images.
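
    A minimal sketch of the bit-plane embedding step follows. It writes one message bit into a chosen higher bit-plane of an 8x8 block and reads it back by majority vote; the paper's lighting-, entropy- and average-based block selection is omitted, and the block size and plane index are illustrative assumptions.

```python
import numpy as np

def embed_bit(block, bit, plane):
    # Write one message bit into the chosen bit-plane of every pixel
    # of the block (block selection by lighting/entropy is omitted).
    mask = 1 << plane
    out = (block.astype(int) & ~mask) | (mask if bit else 0)
    return out.astype(np.uint8)

def extract_bit(block, plane):
    # Majority vote over the block's bit-plane.
    return int(np.mean((block.astype(int) >> plane) & 1) >= 0.5)

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
stego1 = embed_bit(cover, 1, plane=3)   # embed above the LSB
stego0 = embed_bit(cover, 0, plane=3)
```

    Embedding in plane 3 changes each pixel by at most 8 gray levels, trading a little imperceptibility for resistance to attacks that wipe out the LSB plane.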

  15. Shadow Areas Robust Matching Among Image Sequence in Planetary Landing

    Science.gov (United States)

    Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin

    2017-01-01

    In this paper, an approach for robust matching of shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins with detecting shadow areas, which are extracted by Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; the descriptor can extract more features of an area. Finally, to eliminate the influence of outliers, an improved RANSAC based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments is conducted to test the performance of the proposed approach; the results show that the approach can maintain matching accuracy at a high level even when the differences among the images are obvious and no attitude measurements are supplied.
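
    The inlier-extraction step can be illustrated with a plain RANSAC sketch for a 2D translation between matched points; the paper's Skinner-condition improvement is omitted, and the point counts, noise level and outlier offset are illustrative assumptions.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, thresh=1.0, seed=0):
    # Plain RANSAC: hypothesize a translation from a minimal sample
    # (one match), keep the hypothesis with the most inliers.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    t_hat = (dst[best] - src[best]).mean(axis=0)  # refit on all inliers
    return t_hat, best

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, size=(50, 2))
dst = src + np.array([3.0, -2.0]) + rng.normal(0, 0.05, size=(50, 2))
dst[:10] += 25.0                                  # 10 gross mismatches
t_hat, inliers = ransac_translation(src, dst)
```

    Even with 20% gross mismatches, the consensus step isolates the true transform, which is why some form of RANSAC is standard after descriptor matching.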

  16. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  17. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  18. Robust rooftop extraction from visible band images using higher order CRF

    KAUST Repository

    Li, Er

    2015-08-01

    In this paper, we propose a robust framework for building extraction in visible band images. We first get an initial classification of the pixels based on an unsupervised presegmentation. Then, we develop a novel conditional random field (CRF) formulation to achieve accurate rooftop extraction, which incorporates pixel-level information and segment-level information for the identification of rooftops. Compared with the commonly used CRF model, a higher order potential defined on segments is added to our model, exploiting region consistency and shape features at the segment level. Our experiments show that the proposed higher order CRF model outperforms the state-of-the-art methods both at pixel and object levels on rooftops with complex structures and sizes in challenging environments. © 1980-2012 IEEE.

  19. Motion-robust diffusion tensor acquisition at routine 3T magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yasmin, Hasina; Abe, Osamu; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Kabasawa, Hiroyuki; Aoki, Shigeki

    2010-01-01

    We compared different acquisition and reconstruction methods in phantom and human studies in the clinical setting to validate our hypothesis that optimizing the k-space acquisition and reconstruction method could decrease motion artifacts. Diffusion tensor images of a water phantom were obtained with three table displacement magnitudes: 1 mm, 2 mm, and 3 mm. Images were reconstructed using homodyne and zero-fill reconstruction. Overscanning in 8 and 16 ky lines was tested. We performed visual assessment of the artifacts using reconstructed coronal images and analyzed them with the Wilcoxon signed-ranks test both for phantom and human studies. Also, fractional anisotropy (FA) changes between acquisition methods were compared. Artifacts due to smaller displacement (1 and 2 mm) were significantly reduced in 16-ky overscan with zero filling. The Wilcoxon signed-ranks test showed significant differences (P<0.031 for reconstruction methods and P<0.016 for overscanning methods). FA changes were statistically significant (P<0.037; Student's t-test). The Wilcoxon signed-ranks test showed significant reductions (P<0.005) in the human study. Motion-induced artifacts can be reduced by optimizing acquisition and reconstruction methods. The techniques described in this study offer an effective method for robust estimation of diffusion tensor in the presence of motion-related artifactual data points. (author)
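
    The zero-fill reconstruction of a partial k-space acquisition can be sketched as follows: keep half of the ky lines plus a number of overscan lines past the centre, zero the rest, and inverse-transform. The image size and overscan counts are illustrative assumptions; homodyne reconstruction is not shown.

```python
import numpy as np

def zero_fill_recon(img, overscan):
    # Partial-Fourier acquisition: keep the lower half of k-space plus
    # `overscan` extra ky lines past the centre, zero-fill the rest,
    # and inverse-transform.
    k = np.fft.fftshift(np.fft.fft2(img))
    n = img.shape[0]
    partial = np.zeros_like(k)
    partial[: n // 2 + overscan, :] = k[: n // 2 + overscan, :]
    return np.real(np.fft.ifft2(np.fft.ifftshift(partial)))

rng = np.random.default_rng(5)
img = rng.random((32, 32))
err = {o: float(np.linalg.norm(img - zero_fill_recon(img, o)))
       for o in (0, 8, 16)}
```

    More overscan lines keep more true coefficients, so the zero-fill reconstruction error decreases monotonically with the overscan count, consistent with the reduced artifacts reported for the larger overscan.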

  20. Quantum red-green-blue image steganography

    Science.gov (United States)

    Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh

    One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field is an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms are investigated based on the Least Significant Bit (LSB). The first algorithm employs only one of the image’s channels to cover secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two of the image's channels for hiding secret quantum data. The performances of the proposed schemes are analyzed by using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performances, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
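
    The logic of the LSB-XOR embedding rule can be shown with a classical sketch (the paper realizes this rule with quantum circuits; array sizes and the key are illustrative assumptions): each LSB is set so that LSB XOR key equals the secret bit.

```python
import numpy as np

def embed_lsb_xor(channel, secret_bits, key_bits):
    # Choose each pixel's LSB so that LSB XOR key = secret bit.
    lsb = np.bitwise_xor(secret_bits, key_bits).astype(np.uint8)
    return ((channel.astype(int) & ~1) | lsb).astype(np.uint8)

def extract_lsb_xor(stego, key_bits):
    # Recover the secret by XORing the stego LSBs with the key.
    return np.bitwise_xor(stego & 1, key_bits)

rng = np.random.default_rng(11)
channel = rng.integers(0, 256, size=256, dtype=np.uint8)
secret = rng.integers(0, 2, size=256, dtype=np.uint8)
key = rng.integers(0, 2, size=256, dtype=np.uint8)
stego = embed_lsb_xor(channel, secret, key)
recovered = extract_lsb_xor(stego, key)
```

    Each pixel changes by at most one gray level, which is why LSB schemes score high PSNR, while the XOR with a key prevents direct LSB readout without it.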

  1. A New Position Location System Using DTV Transmitter Identification Watermark Signals

    Directory of Open Access Journals (Sweden)

    Chouinard Jean-Yves

    2006-01-01

    Full Text Available A new position location technique using the transmitter identification (TxID) RF watermark in digital TV (DTV) signals is proposed in this paper. The conventional global positioning system (GPS) usually does not work well inside buildings due to the high frequency and weak field strength of the signal. In contrast to GPS, DTV signals are received from transmitters at relatively short distance, while the broadcast transmitters operate at levels up to megawatts of effective radiated power (ERP). Also, the RF frequency of the DTV signal is much lower than that of GPS, which makes it easier for the signal to penetrate buildings and other objects. The proposed position location system based on the DTV TxID signal is presented in this paper. Practical receiver implementation issues including non-ideal correlation and synchronization are analyzed and discussed. Performance of the proposed technique is evaluated through Monte Carlo simulations and compared with other existing position location systems. Possible ways to improve the accuracy of the new position location system are discussed.
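
    The ranging principle can be sketched as a correlation receiver: a known pseudo-noise TxID sequence buried well below the host signal is recovered by circular cross-correlation, and the lag of the correlation peak gives the propagation delay. The sequence length, delay and amplitudes are illustrative assumptions.

```python
import numpy as np

def estimate_delay(received, watermark):
    # Circular cross-correlation via FFT; the lag of the peak estimates
    # the propagation delay of the buried TxID sequence.
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(watermark)))
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(1)
pn = rng.choice([-1.0, 1.0], size=4096)      # TxID-like PN sequence
host = rng.normal(0.0, 5.0, size=4096)       # strong DTV host signal
true_delay = 137
rx = host + 0.8 * np.roll(pn, true_delay)    # watermark well below host
```

    The processing gain of the long PN sequence lifts the correlation peak far above the host-signal floor, which is what allows a watermark at a tiny fraction of the DTV power to be used for ranging.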

  2. DCT-based cyber defense techniques

    Science.gov (United States)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.

  3. Steganalysis based on JPEG compatibility

    Science.gov (United States)

    Fridrich, Jessica; Goljan, Miroslav; Du, Rui

    2001-11-01

    In this paper, we introduce a new forensic tool that can reliably detect modifications in digital images, such as distortion due to steganography and watermarking, in images that were originally stored in the JPEG format. JPEG compression leaves unique fingerprints and serves as a fragile watermark enabling us to detect changes as small as modifying the LSB of one randomly chosen pixel. The detection of changes is based on investigating the compatibility of 8x8 blocks of pixels with JPEG compression with a given quantization matrix. The proposed steganalytic method is applicable to virtually all steganographic and watermarking algorithms with the exception of those that embed message bits into the quantized JPEG DCT coefficients. The method can also be used to estimate the size of the secret message and identify the pixels that carry message bits. As a consequence of our steganalysis, we strongly recommend avoiding the use of images that were originally stored in the JPEG format as cover images for spatial-domain steganography.
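
    A simplified sketch of the compatibility idea follows: with an orthonormal 8-point DCT and a flat quantization table (a simplification; the actual method tests against the image's real quantization matrix), a block that came out of a JPEG decompressor is a fixed point of the compress/decompress cycle, while flipping a single pixel breaks that property.

```python
import numpy as np

N = 8
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0] /= np.sqrt(2.0)          # orthonormal 8-point DCT-II matrix

Q = 16.0                      # flat quantization table (simplification)

def jpeg_cycle(block):
    # One JPEG compress/decompress round trip of an 8x8 block.
    coeffs = C @ (block - 128.0) @ C.T
    q = np.round(coeffs / Q)
    return np.clip(np.round(C.T @ (q * Q) @ C) + 128.0, 0.0, 255.0)

def is_compatible(block):
    # A previously JPEG-compressed block is a fixed point of the cycle.
    return bool(np.array_equal(jpeg_cycle(block), block))

decompressed = np.full((8, 8), 128.0)     # a JPEG-compatible block
tampered = decompressed.copy()
tampered[0, 0] += 1.0                     # LSB-style modification
```

    A one-gray-level change perturbs the DCT coefficients by far less than a quantization step, so re-quantization snaps back to the original block and the mismatch exposes the tampering.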

  4. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
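
    The RPCA decomposition step can be sketched with the standard inexact augmented-Lagrangian principal component pursuit (the paper's fusion rules and compressed-sensing measurements are not shown; the synthetic low-rank-plus-sparse test matrix is an assumption for illustration).

```python
import numpy as np

def shrink(x, tau):
    # Soft-thresholding (proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(D, n_iter=300):
    # Inexact augmented-Lagrangian principal component pursuit:
    #   min ||L||_* + lam*||S||_1  s.t.  D = L + S
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Soft-thresholding step for the sparse part.
        S = shrink(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
        mu = min(mu * 1.05, 1e7)
    return L, S

rng = np.random.default_rng(2)
L0 = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 background
S0 = np.zeros((30, 30))
S0[rng.random((30, 30)) < 0.05] = 5.0                     # sparse salient part
L_hat, S_hat = rpca(L0 + S0)
rel_err = np.linalg.norm(L_hat - L0) / np.linalg.norm(L0)
```

    In the fusion framework, the sparse component S plays the role of the salient infrared targets and the low-rank component L the role of the background, which are then fused by separate rules.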

  5. A robust segmentation approach based on analysis of features for defect detection in X-ray images of aluminium castings

    DEFF Research Database (Denmark)

    Lecomte, G.; Kaftandjian, V.; Cendre, Emmanuelle

    2007-01-01

    A robust image processing algorithm has been developed for detection of small and low contrasted defects, adapted to X-ray images of castings having a non-uniform background. The sensitivity to small defects is obtained at the expense of a high false alarm rate. We present in this paper a feature...... three parameters and taking into account the fact that X-ray grey-levels follow a statistical normal law. Results are shown on a set of 684 images, involving 59 defects, on which we obtained a 100% detection rate without any false alarm....

  6. Robust Small Target Co-Detection from Airborne Infrared Image Sequences.

    Science.gov (United States)

    Gao, Jingli; Wen, Chenglin; Liu, Meiqin

    2017-09-29

    In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which can suppress the image background and enhance small targets better than singular-value weights. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutters efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false-alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame and tracklet association of the location correlation maps for successive frames have strong ability to discriminate between small targets and background clutters. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutters effectively, but also detect targets accurately even in the presence of target-like interference.

  7. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105

  8. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-11-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  9. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most of the current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer of the main question about the given image. We claim that a robust VQA model is one whose performance is not changed much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization, and also propose a large-scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  10. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    Science.gov (United States)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suitable only for a certain scene and sometimes extract edges that are too wide. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. Aiming at the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the rough edge pixel neighborhood are adopted to locate the foregoing rough edges at sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
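
    The DoG receptive-field filter can be sketched in one dimension (the tremor modelling and orthogonal-polynomial sub-pixel refinement are omitted; the sigmas and signal are illustrative assumptions): a narrow excitatory centre Gaussian minus a wider inhibitory surround Gaussian gives zero response on flat regions and a strong swing around an edge.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def dog_response(signal, sigma_centre=1.0, sigma_surround=2.0):
    # ON-centre receptive field: narrow centre minus wider surround.
    # The response is zero on flat regions and crosses zero at an edge.
    radius = 3 * int(np.ceil(sigma_surround))
    dog = (gaussian_kernel(sigma_centre, radius)
           - gaussian_kernel(sigma_surround, radius))
    return np.convolve(signal, dog, mode="same")

step = np.r_[np.zeros(32), np.ones(32)]   # ideal edge between samples 31 and 32
resp = dog_response(step)
peak = int(np.argmax(np.abs(resp)))
```

    Interpolating the zero-crossing of the response between samples is what yields a sub-pixel edge location, which the paper refines further with orthogonal polynomial interpolation.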

  11. Fast and robust extraction of hippocampus from MR images for diagnostics of Alzheimer's disease

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki; Wolz, Robin; Koikkalainen, Juha

    2011-01-01

    Assessment of temporal lobe atrophy from magnetic resonance images is a part of clinical guidelines for the diagnosis of prodromal Alzheimer's disease. As hippocampus is known to be among the first areas affected by the disease, fast and robust definition of hippocampus volume would be of great importance in the clinical decision making. We propose a method for computing automatically the volume of hippocampus using a modified multi-atlas segmentation framework, including an improved initialization of the framework and the correction of partial volume effect. The method produced a high similarity...
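
    The baseline label-fusion step of a multi-atlas framework can be sketched as a per-voxel majority vote over the propagated atlas segmentations (the paper's improved initialization and partial-volume correction are not shown; the toy label maps are illustrative assumptions).

```python
import numpy as np

def majority_vote(label_maps):
    # Fuse propagated atlas segmentations by per-voxel majority vote.
    stacked = np.stack(label_maps)
    labels = np.unique(stacked)
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]

a = np.zeros((4, 4), dtype=int)
a[1:3, 1:3] = 1              # atlas 1: hippocampus label in the centre
b = a.copy()                 # atlas 2 agrees
c = np.zeros((4, 4), dtype=int)   # atlas 3 dissents
fused = majority_vote([a, b, c])
```

    With the fused label map in hand, the hippocampus volume is simply the voxel count of its label times the voxel volume.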

  12. A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.

    Science.gov (United States)

    Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi

    2014-01-01

    The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts, with information on gradient directions and b values, on the parameter estimation was investigated by using the mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient. The proposed workflow significantly improved the image quality and reduced the measurement biases on motion-corrupted datasets, and it was reliable for improving the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.
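
    The artifact-rejection idea can be sketched with a simple local-Pearson-correlation test (the window size, threshold and synthetic images are illustrative assumptions, not the study's settings): a motion-corrupted volume loses local correlation with the reference, so its median block-wise correlation collapses.

```python
import numpy as np

def local_pearson(a, b, win=8):
    # Pearson correlation in non-overlapping win x win blocks.
    coeffs = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            x = a[i:i + win, j:j + win].ravel().astype(float)
            y = b[i:i + win, j:j + win].ravel().astype(float)
            x -= x.mean()
            y -= y.mean()
            denom = np.sqrt((x @ x) * (y @ y))
            coeffs.append((x @ y) / denom if denom > 0 else 0.0)
    return np.array(coeffs)

def is_artifact(reference, volume, threshold=0.6):
    # Reject a volume whose median local correlation with the
    # reference collapses (e.g. because of motion).
    return bool(np.median(local_pearson(reference, volume)) < threshold)

rng = np.random.default_rng(9)
reference = rng.random((32, 32))
clean = reference + 0.05 * rng.normal(size=(32, 32))
corrupted = rng.random((32, 32))    # structure destroyed, as by motion
```

    Using the median over local blocks, rather than one global coefficient, makes the measure sensitive to localized motion artifacts while tolerating mild global intensity changes.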

  13. A new automated assessment method for contrast-detail images by applying support vector machine and its robustness to nonlinear image processing.

    Science.gov (United States)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-09-01

    The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM), and tested it for its robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial-differential-equation-based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method was the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (that is, plots of the mean smallest visible hole diameter versus hole depth) obtained from our devised SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the non-linear noise reduction method as well as for the original radiographic images.

  14. Robust histogram-based image retrieval

    Czech Academy of Sciences Publication Activity Database

    Höschl, Cyril; Flusser, Jan

    2016-01-01

    Roč. 69, č. 1 (2016), s. 72-81 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Image retrieval * Noisy image * Histogram * Convolution * Moments * Invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016 http://library.utia.cas.cz/separaty/2015/ZOI/hoschl-0452147.pdf

  15. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance, robust to gray-scale variations, rotation changes and noise, at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
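
    The median-versus-median comparison at the heart of MRELBP can be sketched for one 7x7 patch (the full descriptor's multiscale sampling scheme is omitted; the neighbour layout below is an illustrative assumption): each neighbour's 3x3 regional median is compared against the patch median to form an 8-bit code.

```python
import numpy as np

def median_lbp(patch):
    # MRELBP-style comparison: the median of each neighbour's 3x3
    # region is compared against the median of the whole patch,
    # instead of comparing raw intensities.
    centre_median = np.median(patch)
    ci = cj = patch.shape[0] // 2
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (di, dj) in enumerate(offsets):
        i, j = ci + 2 * di, cj + 2 * dj
        neighbour_median = np.median(patch[i - 1:i + 2, j - 1:j + 2])
        code |= int(neighbour_median >= centre_median) << bit
    return code

rng = np.random.default_rng(4)
patch = rng.random((7, 7))
code = median_lbp(patch)
shifted = median_lbp(patch + 10.0)   # monotonic gray-scale change
scaled = median_lbp(3.0 * patch)
```

    Because medians are order statistics, any monotonic gray-scale change leaves every comparison, and hence the code, unchanged, and the regional medians also damp the impulse noise that cripples raw-intensity LBP.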

  16. PERMUTATION-BASED POLYMORPHIC STEGO-WATERMARKS FOR PROGRAM CODES

    Directory of Open Access Journals (Sweden)

    Denys Samoilenko

    2016-06-01

    Full Text Available Purpose: One of the most relevant trends in program code protection is code marking. The problem consists in creating digital “watermarks” which allow distinguishing different copies of the same program code. Such marks could be useful for authorship protection, for numbering code copies, for monitoring program propagation, and for information security purposes in client-server communication processes. Methods: We used the methods of digital steganography adapted for program codes as text objects. The same-shape-symbols method was transformed into a same-semantic-element method due to features of codes which make them different from ordinary texts. We use a dynamic principle of mark forming, making the marked codes polymorphic. Results: We examined the combinatorial capacity of the permutations possible in program codes. As a result it was shown that a set of 5-7 polymorphic variables is suitable for most modern network applications. Mark creation and restoration algorithms were proposed and discussed. The main algorithm is based on full and partial permutations of variable names and their declaration order. The algorithm for partial permutation enumeration was optimized for computational complexity. PHP code fragments which implement the algorithms are listed. Discussion: The method proposed in this work allows distinguishing each client-server connection. If a clone of some network resource is found, the method can give information about the included marks and thereby data on the IP, date and time, and authentication information of the client that copied the resource. Usage of polymorphic stego-watermarks should improve information security indexes in network communications.
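
    The declaration-order permutation channel can be sketched as follows (the article works with PHP; this is a language-neutral Python sketch, and the variable names are hypothetical): an integer mark in the range 0 to n!-1 is encoded as a permutation of the sorted variable names via the factorial number system, and decoded back from the observed declaration order.

```python
from math import factorial

def encode_mark(variables, mark):
    # Encode an integer mark (0 <= mark < n!) as a permutation of the
    # sorted variable names via the factorial number system; emitting
    # declarations in this order embeds the mark in the code copy.
    pool = sorted(variables)
    perm = []
    while pool:
        f = factorial(len(pool) - 1)
        idx, mark = divmod(mark, f)
        perm.append(pool.pop(idx))
    return perm

def decode_mark(perm):
    # Recover the mark from the observed declaration order.
    pool = sorted(perm)
    mark = 0
    for name in perm:
        idx = pool.index(name)
        mark += idx * factorial(len(pool) - 1)
        pool.pop(idx)
    return mark

# Five hypothetical variable names give 5! = 120 distinguishable copies.
names = ["$conn", "$data", "$mode", "$tmp", "$user"]
perm42 = encode_mark(names, 42)
```

    With 5 to 7 variables this yields 120 to 5040 distinguishable code copies, consistent with the capacity estimate in the abstract, while the reordered declarations remain functionally identical.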

  17. A new automated assessment method for contrast–detail images by applying support vector machine and its robustness to nonlinear image processing

    International Nuclear Information System (INIS)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kumiharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-01-01

    The automated contrast–detail (C–D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction. We have therefore devised a new automated C–D analysis method based on a support vector machine (SVM) and tested its robustness to nonlinear image processing. We acquired CDRAD (a commercially available C–D test object) images at a tube voltage of 120 kV and milliampere-second products (mAs) of 0.5–5.0. A partial-diffusion-equation-based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced versions. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C–D diagrams (plots of the mean smallest visible hole diameter vs. hole depth) obtained with our SVM method agreed well with those averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C–D diagrams from the CDRAD Analyser algorithm disagreed with those from the human observers in both cases. In conclusion, our proposed SVM method for C–D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.

  18. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by a lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates between identification of poorly corresponding patches and refinement of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  19. Development of a robust MRI fiducial system for automated fusion of MR-US abdominal images.

    Science.gov (United States)

    Favazza, Christopher P; Gorny, Krzysztof R; Callstrom, Matthew R; Kurup, Anil N; Washburn, Michael; Trester, Pamela S; Fowler, Charles L; Hangiandreou, Nicholas J

    2018-05-21

    We present the development of a two-component magnetic resonance (MR) fiducial system, that is, a fiducial marker device combined with an auto-segmentation algorithm, designed to be paired with existing ultrasound probe tracking and image fusion technology to automatically fuse MR and ultrasound (US) images. The fiducial device consisted of four ~6.4 mL cylindrical wells filled with 1 g/L copper sulfate solution. The algorithm was designed to automatically segment the device in clinical abdominal MR images. The algorithm's detection rate and repeatability were investigated through a phantom study and in human volunteers. The detection rate was 100% in all phantom and human images. The center-of-mass of the fiducial device was robustly identified with maximum variations of 2.9 mm in position and 0.9° in angular orientation. In volunteer images, average differences between algorithm-measured inter-marker spacings and actual separation distances were 0.53 ± 0.36 mm. "Proof-of-concept" automatic MR-US fusions were conducted with sets of images from both a phantom and volunteer using a commercial prototype system, which was built based on the above findings. Image fusion accuracy was measured to be within 5 mm for breath-hold scanning. These results demonstrate the capability of this approach to automatically fuse US and MR images acquired across a wide range of clinical abdominal pulse sequences. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  20. AROSICS: An Automated and Robust Open-Source Image Co-Registration Software for Multi-Sensor Satellite Data

    Directory of Open Access Journals (Sweden)

    Daniel Scheffler

    2017-07-01

    Full Text Available Geospatial co-registration is a mandatory prerequisite when dealing with remote sensing data. Inter- or intra-sensoral misregistration will negatively affect any subsequent image analysis, specifically when processing multi-sensoral or multi-temporal data. In recent decades, many algorithms have been developed to enable manual, semi- or fully automatic displacement correction. Especially in the context of big data processing and the development of automated processing chains that aim to be applicable to different remote sensing systems, there is a strong need for efficient, accurate and generally usable co-registration. Here, we present AROSICS (Automated and Robust Open-Source Image Co-Registration Software), a Python-based open-source software including an easy-to-use user interface for automatic detection and correction of sub-pixel misalignments between various remote sensing datasets. It is independent of spatial or spectral characteristics and robust against high degrees of cloud coverage and spectral and temporal land cover dynamics. The co-registration is based on phase correlation for sub-pixel shift estimation in the frequency domain, utilizing the Fourier shift theorem in a moving-window manner. A dense grid of spatial shift vectors can be created and automatically filtered by combining various validation and quality estimation metrics. Additionally, the software supports the masking of, e.g., clouds and cloud shadows to exclude such areas from spatial shift detection. The software has been tested on more than 9000 satellite images acquired by different sensors. The results are evaluated exemplarily for two inter-sensoral and two intra-sensoral use cases and show registration results in the sub-pixel range, with root mean square errors around 0.3 pixels or better.
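    The phase-correlation core that AROSICS builds on can be sketched in a few lines: the normalized cross-power spectrum of an image pair has an inverse FFT that peaks at the translation between them (the Fourier shift theorem). This minimal sketch recovers only integer shifts; the sub-pixel refinement, moving-window tiling, and quality filtering described above are not shown.

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative
    to `ref` via phase correlation: keep only the phase of the
    cross-power spectrum, whose inverse FFT peaks at the shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12        # discard magnitude, keep phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(ref, moved))  # (5, -3)
```

    Running this in a moving window over two scenes yields the dense grid of shift vectors that the software then validates and filters.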

  1. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    Science.gov (United States)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired by airborne or spaceborne special cameras (sensors) that collect information from the electromagnetic spectrum of the observed terrain. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology, because of the capability of this kind of image to correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data produced by hyperspectral sensors, the fact that these images are acquired at high cost by airborne sensors, and the fact that they are generally transmitted to a base station make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework for secure and efficient transmission of hyperspectral images that combines a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-the-art predictive-based lossless compression algorithm.
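    The protocol shape described above (compress losslessly, authenticate, verify before use) can be sketched as follows. This is a hedged stand-in, not the paper's method: zlib replaces the predictive lossless coder, a keyed HMAC replaces the reversible-watermark/digital-signature pair, and the key name is hypothetical.

```python
import hashlib
import hmac
import zlib

SECRET_KEY = b"ground-station-shared-key"   # hypothetical shared key

def pack(raw_band):
    """Losslessly compress a band and append an HMAC over the payload.
    zlib and HMAC-SHA256 stand in for the paper's predictive lossless
    coder and its reversible-watermark/digital-signature pair."""
    payload = zlib.compress(raw_band)
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def unpack(blob):
    """Verify integrity first, then decompress; exact original bytes
    are recovered, so no spectral information is lost."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return zlib.decompress(payload)

band = bytes(range(256)) * 16
assert unpack(pack(band)) == band            # lossless round trip
```

    A reversible watermark serves the same role as the appended tag here, but hides the authentication data inside the image while still allowing exact recovery.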

  2. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited-angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown potential for better results, as in the prior image constrained compressed sensing algorithm. While a prior full scan of the same patient is not always available, massive numbers of well-reconstructed images of different patients can easily be obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve image quality by using prior knowledge extracted from a clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space, and a constraint on the distance between the image and this space is used. A bi-criterion convex program which combines the feature constraint and a total variation constraint is proposed for the reconstruction procedure, and a flexible method is adopted to obtain a good solution. Numerical simulations on both phantom and real clinical patient images were performed to validate our algorithm. Promising results are shown for limited-angle problems. (paper)

  3. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    Science.gov (United States)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

    Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of our society. Nowadays, some image scrambling schemes are used as one part of a watermarking algorithm to enhance its security. Therefore, how to select an image scrambling scheme and what kind of image scrambling scheme may be used for watermarking are key problems. An evaluation method for image scrambling schemes can serve as a useful test tool for showing the properties or flaws of an image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and the centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for a general gray-scale image, the evaluation degree of the corresponding cipher image computed from the first 4 significant bit-planes is nearly the same as that computed from all 8 bit-planes. Therefore, instead of taking all 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.
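    The bit-plane selection the result rests on can be sketched directly: an 8-bit gray-scale image decomposes into 8 binary planes, and the scheme evaluates scrambling on only the 4 most significant ones. A minimal sketch of that decomposition (the evaluation metrics themselves are not implemented here):

```python
import numpy as np

def bit_planes(img, n_planes=4):
    """Return the n most significant bit-planes of an 8-bit image,
    most significant first. The evaluation scheme above computes its
    scrambling degree on just the top 4 planes instead of all 8,
    halving the computational cost."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> bit) & 1 for bit in range(7, 7 - n_planes, -1)]

img = np.array([[0b10110001, 0b01001110]], dtype=np.uint8)
planes = bit_planes(img)
print([p.tolist() for p in planes])
# [[[1, 0]], [[0, 1]], [[1, 0]], [[1, 0]]]
```

    Each returned plane is a binary image; the spatial distribution entropy and centroid difference are then computed per plane and aggregated.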

  4. Robust retrieval of fine art paintings

    Science.gov (United States)

    Smolka, Bogdan; Lukac, Rastislav; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2003-10-01

    The rapid growth of image archives increases the need for efficient and fast tools that can retrieve and search through large amounts of visual data. In this paper we propose an efficient method of extracting the image color content, which serves as a digital signature of the image, allowing the content of large, heterogeneous multimedia databases to be efficiently indexed and retrieved. We apply the proposed method to the retrieval of images from the WEBMUSEUM Internet database, containing a collection of fine art images, and show that the new method of image color representation is robust to image distortions caused by resizing and compression and can be incorporated into existing retrieval systems that exploit information on the color content of digital images.

  5. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs

    International Nuclear Information System (INIS)

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den

    2010-01-01

    Purpose: A robust and accurate method for the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction for inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by minimizing a cost function that combines 2D image intensity with 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming, such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images

  6. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    TOSHANLAL MEENPAL. Articles written in Sadhana. Volume 43, Issue 1, January 2018, p. 4: DWT-based blind and robust watermarking using SPIHT algorithm with applications in tele-medicine. Abstract: Malicious manipulation of digital ...

  7. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    Science.gov (United States)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences with SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search for the isotropic feature points that link the image sequences but also reduce the number of erroneously matched pairs. After that, the target centroid is detected by regular moments. Consequently, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicality of the proposed method.
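    The regular-moment centroid step can be sketched independently of the SURF pipeline: the target centroid is the ratio of the first-order raw moments to the zeroth-order moment of the intensity image. A minimal sketch:

```python
import numpy as np

def centroid_by_moments(image):
    """Target centroid from regular (raw) image moments:
    row = m01 / m00, col = m10 / m00."""
    img = np.asarray(image, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()            # total intensity
    m10 = (xs * img).sum()     # first-order moment in x (columns)
    m01 = (ys * img).sum()     # first-order moment in y (rows)
    return m01 / m00, m10 / m00

img = np.zeros((9, 9))
img[2:5, 3:6] = 1.0            # bright 3x3 block centred at row 3, col 4
print(centroid_by_moments(img))  # (3.0, 4.0)
```

    Tracking this centroid across the ISAR sequence supplies the translation component of the motion, on top of which the feature-based analysis operates.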

  8. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration

    International Nuclear Information System (INIS)

    Kim, Jinkoo; Yin Fangfang; Zhao Yang; Kim, Jae Ho

    2005-01-01

    A rigid-body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texture-map-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on the accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range of both the CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, bone-only information in the CT image did not show convergence to the correct registration

  9. Robust and Effective Component-based Banknote Recognition for the Blind.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, Yingli

    2012-11-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes to assist visually impaired people. Our banknote recognition system is robust and effective, with the following features: 1) high accuracy: a high true recognition rate and a low false recognition rate; 2) robustness: it handles a variety of currency designs and bills in various conditions; 3) high efficiency: it recognizes banknotes quickly; and 4) ease of use: it helps blind users aim at the target for image capture. To make the system robust to a variety of conditions, including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded-Up Robust Features (SURF). Furthermore, we employ the spatial relationships of matched SURF features to detect whether there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system was also tested by blind users.

  10. Robust segmentation of medical images using competitive Hopfield neural network as a clustering tool

    International Nuclear Information System (INIS)

    Golparvar Roozbahani, R.; Ghassemian, M. H.; Sharafat, A. R.

    2001-01-01

    This paper presents the application of a competitive Hopfield neural network to medical image segmentation. Our proposed approach consists of two steps: 1) translating segmentation of the given medical image into an optimization problem, and 2) solving this problem with a version of the Hopfield network known as the competitive Hopfield neural network. Segmentation is considered a clustering problem, and its validity criterion is based on both intra-set and inter-set distance. The algorithm proposed in this paper is based on gray-level features only. This leads to near-optimal solutions if both intra-set and inter-set distances are considered at the same time. If only one of these distances is considered, the segmentation result from the competitive Hopfield neural network will be far from optimal and incorrect even for very simple cases. Furthermore, the algorithm sometimes arrives at unacceptable states. Both problems may be solved by incorporating both intra-set and inter-set distances in the segmentation (optimization) process. The performance of the proposed algorithm was tested on both phantom and real medical images. The promising results and the robustness of the algorithm to system noise indicate near-optimal solutions

  11. Robust imaging of localized scatterers using the singular value decomposition and ℓ1 minimization

    International Nuclear Information System (INIS)

    Chai, A; Moscoso, M; Papanicolaou, G

    2013-01-01

    We consider narrow-band, active array imaging of localized scatterers in a homogeneous medium with and without additive noise. We consider both single and multiple illuminations and study ℓ1 minimization-based imaging methods. We show that for large arrays, with array diameter comparable to range, and when scatterers are sparse and well separated, ℓ1 minimization using a single illumination and without additive noise can recover the location and reflectivity of the scatterers exactly. For multiple illuminations, we introduce a hybrid method which combines the singular value decomposition and ℓ1 minimization. This method can be used when the essential singular vectors of the array response matrix are available. We show that with this hybrid method we can recover the location and reflectivity of the scatterers exactly when there is no noise in the data. Numerical simulations indicate that the hybrid method is, in addition, robust to noise in the data. We also compare the ℓ1 minimization-based methods with others, including Kirchhoff migration, ℓ2 minimization and multiple signal classification. (paper)

  12. A robust computational solution for automated quantification of a specific binding ratio based on [123I]FP-CIT SPECT images

    International Nuclear Information System (INIS)

    Oliveira, F. P. M.; Tavares, J. M. R. S.; Borges, Faria D.; Campos, Costa D.

    2014-01-01

    The purpose of the current paper is to present a computational solution to accurately quantify the specific to non-specific uptake ratio in [123I]FP-CIT single photon emission computed tomography (SPECT) images and simultaneously measure the spatial dimensions of the basal ganglia, also known as basal nuclei. A statistical analysis based on a reference dataset selected by the user is also performed automatically. The quantification of the specific to non-specific uptake ratio here is based on regions of interest defined after the registration of the image under study with a template image. The computational solution was tested on a dataset of 38 [123I]FP-CIT SPECT images: 28 images were from patients with Parkinson’s disease and the remainder from normal patients, and the results of the automated quantification were compared to those obtained with three well-known semi-automated quantification methods. The results revealed a high correlation coefficient between the developed automated method and the three semi-automated methods used for comparison (r ≥ 0.975). The solution also showed good robustness against different patient positions, as an almost perfect agreement between the specific to non-specific uptake ratios was found (ICC = 1.000). The mean processing time was around 6 seconds per study using a common notebook PC. The solution developed can be useful for clinicians to evaluate [123I]FP-CIT SPECT images due to its accuracy, robustness and speed. Also, the comparison between case studies and the follow-up of patients can be done more accurately and proficiently, since the intra- and inter-observer variability of the semi-automated calculation does not exist in automated solutions. The dimensions of the basal ganglia and their automatic comparison with the values of the population selected as reference are also important for professionals in this area.

  13. A Study on the Security Levels of Spread-Spectrum Embedding Schemes in the WOA Framework.

    Science.gov (United States)

    Wang, Yuan-Gen; Zhu, Guopu; Kwong, Sam; Shi, Yun-Qing

    2017-08-23

    Security analysis is a very important issue for digital watermarking. Several years ago, following Kerckhoffs' principle, four well-known security levels, namely insecurity, key security, subspace security, and stego-security, were defined for spread-spectrum (SS) embedding schemes in the framework of the watermarked-only attack. However, up to now there has been little application of these security-level definitions to the theoretical analysis of the security of SS embedding schemes, due to the difficulty of the analysis. In this paper, based on the security definitions, we present a theoretical analysis to evaluate the security levels of five typical SS embedding schemes: the classical SS, the improved SS (ISS), the circular extension of ISS, and the non-robust and robust natural watermarking. The theoretical analysis of these typical SS schemes is successfully performed by taking advantage of the convolution of probability distributions to derive the probabilistic models of watermarked signals. Moreover, simulations are conducted to illustrate and validate our theoretical analysis. We believe that the theoretical and practical analysis presented in this paper can bridge the gap between the definition of the four security levels and its application to the theoretical analysis of SS embedding schemes.
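    The classical SS scheme, the first of the five analysed above, can be sketched as additive embedding of a key-seeded pseudo-random carrier followed by correlation detection. A minimal sketch (the strength parameter alpha and the key handling are illustrative, not taken from the paper):

```python
import numpy as np

def ss_embed(host, key, bit, alpha=2.0):
    """Classical additive spread-spectrum embedding: add (bit = 1) or
    subtract (bit = 0) a key-seeded pseudo-random carrier scaled by
    the embedding strength alpha."""
    carrier = np.random.default_rng(key).standard_normal(host.size)
    return host + alpha * (1 if bit else -1) * carrier

def ss_detect(signal, key):
    """Detect the embedded bit from the sign of the correlation with
    the regenerated carrier."""
    carrier = np.random.default_rng(key).standard_normal(signal.size)
    return int(np.dot(signal, carrier) > 0)

rng = np.random.default_rng(1)
host = rng.standard_normal(4096)          # stand-in for DCT coefficients
marked = ss_embed(host, key=7, bit=1)
print(ss_detect(marked, key=7))
```

    In the watermarked-only attack setting analysed above, an adversary observes only `marked` signals; the security levels characterize how much the distribution of those observations leaks about the carrier.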

  14. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter-bank based feature extraction, and on the classifier, in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets, which avoids having to read many different books and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising, etc. that will...

  15. Topology in SU(2) lattice gauge theory and parallelization of functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Solbrig, Stefan

    2008-07-01

    In this thesis, I discuss topological properties of quenched SU(2) lattice gauge fields. In particular, clusters of topological charge density exhibit a power law. The exponent of that power law can be used to validate models for lattice gauge fields. Instead of working with fixed cutoffs of the topological charge density, the notion of a “watermark” is more convenient. Furthermore, I discuss how a parallel computer, originally designed for lattice gauge field simulations, can be used for functional magnetic resonance imaging. Multi-parameter fits can be parallelized to achieve almost real-time evaluation of fMRI data. (orig.)

  16. Topology in SU(2) lattice gauge theory and parallelization of functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Solbrig, Stefan

    2008-01-01

    In this thesis, I discuss topological properties of quenched SU(2) lattice gauge fields. In particular, clusters of topological charge density exhibit a power law. The exponent of that power law can be used to validate models for lattice gauge fields. Instead of working with fixed cutoffs of the topological charge density, the notion of a “watermark” is more convenient. Furthermore, I discuss how a parallel computer, originally designed for lattice gauge field simulations, can be used for functional magnetic resonance imaging. Multi-parameter fits can be parallelized to achieve almost real-time evaluation of fMRI data. (orig.)

  17. Unique identification code for medical fundus images using blood vessel pattern for tele-ophthalmology applications.

    Science.gov (United States)

    Singh, Anushikha; Dutta, Malay Kishore; Sharma, Dilip Kumar

    2016-10-01

    Identification of fundus images during transmission and storage in databases for tele-ophthalmology applications is an important issue in the modern era. The proposed work presents a novel, accurate method for generating a unique identification code for fundus images, for tele-ophthalmology applications and storage in databases. Unlike existing methods of steganography and watermarking, this method does not tamper with the medical image, as nothing is embedded and there is no loss of medical information. A strategic combination of the unique blood vessel pattern and the patient ID is used to generate the unique identification code for a digital fundus image. The segmented blood vessel pattern near the optic disc is strategically combined with the patient ID to generate a unique identification code for the image. The proposed method of medical image identification was tested on the publicly available DRIVE and MESSIDOR databases of fundus images, and the results are encouraging. Experimental results indicate the uniqueness of the identification code and the lossless recovery of patient identity from it for integrity verification of fundus images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
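    The overall shape of the scheme, combining a segmented vessel pattern with the patient ID without embedding anything in the image, can be sketched as below. The concatenate-then-hash combination is an illustrative assumption, not the paper's exact strategic combination, and the ID format is hypothetical.

```python
import hashlib
import numpy as np

def fundus_identification_code(vessel_mask, patient_id):
    """Derive an identification code from a segmented blood-vessel
    mask and the patient ID. Nothing is embedded in the image itself,
    so no medical information is altered. The concatenate-then-hash
    combination here is an illustrative stand-in for the paper's
    strategic combination scheme."""
    mask_bytes = np.packbits(np.asarray(vessel_mask, dtype=np.uint8)).tobytes()
    return hashlib.sha256(patient_id.encode() + mask_bytes).hexdigest()

mask = np.eye(8, dtype=np.uint8)          # toy stand-in for a vessel mask
code = fundus_identification_code(mask, "PAT-0042")
assert code != fundus_identification_code(mask, "PAT-0043")
print(code[:16])
```

    Because the code is a pure function of the vessel pattern and the ID, recomputing it on receipt and comparing with the stored value verifies image integrity without any watermark payload.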

  18. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Chen Qu

    2017-09-01

    Full Text Available The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and other fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in the traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on the two clues that both the atmospheric light and the transmission map are locally consistent. In this framework, our model can strengthen the constraints over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus recovering details effectively and alleviating color distortion. Moreover, the locally consistent MRF framework can preserve details while producing better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verify that the proposed method has the combined advantages of detail recovery and color preservation.

  19. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient

  20. Robust MR spine detection using hierarchical learning and local articulated model.

    Science.gov (United States)

    Zhan, Yiqiang; Maneesh, Dewan; Harder, Martin; Zhou, Xiang Sean

    2012-01-01

    A clinically acceptable auto-spine detection system, i.e., localization and labeling of vertebrae and inter-vertebral discs, is required to have high robustness, in particular to severe diseases (e.g., scoliosis) and imaging artifacts (e.g., metal artifacts in MR). Our method aims to achieve this goal with two novel components. First, instead of treating vertebrae/discs as either repetitive components or completely independent entities, we emulate a radiologist and use a hierarchical strategy to learn detectors dedicated to anchor (distinctive) vertebrae, bundle (non-distinctive) vertebrae, and inter-vertebral discs, respectively. At run-time, anchor vertebrae are detected concurrently to provide redundant and distributed appearance cues robust to local imaging artifacts. Bundle vertebrae detectors provide candidates of vertebrae with subtle appearance differences, whose labels are mutually determined by anchor vertebrae to gain additional robustness. Disc locations are derived from a cloud of responses from disc detectors, which is robust to sporadic voxel-level errors. Second, owing to the non-rigidness of spine anatomies, we employ a local articulated model to effectively model the spatial relations across vertebrae and discs. The local articulated model fuses appearance cues from different detectors in a way that is robust to abnormal spine geometry resulting from severe diseases. Our method is validated on 300 MR spine scout scans and exhibits robust performance, especially on cases with severe diseases and imaging artifacts.

  1. Communicating via robust synchronization of chaotic lasers

    International Nuclear Information System (INIS)

    Lopez-Gutierrez, R.M.; Posadas-Castillo, C.; Lopez-Mancilla, D.; Cruz-Hernandez, C.

    2009-01-01

    In this paper, the robust synchronization problem for coupled chaotic Nd:YAG lasers is addressed. We resort to complex systems theory to achieve chaos synchronization. Based on stability theory, it is shown that the state trajectories of the perturbed synchronization error system are ultimately bounded, provided the unperturbed synchronization error system is exponentially stable and some conditions on the bounds of the perturbation terms are satisfied. On this basis, encoding, transmission, and decoding in chaotic optical communications are presented. We analyze the transmission and recovery of encrypted information when parameter mismatches are considered. Computer simulations are provided to show the effectiveness of this robust synchronization property; we present the encrypted transmission of image messages and show that the transmitted image is faithfully recovered.

  2. Communicating via robust synchronization of chaotic lasers

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Gutierrez, R.M. [Engineering Faculty, Baja California Autonomous University (UABC), Km. 103 Carret. Tij-Ens., 22860 Ensenada, B.C. (Mexico); Posadas-Castillo, C. [Engineering Faculty, Baja California Autonomous University (UABC), Km. 103 Carret. Tij-Ens., 22860 Ensenada, B.C. (Mexico); FIME, Autonomous University of Nuevo Leon (UANL), Pedro de Alba, S.N., Cd. Universitaria, San Nicolas de los Garza, NL (Mexico); Lopez-Mancilla, D. [Departamento de Ciencias Exactas y Tecnologicas, Centro Universitario de los Lagos, Universidad de Guadalajara (CULagos-UdeG), Enrique Diaz de Leon s/n, 47460 Lagos de Moreno, Jal. (Mexico); Cruz-Hernandez, C. [Electronics and Telecommunications Department, Scientific Research and Advanced Studies of Ensenada (CICESE), Km. 107 Carret. Tij-Ens., 22860 Ensenada, B.C. (Mexico)], E-mail: ccruz@cicese.mx

    2009-10-15

    In this paper, the robust synchronization problem for coupled chaotic Nd:YAG lasers is addressed. We resort to complex systems theory to achieve chaos synchronization. Based on stability theory, it is shown that the state trajectories of the perturbed synchronization error system are ultimately bounded, provided the unperturbed synchronization error system is exponentially stable and some conditions on the bounds of the perturbation terms are satisfied. On this basis, encoding, transmission, and decoding in chaotic optical communications are presented. We analyze the transmission and recovery of encrypted information when parameter mismatches are considered. Computer simulations are provided to show the effectiveness of this robust synchronization property; we present the encrypted transmission of image messages and show that the transmitted image is faithfully recovered.

  3. Hyperspectral Unmixing with Robust Collaborative Sparse Regression

    Directory of Open Access Journals (Sweden)

    Chang Li

    2016-07-01

    Full Text Available Recently, sparse unmixing (SU) of hyperspectral data has received particular attention for analyzing remote sensing images. However, most SU methods are based on the commonly admitted linear mixing model (LMM), which ignores possible nonlinear effects (i.e., nonlinearity). In this paper, we propose a new method named robust collaborative sparse regression (RCSR), based on the robust LMM (rLMM), for hyperspectral unmixing. The rLMM takes the nonlinearity into consideration, treating it as an outlier term with an underlying sparse property. RCSR simultaneously takes into account the collaborative sparsity of the abundances and the sparsely distributed additive nature of the outliers, which can be formulated as a robust joint sparse regression problem. The inexact augmented Lagrangian method (IALM) is used to optimize the proposed RCSR. Qualitative and quantitative experiments on synthetic datasets and real hyperspectral images demonstrate that the proposed RCSR is efficient for solving the hyperspectral SU problem compared with four other state-of-the-art algorithms.

  4. DWT-based blind and robust watermarking using SPIHT algorithm ...

    Indian Academy of Sciences (India)

    TOSHANLAL MEENPAL

    2018-02-07

    Feb 7, 2018 ... reported where the crucial diseases have been identified and understood very .... the core technology of the emerging multimedia standards MPEG-4 ... scheme resistive against large scale compression, cropping and many ...

  5. ROBUSTNESS OF A FACE-RECOGNITION TECHNIQUE BASED ON SUPPORT VECTOR MACHINES

    OpenAIRE

    Prashanth Harshangi; Koshy George

    2010-01-01

    The ever-increasing requirements of security concerns have placed a greater demand on face recognition surveillance systems. However, most current face recognition techniques are not very robust with respect to factors such as variable illumination, facial expression and detail, and noise in images. In this paper, we demonstrate that face recognition using support vector machines is sufficiently robust to different kinds of noise, does not require image pre-processing, and can be used with...

  6. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    International Nuclear Information System (INIS)

    De Silva, T; Ketcha, M; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Vogt, S; Kleinszig, G; Lo, S F; Wolinsky, J P; Gokaslan, Z L; Aygun, N

    2015-01-01

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time-consuming, user-dependent, error-prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with four gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly, for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  7. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    Energy Technology Data Exchange (ETDEWEB)

    De Silva, T; Ketcha, M; Siewerdsen, J H [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD (United States); Uneri, A; Reaungamornrat, S [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States); Vogt, S; Kleinszig, G [Siemens Healthcare XP Division, Erlangen, DE (Germany); Lo, S F; Wolinsky, J P; Gokaslan, Z L [Department of Neurosurgery, The Johns Hopkins Hospital, Baltimore, MD (United States); Aygun, N [Department of Radiology and Radiological Sciences, The Johns Hopkins Hospital, Baltimore, MD (United States)

    2015-06-15

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time-consuming, user-dependent, error-prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with four gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly, for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  8. A Simple and Robust Gray Image Encryption Scheme Using Chaotic Logistic Map and Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Adelaïde Nicole Kengnou Telem

    2014-01-01

    Full Text Available A robust gray image encryption scheme using a chaotic logistic map and an artificial neural network (ANN) is introduced. In the proposed method, an external secret key is used to derive the initial conditions for the logistic chaotic maps, which are employed to generate the weight and bias matrices of a multilayer perceptron (MLP). During the learning process with the backpropagation algorithm, the ANN determines the weight matrix of the connections. The plain image is divided into four subimages, which are used for the first diffusion stage. The subimages obtained previously are then divided into square subimage blocks. In the next stage, different initial conditions are employed to generate a key stream, which is used for permutation and diffusion of the subimage blocks. Security analyses such as entropy analysis, statistical analysis, and key sensitivity analysis demonstrate that the key space of the proposed algorithm is large enough to make brute-force attacks infeasible. Computational validation using experimental data with several gray images has been carried out with detailed numerical analysis, in order to validate the high security of the proposed encryption scheme.
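Two of the ingredients described above, a keystream derived from the chaotic logistic map and XOR diffusion, can be sketched as follows. This is a minimal illustration of those stages only: the MLP weight generation, sub-image splitting, and permutation stages of the proposed scheme are omitted, and the parameter values are arbitrary examples, not values from the paper.

```python
def logistic_keystream(x0, r, n):
    """Derive n key bytes by iterating the chaotic logistic map
    x -> r*x*(1-x) from an initial condition supplied by the secret key."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to a byte
    return bytes(out)

def xor_diffuse(data, key):
    # XOR diffusion is an involution: applying it twice restores the data.
    return bytes(d ^ k for d, k in zip(data, key))

plain = bytes(range(16))                       # stand-in for image pixels
key = logistic_keystream(0.376, 3.99, len(plain))
cipher = xor_diffuse(plain, key)
```

With r near 4 the map is chaotic, so a tiny change in the key-derived initial condition x0 eventually produces a completely different keystream, which is the basis of the key-sensitivity claims made by such schemes.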

  9. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    Existing firearm identification algorithms suffer from several inherent difficulties, including the need for physical interpretation and time-consuming procedures. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) using simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least-squares estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model, and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
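The first stage of the pipeline, Laplacian sharpening, can be sketched as a plain 4-neighbour filter on a grayscale image. This generic version is an assumption for illustration only; the clustering-based thresholding, least-squares estimation, and ROI segmentation stages of the algorithm are not reproduced here.

```python
def laplacian_sharpen(img):
    """Sharpen a grayscale image (list of lists of ints) by subtracting
    the 4-neighbour Laplacian at each interior pixel, boosting edges."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels are copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            out[y][x] = img[y][x] - lap
    return out

flat = [[5] * 5 for _ in range(5)]     # uniform region: no edges to boost
spike = [[0] * 5 for _ in range(5)]
spike[2][2] = 10                       # isolated bright impression mark
```

On a uniform region the Laplacian is zero and the image is unchanged; an isolated bright pixel, like a firing-pin impression detail, is strongly amplified.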

  10. Robust Fringe Projection Profilometry via Sparse Representation.

    Science.gov (United States)

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail if the captured fringe images contain a complex scene, such as multiple or occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system and can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. This information is then used to assist the phase unwrapping process in dealing with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images contain a simple or complex scene, or are affected by the ambient lighting of the working environment.

  11. Robust skull stripping using multiple MR image contrasts insensitive to pathology.

    Science.gov (United States)

    Roy, Snehashis; Butman, John A; Pham, Dzung L

    2017-02-01

    Automatic skull-stripping or brain extraction of magnetic resonance (MR) images is often a fundamental step in many neuroimage processing pipelines. The accuracy of subsequent image processing relies on the accuracy of the skull-stripping. Although many automated stripping methods have been proposed in the past, it is still an active area of research, particularly in the context of brain pathology. Most stripping methods are validated on T1-w MR images of normal brains, especially because high-resolution T1-w sequences are widely acquired and ground truth manual brain mask segmentations are publicly available for normal brains. However, different MR acquisition protocols can provide complementary information about the brain tissues, which can be exploited for better distinction between brain, cerebrospinal fluid, and unwanted tissues such as skull, dura, marrow, or fat. This is especially true in the presence of pathology, where hemorrhages or other types of lesions can have similar intensities as skull in a T1-w image. In this paper, we propose a sparse patch based Multi-cONtrast brain STRipping method (MONSTR), where non-local patch information from one or more atlases, which contain multiple MR sequences and reference delineations of brain masks, is combined to generate a target brain mask. We compared MONSTR with four state-of-the-art, publicly available methods: BEaST, SPECTRE, ROBEX, and OptiBET. We evaluated the performance of these methods on 6 datasets consisting of both healthy subjects and patients with various pathologies. Three datasets (ADNI, MRBrainS, NAMIC) are publicly available, consisting of 44 healthy volunteers and 10 patients with schizophrenia. The other three in-house datasets, comprising 87 subjects in total, consisted of patients with mild to severe traumatic brain injury, brain tumors, and various movement disorders. A combination of T1-w and T2-w images was used to skull-strip these datasets. We show significant improvement in stripping

  12. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  13. Robust Analysis and Design of Multivariable Systems

    National Research Council Canada - National Science Library

    Tannenbaum, Allen

    1998-01-01

    In this Final Report, we will describe the work we have performed in robust control theory and nonlinear control, and the utilization of techniques in image processing and computer vision for problems in visual tracking...

  14. Rotation-robust math symbol recognition and retrieval using outer contours and image subsampling

    Science.gov (United States)

    Zhu, Siyu; Hu, Lei; Zanibbi, Richard

    2013-01-01

    This paper presents a unified recognition and retrieval system for isolated offline printed mathematical symbols for the first time. The system is based on a nearest-neighbor scheme and uses a modified Turning Function and Grid Features to calculate the distance between two symbols based on the Sum of Squared Differences. An unwrap process and an alignment process are applied to the Turning Function to deal with the horizontal and vertical shifts caused by the change of starting point and by rotation. This modified Turning Function makes our system robust against rotation of the symbol image. The system obtains a top-1 recognition rate of 96.90% and 47.27% Area Under Curve (AUC) of the precision/recall plot on the InftyCDB-3 dataset. Experimental results show that the system with the modified Turning Function performs significantly better than the system with the original Turning Function on the rotated InftyCDB-3 dataset.
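The shape signature underlying the distance computation, the turning function of a symbol's outer contour, can be sketched as below. This shows only the basic signature (cumulative turning angle as a step function of normalized arc length); the unwrap and alignment modifications described in the paper are not reproduced here, and the contour is assumed to be given as a simple closed polygon.

```python
import math

def turning_function(polygon):
    """Represent a closed polygon by the cumulative angle turned along
    the contour, sampled at each vertex over normalized arc length."""
    n = len(polygon)
    # Edge vectors around the closed contour.
    edges = [(polygon[(i + 1) % n][0] - polygon[i][0],
              polygon[(i + 1) % n][1] - polygon[i][1]) for i in range(n)]
    perim = sum(math.hypot(dx, dy) for dx, dy in edges)
    angle = math.atan2(edges[0][1], edges[0][0])  # heading of first edge
    s, steps = 0.0, []
    for i in range(n):
        steps.append((s / perim, angle))
        s += math.hypot(edges[i][0], edges[i][1])
        turn = (math.atan2(edges[(i + 1) % n][1], edges[(i + 1) % n][0])
                - math.atan2(edges[i][1], edges[i][0]))
        # Wrap the exterior turning angle into (-pi, pi].
        angle += (turn + math.pi) % (2 * math.pi) - math.pi
    return steps

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tf = turning_function(square)
```

For the unit square the signature steps by pi/2 at each quarter of the perimeter; rotating the shape only shifts this function vertically, which is why turning-function distances lend themselves to rotation-robust matching.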

  15. Facial Symmetry in Robust Anthropometrics

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 57, č. 3 (2012), s. 691-698 ISSN 0022-1198 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : forensic science * anthropology * robust image analysis * correlation analysis * multivariate data * classification Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.244, year: 2012

  16. Robust binarization of degraded document images using heuristics

    Science.gov (United States)

    Parker, Jon; Frieder, Ophir; Frieder, Gideon

    2013-12-01

    Historically significant documents are often discovered with defects that make them difficult to read and analyze. This fact is particularly troublesome if the defects prevent software from performing an automated analysis. Image enhancement methods are used to remove or minimize document defects, improve software performance, and generally make images more legible. We describe an automated image enhancement method that is input-page independent and requires no training data. The approach applies to color or greyscale images with handwritten script, typewritten text, images, and mixtures thereof. We evaluated the image enhancement method against the test images provided by the 2011 Document Image Binarization Contest (DIBCO). Our method outperforms all 2011 DIBCO entrants in terms of average F1 measure, doing so with a significantly lower variance than the top contest entrants. The capability of the proposed method is also illustrated using select images from a collection of historic documents stored at Yad Vashem Holocaust Memorial in Israel.

  17. Semantically transparent fingerprinting for right protection of digital cinema

    Science.gov (United States)

    Wu, Xiaolin

    2003-06-01

    Digital cinema, a new frontier and crown jewel of digital multimedia, has the potential of revolutionizing the science, engineering and business of movie production and distribution. The advantages of digital cinema technology over traditional analog technology are numerous and profound. But without effective and enforceable copyright protection measures, digital cinema can be more susceptible to widespread piracy, which can dampen or even prevent the commercial deployment of digital cinema. In this paper we propose a novel approach of fingerprinting each individual distribution copy of a digital movie for the purpose of tracing pirated copies back to their source. The proposed fingerprinting technique presents a fundamental departure from the traditional digital watermarking/fingerprinting techniques. Its novelty and uniqueness lie in a so-called semantic or subjective transparency property. The fingerprints are created by editing those visual and audio attributes that can be modified with semantic and subjective transparency to the audience. Semantically-transparent fingerprinting or watermarking is the most robust kind among all existing watermarking techniques, because it is content-based not sample-based, and semantically-recoverable not statistically-recoverable.

  18. Optimal Design

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Threshold-logic-based design of compressors; Robustness of optimal timing strategies in dynamic investment processes; Optimization of a microstrip directional coupler with high performance using evolution strategy; Key Issues of Network Analysis for Optimizing Cellular Mobile Communications Systems; Low complexity code design for lossless and near-lossless side information source codes; Robust watermarking scheme with side information; A novel Si(1-x)Gex/Si hetero-junction power diode for fast switching and soft recovery.

  19. PET functional volume delineation: a robustness and repeatability study

    International Nuclear Information System (INIS)

    Hatt, Mathieu; Cheze-le Rest, Catherine; Albarghach, Nidal; Pradier, Olivier; Visvikis, Dimitris

    2011-01-01

    Current state-of-the-art algorithms for functional uptake volume segmentation in PET imaging consist of threshold-based approaches, whose parameters often require specific optimization for a given scanner and its associated reconstruction algorithms. Advanced image segmentation approaches previously proposed and extensively validated, such as, among others, fuzzy C-means (FCM) clustering or the fuzzy locally adaptive Bayesian (FLAB) algorithm, have the potential to improve the robustness of functional uptake volume measurements. The objective of this study was to investigate their robustness and repeatability with respect to various scanner models, reconstruction algorithms, and acquisition conditions. Robustness was evaluated using a series of IEC phantom acquisitions carried out on different PET/CT scanners (Philips Gemini and Gemini Time-of-Flight, Siemens Biograph, and GE Discovery LS) with their associated reconstruction algorithms (RAMLA, TF MLEM, OSEM). A range of acquisition parameters (contrast, duration) and reconstruction parameters (voxel size) was considered for each scanner model, and the repeatability of each method was evaluated on simulated and clinical tumours and compared to manual delineation. For all the scanner models, acquisition parameters, and reconstruction algorithms considered, the FLAB algorithm demonstrated higher robustness in delineation of the spheres, with low mean errors (10%) and variability (5%), with respect to threshold-based methodologies and FCM. The repeatability provided by all segmentation algorithms considered was very high, with a negligible variability of <5% in comparison to that associated with manual delineation (5-35%). The use of advanced image segmentation algorithms may not only allow high accuracy as previously demonstrated, but also provide a robust and repeatable tool to aid physicians as an initial guess in determining functional volumes in PET. (orig.)
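As a concrete reference for one of the approaches compared in the study, a toy two-cluster fuzzy C-means can be sketched as follows. Real PET delineation operates on 3-D voxel intensities; this scalar, two-cluster version (separating "uptake" from "background" values) is purely illustrative, and the fuzziness parameter m = 2 and iteration count are conventional defaults, not values from the study.

```python
def fuzzy_cmeans_1d(values, m=2.0, iters=50):
    """Toy 1-D, two-cluster fuzzy C-means: alternate soft-membership
    updates and membership-weighted center updates until convergence."""
    k = 2
    centers = [min(values), max(values)]  # crude initialization
    for _ in range(iters):
        u = []
        for x in values:
            d = [abs(x - c) + 1e-12 for c in centers]  # avoid divide-by-zero
            # Standard FCM membership: closer centers get higher membership.
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(k)) for i in range(k)])
        # Center update: fuzzy-membership-weighted means.
        centers = [sum((u[n][i] ** m) * values[n] for n in range(len(values)))
                   / sum(u[n][i] ** m for n in range(len(values)))
                   for i in range(k)]
    return centers, u

vals = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]  # background vs uptake intensities
centers, memberships = fuzzy_cmeans_1d(vals)
```

The soft memberships, rather than a hard threshold, are what give FCM-style methods their robustness to the scanner- and reconstruction-dependent intensity profiles discussed above.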

  20. Whole-slide imaging is a robust alternative to traditional fluorescent microscopy for fluorescence in situ hybridization imaging using break-apart DNA probes.

    Science.gov (United States)

    Laurent, Camille; Guérin, Maxime; Frenois, François-Xavier; Thuries, Valérie; Jalabert, Laurence; Brousset, Pierre; Valmary-Degano, Séverine

    2013-08-01

    Fluorescence in situ hybridization is an indispensable technique used in routine pathology and for theranostic purposes. Because fluorescence in situ hybridization techniques require sophisticated microscopic workstations and long image acquisition procedures with sometimes subjective and poorly reproducible results, we decided to test a whole-slide imaging system as an alternative approach. In this study, we used the latest generation of Pannoramic 250 Flash digital microscopes (P250 Flash; 3DHISTECH, Budapest, Hungary) to digitize fluorescence in situ hybridization slides of diffuse large B-cell lymphoma cases for detecting MYC rearrangement. The P250 Flash digital microscope was found to be precise, with better definition of split signals in cells containing MYC rearrangement and fewer truncated signals as compared to traditional fluorescence microscopy. This digital technique is easier thanks to the preview function, which allows almost immediate identification of the tumor area, the panning and zooming functionalities, and a shorter acquisition time. Moreover, fluorescence in situ hybridization analyses using the digital technique appeared to be more reproducible between pathologists. Finally, the digital technique also allowed prolonged conservation of the images. In conclusion, whole-slide imaging technologies represent rapid, robust, and highly sensitive methods for interpreting fluorescence in situ hybridization slides with break-apart probes. In addition, these techniques offer an easier way to interpret the signals and allow definitive storage of the images for pathology expert networks or e-learning databases. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Topology in SU(2) lattice gauge theory and parallelization of functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Solbrig, Stefan

    2008-07-01

    In this thesis, I discuss topological properties of quenched SU(2) lattice gauge fields. In particular, clusters of topological charge density exhibit a power law. The exponent of that power law can be used to validate models for lattice gauge fields. Instead of working with fixed cutoffs of the topological charge density, using the notion of a "watermark" is more convenient. Furthermore, I discuss how a parallel computer, originally designed for lattice gauge field simulations, can be used for functional magnetic resonance imaging. Multi-parameter fits can be parallelized to achieve almost real-time evaluation of fMRI data. (orig.)

  2. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, little research has been carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using the 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex (imaginary) part of the Fourier transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a symmetric logical operation that yields 0 if both binary inputs are equal and 1 otherwise; equivalently, it is (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both security and performance aspects of the proposed method are analyzed, which prove that the method is efficient and secure from a cryptographic point of view. One of the merits of such an algorithm is to force a continuous-tone payload, a steganographic term, to map onto a balanced bit distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
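The mask-then-XOR core of such a scheme can be sketched as follows. For illustration the Fourier-domain mask construction is replaced by SHA-256 in counter mode (an assumption made for this sketch, not the paper's method); the final confusion/displacement stage is omitted.

```python
import hashlib

def password_mask(password, n):
    """Expand a password into an n-byte pseudo-random mask using SHA-256
    in counter mode (illustrative stand-in for the frequency-domain mask)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password.encode("utf-8")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_pixels(pixels, mask):
    # XOR is symmetric, so the same call both encrypts and decrypts.
    return bytes(p ^ m for p, m in zip(pixels, mask))

image = bytes([12, 200, 37, 90] * 10)   # toy 40-pixel grayscale stream
mask = password_mask("secret", len(image))
cipher = xor_pixels(image, mask)
```

Because the hash-derived mask bytes are close to uniformly distributed, the XOR also pushes the ciphertext toward the balanced bit distribution the abstract highlights.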

  3. Anonymous Authorship Control for User-Generated Content

    Directory of Open Access Journals (Sweden)

    Suk-Bong LEE

    2007-12-01

    Full Text Available User-Generated Content (UGC) is opening up a large new market in content services, and more and more people are visiting web sites to share and enjoy UGC. These trends are leading many authors to move online. Authors want to preserve their authorship and, in some cases, expect to publish their UGC anonymously. To meet these requirements, we propose a new authorship control model based on watermarking and metadata. Authors can embed their authorship into their UGC either under their identity or under a pseudonym. Even if an author publishes his UGC anonymously, he can prove his authorship without revealing his identity via five methods based on the proposed authorship model. The proposed model and methods require no trusted third party (TTP) and remain robust even when built on a fragile underlying watermarking scheme.

  4. Application of visual cryptography for learning in optics and photonics

    Science.gov (United States)

    Mandal, Avikarsha; Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan

    2016-09-01

    In the age of data digitalization, important applications of optics- and photonics-based sensors and technology lie in the fields of biometrics and image processing. Protecting user data in a safe and secure way is an essential task in this area. However, traditional cryptographic protocols rely heavily on computer-aided computation. Secure protocols that rely only on human interaction are usually simpler to understand, and in many scenarios their development is also important for ease of implementation and deployment. Visual cryptography (VC) is an encryption technique for images (or text) in which decryption is done by the human visual system. In this technique, an image is encrypted into a number of pieces (known as shares). When the printed shares are physically superimposed, the image can be decrypted by human vision. Modern digital watermarking technologies can be combined with VC for image copyright protection, where the shares can be watermarks (small identifications) embedded in the image. Similarly, VC can be used to improve the security of biometric authentication. This paper presents the design and implementation of a practical laboratory experiment based on the concept of VC for a course in media engineering. Specifically, our contribution deals with the integration of VC into different schemes for applications such as digital watermarking and biometric authentication in the field of optics and photonics. We describe the theoretical concepts and propose our infrastructure for the experiment. Finally, we evaluate the learning outcome of the experiment as performed by the students.
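    The share-generation idea can be sketched with the classic 2-out-of-2 scheme (a textbook construction, not necessarily the variant used in the course): each secret pixel expands to a 2x2 block per share; a white pixel gets identical patterns in both shares, a black pixel complementary ones, so physical stacking (pixel-wise OR) renders black pixels fully dark.

```python
import random

PATTERNS = [(1, 0, 0, 1), (0, 1, 1, 0)]  # 1 = black subpixel

def make_shares(secret):  # secret: 2-D list of 0 (white) / 1 (black)
    h, w = len(secret), len(secret[0])
    s1 = [[0] * (2 * w) for _ in range(2 * h)]
    s2 = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            p = random.choice(PATTERNS)          # random pattern hides the secret
            q = p if secret[y][x] == 0 else tuple(1 - b for b in p)
            for i, (dy, dx) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
                s1[2 * y + dy][2 * x + dx] = p[i]
                s2[2 * y + dy][2 * x + dx] = q[i]
    return s1, s2

def stack(s1, s2):  # physical superposition of printed shares = pixel-wise OR
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

    Each share alone is uniformly random, so it reveals nothing about the secret; only the stacked result shows the image as a contrast difference (2 vs. 4 black subpixels per block).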

  5. Quantum Computation-Based Image Representation, Processing Operations and Their Applications

    Directory of Open Access Journals (Sweden)

    Fei Yan

    2014-10-01

    Full Text Available A flexible representation of quantum images (FRQI) was proposed to facilitate the extension of classical (non-quantum) image processing applications to the quantum computing domain. The representation encodes a quantum image in the form of a normalized state, which captures information about colors and their corresponding positions in the images. Since its conception, a handful of processing transformations have been formulated, among which are the geometric transformations on quantum images (GTQI) and the CTQI, which focuses on the color information of the images. In addition, extensions and applications of the FRQI representation have also been suggested, such as a multi-channel representation for quantum images (MCQI), quantum image data searching, watermarking strategies for quantum images, a framework to produce movies on quantum computers, and a blueprint for quantum video encryption and decryption. These proposals extend classical-like image and video processing applications to the quantum computing domain and offer a significant speed-up with low computational resources in comparison to performing the same tasks on traditional computing devices. Each of the algorithms and the mathematical foundations for their execution were simulated using classical computing resources, and their results were analyzed alongside other classical computing equivalents. The work presented in this review is intended to serve as the epitome of advances made in FRQI quantum image processing over the past five years and to stimulate further interest geared towards the realization of some secure and efficient image and video processing applications on quantum computers.

  6. Robustness of radiomic breast features of benign lesions and luminal A cancers across MR magnet strengths

    Science.gov (United States)

    Whitney, Heather M.; Drukker, Karen; Edwards, Alexandra; Papaioannou, John; Giger, Maryellen L.

    2018-02-01

    Radiomics features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As clinical institutions transition from 1.5 T to 3.0 T magnetic resonance imaging (MRI), it is helpful to identify robust features across these field strengths. In this study, dynamic contrast-enhanced MR images were acquired retrospectively under IRB/HIPAA compliance, yielding 738 cases: 241 and 124 benign lesions imaged at 1.5 T and 3.0 T and 231 and 142 luminal A cancers imaged at 1.5 T and 3.0 T, respectively. Lesions were segmented using a fuzzy C-means method. Extracted radiomic values for each group of lesions by cancer status and field strength of acquisition were compared using a Kolmogorov-Smirnov test for the null hypothesis that two groups being compared came from the same distribution, with p-values being corrected for multiple comparisons by the Holm-Bonferroni method. Two shape features, one texture feature, and three enhancement variance kinetics features were found to be potentially robust. All potentially robust features had areas under the receiver operating characteristic curve (AUC) statistically greater than 0.5 in the task of distinguishing between lesion types (range of means 0.57-0.78). The significant difference in voxel size between field strength of acquisition limits the ability to affirm more features as robust or not robust according to field strength alone, and inhomogeneities in static field strength and radiofrequency field could also have affected the assessment of kinetic curve features as robust or not. Vendor-specific image scaling could have also been a factor. These findings will contribute to the development of radiomic signatures that use features identified as robust across field strength.
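    The multiple-comparison step described above can be sketched: given per-feature p-values from the two-sample Kolmogorov-Smirnov tests, Holm's step-down procedure decides which null hypotheses to reject at family-wise level alpha (the function name is mine).

```python
def holm_bonferroni(p_values, alpha=0.05):
    # Holm's step-down procedure: sort p-values ascending and compare the
    # k-th smallest (0-indexed) against alpha / (m - k); stop at the first
    # failure and accept all remaining hypotheses.
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break
    return reject
```

    Features whose corrected test is *not* rejected (distributions at 1.5 T and 3.0 T are statistically indistinguishable) are the candidates for being robust across field strengths.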

  7. Automated detection of microaneurysms using robust blob descriptors

    Science.gov (United States)

    Adal, K.; Ali, S.; Sidibé, D.; Karnowski, T.; Chaum, E.; Mériaudeau, F.

    2013-03-01

    Microaneurysms (MAs) are among the first signs of diabetic retinopathy (DR) and can be seen as round dark-red structures in digital color fundus photographs of the retina. In recent years, automated computer-aided detection and diagnosis (CAD) of MAs has attracted many researchers due to its low-cost and versatile nature. In this paper, the MA detection problem is modeled as finding interest points in a given image, and several interest point descriptors are introduced and integrated with machine learning techniques to detect MAs. The proposed approach starts by applying a novel fundus image contrast enhancement technique using Singular Value Decomposition (SVD) of fundus images. Then, a Hessian-based candidate selection algorithm is applied to extract image regions that are more likely to be MAs. For each candidate region, robust low-level blob descriptors such as Speeded Up Robust Features (SURF) and the Intensity Normalized Radon Transform are extracted to characterize candidate MA regions. The combined features are then classified using an SVM, which has been trained on ten manually annotated training images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. Preliminary results show the competitiveness of the proposed candidate selection techniques against state-of-the-art methods, as well as the promise of the proposed descriptors for the localization of MAs in fundus images.

  8. A Novel Secure Image Hashing Based on Reversible Watermarking for Forensic Analysis

    OpenAIRE

    Doyoddorj, Munkhbaatar; Rhee, Kyung-Hyune

    2011-01-01

    Part 2: Workshop; International audience; Nowadays, digital images and videos have become increasingly popular over the Internet and bring great social impact to a wide audience. In the meanwhile, technology advancement allows people to easily alter the content of digital multimedia and brings serious concern on the trustworthiness of online multimedia information. In this paper, we propose a new framework for multimedia forensics by using compact side information based on reversible watermarking…

  9. Protection of Mobile Agents Execution Using a Modified Self-Validating Branch-Based Software Watermarking with External Sentinel

    Science.gov (United States)

    Tomàs-Buliart, Joan; Fernández, Marcel; Soriano, Miguel

    Critical infrastructures are usually controlled by software entities. To monitor the correct functioning of these entities, a solution based on the use of mobile agents is proposed. Some proposals to detect modifications of mobile agents, such as digital signature of code, exist, but they are oriented towards protecting software against modification or verifying that an agent has been executed correctly. The aim of our proposal is to guarantee that the software is being executed correctly by a non-trusted host. We achieve this objective by improving the Self-Validating Branch-Based Software Watermarking by Myles et al. The proposed modification is the incorporation of an external element, called a sentinel, which controls branch targets. Applied to mobile agents, this technique can guarantee the correct operation of an agent or, at least, detect suspicious behaviour of a malicious host during the execution of the agent, instead of only after the execution of the agent has finished.

  10. ASIST 2001. Information in a Networked World: Harnessing the Flow. Part III: Poster Presentations.

    Science.gov (United States)

    Proceedings of the ASIST Annual Meeting, 2001

    2001-01-01

    Topics of Poster Presentations include: electronic preprints; intranets; poster session abstracts; metadata; information retrieval; watermark images; video games; distributed information retrieval; subject domain knowledge; data mining; information theory; course development; historians' use of pictorial images; information retrieval software;…

  11. Dermatological image search engines on the Internet: do they work?

    Science.gov (United States)

    Cutrone, M; Grimalt, R

    2007-02-01

    Atlases on CD-ROM were the first to replace paediatric dermatology atlases printed on paper. This permitted faster searches and practical comparison of differential diagnoses. The third step in the evolution of clinical atlases was the advent of the online atlas. Many doctors now use Internet image search engines to obtain clinical images directly. The aim of this study was to test the reliability of image search engines compared to online atlases. We tested seven Internet image search engines with three paediatric dermatology diseases. In general, the service offered by the search engines is good, and it continues to be free of charge. The match between what we searched for and what we found was generally excellent, with no advertisements. Most Internet search engines provided similar results, but some were more user-friendly than others. It is not necessary to repeat the same search with Picsearch, Lycos and MSN, as the responses would be the same; they may share software. Image search engines are a useful, free and precise method of obtaining paediatric dermatology images for teaching purposes. The matter of copyright remains to be resolved: what are the legal uses of these 'free' images, and how do we define 'teaching purposes'? New watermarking methods and encrypted electronic signatures might solve these problems and answer these questions.

  12. Robust water fat separated dual-echo MRI by phase-sensitive reconstruction.

    Science.gov (United States)

    Romu, Thobias; Dahlström, Nils; Leinhard, Olof Dahlqvist; Borga, Magnus

    2017-09-01

    The purpose of this work was to develop and evaluate a robust water-fat separation method for T1-weighted symmetric two-point Dixon data. A method for water-fat separation by phase unwrapping of the opposite-phase images using phase-sensitive reconstruction (PSR) is introduced. PSR consists of three steps: (1) identification of clusters of tissue voxels; (2) unwrapping of the phase in each cluster by solving Poisson's equation; and (3) finding the correct sign of each unwrapped opposite-phase cluster, so that the water and fat images are assigned the correct identities. Robustness was evaluated by counting the number of water-fat swap artifacts in a total of 733 image volumes. The method was also compared to commercial software. In the water-fat separated image volumes, the PSR method failed to unwrap the phase of one cluster and misclassified 10. One swap was observed in areas affected by motion and was confined to the affected area. Twenty swaps were observed surrounding susceptibility artifacts, none of which spread outside the artifact-affected regions. The PSR method had fewer swaps than the commercial software. The PSR method can robustly produce water-fat separated whole-body images based on symmetric two-echo spoiled gradient echo images, under both ideal conditions and in the presence of common artifacts. Magn Reson Med 78:1208-1216, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  13. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    Science.gov (United States)

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector, and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l × m-dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. The alternating direction method of multipliers (ADMM) is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
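    For intuition, the classic singular value thresholding operator (the proximal step for the nuclear norm, i.e., the Schatten 1-norm) looks as follows; the paper's two operators generalize this idea to Schatten p-norms, so this is a sketch of the special case, not the authors' exact operators.

```python
import numpy as np

def svt(M: np.ndarray, tau: float) -> np.ndarray:
    # Singular value thresholding: soft-threshold each singular value by tau,
    # keeping the singular vectors. This is the closed-form minimizer of
    # tau*||X||_* + 0.5*||X - M||_F^2.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

    Small singular values (typically noise) are zeroed out, while the dominant structure of M is shrunk but preserved.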

  14. Radiometric Normalization of Temporal Images Combining Automatic Detection of Pseudo-Invariant Features from the Distance and Similarity Spectral Measures, Density Scatterplot Analysis, and Robust Regression

    Directory of Open Access Journals (Sweden)

    Ana Paula Ferreira de Carvalho

    2013-05-01

    Full Text Available Radiometric precision is difficult to maintain in orbital images due to several factors (atmospheric conditions, Earth-sun distance, detector calibration, illumination, and viewing angles). These unwanted effects must be removed for radiometric consistency among temporal images, leaving only land-leaving radiances, for optimum change detection. A variety of relative radiometric correction techniques have been developed for the correction or rectification of images of the same area through the use of reference targets whose reflectance does not change significantly with time, i.e., pseudo-invariant features (PIFs). This paper proposes a new technique for radiometric normalization, which uses three sequential methods for accurate PIF selection: spectral measures of temporal data (spectral distance and similarity), density scatter plot analysis (ridge method), and robust regression. The spectral measures used are the spectral angle (Spectral Angle Mapper, SAM), spectral correlation (Spectral Correlation Mapper, SCM), and Euclidean distance. The spectral measures between the spectra at times t1 and t2 are calculated for each pixel. After classification using threshold values, it is possible to define points with the same spectral behavior, including PIFs. The distance and similarity measures are complementary and can be calculated together. The ridge method uses a density plot generated from images acquired on different dates for the selection of PIFs. In a density plot, the invariant pixels together form a high-density ridge, while variant pixels (clouds and land cover changes) are spread out with low density, facilitating their exclusion. Finally, the selected PIFs are subjected to a robust regression (M-estimate) between pairs of temporal bands for the detection and elimination of outliers and to obtain the optimal linear equation for a given set of target points. The robust regression is insensitive to outliers, i.e., observations that appear to deviate…
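    The three spectral measures named above have standard closed forms; a minimal sketch (function names are mine, spectra given as equal-length sequences of band values):

```python
import math

def spectral_angle(a, b):
    # SAM: angle (radians) between two spectra viewed as vectors;
    # 0 means identical shape regardless of overall brightness.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def spectral_correlation(a, b):
    # SCM: Pearson correlation between spectra; 1 means the same shape
    # after removing each spectrum's mean (so it also ignores offsets).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den

def euclidean(a, b):
    # Plain spectral distance: sensitive to both shape and magnitude changes.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

    A pixel whose t1 and t2 spectra score near zero angle, near unit correlation, and small Euclidean distance is a candidate PIF.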

  15. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histograms, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.

  16. Robust real-time pattern matching using bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real-time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm, which quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm's performance is excellent. Implemented on a 3 GHz Pentium 4 processor, detection of a pattern with 2197 pixels in 640 x 480 pixel frames, where in each frame the pattern is rotated and highly occluded, proceeds at only 0.022 seconds per frame.
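    The core ideas can be sketched as follows. The first function is a simple member of the Image Hamming Distance Family (counting pixels that differ by more than a tolerance); the second is only an illustrative fixed-budget early-stopping rule, far simpler than the paper's optimal Bayesian stopping rule; all names are mine.

```python
import random

def hamming_distance(a, b, tol=0):
    # Count pixels whose absolute difference exceeds a tolerance; the
    # tolerance makes the measure robust to small illumination changes.
    return sum(1 for x, y in zip(a, b) if abs(x - y) > tol)

def sequential_match(a, b, max_mismatch, rng=random):
    # Inspect pixels in random order and stop as soon as the mismatch
    # budget is exceeded: dissimilar pairs are rejected after very few
    # samples, which is the source of the speed-up.
    mismatches = 0
    idx = list(range(len(a)))
    rng.shuffle(idx)
    for i in idx:
        if a[i] != b[i]:
            mismatches += 1
            if mismatches > max_mismatch:
                return False
    return True
```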

  17. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.

  18. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e., matching one image against N, where N is the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including pre- and post-processing. In trial 1:N identification experiments using FARCO, we achieved low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much greater robustness can be achieved under various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for FARCO that employs a temporal sequence of moving images. Applied to natural postures, this algorithm achieved a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, or handling very large numbers of images in a database.

  19. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    Science.gov (United States)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains, such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over some state-of-the-art shadow detection methods, with an average F-measure of 96%.
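    The object-based majority analysis step can be sketched as a simple vote (the C4 index itself is not reproduced here; the function name and data layout are mine, with each pixel carrying a segment id and a pixel-level shadow flag):

```python
def object_majority(object_ids, shadow_mask):
    # Object-based majority analysis: an object is labelled shadow when more
    # than half of its pixels are flagged by the pixel-level shadow mask.
    counts, flagged = {}, {}
    for oid, is_shadow in zip(object_ids, shadow_mask):
        counts[oid] = counts.get(oid, 0) + 1
        flagged[oid] = flagged.get(oid, 0) + (1 if is_shadow else 0)
    return {oid for oid in counts if flagged[oid] * 2 > counts[oid]}
```

    Voting per segment suppresses isolated pixel-level false positives, which is the point of moving from pixel-level to object-level decisions.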

  20. Robust Circle Detection Using Harmony Search

    Directory of Open Access Journals (Sweden)

    Jaco Fourie

    2017-01-01

    Full Text Available Automatic circle detection is an important element of many image processing algorithms. Traditionally, the Hough transform has been used to find circular objects in images, but more modern approaches that make use of heuristic optimisation techniques have been developed. These are often used in large, complex images where the presence of noise or limited computational resources makes the Hough transform impractical. Previous research on the use of Harmony Search (HS) in circle detection showed that HS is an attractive alternative to many of the modern circle detectors based on heuristic optimisers such as genetic algorithms and simulated annealing. We propose improvements to this work that enable our algorithm to robustly find multiple circles in larger data sets and still work on realistic images that are heavily corrupted by noisy edges.
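    In heuristic circle detectors of this kind, each candidate circle (cx, cy, r) is typically scored by how much of its perimeter lies on detected edge pixels, and the optimiser maximises that score. A minimal sketch of such a fitness function (names and parameters are mine, not taken from the paper):

```python
import math

def circle_fitness(edge_set, cx, cy, r, n_points=64):
    # Fraction of sampled perimeter points that land on edge pixels;
    # a perfect, fully visible circle scores 1.0.
    hits = 0
    for k in range(n_points):
        t = 2 * math.pi * k / n_points
        p = (round(cx + r * math.cos(t)), round(cy + r * math.sin(t)))
        if p in edge_set:
            hits += 1
    return hits / n_points
```

    Harmony Search then evolves a memory of candidate (cx, cy, r) triples towards high-fitness circles; multiple circles can be found by removing the edge pixels of each detected circle and re-running.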

  1. A robust background regression based score estimation algorithm for hyperspectral anomaly detection

    Science.gov (United States)

    Zhao, Rui; Du, Bo; Zhang, Liangpei; Zhang, Lefei

    2016-12-01

    Anomaly detection has become a hot topic in the hyperspectral image analysis and processing fields in recent years. The most important issue for hyperspectral anomaly detection is the background estimation and suppression. Unreasonable or non-robust background estimation usually leads to unsatisfactory anomaly detection results. Furthermore, the inherent nonlinearity of hyperspectral images may cover up the intrinsic data structure in the anomaly detection. In order to implement robust background estimation, as well as to explore the intrinsic data structure of the hyperspectral image, we propose a robust background regression based score estimation algorithm (RBRSE) for hyperspectral anomaly detection. The Robust Background Regression (RBR) is actually a label assignment procedure which segments the hyperspectral data into a robust background dataset and a potential anomaly dataset with an intersection boundary. In the RBR, a kernel expansion technique, which explores the nonlinear structure of the hyperspectral data in a reproducing kernel Hilbert space, is utilized to formulate the data as a density feature representation. A minimum squared loss relationship is constructed between the data density feature and the corresponding assigned labels of the hyperspectral data, to formulate the foundation of the regression. Furthermore, a manifold regularization term which explores the manifold smoothness of the hyperspectral data, and a maximization term of the robust background average density, which suppresses the bias caused by the potential anomalies, are jointly appended in the RBR procedure. After this, a paired-dataset based k-nn score estimation method is undertaken on the robust background and potential anomaly datasets, to implement the detection output. The experimental results show that RBRSE achieves superior ROC curves, AUC values, and background-anomaly separation compared to some of the other state-of-the-art anomaly detection methods, and is easy to implement.

  2. Biometric feature embedding using robust steganography technique

    Science.gov (United States)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects, such as images, over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image's LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of secret messages. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.
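    One plausible reading of the witness idea can be sketched as follows. This is my interpretation for illustration, not the authors' exact scheme: the cover's LSB plane is left untouched, and the second LSB records whether the cover LSB already equals the secret bit, so the receiver knows when to flip.

```python
def embed_witness(cover, bits):
    # Hypothetical LSB-Witness sketch: keep each pixel's LSB as-is and set
    # the 2nd LSB to 1 iff the LSB already equals the secret bit.
    stego = []
    for pixel, bit in zip(cover, bits):
        witness = 1 if (pixel & 1) == bit else 0
        stego.append((pixel & ~2) | (witness << 1))
    return stego

def extract_witness(stego):
    # Recover each bit: take the LSB as-is when the witness is set,
    # otherwise take its complement.
    return [(p & 1) if (p >> 1) & 1 else 1 - (p & 1) for p in stego]
```

    Because the LSB plane keeps its natural cover statistics, PoV/RS-style tests that look for LSB-plane tampering have nothing to detect, at the cost of modifying the second LSB plane instead.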

  3. Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.

    Science.gov (United States)

    Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun

    2017-07-01

    In recent years, taking photos and capturing videos with mobile devices has become increasingly popular. Emerging applications based on depth reconstruction techniques have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult still due to unstable image quality and uncontrolled scene conditions in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. In particular, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues and to constrain the stereo matching at a finer scale. The proposed framework has been evaluated on challenging indoor and outdoor scenes, and achieves robust and efficient depth reconstruction.

  4. Robust Imaging Methodology for Challenging Environments: Wave Equation Dispersion Inversion of Surface Waves

    KAUST Repository

    Li, Jing

    2017-12-22

    A robust imaging technology that provides subsurface information in challenging environments is reviewed: wave-equation dispersion inversion (WD) of surface waves for the shear velocity model. We demonstrate the benefits and liabilities of the method with synthetic seismograms and field data. The benefits of WD are that (1) there is no layered-medium assumption, as there is in conventional inversion of dispersion curves, so that the 2D or 3D S-velocity model can be reliably obtained with seismic surveys over rugged topography, and (2) WD mostly avoids getting stuck in local minima. The synthetic and field data examples demonstrate that WD can accurately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic media and to the inversion of dispersion curves associated with Love waves. The liability is that it is almost as expensive as full waveform inversion (FWI) and only recovers the Vs distribution to a depth of no more than about one-half to one-third of a wavelength.

  5. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  6. Robust finger vein ROI localization based on flexible segmentation.

    Science.gov (United States)

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.

  7. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    Directory of Open Access Journals (Sweden)

    Dong Sun Park

    2013-10-01

    Full Text Available Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system.

  8. Robust Finger Vein ROI Localization Based on Flexible Segmentation

    Science.gov (United States)

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-01-01

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition and thus degrade the performance of a finger vein identification system. To address this problem, we propose in this paper a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database MMCBNU_6000 to verify the robustness of the proposed method. The proposed method achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the criterion of a real-time finger vein identification system. PMID:24284769

  9. Adaptive local thresholding for robust nucleus segmentation utilizing shape priors

    Science.gov (United States)

    Wang, Xiuzhong; Srinivas, Chukka

    2016-03-01

    This paper describes a novel local thresholding method for foreground detection. First, a Canny edge detection method is used for initial edge detection. Then, tensor voting is applied on the initial edge pixels, using a nonsymmetric tensor field tailored to encode prior information about nucleus size, shape, and intensity spatial distribution. Tensor analysis is then performed to generate the saliency image and, based on that, the refined edge. Next, the image domain is divided into blocks. In each block, at least one foreground and one background pixel are sampled for each refined edge pixel. The saliency weighted foreground histogram and background histogram are then created. These two histograms are used to calculate a threshold by minimizing the background and foreground pixel classification error. The block-wise thresholds are then used to generate the threshold for each pixel via interpolation. Finally, the foreground is obtained by comparing the original image with the threshold image. The effective use of prior information, combined with robust techniques, results in far more reliable foreground detection, which leads to robust nucleus segmentation.
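    The block-wise thresholding and interpolation steps above can be sketched compactly. This toy version substitutes a per-block mean for the paper's saliency-weighted histogram criterion, and the block size is an arbitrary choice; it illustrates only how block thresholds are bilinearly interpolated into a per-pixel threshold map:

```python
import numpy as np

def local_threshold(img, block=16):
    """Block-wise thresholds (here: per-block mean, standing in for the
    histogram-based criterion), bilinearly interpolated to a per-pixel
    threshold map. Requires at least a 2 x 2 grid of blocks."""
    h, w = img.shape
    nby, nbx = h // block, w // block
    # one threshold per block
    t = img[:nby * block, :nbx * block].reshape(nby, block, nbx, block).mean(axis=(1, 3))
    # block-centre coordinates
    yc = (np.arange(nby) + 0.5) * block
    xc = (np.arange(nbx) + 0.5) * block
    # bilinear interpolation of the block grid to every pixel
    yy = np.clip(np.arange(h), yc[0], yc[-1])
    xx = np.clip(np.arange(w), xc[0], xc[-1])
    iy = np.clip(np.searchsorted(yc, yy) - 1, 0, nby - 2)
    ix = np.clip(np.searchsorted(xc, xx) - 1, 0, nbx - 2)
    wy = ((yy - yc[iy]) / block)[:, None]
    wx = ((xx - xc[ix]) / block)[None, :]
    tmap = ((1 - wy) * (1 - wx) * t[iy][:, ix] + (1 - wy) * wx * t[iy][:, ix + 1]
            + wy * (1 - wx) * t[iy + 1][:, ix] + wy * wx * t[iy + 1][:, ix + 1])
    return img > tmap  # foreground mask
```

    Interpolating the thresholds avoids the blocking artifacts that a hard per-block threshold would produce at block boundaries.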

  10. Handling Occlusions for Robust Augmented Reality Systems

    Directory of Open Access Journals (Sweden)

    Maidi Madjid

    2010-01-01

    Full Text Available In Augmented Reality applications, human perception is enhanced with computer-generated graphics. These graphics must be exactly registered to real objects in the scene, and this requires an effective Augmented Reality system to track the user's viewpoint. In this paper, a robust tracking algorithm based on coded fiducials is presented. Square targets are identified and pose parameters are computed using a hybrid approach based on a direct method combined with the Kalman filter. An important factor for providing a robust Augmented Reality system is the correct handling of target occlusions by real scene elements. To overcome tracking failure due to occlusions, we extend our method using an optical flow approach to track visible points and maintain the virtual graphics overlay when targets are not identified. Our proposed real-time algorithm is tested with different camera viewpoints under various image conditions and is shown to be accurate and robust.

  11. Robust tumor morphometry in multispectral fluorescence microscopy

    Science.gov (United States)

    Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2009-02-01

    Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.

  12. The value of museum communication: the cases of the Paper and Watermark Museum in Fabriano and the Ascoli Piceno Papal Paper Mill Museum in Ascoli Piceno

    Directory of Open Access Journals (Sweden)

    Patrizia Dragoni

    2017-12-01

    Full Text Available Guaranteeing the survival of cultural heritage, increasing its accessibility, both physical and intellectual, and the creation of countless benefits for different categories of stakeholders depend both on a perfect comprehension of the interests and abilities of users to take advantage of what is offered and, above all, on identifying and analysing the various types of value that can be attributed to it. According to Montella, there are three types of value that may be analysed for this purpose: a presentation value, informative in nature and inherent in the historical, cultural and possibly artistic value implicit in the heritage; a landscape value, extended to the context, inherent in the factual information services aimed at supporting policies of preventive and programmed conservation; and a production value, commercial in nature, which concerns the external effects generated by cultural heritage management, qualifying the products and images of businesses so that they stand out from the competition. The aim of this article is to inquire into whether, in what way and to what extent the communication of the Paper and Watermark Museum in Fabriano and the Ascoli Piceno Papal Paper Mill Museum in Ascoli Piceno creates presentation value and therefore leads the public to understand how far paper production has influenced the economic and socio-cultural history of the area in which they are located.

  13. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. 
The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV action camera which facilitated capturing high-resolution geo-tagged images

  14. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. 
The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs

  15. An efficient sequence for fetal brain imaging at 3T with enhanced T1 contrast and motion robustness.

    Science.gov (United States)

    Ferrazzi, Giulio; Price, Anthony N; Teixeira, Rui Pedro A G; Cordero-Grande, Lucilio; Hutter, Jana; Gomes, Ana; Padormo, Francesco; Hughes, Emer; Schneider, Torben; Rutherford, Mary; Kuklisova Murgasova, Maria; Hajnal, Joseph V

    2018-07-01

    Ultrafast single-shot T2-weighted images are common practice in fetal MR exams. However, there is limited experience with fetal T1-weighted acquisitions. This study aims to establish a robust framework that allows fetal T1-weighted scans to be routinely acquired in utero at 3T. A 2D gradient echo sequence with an adiabatic inversion was optimized to be robust to fetal motion and maternal breathing while simultaneously optimizing grey/white matter contrast. This was combined with slice-to-volume registration and super-resolution methods to produce volumetric reconstructions. The sequence was tested on 22 fetuses. Optimized grey/white matter contrast and robustness to fetal motion and maternal breathing were achieved. Signal from cerebrospinal fluid (CSF) and amniotic fluid was nulled, and 0.75 mm isotropic anatomical reconstructions of the fetal brain were obtained using slice-to-volume registration and super-resolution techniques. Total acquisition time for a single stack was 56 s, all acquired during free breathing. Enhanced sensitivity to normal anatomy and pathology with respect to established methods is demonstrated. A direct comparison with a 3D spoiled gradient echo sequence and a controlled motion experiment run on an adult volunteer are also shown. This paper describes a robust framework to perform T1-weighted acquisitions and reconstructions of the fetal brain in utero. Magn Reson Med 80:137-146, 2018. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

  16. Iris recognition based on robust principal component analysis

    Science.gov (United States)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  17. Approximate truncation robust computed tomography—ATRACT

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Maier, Andreas

    2013-01-01

    We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm targets the reconstruction of volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed, and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)

  18. Robust obstacle detection for unmanned surface vehicles

    Science.gov (United States)

    Qin, Yueming; Zhang, Xiuzhi

    2018-03-01

    Obstacle detection is of essential importance for Unmanned Surface Vehicles (USVs). Although some obstacles (e.g., ships, islands) can be detected by radar, there are many other obstacles (e.g., floating pieces of wood, swimmers) which are difficult to detect via radar because they have a low radar cross section. Therefore, detecting obstacles from images taken onboard is an effective supplement. In this paper, a robust vision-based obstacle detection method for USVs is developed. The proposed method employs the monocular image sequence captured by the camera on the USV and detects obstacles on the sea surface from the image sequence. The experiment results show that the proposed scheme is effective at fulfilling the obstacle detection task.

  19. Robust optimization methods for cardiac sparing in tangential breast IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudzadeh, Houra, E-mail: houra@mie.utoronto.ca [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8 (Canada); Lee, Jenny [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Chan, Timothy C. Y. [Mechanical and Industrial Engineering Department, University of Toronto, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada); Purdie, Thomas G. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, Toronto, Ontario M5G 1P5 (Canada)

    2015-05-15

    Purpose: In left-sided tangential breast intensity modulated radiation therapy (IMRT), the heart may enter the radiation field and receive excessive radiation while the patient is breathing. The patient’s breathing pattern is often irregular and unpredictable. We verify the clinical applicability of a heart-sparing robust optimization approach for breast IMRT. We compare robust optimized plans with clinical plans at free-breathing and clinical plans at deep inspiration breath-hold (DIBH) using active breathing control (ABC). Methods: Eight patients were included in the study with each patient simulated using 4D-CT. The 4D-CT image acquisition generated ten breathing phase datasets. An average scan was constructed using all the phase datasets. Two of the eight patients were also imaged at breath-hold using ABC. The 4D-CT datasets were used to calculate the accumulated dose for robust optimized and clinical plans based on deformable registration. We generated a set of simulated breathing probability mass functions, which represent the fraction of time patients spend in different breathing phases. The robust optimization method was applied to each patient using a set of dose-influence matrices extracted from the 4D-CT data and a model of the breathing motion uncertainty. The goal of the optimization models was to minimize the dose to the heart while ensuring dose constraints on the target were achieved under breathing motion uncertainty. Results: Robust optimized plans were improved or equivalent to the clinical plans in terms of heart sparing for all patients studied. The robust method reduced the accumulated heart dose (D10cc) by up to 801 cGy compared to the clinical method while also improving the coverage of the accumulated whole breast target volume. On average, the robust method reduced the heart dose (D10cc) by 364 cGy and improved the optBreast dose (D99%) by 477 cGy. In addition, the robust method had smaller deviations from the planned dose to the

  20. Rhesus monkeys (Macaca mulatta) show robust primacy and recency in memory for lists from small, but not large, image sets.

    Science.gov (United States)

    Basile, Benjamin M; Hampton, Robert R

    2010-02-01

    The combination of primacy and recency produces a U-shaped serial position curve typical of memory for lists. In humans, primacy is often thought to result from rehearsal, but there is little evidence for rehearsal in nonhumans. To further evaluate the possibility that rehearsal contributes to primacy in monkeys, we compared memory for lists of familiar stimuli (which may be easier to rehearse) to memory for unfamiliar stimuli (which are likely difficult to rehearse). Six rhesus monkeys saw lists of five images drawn from either large, medium, or small image sets. After presentation of each list, memory for one item was assessed using a serial probe recognition test. Across four experiments, we found robust primacy and recency with lists drawn from small and medium, but not large, image sets. This finding is consistent with the idea that familiar items are easier to rehearse and that rehearsal contributes to primacy, warranting further study of the possibility of rehearsal in monkeys. However, alternative interpretations are also viable and are discussed. Copyright 2009 Elsevier B.V. All rights reserved.

  1. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
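    As a concrete illustration of exposure-robust alignment (a classic baseline, not necessarily one of the three techniques the paper evaluates), the median-threshold-bitmap approach thresholds each exposure at its own median, which is invariant to monotonic intensity changes, and then searches integer shifts that minimize the XOR count between the bitmaps. The real algorithm runs hierarchically over an image pyramid; this sketch searches a small radius exhaustively:

```python
import numpy as np

def mtb(img, tol=4):
    """Median-threshold bitmap plus an exclusion mask for pixels whose
    value lies within `tol` of the median (those flip unreliably)."""
    med = np.median(img)
    return img > med, np.abs(img - med) > tol

def align_mtb(ref, mov, radius=8):
    """Return the (dy, dx) shift of `mov` that best matches `ref`,
    found by exhaustive search over XOR counts of the two bitmaps."""
    bref, mref = mtb(ref)
    bmov, mmov = mtb(mov)
    best_err, best_shift = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sb = np.roll(bmov, (dy, dx), axis=(0, 1))
            sm = np.roll(mmov, (dy, dx), axis=(0, 1))
            # count disagreements only where both masks are reliable
            err = np.count_nonzero((bref ^ sb) & mref & sm)
            if best_err is None or err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift
```

    Because the median is preserved under any monotonic intensity mapping, the bitmaps of a short and a long exposure of the same scene agree almost everywhere, which is what makes this criterion robust to large exposure differences.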

  2. Simple and robust image-based autofocusing for digital microscopy.

    Science.gov (United States)

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.

  3. Extended families of 2D arrays with near optimal auto and low cross-correlation

    Science.gov (United States)

    Svalbe, I. D.; Tirkel, A. Z.

    2017-12-01

    Families of 2D arrays can be constructed where each array has perfect autocorrelation, and the cross-correlation between any pair of family members is optimally low. We exploit equivalent Hadamard matrices to construct many families of p arrays of size p × p, where p is any 4k − 1 prime. From these families, we assemble extended families of arrays with members that exhibit perfect autocorrelation and next-to-optimally low cross-correlation. Pseudo-Hadamard matrices are used to construct extended families using p = 4k + 1 primes. An optimal family of 31 perfect 31 × 31 arrays can provide copyright protection to uniquely stamp a robust, low-visibility watermark within every frame of each second of high-definition, 30 fps video. The extended families permit the embedding of many more perfect watermarks that have next-to-minimal cross-correlations.
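    Perfect periodic autocorrelation is easy to verify numerically, since it is equivalent to the array having a flat power spectrum. A small check using the 2 × 2 perfect binary array [[1, 1], [1, −1]] (the p × p Hadamard-based families from the paper are not reproduced here):

```python
import numpy as np

def periodic_xcorr(a, b):
    """2D periodic (cyclic) cross-correlation computed via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

# Smallest perfect binary array: every off-peak periodic autocorrelation is zero.
pba = np.array([[1, 1], [1, -1]], dtype=float)
acf = periodic_xcorr(pba, pba)
```

    The same routine measures cross-correlation between two different family members, which is how a detector would distinguish one embedded watermark from another.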

  4. Information Theoretical Analysis of Identification based on Active Content Fingerprinting

    OpenAIRE

    Farhadzadeh, Farzad; Willems, Frans M. J.; Voloshinovskiy, Sviatoslav

    2014-01-01

    Content fingerprinting and digital watermarking are techniques that are used for content protection and distribution monitoring. Over the past few years, both techniques have been well studied and their shortcomings understood. Recently, a new content fingerprinting scheme called active content fingerprinting was introduced to overcome these shortcomings. Active content fingerprinting aims to modify content to extract more robust fingerprints than conventional content fingerprinting....

  5. New approaches for development, analyzing and security of multimedia archive of folklore objects

    Directory of Open Access Journals (Sweden)

    Galina Bogdanova

    2008-07-01

    Full Text Available We present new approaches used in the development of the demo version of a web-based client/server system that contains an archival fund with folklore materials of the Folklore Institute at the Bulgarian Academy of Sciences (BAS). Some new methods for image and text security, used to embed watermarks in system data, are presented. A digital watermark is a visible or perfectly invisible identification code that is permanently embedded in the data and remains present within the data after any decryption process. We have also developed improved tools and algorithms for analyzing the database.

  6. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbations in parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior approach based on multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper results in better performance.
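    The regularization-based idea mentioned above can be illustrated with a Tikhonov-style robust least-squares sketch: minimizing ||Ax − b||² + ρ²||x||² damps the solution's sensitivity to perturbations in A. The ill-conditioned system below is a contrived example, not one of the paper's test problems:

```python
import numpy as np

def least_squares(a, b):
    """Ordinary least squares: arbitrarily sensitive when A is ill-conditioned."""
    return np.linalg.lstsq(a, b, rcond=None)[0]

def robust_ls(a, b, rho):
    """Tikhonov-regularized surrogate for worst-case residual minimization:
    argmin_x ||Ax - b||^2 + rho^2 ||x||^2, solved in closed form via the
    regularized normal equations."""
    n = a.shape[1]
    return np.linalg.solve(a.T @ a + rho**2 * np.eye(n), a.T @ b)
```

    With A = diag(1, 1e-6) and b = (1, 1), ordinary least squares returns a huge second component that swings wildly when the tiny entry of A is perturbed, while the regularized solution stays near (1, 0) and barely moves.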

  7. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Tominaga Shoji

    2008-01-01

    Full Text Available Abstract The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  8. Color in Image and Video Processing: Most Recent Trends and Future Research Directions

    Directory of Open Access Journals (Sweden)

    Konstantinos N. Plataniotis

    2008-05-01

    Full Text Available The motivation of this paper is to provide an overview of the most recent trends and of the future research directions in color image and video processing. Rather than covering all aspects of the domain this survey covers issues related to the most active research areas in the last two years. It presents the most recent trends as well as the state-of-the-art, with a broad survey of the relevant literature, in the main active research areas in color imaging. It also focuses on the most promising research areas in color imaging science. This survey gives an overview about the issues, controversies, and problems of color image science. It focuses on human color vision, perception, and interpretation. It focuses also on acquisition systems, consumer imaging applications, and medical imaging applications. Next it gives a brief overview about the solutions, recommendations, most recent trends, and future trends of color image science. It focuses on color space, appearance models, color difference metrics, and color saliency. It focuses also on color features, color-based object tracking, scene illuminant estimation and color constancy, quality assessment and fidelity assessment, color characterization and calibration of a display device. It focuses on quantization, filtering and enhancement, segmentation, coding and compression, watermarking, and lastly on multispectral color image processing. Lastly, it addresses the research areas which still need addressing and which are the next and future perspectives of color in image and video processing.

  9. Robustness of an artificially tailored fisheye imaging system with a curvilinear image surface

    Science.gov (United States)

    Lee, Gil Ju; Nam, Won Il; Song, Young Min

    2017-11-01

    Curved image sensors inspired by animal and insect eyes have provided a new development direction for next-generation digital cameras. Natural fish eyes afford extremely wide field-of-view (FOV) imaging owing to the geometrical properties of the spherical lens and hemispherical retina. However, inherent drawbacks, such as low off-axis illumination and the difficulty of fabricating a 'dome-like' hemispherical imager, have limited the development of bio-inspired wide-FOV cameras. Here, a new type of fisheye imaging system is introduced that has a simple lens configuration with a curvilinear image surface while maintaining high off-axis illumination and a wide FOV. Moreover, comparisons with commercial conventional fisheye designs show that the volume and required number of optical elements of the proposed design are practical while preserving the fundamental optical performance. Detailed design guidelines for tailoring the proposed optical system are also discussed.

  10. Robust Pose Estimation using the SwissRanger SR-3000 Camera

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2007-01-01

    In this paper a robust method is presented to classify and estimate an object's pose from a real-time range image and a low-dimensional model. The model is made from a range-image training set whose dimensionality is reduced by a nonlinear manifold learning method named Local Linear Embedding (LLE). New range images are then projected to this model, giving the low-dimensional coordinates of the object pose in an efficient manner. The range images are acquired by a state-of-the-art SwissRanger SR-3000 camera, making the projection process work in real time.
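The projection of a new range image onto a precomputed LLE model can be sketched with a standard out-of-sample extension: reconstruct the new sample from its k nearest training samples, then carry the same weights over to the training set's low-dimensional coordinates. The toy data, neighbour count, and regularization below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

# Out-of-sample LLE projection: express x_new as a convex-like combination
# of its k nearest training samples, then apply the same weights to the
# stored low-dimensional coordinates.
def lle_project(X_train, Y_train, x_new, k=3, reg=1e-3):
    d = np.linalg.norm(X_train - x_new, axis=1)
    nn = np.argsort(d)[:k]             # k nearest neighbours
    Z = X_train[nn] - x_new            # neighbours centred on the query
    G = Z @ Z.T + reg * np.eye(k)      # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                       # reconstruction weights sum to 1
    return w @ Y_train[nn]

# Synthetic "training set": high-D points on a line with known 1-D coordinates.
X_train = np.array([[0, 0], [1, 1], [2, 2], [3, 3]], dtype=float)
Y_train = np.array([0.0, 1.0, 2.0, 3.0])
y = lle_project(X_train, Y_train, np.array([1.5, 1.5]))
print(round(float(y), 3))
```

Because the weights are computed locally, the projection cost is small, which is what makes the real-time claim plausible.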

  11. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

    Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANETs. The proposed packet-based scheme has low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  12. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties observed in biological systems at many different levels. The underlying principles of robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, genetic robustness for buffering genetic variations, and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine, and is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found the following phenotype robustness criterion for synthetic gene networks: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through the corresponding phenotype robustness criteria from a systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental
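The criterion quoted above is a simple additive inequality, which a few lines make concrete. The numeric robustness values here are hypothetical, purely for illustration:

```python
# Minimal illustration of the phenotype robustness criterion: the phenotype
# is robust when the summed robustness demands do not exceed the available
# network robustness. All numbers below are hypothetical.
def phenotype_robust(intrinsic, genetic, environmental, network):
    return intrinsic + genetic + environmental <= network

assert phenotype_robust(0.2, 0.3, 0.1, 0.7)        # 0.6 <= 0.7: maintained
assert not phenotype_robust(0.4, 0.4, 0.3, 0.7)    # 1.1 >  0.7: violated
```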

  13. A ROBUST METHOD FOR STEREO VISUAL ODOMETRY BASED ON MULTIPLE EUCLIDEAN DISTANCE CONSTRAINT AND RANSAC ALGORITHM

    Directory of Open Access Journals (Sweden)

    Q. Zhou

    2017-07-01

    Full Text Available Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion from stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the left image at the next epoch is matched against the current left image, EDC and RANSAC are performed iteratively. In some cases mismatched points still remain after these steps, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior orientation and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
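The distance-constraint-plus-RANSAC mismatch filter can be sketched in a few lines. A 2D translation stands in here for the full ego-motion model, and the threshold, iteration count, and synthetic correspondences are illustrative assumptions:

```python
import numpy as np

# RANSAC with a Euclidean distance constraint: sample a minimal model,
# count correspondences whose residual distance is below a threshold,
# keep the consensus set, then refit on the inliers.
def ransac_translation(p1, p2, thresh=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))                # minimal sample: one correspondence
        t = p2[i] - p1[i]                        # candidate translation
        d = np.linalg.norm(p1 + t - p2, axis=1)  # Euclidean distance constraint
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (p2[best_inliers] - p1[best_inliers]).mean(axis=0)  # refit on inliers
    return t, best_inliers

rng = np.random.default_rng(1)
p1 = rng.uniform(0, 100, size=(50, 2))
p2 = p1 + np.array([3.0, -2.0])                  # true motion
p2[:5] += rng.uniform(20, 40, size=(5, 2))       # five gross mismatches

t, inliers = ransac_translation(p1, p2)
print(np.round(t, 2), int(inliers.sum()))
```

The real pipeline estimates a 6-DOF rigid motion rather than a translation, but the consensus logic is the same.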

  14. MPEG-4 AVC saliency map computation

    Science.gov (United States)

    Ammar, M.; Mitrea, M.; Hasnaoui, M.

    2014-02-01

    A saliency map provides information about the regions inside some visual content (image, video, ...) at which a human observer will spontaneously look. For saliency map computation, current research studies consider the uncompressed (pixel) representation of the visual content and extract various types of information (intensity, color, orientation, motion energy) which are then fused. This paper goes one step further and computes the saliency map directly from the MPEG-4 AVC stream syntax elements, with minimal decoding operations. In this respect, an in-depth a priori study of the MPEG-4 AVC syntax elements is first carried out so as to identify the entities that attract visual attention. Secondly, the MPEG-4 AVC reference software is extended with software tools allowing the parsing of these elements and their subsequent usage in objective benchmarking experiments. This way, it is demonstrated that an MPEG-4 saliency map can be given by a combination of static saliency and motion maps. This saliency map is experimentally validated under a robust watermarking framework. When included in an m-QIM (multiple-symbol Quantization Index Modulation) insertion method, average PSNR gains of 2.43 dB, 2.15 dB, and 2.37 dB are obtained for data payloads of 10, 20, and 30 watermarked blocks per I frame, i.e. about 30, 60, and 90 bits/second, respectively. These quantitative results are obtained by processing 2 hours of heterogeneous video content.
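The m-QIM insertion referenced above generalizes basic quantization index modulation. A minimal binary QIM sketch follows; the step size and the sample coefficients are hypothetical, and real systems would embed in transform coefficients of the stream:

```python
import numpy as np

DELTA = 8.0  # quantization step; illustrative value

def qim_embed(x, bit):
    # Each bit selects one of two interleaved quantizer lattices,
    # offset from each other by DELTA / 2.
    d = 0.0 if bit == 0 else DELTA / 2
    return DELTA * np.round((x - d) / DELTA) + d

def qim_extract(y):
    # Decode by picking the bit whose lattice lies closest to y.
    errs = [abs(y - qim_embed(y, b)) for b in (0, 1)]
    return int(np.argmin(errs))

coeffs = np.array([13.7, -5.2, 41.0, 0.3])  # e.g. DCT coefficients (hypothetical)
bits = [1, 0, 1, 0]
marked = np.array([qim_embed(c, b) for c, b in zip(coeffs, bits)])
recovered = [qim_extract(y) for y in marked]
assert recovered == bits
```

Because the lattices are DELTA / 2 apart, extraction survives any perturbation smaller than DELTA / 4, which is the robustness margin QIM-style watermarking trades against distortion.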

  15. Improving Shadow Suppression for Illumination Robust Face Recognition

    KAUST Repository

    Zhang, Wuming

    2017-10-13

    2D face analysis techniques, such as face landmarking, face recognition and face verification, are reasonably dependent on illumination conditions which are usually uncontrolled and unpredictable in the real world. An illumination robust preprocessing method thus remains a significant challenge in reliable face analysis. In this paper we propose a novel approach for improving lighting normalization through building the underlying reflectance model which characterizes interactions between skin surface, lighting source and camera sensor, and elaborates the formation of face color appearance. Specifically, the proposed illumination processing pipeline enables the generation of Chromaticity Intrinsic Image (CII) in a log chromaticity space which is robust to illumination variations. Moreover, as an advantage over most prevailing methods, a photo-realistic color face image is subsequently reconstructed which eliminates a wide variety of shadows whilst retaining the color information and identity details. Experimental results under different scenarios and using various face databases show the effectiveness of the proposed approach to deal with lighting variations, including both soft and hard shadows, in face recognition.
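The invariance underlying a log chromaticity space can be sketched in a few lines: per-pixel band ratios in log space cancel a common illumination scale factor, which is the property a Chromaticity Intrinsic Image construction relies on. The pixel values and the choice of green as the reference band below are illustrative assumptions:

```python
import numpy as np

# Minimal log-chromaticity sketch: log band ratios are unchanged when all
# channels are scaled by the same (shading / illumination) factor.
def log_chromaticity(rgb, eps=1e-6):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([np.log((r + eps) / (g + eps)),
                     np.log((b + eps) / (g + eps))], axis=-1)

pixel = np.array([[[0.6, 0.3, 0.1]]])   # one RGB pixel (hypothetical skin tone)
shaded = 0.25 * pixel                   # same surface under darker lighting
assert np.allclose(log_chromaticity(pixel), log_chromaticity(shaded), atol=1e-4)
```

The full method goes further (it models sensor and light-source interactions and reconstructs a color image), but this scale invariance is the core of the shadow robustness.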

  16. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    Science.gov (United States)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems with multiple azimuth channels show great promise for high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method which first utilizes the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to acquire the multichannel SAR processing filter. This novel method improves the processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
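The Capon spatial-spectrum step can be sketched for a generic uniform linear array; the element count, spacing, noise level, and source direction below are illustrative stand-ins, not parameters of the SAR system in the paper:

```python
import numpy as np

# Capon (MVDR) spatial spectrum: P(theta) = 1 / (a^H R^-1 a),
# evaluated over candidate directions for a uniform linear array.
def steering(n_elem, theta, d=0.5):
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def capon_spectrum(R, n_elem, thetas):
    Rinv = np.linalg.inv(R)
    return np.array([1.0 / np.real(steering(n_elem, t).conj() @ Rinv
                                   @ steering(n_elem, t)) for t in thetas])

n = 8
theta_src = 0.3                                  # true source direction (rad)
a = steering(n, theta_src)
R = np.outer(a, a.conj()) + 0.1 * np.eye(n)      # source + noise covariance

thetas = np.linspace(-1.0, 1.0, 201)
P = capon_spectrum(R, n, thetas)
print(float(thetas[np.argmax(P)]))               # spectrum peaks at the source
```

In the paper's method this spectrum, evaluated over the ambiguous directions, feeds the reconstruction of the interference-plus-noise covariance matrix; the sketch only shows the estimator itself.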

  17. 4D MR imaging using robust internal respiratory signal

    International Nuclear Information System (INIS)

    Hui, CheukKai; Wen, Zhifei; Beddar, Sam; Stemkens, Bjorn; Tijssen, R H N; Van den Berg, C A T; Hwang, Ken-Pin

    2016-01-01

    The purpose of this study is to investigate the feasibility of using internal respiratory (IR) surrogates to sort four-dimensional (4D) magnetic resonance (MR) images. The 4D MR images were constructed by acquiring fast 2D cine MR images sequentially, with each slice scanned for more than one breathing cycle. The 4D volume was then sorted retrospectively using the IR signal. In this study, we propose to use multiple low-frequency components in the Fourier space as well as the anterior body boundary as potential IR surrogates. From these potential IR surrogates, we used a clustering algorithm to identify those that best represented the respiratory pattern to derive the IR signal. A study with healthy volunteers was performed to assess the feasibility of the proposed IR signal. We compared this proposed IR signal with the respiratory signal obtained using respiratory bellows. Overall, 99% of the IR signals matched the bellows signals. The average difference between the end inspiration times in the IR signal and bellows signal was 0.18 s in this cohort of matching signals. For the acquired images corresponding to the other 1% of non-matching signal pairs, the respiratory motion shown in the images was coherent with the respiratory phases determined by the IR signal, but not the bellows signal. This suggested that the IR signal determined by the proposed method could potentially correct the faulty bellows signal. The sorted 4D images showed minimal mismatched artefacts and potential clinical applicability. The proposed IR signal therefore provides a feasible alternative to effectively sort MR images in 4D. (paper)
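The sorting idea can be sketched with the anterior-boundary surrogate mentioned above: track a boundary position per frame, normalize it, and bin frames by respiratory amplitude. The synthetic frames, the single surrogate, and the four-bin amplitude sorting are illustrative simplifications of the multi-surrogate clustering described in the paper:

```python
import numpy as np

# Sketch of retrospective 4D sorting with an internal respiratory surrogate.
def boundary_signal(frames):
    # Anterior body boundary: index of the first bright row in each frame.
    return np.array([np.argmax(f.sum(axis=1) > 0) for f in frames])

def sort_by_amplitude(frames, n_bins=4):
    s = boundary_signal(frames).astype(float)
    s = (s - s.min()) / (s.max() - s.min())          # normalize to [0, 1]
    return np.minimum((s * n_bins).astype(int), n_bins - 1)

# Synthetic "breathing": one bright band oscillating vertically over time.
t = np.linspace(0, 4 * np.pi, 40)
frames = []
for ti in t:
    f = np.zeros((16, 16))
    f[int(6 + 4 * np.sin(ti)), :] = 1.0
    frames.append(f)

bins = sort_by_amplitude(frames)                     # respiratory bin per frame
print(bins)
```

Frames sharing a bin index would be grouped into the same respiratory phase of the 4D volume; the paper additionally selects among multiple candidate surrogates by clustering, which this sketch omits.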

  18. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  19. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

    Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed for chest CT images only, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified technique. Radiologists visually confirmed whether the segmented region covered the body region. The images with and without preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR{sub I}) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR{sub I} were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without degrading diagnostic information.
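The preprocessing idea is easy to sketch: replace everything outside the segmented body with one constant and the lossless coder's job gets easier. In the sketch a fixed intensity threshold stands in for the paper's segmentation, zlib stands in for the JPEG/JPEG2000 lossless modes, and the image is synthetic:

```python
import zlib

import numpy as np

# Replace pixels outside the "body" with a constant to maximize redundancy,
# then compare lossless compressed sizes with and without the preprocessing.
def preprocess(img, thresh=100, fill=0):
    out = img.copy()
    out[img < thresh] = fill               # crude "outside body" mask
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 60, size=(64, 64)).astype(np.uint8)    # noisy background
img[16:48, 16:48] = rng.integers(120, 200, size=(32, 32))    # "body" region

raw = zlib.compress(img.tobytes(), 9)
pre = zlib.compress(preprocess(img).tobytes(), 9)
assert len(pre) < len(raw)   # preprocessing improves the lossless ratio
print(len(raw), len(pre))
```

The technique is lossless inside the body region, which is why segmentation correctness is the quantity the paper validates.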

  20. Robust and Effective Component-based Banknote Recognition by SURF Features.

    Science.gov (United States)

    Hasanuzzaman, Faiz M; Yang, Xiaodong; Tian, YingLi

    2011-01-01

    Camera-based computer vision technology can assist visually impaired people in automatically recognizing banknotes. A good banknote recognition algorithm for blind or visually impaired users should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and to occlusions. Most existing banknote recognition algorithms work only under restricted conditions. In this paper we propose a component-based framework for banknote recognition using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To validate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes under a variety of conditions, including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoint. The proposed algorithm achieves a 100% recognition rate on our challenging dataset.
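The descriptor-matching core of such a component-based pipeline can be sketched with the standard nearest-neighbour ratio test, which keeps only distinctive matches. The 64-dimensional vectors below are synthetic stand-ins for SURF descriptors, and the ratio threshold is an illustrative choice:

```python
import numpy as np

# Nearest-neighbour ratio test: accept a match only when the best candidate
# is clearly closer than the second best, suppressing ambiguous matches.
def match_ratio(desc1, desc2, ratio=0.7):
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:   # distinctive match only
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
desc2 = rng.normal(size=(20, 64))                       # reference descriptors
desc1 = desc2[[3, 7, 11]] + 0.01 * rng.normal(size=(3, 64))  # noisy queries
matches = match_ratio(desc1, desc2)
print(matches)
```

In the component-based framework, such matches are aggregated per banknote component, which is what gives robustness to partial occlusion: a few occluded components do not erase the remaining evidence.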