WorldWideScience

Sample records for based image denoising

  1. Wavelet Based Image Denoising Technique

    Directory of Open Access Journals (Sweden)

    Sachin D Ruikar

    2011-03-01

Full Text Available This paper proposes different approaches to wavelet-based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing tasks such as image compression and denoising. Multiwavelets can be considered an extension of scalar wavelets. The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results for different noise types, such as Gaussian, Poisson, salt-and-pepper, and speckle noise, are presented. The signal-to-noise ratio was preferred as the measure of denoising quality.
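The wavelet-shrinkage idea this abstract describes — transform, shrink the detail coefficients, invert — can be sketched in a few lines. This is a minimal one-level Haar/soft-thresholding illustration, not the authors' multiwavelet method:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    x = np.empty((2 * h, 2 * w))
    x[0::2, :] = a + d
    x[1::2, :] = a - d
    return x

def soft(c, t):
    """Soft-threshold coefficients c at level t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_haar_soft(img, t):
    """Shrink only the detail subbands, keep the approximation."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(LL, soft(LH, t), soft(HL, t), soft(HH, t))
```

With `t = 0` the round trip is exact, which is a convenient sanity check before tuning the threshold.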

  2. Medical Image Denoising using Adaptive Threshold Based on Contourlet Transform

    CERN Document Server

    Satheesh, S; 10.5121/acij.2011.2205

    2011-01-01

Image denoising has become an essential exercise in medical imaging, especially in magnetic resonance imaging (MRI). This paper proposes a medical image denoising algorithm using the contourlet transform. Numerical results show that the proposed algorithm obtains a higher peak signal-to-noise ratio (PSNR) than wavelet-based denoising algorithms on MR images in the presence of AWGN.

  3. Image Denoising Based on Dilated Singularity Prior

    OpenAIRE

    Wufan Chen; Zhiwu Liao; Shaoxiang Hu

    2012-01-01

In order to preserve singularities in denoising, we propose a new scheme that adds a dilated singularity prior to noisy images. The singularities are first detected with the Canny operator and then dilated using mathematical morphology to find pixels “near” singularities instead of “on” singularities. The denoising results for pixels near singularities are obtained by nonlocal means in the spatial domain to preserve singularities, while the denoising results for pixels in smooth regions are obtained...
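The detect-then-dilate step in this abstract can be sketched as follows. This is an illustrative stand-in: a simple gradient-magnitude detector replaces the Canny operator, and binary dilation is hand-rolled with a 3x3 structuring element:

```python
import numpy as np

def edge_map(img, thresh):
    """Crude gradient-magnitude edge detector (stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 square structuring element,
    implemented with shifted views of a zero-padded copy."""
    m = mask.copy()
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
             | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:] | m)
    return m
```

Pixels where `dilate(edge_map(img, t))` is true would then be routed to the singularity-preserving branch (nonlocal means), the rest to the smooth-region branch.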

  4. A Method Based on Geodesic Distance for Image Segmentation and Denoising

    OpenAIRE

    Liu Cuiyun; Zhang Caiming; Gao Shanshan

    2014-01-01

The study introduces an image segmentation and denoising method based on a geodesic framework and the k-means algorithm. Our method combines geodesic distance with k-means clustering and applies a denoising step; the distance function of the k-means algorithm is optimized to this end. The method can effectively segment and denoise images that contain a large amount of noise.

  5. A nonlinear Stein based estimator for multichannel image denoising

    CERN Document Server

    Chaux, Caroline; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2007-01-01

The use of multicomponent images has become widespread with the improvement of multisensor systems having increased spatial and spectral resolutions. However, the observed images are often corrupted by additive Gaussian noise. In this paper, we are interested in multichannel image denoising based on a multiscale representation of the images. A multivariate statistical approach is adopted to take into account both the spatial and the inter-component correlations existing between the different wavelet subbands. More precisely, we propose a new parametric nonlinear estimator which generalizes many reported denoising methods. The derivation of the optimal parameters is achieved by applying Stein's principle in the multivariate case. Experiments performed on multispectral remote sensing images clearly indicate that our method outperforms conventional wavelet denoising techniques.

  6. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

Full Text Available Interest in fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments in terms of visual perception and peak signal-to-noise ratio show that the proposed denoising method is competitive with the standard Gaussian and Wiener filters.

  7. An Edge-Preserved Image Denoising Algorithm Based on Local Adaptive Regularization

    Directory of Open Access Journals (Sweden)

    Li Guo

    2016-01-01

Full Text Available Image denoising methods are often based on the minimization of an appropriately defined energy function. Many gradient-dependent energy functions, such as the Potts model and total variation denoising, regard the image as a piecewise constant function. In these methods, important information such as edge sharpness and location is well preserved, but detailed image features like texture are often compromised in the process of denoising. For this reason, an image denoising method based on local adaptive regularization is proposed in this paper, which can adaptively adjust the denoising degree of a noisy image by adding a spatially variable fidelity term, so as to better preserve fine-scale features of the image. Experimental results show that the proposed denoising method achieves state-of-the-art subjective visual quality, and the signal-to-noise ratio (SNR) is also objectively improved by 0.3–0.6 dB.
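The spatially variable fidelity term described above can be illustrated with a smoothed-TV gradient descent in which the data-fidelity weight `lam` varies per pixel. This is a minimal sketch under illustrative parameter values, not the authors' implementation:

```python
import numpy as np

def tv_denoise_spatial(f, lam, n_iter=200, dt=0.01, eps=1e-2):
    """Gradient descent on a smoothed total-variation energy with a
    per-pixel fidelity weight lam (same shape as f): where lam is large
    the result stays close to the data; where it is small the image is
    smoothed more aggressively."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # smoothed |grad u|
        # curvature term div(grad u / |grad u|)
        div = np.gradient(uy / mag, axis=0) + np.gradient(ux / mag, axis=1)
        u += dt * (div - lam * (u - f))
    return u
```

Setting `lam` high near detected texture and low in flat regions is one way to realize the "adaptively adjust denoising degree" idea.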

  8. Visual and Artistic Images Denoising Methods Based on Partial Differential Equation

    Directory of Open Access Journals (Sweden)

    Zhenhe Ye

    2013-06-01

Full Text Available Partial differential equations have a remarkable effect on image denoising, compression, and segmentation. Based on partial differential equations, a denoising experiment is carried out on artistic images requiring a high degree of visual reproduction, applying three image-denoising models: the heat diffusion equation, the Perona-Malik (P-M) diffusion equation, and the total variation (TV) diffusion equation. From this experiment, the respective denoising characteristics of these three methods can be analyzed, so that a suitable method can be chosen for digitizing artistic images or for dealing with distant signals.
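Of the three diffusion models mentioned, the Perona-Malik equation is the easiest to sketch: diffusion is attenuated where the local gradient is large, so edges survive while flat-region noise is smoothed. A minimal illustration (note the `np.roll` neighbours imply periodic, wrap-around boundaries):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, dt=0.2):
    """Perona-Malik anisotropic diffusion with diffusivity
    g(d) = exp(-(d/kappa)^2), which slows smoothing across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (periodic boundary)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With `dt <= 0.25` the explicit scheme stays stable since the diffusivity never exceeds 1. Replacing `g` with the constant 1 recovers linear heat diffusion, the first of the three models.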

  9. Contourlet Transform Based Method For Medical Image Denoising

    Directory of Open Access Journals (Sweden)

    Abbas H. Hassin AlAsadi

    2015-02-01

Full Text Available Noise is an important factor in medical image quality, because high noise levels in medical imaging obscure the information needed for diagnosis, which relies on distinguishing normal from abnormal structures. In this paper, we propose a denoising algorithm for medical images based on the contourlet transform. The contourlet transform is an extension of the wavelet transform in two dimensions using multiscale and directional filter banks. It retains the multiscale and time-frequency localization properties of wavelets while also providing a high degree of directionality. To verify the denoising performance of the contourlet transform, two kinds of noise are added to our samples: Gaussian noise and speckle noise. A soft-thresholding value for the contourlet coefficients of the noisy image is computed. Finally, the experimental results of the proposed algorithm are compared with those of the wavelet transform. We found that the proposed algorithm achieves acceptable results compared with those achieved by the wavelet transform.

  10. Adaptively wavelet-based image denoising algorithm with edge preserving

    Institute of Scientific and Technical Information of China (English)

    Yihua Tan; Jinwen Tian; Jian Liu

    2006-01-01

A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. First, a Canny-like edge detector identifies the edges in each subband. Second, the wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to exploit the edge information. The method is adaptive to local image details and achieves better performance than state-of-the-art methods.

  11. Boltzmann Machines and Denoising Autoencoders for Image Denoising

    OpenAIRE

    Cho, Kyunghyun

    2013-01-01

Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or at high noise levels better than, denoising autoencoders. We empiri...

  12. An adaptive image denoising method based on local parameters optimization

    Indian Academy of Sciences (India)

    Hari Om; Mantosh Biswas

    2014-08-01

In image denoising algorithms, the noise is handled either term-by-term, i.e., on individual pixels, or block-by-block, i.e., on groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and of some other characteristics of the neighbouring pixels of the pixel to be thresholded (denoised). The threshold is determined in terms of the noise variance present in the image and its size. The VisuShrink, SureShrink, and NeighShrink methods are important denoising methods that provide good results. The first two, VisuShrink and SureShrink, follow the term-by-term approach, i.e., they modify individual pixels, while the third, NeighShrink, and its variants ModiNeighShrink, IIDMWD, and IAWDMBMC follow the block-by-block approach, i.e., they modify pixels in groups, in order to remove the noise. The VisuShrink, SureShrink, and NeighShrink methods, however, do not give very good visual quality because they remove too many coefficients due to their high threshold values. In this paper, we propose an image denoising method that uses the local parameters of the neighbouring coefficients of the pixel to be denoised in the noisy image. We propose two new shrinkage factors and a threshold at each decomposition level, which lead to better visual quality, and we establish the relationship between the two shrinkage factors. We compare the performance of our method with that of VisuShrink and NeighShrink, including various variants. Simulation results show that our proposed method yields a high peak signal-to-noise ratio and good visual quality compared with the traditional methods: Wiener filter, VisuShrink, SureShrink, NeighBlock, NeighShrink, ModiNeighShrink, LAWML, IIDMWT, and IAWDMBNC.
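For reference, the VisuShrink universal threshold mentioned above has a simple closed form, t = sigma * sqrt(2 ln N), with the noise level sigma usually estimated from the finest diagonal (HH) subband by the robust median rule. A sketch (using the subband size as N, an illustrative simplification):

```python
import numpy as np

def visushrink_threshold(hh):
    """VisuShrink universal threshold t = sigma * sqrt(2 ln N).
    sigma is estimated from the finest diagonal subband HH via the
    robust median rule sigma = median(|HH|) / 0.6745."""
    sigma = np.median(np.abs(hh)) / 0.6745
    n = hh.size
    return sigma * np.sqrt(2.0 * np.log(n))
```

The abstract's point is that this threshold grows with N and therefore tends to over-smooth, which motivates the locally adapted shrinkage factors the paper proposes.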

  13. Image denoising using statistical model based on quaternion wavelet domain

    Institute of Scientific and Technical Information of China (English)

    YIN Ming; LIU Wei; KONG Ranran

    2012-01-01

Image denoising is a basic problem of image processing. The quaternion wavelet transform is a new kind of multiresolution analysis tool. After a quaternion wavelet transform of an image, the wavelet coefficients exhibit certain correlations both within scales (intrascale) and across scales (interscale). First, according to the interscale correlation of quaternion wavelet coefficients, a non-Gaussian distribution model is used to model the correlations, and the coefficients are divided into important and unimportant coefficients. Then we use the non-Gaussian distribution model to model the important coefficients and their adjacent coefficients, and apply MAP estimation to recover the original image wavelet coefficients from the noisy coefficients, so as to achieve denoising. Experimental results show that our algorithm outperforms other classical algorithms in peak signal-to-noise ratio and visual quality.

  14. FASTICA based denoising for single sensor Digital Cameras images

    OpenAIRE

    Shawetangi kala; Raj Kumar Sahu

    2012-01-01

    Digital color cameras use a single sensor equipped with a color filter array (CFA) to capture scenes in color. Since each sensor cell can record only one color value, the other two missing components at each position need to be interpolated. The color interpolation process is usually called color demosaicking (CDM). The quality of demosaicked images is degraded due to the sensor noise introduced during the image acquisition process. Many advanced denoising algorithms, which are designed for ...

  15. Image Denoising Using Sure-Based Adaptive Thresholding in Directionlet Domain

    OpenAIRE

    Sethunadh R; Tessamma Thomas

    2012-01-01

The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures like edges and contours in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel de-noising scheme based on a multidirectional and anisotropic wavelet transform called directionlet...

  16. Application of threshold estimation for terahertz digital holography image denoising based on stationary wavelet transform

    Science.gov (United States)

    Cui, Shan-shan; Li, Qi; Ma, Xue

    2015-11-01

Terahertz digital holography imaging technology is one of the hot topics in the imaging domain, and it has drawn more and more attention. Owing to the redundancy and translation invariance of the stationary wavelet transform, it has significant applications in image denoising, and threshold selection has a great influence on the result. Denoising experiments based on the stationary wavelet transform are performed on real terahertz images, with Bayesian estimation and the Birge-Massart strategy used to select the threshold. The experimental results reveal that Bayesian estimation combined with the homomorphic stationary wavelet transform gives the best denoising at three decomposition levels, improving the signal-to-noise ratio while preserving image detail.
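One common Bayesian threshold estimate for wavelet subbands is BayesShrink, t = sigma_n^2 / sigma_x; the abstract's "Bayesian estimation" is in this spirit, though the exact rule the authors use is not specified here. A sketch:

```python
import numpy as np

def bayes_shrink_threshold(subband, sigma_noise):
    """BayesShrink-style threshold t = sigma_n^2 / sigma_x, where the
    signal std sigma_x is estimated from the observed subband as
    sqrt(max(var(Y) - sigma_n^2, 0))."""
    var_y = np.mean(subband ** 2)
    sigma_x = np.sqrt(max(var_y - sigma_noise ** 2, 0.0))
    if sigma_x == 0.0:
        # subband looks like pure noise: threshold everything away
        return np.abs(subband).max()
    return sigma_noise ** 2 / sigma_x
```

The threshold is small where the subband carries strong signal (large sigma_x) and large where it is mostly noise, which is exactly the adaptive behaviour the abstract is after.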

  17. A New Method of Image Denoising for Underground Coal Mine Based on the Visual Characteristics

    Directory of Open Access Journals (Sweden)

    Gang Hua

    2014-01-01

Full Text Available Affected by the special underground conditions of a coal mine, the clarity of most images captured in the mine is not very high, and a large amount of noise is mingled with the images, which creates many difficulties for subsequent processing of downhole images. Traditional denoising methods easily lead to blurred images, and their denoising effect is not very satisfactory. Aimed at images with low illumination and a large amount of noise, and based on the color-detail blindness and simultaneous-contrast characteristics of human visual perception, this paper proposes a new image denoising method based on visual characteristics. The method uses the CIELab uniform color space to dynamically and adaptively decide the filter weights, thereby reducing damage to image contour edges and other details, so that the denoised image retains higher clarity. Experimental results show that this method achieves a strong denoising effect and can significantly improve the subjective and objective quality of downhole images.

  18. A hybrid method for image Denoising based on Wavelet Thresholding and RBF network

    Directory of Open Access Journals (Sweden)

    Sandeep Dubey

    2012-06-01

Full Text Available Digital image denoising is a crucial part of image pre-processing, with applications in satellite imagery and television broadcasting. Image data sets collected by image sensors are generally contaminated by noise: imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all degrade the data of interest. Furthermore, noise can be introduced by transmission errors and compression. Thus, denoising is often a necessary first step before the image data are analyzed. In this paper we propose a novel methodology for image denoising based on the wavelet transform and a radial basis function (RBF) neural network, together with soft thresholding. The wavelet transform decomposes the image into different layers, differentiated into horizontal, vertical, and diagonal detail. To test our hybrid method, we used a noisy image dataset provided by the UCI machine learning repository. Our proposed method compares favourably with the traditional method and with the method of our base paper.

  19. Image Denoising Using Sure-Based Adaptive Thresholding in Directionlet Domain

    Directory of Open Access Journals (Sweden)

    Sethunadh R

    2012-12-01

Full Text Available The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures like edges and contours in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel denoising scheme based on a multidirectional and anisotropic wavelet transform called the directionlet is presented. Image denoising in the wavelet domain has been extended to the directionlet domain to concentrate image features in fewer coefficients so that more effective thresholding is possible. The image is first segmented and the dominant direction of each segment is identified to make a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of the selected segment. The decomposed images with directional energy are used for scale-dependent, subband-adaptive optimal threshold computation based on the SURE risk. This threshold is then applied to all sub-bands except the LLL subband. The threshold-corrected sub-bands, together with the unprocessed first sub-band (LLL), are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numeric and visual quality.

  20. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    Science.gov (United States)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE), an unbiased estimate of the mean-squared error (MSE). Unlike the MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed the SURE-optimal bilateral filter (SOBF), whose optimal parameters we selected using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, the SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and the SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.
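The plain bilateral filter underlying SOBF, SMBF, and SPBF combines a spatial Gaussian weight with a range (intensity) weight. A brute-force sketch for intensities in [0, 1]; the SURE-based parameter selection itself is not shown, and `sigma_s`, `sigma_r`, `radius` are illustrative defaults:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter: each pixel is a weighted average of
    its neighbours, with spatial weights (sigma_s) and range weights
    (sigma_r) that down-weight neighbours of very different intensity.
    np.roll implies periodic boundaries, fine for a sketch."""
    img = img.astype(float)
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            wr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
            out += ws * wr * shifted
            norm += ws * wr
    return out / norm
```

SURE-based selection would tune `sigma_s` and `sigma_r` (globally for SOBF, per patch for SPBF) by minimizing the risk estimate rather than the unavailable true MSE.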

  1. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two important contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which better preserves the local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-learned dictionary, and propose an iterative split Bregman method to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.
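The PCA dictionary-learning step described in the first contribution can be sketched as follows: gather overlapping patches, centre them, and keep the top principal directions as orthonormal atoms. The clustering into subsets and the fused-lasso coding are omitted, and `patch`/`n_atoms` are illustrative:

```python
import numpy as np

def pca_patch_basis(img, patch=4, n_atoms=8):
    """Learn an orthogonal patch dictionary by PCA: collect all
    overlapping patches, centre them, and keep the top principal
    directions (rows of Vt) as atoms."""
    h, w = img.shape
    P = np.array([img[i:i + patch, j:j + patch].ravel()
                  for i in range(h - patch + 1)
                  for j in range(w - patch + 1)])
    mean = P.mean(axis=0)
    # SVD of the centred patch matrix; rows of Vt are orthonormal
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    return Vt[:n_atoms], mean
```

In the full scheme, a dictionary of this kind would be learned per cluster so that each captures one local geometric structure.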

  2. Image Denoising Using Sure-Based Adaptive Thresholding in Directionlet Domain

    Directory of Open Access Journals (Sweden)

    Sethunadh R

    2013-01-01

Full Text Available The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures like edges and contours in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel denoising scheme based on a multidirectional and anisotropic wavelet transform called the directionlet is presented. Image denoising in the wavelet domain has been extended to the directionlet domain to concentrate image features in fewer coefficients so that more effective thresholding is possible. The image is first segmented and the dominant direction of each segment is identified to make a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of the selected segment. The decomposed images with directional energy are used for scale-dependent, subband-adaptive optimal threshold computation based on the SURE risk. This threshold is then applied to all sub-bands except the LLL subband. The threshold-corrected sub-bands, together with the unprocessed first sub-band (LLL), are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numeric and visual quality.

  3. Multicomponent MR Image Denoising

    Directory of Open Access Journals (Sweden)

    José V. Manjón

    2009-01-01

Full Text Available Magnetic resonance images are normally corrupted by random noise from the measurement process, complicating automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local principal component analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed.
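The "spatially averaging similar pixels" idea is the nonlocal means principle. A single-component, pixel-wise sketch (the multicomponent extension and the PCA postprocessing are omitted, and `patch`/`search`/`h` are illustrative):

```python
import numpy as np

def nlm(img, patch=3, search=7, h=0.1):
    """Pixel-wise nonlocal means: each pixel is replaced by a weighted
    average of pixels in a search window, with weights decaying with
    the squared distance between the surrounding patches."""
    pad = patch // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    sr = search // 2
    for i in range(H):
        for j in range(W):
            ref = p[i:i + patch, j:j + patch]
            wsum = 0.0
            acc = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        cand = p[ii:ii + patch, jj:jj + patch]
                        d2 = np.mean((ref - cand) ** 2)
                        w = np.exp(-d2 / (h * h))
                        wsum += w
                        acc += w * img[ii, jj]
            out[i, j] = acc / wsum
    return out
```

A multicomponent variant would compute the patch distance `d2` jointly over all image components so that every channel contributes to the similarity weights.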

  4. Nonparametric Denoising Methods Based on Contourlet Transform with Sharp Frequency Localization: Application to Low Exposure Time Electron Microscopy Images

    Directory of Open Access Journals (Sweden)

    Soumia Sid Ahmed

    2015-05-01

Full Text Available Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and energy-filtering TEM before 3D tomographic reconstruction, as it addresses the problem of high noise in these images, which leads to a loss of the contained information. High noise levels contribute in particular to difficulties in the alignment required for 3D tomographic reconstruction. This paper investigates the denoising of TEM images acquired with a very low exposure time, with the primary objectives of enhancing the quality of these low-exposure-time TEM images and improving the alignment process. We propose denoising structures that combine multiple noisy copies of the TEM images. The structures are based on Bayesian estimation in transform domains instead of the spatial domain, building novel feature-preserving image denoising structures, namely: the wavelet domain, the contourlet transform domain, and the contourlet transform with sharp frequency localization. Numerical image denoising experiments demonstrate the performance of the Bayesian approach in the contourlet transform domain in terms of improving the signal-to-noise ratio (SNR) and recovering fine details that may be hidden in the data. The SNR and the visual quality of the denoised images are considerably enhanced using these denoising structures that combine multiple noisy copies. The proposed methods also enable a reduction in the exposure time.

  5. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

Image enhancement and de-noising are essential pre-processing steps in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image. Such interesting structures often correspond to the discontinuities (edges). In this pa...

  6. A Novel Hénon Map Based Adaptive PSO for Wavelet Shrinkage Image Denoising

    Directory of Open Access Journals (Sweden)

    Shruti Gandhi

    2013-07-01

Full Text Available Degradation of images due to noise has led to the formulation of various techniques for image restoration. Wavelet shrinkage image denoising, one such technique, has been improved over the years by using particle swarm optimization (PSO) and its variants to optimize the wavelet parameters. However, the use of PSO has been rendered ineffective by premature convergence and failure to maintain good population diversity. This paper proposes a Hénon map based adaptive PSO (HAPSO) for wavelet shrinkage image denoising. While significantly improving the population diversity of the particles, it also increases the convergence rate and thereby the precision of the denoising technique. The proposed PSO uses adaptive cognitive and social components and adaptive inertia weight factors. The Hénon map sequence is applied to the control parameters instead of random variables, which introduces ergodicity and a stochastic property into the PSO. This results in improved global convergence compared to the traditional PSO and classical thresholding techniques. Simulation results and comparisons with the standard approaches show the effectiveness of the proposed algorithm.

  7. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.

  8. Fractional Partial Differential Equation: Fractional Total Variation and Fractional Steepest Descent Approach-Based Multiscale Denoising Model for Texture Image

    Directory of Open Access Journals (Sweden)

    Yi-Fei Pu

    2013-01-01

Full Text Available Traditional integer-order partial differential equation-based image denoising approaches often blur edges and complex texture detail; thus, their denoising effect on texture images is not very good. To solve this problem, a fractional partial differential equation-based denoising model for texture images is proposed, which applies a novel mathematical method, fractional calculus, to image processing from the viewpoint of system evolution. We know from previous studies that fractional-order calculus has some unique properties compared to integer-order calculus, in particular that it can nonlinearly enhance complex texture detail during digital image processing. The goal of the proposed model is to overcome the problems mentioned above by using these properties of fractional differential calculus. The model extends the traditional integer-order equation to fractional order, proposes the fractional Green's formula and the fractional Euler-Lagrange formula for two-dimensional image processing, and then derives a fractional partial differential equation-based denoising model. The experimental results prove that the ability of the proposed denoising model to preserve high-frequency edge and complex texture information is clearly superior to that of traditional integer-order-based algorithms, especially for images rich in texture detail.

  9. Survey on Denoising Techniques in Medical Images

    Directory of Open Access Journals (Sweden)

    Ravi Mohan

    2013-07-01

Full Text Available Denoising of medical images is a challenging problem for researchers: noise not only affects image quality but also alters quantitative measurements in the medical field. Medical images typically suffer from high noise levels. There are different techniques for producing medical images, such as magnetic resonance imaging (MRI), X-ray, computed tomography, and ultrasound, and during acquisition noise is introduced that degrades image quality and hampers image analysis. Image denoising is an important task in image processing; the use of the wavelet transform improves the quality of an image and reduces the noise level. Noise is an inherent property of medical imaging, and it generally tends to reduce image resolution and contrast, thereby reducing the diagnostic value of the imaging modality, so there is growing interest in using multiresolution wavelet filters in a variety of medical imaging applications. We review recent wavelet-based denoising techniques for medical ultrasound, magnetic resonance images, and some tomographic imaging techniques such as positron emission tomography and computed tomography, and discuss some of their potential applications in clinical investigations of the brain.

  10. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than a single frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which reliably captures the image statistics of log-compressed ultrasound images, is used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
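
    Stripped of the GPU machinery and the Gamma noise model, the NLM idea reduces to averaging pixels weighted by patch similarity. A small CPU sketch with Gaussian weighting on a 2D image, for illustration only (the paper's Bayesian/Gamma weighting and block-wise organization differ):

```python
import math

def nlm_denoise(img, patch=1, search=2, h=10.0):
    # img: 2D list of floats; patch/search: half-widths of the patch
    # and search windows; h: filtering (smoothing) parameter
    H, W = len(img), len(img[0])

    def pget(y, x):
        # replicate-pad access at the borders
        return img[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]

    def patch_dist(y0, x0, y1, x1):
        # mean squared difference between the two patches
        d = 0.0
        for dy in range(-patch, patch + 1):
            for dx in range(-patch, patch + 1):
                diff = pget(y0 + dy, x0 + dx) - pget(y1 + dy, x1 + dx)
                d += diff * diff
        return d / ((2 * patch + 1) ** 2)

    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            wsum, acc = 0.0, 0.0
            for sy in range(max(0, y - search), min(H, y + search + 1)):
                for sx in range(max(0, x - search), min(W, x + search + 1)):
                    w = math.exp(-patch_dist(y, x, sy, sx) / (h * h))
                    wsum += w
                    acc += w * img[sy][sx]
            out[y][x] = acc / wsum
    return out
```

    The quadruple loop is exactly why the paper reaches for the GPU: each output pixel is an independent reduction over its search window, which maps naturally onto data-parallel hardware.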

  11. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  12. Energy-based adaptive orthogonal FRIT and its application in image denoising

    Institute of Scientific and Technical Information of China (English)

    LIU YunXia; PENG YuHua; QU HuaiJing; YiN Yong

    2007-01-01

    Efficient representation of linear singularities is discussed in this paper. We first analyze the relationship between the "wrap around" effect and the distribution of FRAT (Finite Radon Transform) coefficients; then, based on a study of properties of the column-wise FRAT reconstruction procedure, we propose an energy-based adaptive orthogonal FRIT scheme (EFRIT). Nonlinear approximation experiments show its superiority in energy concentration over both the Discrete Wavelet Transform (DWT) and the Finite Ridgelet Transform (FRIT). Furthermore, we model the denoising problem and propose a novel threshold selection method. Experiments on images containing strong linear singularities and texture components, with varying levels of additive white Gaussian noise, show that our method achieves prominent improvements in both SNR and visual quality compared with DWT and FRIT.

  13. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    Energy Technology Data Exchange (ETDEWEB)

    Soumia, Sid Ahmed, E-mail: samasoumia@hotmail.fr [Science and Technology Faculty, El Bachir El Ibrahimi University, BordjBouArreridj (Algeria); Messali, Zoubeida, E-mail: messalizoubeida@yahoo.fr [Laboratory of Electrical Engineering (LGE), University of M'sila (Algeria); Ouahabi, Abdeldjalil, E-mail: abdeldjalil.ouahabi@univ-tours.fr [Polytechnic School, University of Tours (EPU - PolytechTours), EPU - Energy and Electronics Department (France); Trepout, Sylvain, E-mail: sylvain.trepout@curie.fr; Messaoudi, Cedric, E-mail: cedric.messaoudi@curie.fr; Marco, Sergio, E-mail: sergio.marco@curie.fr [INSERM U759, University Campus Orsay, 91405 Orsay Cedex (France)

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. The noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam leads specialists to acquire the specimen projection images at very low exposure time, which in turn produces a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the denoising of TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We ran multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e. with different SNR values), equipped with gold beads to aid the assessment step. We propose a structure for combining multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges neatly; and a Bayesian approach in the wavelet domain, in which context modelling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filter, many tests were run to determine the proper filter parameters, namely the size of the filter, the range parameter and the

  14. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. The noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam leads specialists to acquire the specimen projection images at very low exposure time, which in turn produces a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the denoising of TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We ran multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e. with different SNR values), equipped with gold beads to aid the assessment step. We propose a structure for combining multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges neatly; and a Bayesian approach in the wavelet domain, in which context modelling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we verified that the appropriate wavelet family is used at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filter, many tests were run to determine the proper filter parameters, namely the size of the filter, the range parameter and the

  15. A Comparison of PDE-based Non-Linear Anisotropic Diffusion Techniques for Image Denoising

    Energy Technology Data Exchange (ETDEWEB)

    Weeratunga, S K; Kamath, C

    2003-01-06

    PDE-based, non-linear diffusion techniques are an effective way to denoise images. In a previous study, we investigated the effects of different parameters in the implementation of isotropic, non-linear diffusion. Using synthetic and real images, we showed that for images corrupted with additive Gaussian noise, such methods are quite effective, leading to lower mean-squared-error values in comparison with spatial filters and wavelet-based approaches. In this paper, we extend this work to include anisotropic diffusion, where the diffusivity is a tensor-valued function which can be adapted to local edge orientation. This allows smoothing along edges, but not perpendicular to them. We consider several anisotropic diffusivity functions as well as approaches for discretizing the diffusion operator that minimize mesh-orientation effects. We investigate how these tensor-valued diffusivity functions compare in image quality, ease of use, and computational cost relative to simple spatial filters, the more complex bilateral filters, wavelet-based methods, and isotropic non-linear diffusion based techniques.
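
    The isotropic non-linear diffusion this work starts from is the Perona-Malik scheme; the anisotropic extension replaces the scalar diffusivity below with a tensor adapted to edge orientation. A sketch of one explicit Perona-Malik update step (pure Python; the diffusivity function and parameter values are illustrative):

```python
def pm_step(img, kappa=10.0, dt=0.2):
    # one explicit Perona-Malik update with diffusivity
    # g(s) = 1 / (1 + (s / kappa)^2); img is a 2D list of floats
    H, W = len(img), len(img[0])

    def at(y, x):
        # replicate-pad access (zero flux across the image border)
        return img[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]

    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            c = img[y][x]
            flux = 0.0
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                grad = at(y + dy, x + dx) - c
                g = 1.0 / (1.0 + (grad / kappa) ** 2)  # small g at edges
                flux += g * grad
            out[y][x] = c + dt * flux
    return out
```

    Because g shrinks where the local gradient is large, smoothing is suppressed across edges; iterating the step diffuses flat regions while preserving contours, and the explicit scheme conserves the total image intensity.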

  16. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.

  17. Adaptive Image Denoising by Mixture Adaptation.

    Science.gov (United States)

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  18. SPATIAL-VARIANT MORPHOLOGICAL FILTERS WITH NONLOCAL-PATCH-DISTANCE-BASED AMOEBA KERNEL FOR IMAGE DENOISING

    Directory of Open Access Journals (Sweden)

    Shuo Yang

    2015-01-01

    Full Text Available Spatially-variant amoeba morphological filters preserve edges better, but leave too much noise behind. For better denoising, this paper presents a new method of generating structuring elements for spatially-variant amoeba morphology. The amoeba kernel in the proposed strategy is divided into two parts: a patch-distance-based amoeba center and a geodesic-distance-based amoeba boundary, so that both the nonlocal patch distance and the local geodesic distance are taken into consideration. Compared with the traditional amoeba kernel, the new one has a more stable center, and its shape is less influenced by noise in the pilot image. More importantly, the nonlocal processing approach induces a pair of adjoint dilation and erosion operators, whose combinations yield adaptive opening, closing, alternating sequential filters, and so on. By designing the new amoeba kernel, a family of morphological filters is therefore derived. Finally, the paper presents a series of results on both synthetic and real images, along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing and noisy SAR image restoration.

  19. Twofold processing for denoising ultrasound medical images

    OpenAIRE

    P.V.V.Kishore; Kumar, K. V. V.; kumar, D. Anil; M.V.D.Prasad; Goutham, E. N. D.; Rahul, R.; Krishna, C. B. S. Vamsi; Sandeep, Y.

    2015-01-01

    Ultrasound medical (US) imaging non-invasively pictures inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. First fold used block based thresholding, both hard (BHT) and soft (BST), on pixels in wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold process is a better denoising method for reducing s...

  20. Astronomical Image Denoising Using Dictionary Learning

    CERN Document Server

    Beckouche, Simon; Fadili, Jalal

    2013-01-01

    Astronomical images suffer from a constant presence of multiple defects that are consequences of the intrinsic properties of the acquisition equipment and of atmospheric conditions. One of the most frequent defects in astronomical imaging is additive noise, which makes a denoising step mandatory before processing the data. During the last decade, a particular modeling scheme, based on sparse representations, has drawn the attention of an ever-growing community of researchers. Sparse representations offer a promising framework for many image and signal processing tasks, especially denoising and restoration applications. At first, harmonics, wavelets, and similar bases and overcomplete representations were considered as candidate domains in which to seek the sparsest representation. A new generation of algorithms, based on data-driven dictionaries, evolved rapidly and now competes with off-the-shelf fixed dictionaries. While designing a dictionary beforehand leans on a guess of the most appropriate repre...

  1. Nonlocal Markovian models for image denoising

    Science.gov (United States)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  2. Image Denoising with Modified Wavelets Feature Restoration

    Directory of Open Access Journals (Sweden)

    Sachin D Ruikar

    2012-03-01

    Full Text Available Image denoising is a principal problem of image restoration, and many researchers have devoted themselves to this area and proposed numerous methods. In this paper we propose a modified feature restoration algorithm, based on thresholding and a neighborhood technique, which gives better results for all types of noise. Conventional denoising methods exhibit several drawbacks, such as the introduction of blur and the degradation of edges; these can be mitigated by techniques based on the wavelet transform. Shrinkage algorithms such as the universal threshold, VisuShrink and BayesShrink are strong at Gaussian noise removal. Our proposed method removes all types of noise in the wavelet domain and gives a better peak signal-to-noise ratio than traditional methods.

  3. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    Science.gov (United States)

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer’s disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest. PMID:25485413

  4. Survey for Wavelet Bayesian Network Image Denoising

    Directory of Open Access Journals (Sweden)

    Pallavi Sharma,

    2014-04-01

    Full Text Available This survey considers a wavelet-based image denoising method that extends a recently emerged "geometrical" Bayesian framework. The scheme combines three criteria for distinguishing theoretically significant coefficients from noise: coefficient magnitude, coefficient evolution across scales, and the spatial clustering of large coefficients near image edges. These three criteria are united in a Bayesian construction: the spatial clustering properties are expressed in a prior model, while the statistical properties of coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. We address the image denoising problem in which zero-mean, white, homogeneous Gaussian additive noise is to be removed from a given image. We employ the belief propagation (BP) algorithm, which estimates a coefficient from all the coefficients of an image, as the maximum a posteriori (MAP) estimator to derive the denoised wavelet coefficients, and show that if the network is a spanning tree, the standard BP algorithm can perform MAP estimation efficiently. Our results show that, in terms of peak signal-to-noise ratio and perceptual quality, the approach outperforms state-of-the-art algorithms on a number of images, mostly in textured regions, over a range of amounts of white Gaussian noise.

  5. Image Denoising via Nonlinear Hybrid Diffusion

    OpenAIRE

    Xiaoping Ji; Dazhi Zhang; Zhichang Guo; Boying Wu

    2013-01-01

    A nonlinear anisotropic hybrid diffusion equation is discussed for image denoising, which is a combination of mean curvature smoothing and Gaussian heat diffusion. First, we propose a new edge detection indicator, that is, the diffusivity function. Based on this diffusivity function, the new diffusion is nonlinear anisotropic and forward-backward. Unlike the Perona-Malik (PM) diffusion, the new forward-backward diffusion is adjustable and under control. Then, the existence, uniqueness, and lo...

  6. Postprocessing of Compressed Images via Sequential Denoising.

    Science.gov (United States)

    Dar, Yehuda; Bruckstein, Alfred M; Elad, Michael; Giryes, Raja

    2016-07-01

    In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via the alternating direction method of multipliers, leading to a sequence of Gaussian denoising steps. A key feature of our scheme is a linearization of the compression-decompression process, so as to obtain a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG2000, and HEVC. PMID:27214878

  7. A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resolution Images using Efficient Denoising and Adaptive Interpolation

    Directory of Open Access Journals (Sweden)

    Liyakathunisa

    2010-10-01

    Full Text Available High-resolution images can be reconstructed from several blurred, noisy and aliased low-resolution images using a computational process known as super-resolution reconstruction, which combines several low-resolution images into a single higher-resolution image. In this paper we concentrate on a special case of the super-resolution problem in which the warp is composed of pure translation and rotation, the blur is space-invariant, and the noise is additive white Gaussian noise. Super-resolution reconstruction consists of registration, restoration and interpolation phases. Once the low-resolution images are registered with respect to a reference frame, wavelet-based restoration is performed to remove the blur and noise from the images; finally, the images are interpolated using adaptive interpolation. We propose an efficient wavelet-based denoising with adaptive interpolation for super-resolution reconstruction. In this framework, the low-resolution images are decomposed into many levels to obtain different frequency bands, and our proposed soft-thresholding technique is used to remove the noisy coefficients by fixing an optimum threshold value. To obtain an image of higher resolution we propose an adaptive interpolation technique. The proposed scheme preserves edges and smooths the image without introducing artifacts. Experimental results show that the proposed approach succeeds in obtaining a high-resolution image with a high PSNR and ISNR and good visual quality.

  8. Denoising of Magnetic Resonance and X-Ray Images Using Variance Stabilization and Patch Based Algorithms

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2013-01-01

    Full Text Available Developments in medical imaging systems, which provide the anatomical and physiological details of patients, are making diagnosis simpler day by day. But every medical imaging modality suffers from some sort of noise. Noise in medical images decreases image contrast, so low-contrast lesions may go undetected in the diagnostic phase; the removal of noise from medical images is therefore a very important task. In this paper we present denoising techniques developed for removing the Poisson noise in X-ray images due to low photon counts, and the Rician noise in magnetic resonance (MRI) images. Poisson and Rician noise are data-dependent, so most of the time they do not follow a Gaussian distribution. In our algorithm we convert the Poisson and Rician noise distributions into a Gaussian distribution using a variance stabilization technique, and then use patch-based algorithms to denoise the images. The performance of the algorithms was evaluated using various image quality metrics such as PSNR (peak signal-to-noise ratio), UQI (Universal Quality Index), and SSIM (structural similarity index). The results show that the Anscombe and Freeman-Tukey transforms with the block-matching 3D algorithm give better results.
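
    For Poisson data, the variance stabilization step referred to here is typically the Anscombe transform: after stabilization, any Gaussian-domain denoiser (such as a patch-based method) can be applied before inverting. A minimal sketch in plain Python (the unbiased exact inverse used in production adds correction terms to the simple algebraic inverse shown):

```python
import math

def anscombe(x):
    # forward Anscombe transform: Poisson(lam) counts map to values that
    # are approximately Gaussian with unit variance (good for lam >~ 4)
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inv_anscombe(y):
    # simple algebraic inverse of the forward transform
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson(pixels, gaussian_denoiser):
    # stabilize -> denoise in the Gaussian domain -> invert
    stabilized = [anscombe(p) for p in pixels]
    smoothed = gaussian_denoiser(stabilized)
    return [inv_anscombe(s) for s in smoothed]
```

    Any denoiser designed for additive white Gaussian noise can be slotted in as `gaussian_denoiser`; the point of the transform is exactly that the stabilized data has (approximately) constant variance regardless of the underlying intensity.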

  9. Literature Review of Image Denoising Methods

    Institute of Scientific and Technical Information of China (English)

    LIU Qian; YANG Xing-qiang; LI Yun-liang

    2014-01-01

    Image denoising is a fundamental and important task in the image processing and computer vision fields. Many methods have been proposed to reconstruct clean images from their noisy versions, and these methods differ in both methodology and performance. On one hand, denoising methods can be classified into local and nonlocal methods; on the other hand, they can be classified as spatial-domain and frequency-domain methods. Sparse coding and low-rank modeling are two techniques popular for denoising recently. This paper summarizes existing techniques and provides several promising directions for further study.

  10. Denoising CT Images using wavelet transform

    Directory of Open Access Journals (Sweden)

    Lubna Gabralla

    2015-05-01

    Full Text Available Image denoising is one of the most significant tasks in medical image processing, where the original images are of poor quality due to the noise and artifacts introduced by the acquisition systems. In this paper, we propose a new image denoising scheme that modifies the wavelet coefficients using a soft-thresholding method, present a comparative study of different wavelet denoising techniques for CT images, and discuss the obtained results. The denoising process rejects noise by thresholding in the wavelet domain. Performance is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Mean Squared Error (MSE). Finally, the Gaussian filter provides better PSNR and lower MSE values; hence, we conclude that this filter is an efficient one for preprocessing medical images.

  11. MMW and THz images denoising based on adaptive CBM3D

    Science.gov (United States)

    Dai, Li; Zhang, Yousai; Li, Yuanjiang; Wang, Haoxiang

    2014-04-01

    Over the past decades, millimeter wave and terahertz radiation has received a lot of interest due to advances in emission and detection technologies which allowed the widely application of the millimeter wave and terahertz imaging technology. This paper focuses on solving the problem of this sort of images existing stripe noise, block effect and other interfered information. A new kind of nonlocal average method is put forward. Suitable level Gaussian noise is added to resonate with the image. Adaptive color block-matching 3D filtering is used to denoise. Experimental results demonstrate that it improves the visual effect and removes interference at the same time, making the analysis of the image and target detection more easily.

  12. Geometric moment based nonlocal-means filter for ultrasound image denoising

    Science.gov (United States)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, and despeckling is an important process. The original nonlocal means (NLM) filter can remove speckle noise and protect texture information effectively when the image corruption is relatively low. But when the noise in the image is strong, NLM produces fictitious texture, which degrades its denoising performance. In this paper, a novel nonlocal means filter is proposed that introduces geometric moments into the NLM filter. Though geometric moments are not orthogonal moments, they are popular for their simplicity, and their restoration ability had not yet been demonstrated. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.

  13. Denoising of Chinese calligraphy tablet images based on run-length statistics and structure characteristic of character strokes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-song; YU Jin-hui; MAO Guo-hong; YE Xiu-zi

    2006-01-01

    In this paper, a novel approach is proposed for denoising Chinese calligraphy tablet documents. The method comprises two phases. First, a partial differential equation (PDE) based total variation model and the Otsu thresholding method are used to preprocess the calligraphy document image. Second, a new method based on run-length statistics and the structural characteristics of Chinese characters is proposed to remove random and ant-like noise; this includes optimal threshold selection from the histogram of the run-length probability density, and an improved Hough transform algorithm for detecting and removing line-shaped noise. Examples are given in the paper to demonstrate the proposed method.

  14. Multiresolution Bilateral Filtering for Image Denoising

    OpenAIRE

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of optimal bilateral filter parameter selection in image denoising applications. The second contri...
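
    The two parameters under study control the spatial and the range Gaussian of the filter, and their interaction is easiest to see in a direct implementation. A toy sketch in plain Python (the sigma values and radius are illustrative defaults, not the paper's optima):

```python
import math

def bilateral(img, sigma_s=1.0, sigma_r=10.0, radius=2):
    # img: 2D list of floats; sigma_s: spatial Gaussian width,
    # sigma_r: range (intensity) Gaussian width
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            c = img[y][x]
            wsum, acc = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        v = img[ny][nx]
                        # weight = spatial closeness * intensity similarity
                        w = math.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                                     - (v - c) ** 2 / (2 * sigma_r ** 2))
                        wsum += w
                        acc += w * v
            out[y][x] = acc / wsum
    return out
```

    When an intensity step is large relative to sigma_r, the range term drives cross-edge weights toward zero, so the edge survives averaging; shrinking sigma_r preserves more detail but removes less noise, which is precisely the trade-off the parameter study explores.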

  15. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    OpenAIRE

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-01-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down...

  16. Hyperspectral Image Denoising with Composite Regularization Models

    Directory of Open Access Journals (Sweden)

    Ao Li

    2016-01-01

    Full Text Available Denoising is a fundamental task in hyperspectral image (HSI) processing that can improve the performance of classification, unmixing, and other subsequent applications. In an HSI, there is a large amount of local and global redundancy in the spatial domain that can be used to preserve details and texture. In addition, the correlation of the spectral domain is another valuable property that can be utilized to obtain good results. Therefore, in this paper, we propose a novel HSI denoising scheme that exploits composite spatial-spectral information using a nonlocal technique (NLT). First, a specific way of extracting patches is employed to mine the spatial-spectral knowledge effectively. Next, a framework with composite regularization models is used to implement the denoising. A number of HSI data sets are used in our evaluation experiments, and the results demonstrate that the proposed algorithm outperforms other state-of-the-art HSI denoising methods.

  17. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold effectively reduces speckle but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with the normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
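
The hard and soft thresholding mentioned in the abstract can be written in a few lines. This is the generic textbook form applied to a list of wavelet coefficients, not the block-adaptive variant the paper proposes:

```python
def hard_threshold(coeffs, t):
    # keep coefficients whose magnitude exceeds t, zero out the rest
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # shrink every surviving coefficient toward zero by t
    return [(abs(c) - t) * (1.0 if c > 0 else -1.0) if abs(c) > t else 0.0
            for c in coeffs]
```

Hard thresholding preserves large coefficients exactly (sharper edges, more residual noise), while soft thresholding shrinks them, trading a small bias for smoother results.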

  18. A HYBRID APPROACH FOR DENOISING DICOM IMAGES

    Directory of Open Access Journals (Sweden)

    J. UMAMAHESWARI

    2011-11-01

    Full Text Available This paper provides a new model based on the hybridization of wavelet and relaxed median filters for denoising noisy medical images. The present study focuses on a technique to reduce speckle and salt & pepper noise from CT (computed tomography) scan devices. Such devices are frequently used by healthcare professionals in the diagnosis of diseases. The main problem during diagnosis is the distortion of the visual image signals obtained, a consequence of the coherent nature of speckle noise and of salt and pepper noise added during transmission. We validate the new model by evaluating standard brain images in terms of Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and Elapsed Time (ET). The proposed filter is compared with existing filters. Experimental results show that the proposed method is efficient.

  19. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Science.gov (United States)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605

  20. Local sparse representation for astronomical image denoising

    Institute of Scientific and Technical Information of China (English)

    杨阿锋; 鲁敏; 滕书华; 孙即祥

    2013-01-01

    Motivated by local coordinate coding (LCC) theory in nonlinear manifold learning, a new image representation model called local sparse representation (LSR) for astronomical image denoising was proposed. Borrowing ideas from surrogate functions and applying the iterative shrinkage-thresholding algorithm (ISTA), an iterative shrinkage operator for LSR was derived. Meanwhile, a fast approximated LSR method, which first performs a K-nearest-neighbor search and then solves an l1 optimization problem, was presented under a guarantee of denoising performance. In addition, the LSR model and adaptive dictionary learning were incorporated into a unified optimization framework, which explicitly established the inner connection between them. Such processing allows us to simultaneously update the sparse coding vectors and the dictionary by an alternating optimization method. The experimental results show that the proposed method is superior to traditional denoising methods and reaches state-of-the-art performance on astronomical images.
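
The iterative shrinkage-thresholding algorithm (ISTA) referenced in the abstract alternates a gradient step on the data-fidelity term with a soft-thresholding (shrinkage) step. A small dense-matrix sketch for min_x 0.5*||y - Dx||^2 + lam*||x||_1; the step-size bound is a crude choice of ours, and the paper's surrogate-function derivation is not reproduced:

```python
def ista(D, y, lam=0.1, iters=200):
    """Minimal ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1.
    D is an m x n matrix given as a list of rows; y is a length-m list."""
    m, n = len(D), len(D[0])
    # squared Frobenius norm upper-bounds the Lipschitz constant of the gradient
    step = 1.0 / sum(D[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # D x - y
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]         # D^T r
        for j in range(n):
            v = x[j] - step * g[j]                                            # gradient step
            x[j] = max(abs(v) - step * lam, 0.0) * (1.0 if v > 0 else -1.0)   # shrinkage
    return x
```

For an orthonormal dictionary this converges to the soft-thresholded coefficients, e.g. x ≈ y − lam on the supported entries.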

  1. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti Kumari

    2015-11-01

    Full Text Available This paper presents a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic operations in image processing: image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing, the main task being to make the image free from noise. Salt & pepper (impulse) noise, additive white Gaussian noise and blurring are degradations that occur during transmission and capture. Several algorithms exist for denoising such images.

  2. Image Denoising Based on Sparse Sequences and Its Application

    Institute of Scientific and Technical Information of China (English)

    王蓓; 张欣; 刘洪

    2011-01-01

    Based on sparse decomposition of images, and exploiting the fact that image content and noise decompose differently, this paper proposes an atom dictionary built on an asymmetric atom model. Through algorithmic optimization, it achieves effective denoising of acquired grey-fabric images, improving the PSNR of the denoised image and yielding a better visual effect. Once the acquired digital fabric images are denoised, the background and the defects can be separated, so that defects can be delimited more effectively for subsequent feature extraction. Experiments show that, compared with wavelet-based denoising methods, the proposed learning algorithm removes image noise better, retains more image detail, and achieves a higher PSNR.

  3. Accelerated graph-based nonlinear denoising filters

    OpenAIRE

    Knyazev, Andrew; Malyshev, Alexander

    2015-01-01

    Denoising filters, such as bilateral, guided, and total variation filters, applied to images on general graphs may require repeated application if the noise is not small enough. We formulate two acceleration techniques for the resulting iterations: the conjugate gradient method and Nesterov's acceleration. We numerically show the efficiency of the accelerated nonlinear filters for image denoising and demonstrate a 2-12 times speed-up, i.e., the acceleration techniques reduce the number of iterations required...

  4. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    Science.gov (United States)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution, and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images, processed with the TV method, shearlet denoising, and edge information recovered from the remnant of the TV method and processed with the ST, are fused adaptively. Images enhanced with the proposed method show improved visibility and detectability of details. For the proposed method, different weights are evaluated from the variance maps of each individual denoised image and the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.

  5. PERFORMANCE ANALYSIS OF IMAGE DENOISING IN LIFTING WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    G.M.Rajathi

    2012-11-01

    Full Text Available Images are contaminated by noise due to several unavoidable reasons; poor image sensors, imperfect instruments, problems with the data acquisition process, transmission errors and interfering natural phenomena are its main sources. Therefore, it is necessary to detect and remove the noise present in images. Preserving the details of an image while removing as much of the random noise as possible is the goal of image denoising approaches. The lifting wavelet transform (LWT) is based on the theory of the lazy wavelet and completely recoverable filter banks, improving the wavelet and its performance through the lifting process while maintaining the features of the wavelet, in contrast to the classical construction (DWT), which relies on the Fourier transform. In this paper we compare the image denoising performance of the LWT with that of the DWT. We demonstrate through simulations with images contaminated by white Gaussian noise that the LWT exhibits better performance in both PSNR (Peak Signal-to-Noise Ratio) and visual effect.
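
The lifting scheme underlying the LWT factors a wavelet transform into split, predict, and update steps computed in place. A one-level unnormalized Haar lifting sketch (function names are ours):

```python
def haar_lift_forward(x):
    """One level of the lifting scheme for the (unnormalized) Haar wavelet:
    split into even/odd samples, predict odds from evens, update evens."""
    even = x[0::2]
    odd = x[1::2]
    detail = [o - e for o, e in zip(odd, even)]           # predict step
    approx = [e + d / 2.0 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order, then merge even/odd."""
    even = [a - d / 2.0 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Here `approx` comes out as the pairwise means, and the transform is exactly invertible, which is the "completely recoverable filter bank" property the abstract mentions.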

  6. MEDICAL IMAGE DENOISE METHOD BASED ON CURVELET TRANSFORM: AN APPROACH FOR EDGE PRESERVATION

    OpenAIRE

    T. Janardhan Reddy

    2016-01-01

    In medical images, noise and artifacts are present due to the measurement techniques and instrumentation. Because of the noise present in medical images, physicians are unable to obtain the required information from them. This paper proposes a noise reduction method for both computed tomography (CT) and magnetic resonance imaging (MRI) which builds on a Curvelet transform based method. The performance is analysed by computing the Peak Signal to Noise Ratio (PSNR). The results show the proposed...

  7. Robust Image Denoising using a Virtual Flash Image for Monte Carlo Ray Tracing

    DEFF Research Database (Denmark)

    Moon, Bochang; Jun, Jong Yun; Lee, JongHyeob;

    2013-01-01

    We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate...... parameters. To highlight the benefits of our method, we apply our method to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state...... values. While denoising each pixel, we consider only homogeneous pixels—pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound of our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of denoising...

  8. Medical Image Denoising Using Bilateral Filter

    Directory of Open Access Journals (Sweden)

    Devanand Bhonsle

    2012-07-01

    Full Text Available Medical image processing is used for the diagnosis of diseases by physicians or radiologists. Noise is introduced into medical images by various factors in medical imaging. Noise corrupts the medical images and their quality degrades. This degradation includes suppression of edges and structural details, blurring of boundaries, etc. For diagnosing diseases, edge and detail preservation are very important. Medical image denoising can help physicians to diagnose diseases. Medical images include MRI, CT scan, x-ray and ultrasound images, among others. In this paper we implement bilateral filtering for medical image denoising. Its formulation and implementation are easy, but the performance of the bilateral filter depends upon its parameters. Therefore, to obtain optimum results, the parameters must be estimated. We have applied bilateral filtering to medical images corrupted by additive white Gaussian noise with different values of variance. It is a nonlinear and local technique that preserves features while smoothing images. It removes additive white Gaussian noise effectively, but its performance is poor in removing salt and pepper noise.

  9. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    Energy Technology Data Exchange (ETDEWEB)

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator etc. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
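
The Perona-Malik filter studied above diffuses intensity between neighboring pixels with a conductance that shrinks as the local gradient grows, so edges block diffusion. A minimal explicit-scheme sketch on a 2D grid; the exponential diffusivity and the parameter values are common illustrative choices, not necessarily those used in the paper:

```python
import math

def perona_malik(img, iters=10, kappa=0.2, dt=0.2):
    """Explicit Perona-Malik diffusion on a 2D grid (list of lists).
    Diffusivity g(s) = exp(-(s/kappa)^2) suppresses flow across strong edges."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    g = lambda s: math.exp(-(s / kappa) ** 2)
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(h):
            for j in range(w):
                flux = 0.0
                # sum conductance-weighted differences to the 4 neighbors
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        d = u[ni][nj] - u[i][j]
                        flux += g(abs(d)) * d
                nxt[i][j] = u[i][j] + dt * flux
        u = nxt
    return u
```

With `dt * 4 < 1` the explicit update is stable; a small `kappa` makes g(s) vanish at strong edges so they are preserved while flat regions are smoothed.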

  10. Application of multi-resolution analysis in sonar image denoising

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Sonar images have complex backgrounds, low contrast, and deteriorated edges; these characteristics make it difficult for researchers to process sonar objects. Multi-resolution analysis represents signals at different scales efficiently and is widely used in image processing. Wavelets are successful in handling point discontinuities in one dimension, but not in two dimensions. The finite ridgelet transform (FRIT) deals efficiently with singularities in higher dimensions. This paper presents three improved denoising approaches, based on the FRIT, for sonar image processing. By experiment and comparison with traditional methods, these approaches not only suppress artifacts but also achieve good edge preservation and SNR in sonar image denoising.

  11. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation

    Science.gov (United States)

    Bao, L. J.; Zhu, Y. M.; Liu, W. Y.; Croisille, P.; Pu, Z. B.; Robini, M.; Magnin, I. E.

    2009-03-01

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on nonstationary degree detector is then introduced to make the selection of atoms in the dictionary adapted to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated image and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures.

  12. A Fast Wavelet Multilevel Approach to Total Variation Image Denoising

    Directory of Open Access Journals (Sweden)

    Kossi Edoh

    2009-09-01

    Full Text Available In this paper we present an adaptive multilevel total variation (TV) method for image denoising which utilizes TV partial differential equation (PDE) models and exploits the multiresolution properties of wavelets. The adaptive multilevel TV method provides fast adaptive wavelet-based solvers for the TV model. Our approach employs a wavelet collocation method applied to the TV model using two-dimensional anisotropic tensor products of Daubechies wavelets. The algorithm inherently combines the denoising property of wavelet compression algorithms with that of the TV model, and produces results superior to each method when implemented alone. It exploits the edge preservation property of the TV model to reduce the oscillations that may be generated around the edges in wavelet compression. In contrast with previous work combining TV denoising with wavelet compression, the method presented in this paper treats the numerical solution in a novel way which decreases the computational cost associated with the solution of the TV model. We present a detailed description of our method and results which indicate that a combination of wavelet-based denoising techniques with the TV model produces superior results, for a fraction of the computational cost.
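
The TV model minimized above can be illustrated with plain gradient descent on a smoothed 1D total-variation energy. This does not reproduce the paper's wavelet collocation solver; `eps` smooths the non-differentiable absolute value and all parameter values are our own:

```python
import math

def tv_denoise_1d(y, lam=0.1, step=0.1, iters=500, eps=1e-2):
    """Gradient descent on the smoothed 1D TV model
    0.5*||u - y||^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps)."""
    n = len(y)
    u = list(y)
    for _ in range(iters):
        g = [u[i] - y[i] for i in range(n)]       # data-fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            w = d / math.sqrt(d * d + eps)        # derivative of the smoothed |d|
            g[i] -= lam * w
            g[i + 1] += lam * w
        for i in range(n):
            u[i] -= step * g[i]
    return u
```

Because the TV penalty only couples neighboring samples, its gradient contributions telescope to zero, so the mean intensity of the signal is preserved while small oscillations are flattened and large jumps (edges) survive.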

  13. A Comparative Study of Wavelet Thresholding for Image Denoising

    Directory of Open Access Journals (Sweden)

    Arun Dixit

    2014-11-01

    Full Text Available Image denoising using the wavelet transform has been successful because the wavelet transform generates a large number of small coefficients and a small number of large coefficients. The basic denoising algorithm using the wavelet transform consists of three steps: first, the wavelet transform of the noisy image is computed; next, thresholding is performed on the detail coefficients in order to remove noise; finally, the inverse wavelet transform of the modified coefficients is taken. This paper reviews state-of-the-art methods of image denoising using wavelet thresholding. An experimental analysis of the wavelet-based methods Visu Shrink, Sure Shrink, Bayes Shrink, Prob Shrink, Block Shrink and Neigh Shrink Sure is performed. These wavelet-based methods are also compared with spatial domain methods such as the median filter and the Wiener filter. Results are evaluated on the basis of Peak Signal to Noise Ratio and the visual quality of the images. In the experiments, the wavelet-based methods perform better than the spatial domain methods. In the wavelet domain, recent methods like Prob Shrink, Block Shrink and Neigh Shrink Sure performed better than the other wavelet-based methods.

  14. A scale-based forward-and-backward diffusion process for adaptive image enhancement and denoising

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2011-01-01

    Full Text Available Abstract This work presents a scale-based forward-and-backward diffusion (SFABD) scheme. The main idea of this scheme is to perform local adaptive diffusion using local scale information. To this end, we propose a diffusivity function based on the Minimum Reliable Scale (MRS) of Elder and Zucker (IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 699-716, 1998) to detect the details of local structures. The magnitude of the diffusion coefficient at each pixel is determined by taking into account the local property of the image through the scales. A scale-based variable weight is incorporated into the diffusivity function for balancing the forward and backward diffusion. Furthermore, as a numerical scheme, we propose a modification of the Perona-Malik scheme (IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629-639, 1990) by incorporating edge orientations. The article describes the main principles of our method and illustrates image enhancement results on a set of standard images as well as simulated medical images, together with qualitative and quantitative comparisons with a variety of anisotropic diffusion schemes.

  15. Image Denoising Using Hybrid Filter

    OpenAIRE

    Ms. Rekha Rani; Dr. Sukhbir Singh; Er.Amit Malik

    2012-01-01

    Image processing is a vast area of research. Various techniques are used to remove the noise present in images. This paper addresses obstacles affecting images during transmission. Salt & pepper noise, Gaussian noise, impulse noise and Rayleigh noise are the types of noise produced during transmission. Noise arises due to various factors such as bit error rate, speed and dead pixels. Images become blurred due to camera movement, object movement or displacement of pixels...

  16. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction from hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilize the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilize the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrate that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  17. Comparison of the Fuzzy-based Wavelet Shrinkage Image Denoising Techniques

    Directory of Open Access Journals (Sweden)

    Ali Adeli

    2012-03-01

    Full Text Available In this paper, a comparative study of the different membership functions used in fuzzy-based noise reduction methods is presented. This study focuses on three different membership functions: Gaussian, Sigmaf and Trapezoidal. The fuzzy wavelet shrinkage method is tested with the different membership functions in order to reduce different types of noise such as Gaussian, Salt & Pepper, Poisson and Speckle. The measure of comparison between the different membership functions is based on PSNR (Peak Signal to Noise Ratio). Experimental results show that, on some well-known images such as Lena, Barbara and Baboon, the Gaussian membership function can efficiently remove additive Gaussian and Poisson noise from grey-level images. Furthermore, on Speckle and Salt & Pepper noise, the Sigmaf membership function outperforms the Trapezoidal one.
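
The membership functions compared in the study map a normalized feature (e.g. a coefficient magnitude) to a degree of membership in [0, 1]. Generic Gaussian and trapezoidal forms look like this; the parameterizations are the standard textbook ones, not necessarily those used in the paper:

```python
import math

def gaussian_mf(x, c, s):
    """Gaussian membership function centered at c with width s."""
    return math.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```

In fuzzy shrinkage the membership value typically acts as a multiplicative shrinkage weight on a wavelet coefficient, so the shape of the function controls how aggressively ambiguous coefficients are attenuated.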

  18. PERFORMANCE ANALYSIS OF IMAGE DENOISING WITH WAVELET THRESHOLDING METHODS FOR DIFFERENT LEVELS OF DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Anutam

    2014-10-01

    Full Text Available Image denoising is an important part of diverse image processing and computer vision problems. An important property of a good image denoising model is that it should remove noise as completely as possible while preserving edges. One of the most powerful and promising approaches in this area is image denoising using the discrete wavelet transform (DWT). In this paper, a comparison of various wavelets at different decomposition levels has been done. As the number of levels increases, the Peak Signal to Noise Ratio (PSNR) of the image decreases, whereas the Mean Absolute Error (MAE) and Mean Square Error (MSE) increase. A comparison of filters and various wavelet-based methods has also been carried out to denoise the image. The simulation results reveal that the wavelet-based Bayes shrinkage method outperforms the other methods.
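
The PSNR, MSE and MAE figures quoted throughout these comparisons are defined as follows; a sketch over flattened pixel lists with an assumed 8-bit peak of 255:

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length pixel lists."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def mae(x, y):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(x, y)
    return float('inf') if m == 0 else 10.0 * math.log10(peak ** 2 / m)
```

Higher PSNR (lower MSE/MAE) means the denoised image is closer to the reference, which is exactly the trend the abstract reports reversing as the decomposition level grows.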

  19. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    OpenAIRE

    Aarti Kumari; Gaurav Pushkarna

    2015-01-01

    This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic operations in image processing: image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing. The main task is to make the image free from noise. Salt & pepper (impulse) noise and additive white Gaussian noise and blurredness are th...

  20. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    OpenAIRE

    Aarti; Gaurav Pushkarna

    2014-01-01

    This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic operations in image processing: image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing. The main task is to make the image free from noise. Salt & pepper (impulse) noise and additive white Gaussian noise and blurrednes...

  1. Image denoising via Bayesian estimation of local variance with Maxwell density prior

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-10-01

    The need for efficient image denoising methods has grown with the massive production of digital images and movies of all kinds. The distortion of images by additive white Gaussian noise (AWGN) is common during its processing and transmission. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. Indeed, one of the cruxes of the Bayesian image denoising algorithms is to estimate the local variance of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with Maxwell density prior for local observed variance and Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by analytical and computational tractability. The experimental results show that the proposed method yields good denoising results.

  2. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.
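
The common filtering framework described above first discards neighbors that deviate strongly from the (vector) median. For color pixels, the vector median is the sample minimizing the summed Euclidean distance to all other samples in the window; a small sketch (the deviation test and restoration step of the paper are omitted):

```python
import math

def vector_median(pixels):
    """Vector median of a list of color tuples: the sample that minimizes
    the summed Euclidean distance to all other samples in the window."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(pixels, key=lambda p: sum(dist(p, q) for q in pixels))
```

Unlike a channel-wise median, the vector median always returns one of the actual input pixels, which avoids introducing color combinations not present in the neighborhood.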

  3. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149

  4. A Denoising Filter Design based on No-Reference Image Content Metric

    Directory of Open Access Journals (Sweden)

    B.Padhmavathi

    2011-12-01

    Full Text Available Digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission and reproduction. Any of these may result in a degradation of their visual quality. Hence, there has been an increasing need to develop quality measurement techniques that can predict perceived image/video quality automatically. These methods are useful in various image/video processing applications such as compression, communication, printing, display, analysis, registration, restoration, and enhancement. Subjective quality metrics are considered to give the most reliable results, since it is the end user who judges the quality of the output in many applications. However, subjective quality metrics are costly, time-consuming and impractical for real-time implementation and system integration. On the other hand, objective metrics such as full-reference, reduced-reference, and no-reference metrics are most popular. This paper proposes an ideal no-reference measure that is useful for the parameter optimization problem and takes both noise and blur in the reconstructed image into account. The experimental results have shown that the technique works well with images with various kinds of noise.

  5. FETAL ULTRASOUND IMAGE DENOISING USING CURVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    J. Nithya

    2015-02-01

    Full Text Available The random speckle noise in acquired fetal ultrasound images is caused by the interference of reflected ultrasound wavefronts. The presence of speckle noise degrades the quality of the image and can even hide image details, which in turn affects image segmentation, feature extraction, recognition and, most importantly, disease diagnosis. The standardization of measurements from fetal ultrasound images helps physicians make a correct diagnosis, and accurate diagnosis is possible only when the image is noise free. Hence it is very important to filter out the speckle noise. It is proposed that the curvelet transform serves as a better edge-preserving filter compared to speckle-reducing anisotropic diffusion filters. The curvelet transform is designed to handle images containing curves using only a small number of coefficients. Hence this multiscale representation is applied to enhance the visual quality of the ultrasound images. The experimental results indicate that the proposed curvelet denoising suppresses the noise effectively, both quantitatively and visually, producing a high PSNR.
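
Curvelet shrinkage itself needs a dedicated library, but the speckle model being targeted is easy to sketch: speckle is commonly modeled as multiplicative, and a log transform makes it additive so that transform-domain shrinkage designed for additive noise applies. The Gaussian speckle distribution here is an illustrative assumption.

```python
import math
import random

rng = random.Random(2)

# Multiplicative speckle model: g = f * u, with u ~ N(1, 0.1), clipped positive.
speckle = [max(rng.gauss(1.0, 0.1), 1e-6) for _ in range(200)]
clean = [rng.uniform(10.0, 200.0) for _ in range(200)]
noisy = [f * u for f, u in zip(clean, speckle)]

# Taking logs turns the corruption additive: log g = log f + log u,
# so additive-noise shrinkage (wavelet/curvelet) can then be applied.
residual = [math.log(g) - math.log(f) for g, f in zip(noisy, clean)]
# `residual` equals log u, independent of the signal level.
```

This is why homomorphic (log-domain) processing is the usual entry point for speckle filtering pipelines.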

  6. Image denoising based on stationary ridgelet transform

    Institute of Scientific and Technical Information of China (English)

    徐巍; 孔建益; 陈东方

    2015-01-01

    Orthogonal wavelet transform is commonly used in the finite ridgelet transform (FRIT) to handle the point singularity in the Radon transform domain. However, due to the non-redundancy of the orthogonal wavelet transform, image denoising using FRIT causes the Gibbs phenomenon. In order to overcome these Gibbs interference fringes, this paper introduces a new concept of stationary ridgelet transform (SRT) based on FRIT and proposes an SRT-based image denoising algorithm. The key of this algorithm is to process the Radon transform coefficient matrices using the stationary wavelet transform instead of the orthogonal wavelet transform. Experimental results show that the proposed algorithm has better noise reduction performance than the FRIT-based image denoising method. Denoised images retain detailed edge feature information and good visual quality, and the ringing effect is suppressed.
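
The redundancy the abstract appeals to can be seen in a one-level undecimated (stationary) Haar sketch: unlike the orthogonal transform, there is no downsampling, so the number of coefficients per band equals the signal length and the representation is shift-invariant. This is a generic illustration with periodic boundary handling, not the paper's ridgelet pipeline, where the transform is applied to Radon coefficient vectors.

```python
def stationary_haar(x):
    """One level of an undecimated (stationary) Haar transform.
    No downsampling: both output bands have the same length as x,
    which is the redundancy that suppresses Gibbs-like artifacts."""
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2.0 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2.0 for i in range(n)]
    return approx, detail

def inverse_stationary_haar(approx, detail):
    """Perfect reconstruction: x[i] = approx[i] + detail[i]."""
    return [a + d for a, d in zip(approx, detail)]

x = [4.0, 6.0, 10.0, 12.0, 8.0, 2.0]
a, d = stationary_haar(x)
```

Thresholding `d` and reconstructing gives a translation-invariant denoiser at this level; real SWT implementations stack several such levels with upsampled filters.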

  7. DR Image Denoising Based on Laplace-Impact Mixture Model

    Institute of Scientific and Technical Information of China (English)

    丰国栋; 何祥彬; 周荷琴

    2009-01-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods such as Bayes least squares Gaussian scale mixture and the Laplace prior.

  8. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    Science.gov (United States)

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering have proposed completely new application fields for this well-established measurement technique when advanced single-trial processing is used. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if a priori knowledge about possible traces in ERP images is lacking. However, due to the use of event related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy-to-apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences. PMID:23060344
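
The self-similarity argument can be sketched with a minimal 1-D nonlocal means, a stand-in for rows of an ERP image: each sample is replaced by a weighted average of all samples whose surrounding patches look similar. The patch size and bandwidth h are illustrative, not the paper's settings.

```python
import math
import random

def nl_means_1d(signal, patch=2, h=0.4):
    """Minimal nonlocal means with periodic boundaries: weights are
    exp(-patch_distance^2 / h^2), so similar repeating structures
    average together regardless of where they occur."""
    n = len(signal)
    out = []
    for i in range(n):
        weights, acc = 0.0, 0.0
        for j in range(n):
            d2 = sum(
                (signal[(i + k) % n] - signal[(j + k) % n]) ** 2
                for k in range(-patch, patch + 1)
            )
            w = math.exp(-d2 / (h * h))
            weights += w
            acc += w * signal[j]
        out.append(acc / weights)
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

rng = random.Random(3)
clean = ([0.0] * 20 + [1.0] * 20) * 3          # highly self-similar signal
noisy = [v + rng.gauss(0.0, 0.1) for v in clean]
denoised = nl_means_1d(noisy)
```

Because the clean signal repeats, every sample has many near-identical patches to average with, which is exactly the property claimed for ERP images across trials.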

  9. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    International Nuclear Information System (INIS)

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in degradation of image quality, especially for lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied to projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter, applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different-size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from surrounding myocardial tissue, and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Under conditions without photon attenuation, denoising by the platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large defects, while for small defects, as well as under photon attenuation, all methods demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR for images reconstructed without attenuation correction, though not statistically significantly (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the

  10. An Image Denoising Method Based on Similar Image Retrieval and Dictionary Learning

    Institute of Scientific and Technical Information of China (English)

    胡占强; 耿龙

    2016-01-01

    In order to analyze and understand an image effectively, the image must first be denoised. This paper proposes a denoising method based on similar image retrieval and dictionary learning. First, to improve the accuracy of image retrieval, initial denoising is performed on the noisy image to raise its signal-to-noise ratio. Then, the initially denoised image is used for SIFT-feature-based retrieval in an image library, and the matched similar images serve as training samples for dictionary learning, improving the correlation between the dictionary and the noisy image. Finally, high-frequency compensation is applied. Denoising experiments on satellite images demonstrate the superiority of the proposed algorithm. Compared with traditional denoising methods, the proposed method not only obtains a better denoising effect but also effectively suppresses the loss of high-frequency information caused by the denoising process.

  11. Improved PDE image denoising method based on logarithmic image processing

    Institute of Scientific and Technical Information of China (English)

    郭茂银; 田有先

    2011-01-01

    Concerning the defects of the Logarithmic Image Processing-Total Variation (LIP-TV) denoising model, an improved Partial Differential Equation (PDE) image denoising method based on LIP was proposed. Based on LIP mathematical theory, a new LIP gradient operator was obtained by introducing four directional derivatives into the original one; it measures image information more comprehensively and objectively and can therefore control the diffusion process effectively. The fidelity coefficient was constructed by adopting a noise visibility function based on the structural characteristics of the human visual system, which further preserves edge details and avoids artificially estimating the noise level. Theoretical analysis and experimental results show that the improved method is superior in visual effect and objective quality: it better removes noise and preserves detailed edge features than the LIP-TV method.

  12. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    Full Text Available We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of the traditional total variation (TV), the Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the manually tuned parameters of the proposed algorithm are reduced to essentially only one.
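
A smoothed 1-D total variation model minimized by plain gradient descent illustrates the variational setup these models share; the paper's logarithm-based strictly convex modification is not reproduced, and all parameters below are illustrative.

```python
import math
import random

def tv_denoise_1d(f, lam=0.5, step=0.02, iters=500, eps=0.1):
    """Gradient descent on the smoothed 1-D TV energy
    E(u) = sum_i sqrt((u[i+1]-u[i])^2 + eps^2) + (lam/2) * sum_i (u[i]-f[i])^2,
    where eps > 0 makes the TV term differentiable."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [lam * (u[i] - f[i]) for i in range(n)]   # fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps * eps)
            grad[i] -= g       # dE/du[i]   contribution of pair (i, i+1)
            grad[i + 1] += g   # dE/du[i+1] contribution of pair (i, i+1)
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

rng = random.Random(4)
clean = [0.0] * 30 + [2.0] * 30                 # piecewise constant signal
noisy = [v + rng.gauss(0.0, 0.2) for v in clean]
denoised = tv_denoise_1d(noisy)
```

TV flattens the noise within each plateau while largely preserving the jump, which is the edge-preservation behavior the variational models above compete on.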

  13. Image Denoising And Enhancement Using Multiwavelet With Hard Threshold In Digital Mammographic Images

    Directory of Open Access Journals (Sweden)

    Kother Mohideen

    2011-01-01

    Full Text Available Breast cancer continues to be a significant public health problem in the world. Diagnostic mammography is the most effective technology for early detection of breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications, on the mammograms. Dense regions in digital mammographic images are usually noisy and have low contrast, and their visual screening is difficult for physicians. This paper describes a new multiwavelet method for noise suppression and enhancement in digital mammographic images. Initially the image is pre-processed to improve its local contrast and the discrimination of subtle details. Noise suppression and edge enhancement are then performed based on the multiwavelet transform. At each resolution, the coefficients associated with noise are modelled by generalized Laplacian random variables. Multiwavelets can satisfy both symmetry and asymmetry, which are very important characteristics in digital image processing. The denoising result depends on the degree of noise: generally its energy is distributed over the low frequency band, while both noise and details are distributed over the high frequency band, so hard thresholding is applied at different scales of the frequency sub-bands. This paper assesses the suitability of different wavelets and multiwavelets for image denoising in terms of PSNR, and finally compares wavelet and multiwavelet techniques to produce the best denoised mammographic image using an efficient multiwavelet algorithm with hard thresholding.
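
As a stand-in for the multiwavelet pipeline (whose vector-valued filter banks are beyond a short sketch), hard thresholding on a one-level scalar Haar transform shows the rule applied to detail coefficients; the signal values and threshold are illustrative.

```python
import math

def haar_level(x):
    """One level of the orthogonal Haar transform (even-length input)."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def hard_threshold(coeffs, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return [c if abs(c) > t else 0.0 for c in coeffs]

x = [2.0, 2.1, 2.0, 1.9, 8.0, 8.1, 8.0, 7.9]   # two noisy plateaus, one edge
a, d = haar_level(x)
d_hat = hard_threshold(d, t=0.5)
den = haar_inverse(a, d_hat)
```

The small detail coefficients (noise) are zeroed while the approximation band carries the plateaus, so the large jump between the two regions survives reconstruction.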

  14. Penalizing local correlations in the residual improves image denoising performance

    OpenAIRE

    Riot, Paul; Almansa, Andrès; Gousseau, Yann; Tupin, Florence

    2016-01-01

    In this work, we address the problem of denoising an image corrupted by additive white Gaussian noise. This hypothesis on the noise, despite being very common and justified as the result of a variance normalization step, is hardly used by classical denoising methods. Indeed, very few methods directly constrain the whiteness of the residual (the removed noise). We propose a new variational approach defining generic fidelity terms to locally control the residual dis...

  15. Fourth-order partial differential equations for effective image denoising

    Directory of Open Access Journals (Sweden)

    Seongjai Kim

    2009-04-01

    Full Text Available This article concerns mathematical image denoising methods incorporating fourth-order partial differential equations (PDEs). We introduce and analyze piecewise planarity conditions (PPCs) with which unconstrained fourth-order variational models in the continuum converge to a piecewise planar image. It has been observed that fourth-order variational models satisfying PPCs can restore better images than models without PPCs and second-order models. Numerical schemes are presented in detail and various examples in image denoising are provided to verify the claim.

  16. Medical image denoising using dual tree complex thresholding wavelet transform and Wiener filter

    Directory of Open Access Journals (Sweden)

    Hilal Naimi

    2015-01-01

    Full Text Available Image denoising is the process of removing noise from an image naturally corrupted by it. The wavelet method is one among various methods for recovering infinite dimensional objects like curves, densities, images, etc. Wavelet techniques are very effective at removing noise because of their ability to capture the energy of a signal in a few transform values, and they work by shrinking the wavelet coefficients in the wavelet domain. In this paper, we propose a denoising approach based on the dual tree complex wavelet transform and shrinkage combined with the Wiener filter technique (where either hard or soft thresholding operators of the dual tree complex wavelet transform are used for the denoising of medical images). The results show that images denoised using the DTCWT (Dual Tree Complex Wavelet Transform) with the Wiener filter achieve a better balance between smoothness and accuracy than the DWT and are less redundant than the SWT (Stationary Wavelet Transform). We used the SSIM (Structural Similarity Index Measure) along with the PSNR (Peak Signal to Noise Ratio) and the SSIM map to assess the quality of the denoised images.
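
A common coefficient-domain Wiener shrinkage rule, a plausible reading of the thresholding-plus-Wiener combination rather than the paper's exact scheme, attenuates each coefficient by an empirically estimated signal-to-(signal+noise) ratio.

```python
def wiener_shrink(coeffs, noise_var):
    """Empirical Wiener gain per coefficient: g = s2 / (s2 + noise_var),
    where s2 = max(c^2 - noise_var, 0) estimates the signal energy."""
    out = []
    for c in coeffs:
        s2 = max(c * c - noise_var, 0.0)
        out.append(c * s2 / (s2 + noise_var))
    return out

# Hypothetical detail coefficients: large values carry signal,
# small ones are assumed to be noise (noise variance = 1.0).
coeffs = [10.0, 0.5, -0.3, 6.0, 0.1]
shrunk = wiener_shrink(coeffs, noise_var=1.0)
```

Unlike hard thresholding, the gain varies continuously with coefficient magnitude: strong coefficients pass almost unchanged, weak ones are driven to zero.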

  17. Modified Method for Denoising the Ultrasound Images by Wavelet Thresholding

    Directory of Open Access Journals (Sweden)

    Alka Vishwa

    2012-06-01

    Full Text Available Medical practitioners are increasingly using digital images during disease diagnosis. Several state-of-the-art medical devices produce images of different organs, which are used during various stages of analysis; examples include MRI, CT, ultrasound and X-ray. In medical image processing, image denoising has become an essential step in diagnosis, as ultrasound images are normally affected by speckle noise. The noise has two negative outcomes: first, it degrades image quality; second, and more importantly, it obscures information required for accurate diagnosis. A balance between the preservation of useful diagnostic information and noise suppression must be struck in medical images. In general we rely on the intervention of an expert to control the quality of processed images. In certain cases, for instance in ultrasound images, the noise can suppress information which is valuable to the general practitioner. Consequently medical images can be very inconsistent, and it is crucial to operate case by case. This paper presents a wavelet-based thresholding scheme for noise suppression in ultrasound images and reviews adaptive and anisotropic diffusion techniques for speckle noise removal from different types of images, such as ultrasound.
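
Wavelet thresholding schemes of this kind typically use the soft rule on detail coefficients; a minimal sketch (the threshold value is illustrative):

```python
def soft_threshold(c, t):
    """Soft thresholding: zero coefficients below t in magnitude,
    shrink the rest toward zero by t."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

coeffs = [3.0, -0.4, 0.2, -5.0, 1.1]   # hypothetical detail coefficients
shrunk = [soft_threshold(c, 1.0) for c in coeffs]
```

Soft thresholding yields smoother reconstructions than the hard rule because surviving coefficients are also attenuated, avoiding discontinuities at the threshold.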

  18. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
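
The two classes of parameters the study tunes, a spatial stencil and intensity scaling, appear directly in a minimal 1-D bilateral filter (a plain CPU sketch, not the GPU implementation; all parameter values are illustrative).

```python
import math
import random
import statistics

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.5, radius=3):
    """1-D bilateral filter: weights combine spatial closeness (sigma_s,
    radius = stencil size) and intensity similarity (sigma_r), smoothing
    noise while leaving large edges intact."""
    n = len(signal)
    out = []
    for i in range(n):
        acc, norm = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2)
                         - ((signal[i] - signal[j]) ** 2) / (2.0 * sigma_r ** 2))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)
    return out

step_edge = [0.0] * 10 + [5.0] * 10            # clean step: should survive
smoothed_edge = bilateral_1d(step_edge)

rng = random.Random(5)
flat_noisy = [rng.gauss(0.0, 0.2) for _ in range(50)]  # flat: should smooth
flat_smoothed = bilateral_1d(flat_noisy)
```

A badly chosen `sigma_r` illustrates the study's warning about scaling parameters: too large and the filter degenerates to a Gaussian blur; too small and it stops denoising at all.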

  19. An Adaptive Total Generalized Variation Model with Augmented Lagrangian Method for Image Denoising

    Directory of Open Access Journals (Sweden)

    Chuan He

    2014-01-01

    Full Text Available We propose an adaptive total generalized variation (TGV based model, aiming at achieving a balance between edge preservation and region smoothness for image denoising. The variable splitting (VS and the classical augmented Lagrangian method (ALM are used to solve the proposed model. With the proposed adaptive model and ALM, the regularization parameter, which balances the data fidelity and the regularizer, is refreshed with a closed form in each iterate, and the image denoising can be accomplished without manual interference. Numerical results indicate that our method is effective in staircasing effect suppression and holds superiority over some other state-of-the-art methods both in quantitative and in qualitative assessment.

  20. Quaternion Wavelet Analysis and Application in Image Denoising

    Directory of Open Access Journals (Sweden)

    Ming Yin

    2012-01-01

    Full Text Available The quaternion wavelet transform is a new multiscale analysis tool. This paper first studies the standard orthogonal basis of the scale space and wavelet space of the quaternion wavelet transform in L2(R2), proves and presents the concepts of the quaternion wavelet's scaling basis function and wavelet basis function in the scale space L2(R2;H), and studies the structure of the quaternion wavelet transform. Finally, the quaternion wavelet transform is applied to image denoising: a generalized Gaussian distribution is used to model the magnitude distribution of the QWT coefficients, and under the Bayesian framework the original coefficients are recovered from the noisy wavelet coefficients, thereby achieving denoising. Experimental results show that our method is not only better than many current denoising methods in peak signal to noise ratio (PSNR), but also obtains a better visual effect.

  1. Image restoration using regularized inverse filtering and adaptive threshold wavelet denoising

    Directory of Open Access Journals (Sweden)

    Mr. Firas Ali

    2007-01-01

    Full Text Available Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular Wiener filtering actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise, and wavelet-based denoising provides a natural technique for this purpose. In this paper a new image restoration scheme is proposed, containing two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then fed to an adaptive-threshold wavelet denoising stage. The threshold estimate is chosen by analyzing statistical parameters of the wavelet sub-band coefficients such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands; then soft thresholding is used to remove the noisy coefficients, with the optimal threshold value fixed by this method. Experimental results on a test image show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To prove the efficiency of this method in image restoration, we compare it with other restoration methods such as the Wiener filter alone and the inverse filter.
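
One standard way to derive a threshold from sub-band statistics (a common choice, not necessarily the estimator used in this paper) is the Donoho-Johnstone universal threshold with a median-absolute-deviation noise estimate; the coefficient values are illustrative.

```python
import math
import statistics

def mad_sigma(detail):
    """Robust noise estimate from finest-scale detail coefficients:
    sigma ~= median(|d|) / 0.6745 (Donoho-Johnstone)."""
    med = statistics.median(abs(d) for d in detail)
    return med / 0.6745

def universal_threshold(detail):
    """t = sigma * sqrt(2 * ln N), applied to N detail coefficients."""
    return mad_sigma(detail) * math.sqrt(2.0 * math.log(len(detail)))

# Hypothetical finest-scale coefficients: mostly noise plus a few
# large signal-bearing values.
detail = [0.1, -0.2, 0.15, -0.05, 5.0, 0.12, -0.18, 4.0]
t = universal_threshold(detail)
kept = [d for d in detail if abs(d) > t]
```

The MAD makes the sigma estimate robust to the few large signal coefficients, so the resulting threshold sits above the noise floor but well below the signal.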

  2. A Novel Super Resolution Reconstruction of Low Resolution Images Progressively Using DCT and Zonal Filter Based Denoising

    Directory of Open Access Journals (Sweden)

    Liyakathunisa

    2011-02-01

    Full Text Available Due to factors like processing power limitations and channel capabilities, images are often down-sampled and transmitted at low bit rates, resulting in a low resolution compressed image. High resolution images can be reconstructed from several blurred, noisy and down-sampled low resolution images using a computational process known as super resolution reconstruction. Super-resolution is the process of combining multiple aliased low-quality images to produce a high resolution, high-quality image. The problem of recovering a high resolution image progressively from a sequence of low resolution compressed images is considered. In this paper we propose a novel DCT based progressive image display algorithm, stressing the encoding and decoding process. At the encoder we consider a set of low resolution images corrupted by additive white Gaussian noise and motion blur. The low resolution images are compressed using 8-by-8 block DCT and the noise is filtered using our proposed novel zonal filter. Multiframe fusion is performed in order to obtain a single noise-free image. At the decoder the image is reconstructed progressively by transmitting the coarser image first, followed by the detail image. Finally a super resolution image is reconstructed by applying our proposed novel adaptive interpolation technique. We have performed both objective and subjective analysis of the reconstructed image, and the resultant image has a better super resolution factor and higher ISNR and PSNR. A comparative study with Iterative Back Projection (IBP), Projection Onto Convex Sets (POCS), Papoulis-Gerchberg, and FFT based super resolution reconstruction shows that our method outperforms these previous contributions.

  3. A new study on mammographic image denoising using multiresolution techniques

    Science.gov (United States)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, the lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets; the emerging multiresolution methods are also examined. A new denoising method based on the dual tree contourlet transform (DCT) is proposed; this transform possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effect and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) values.

  4. Multi-level denoising and enhancement method based on wavelet transform for mine monitoring

    Institute of Scientific and Technical Information of China (English)

    Yanqin Zhao

    2013-01-01

    Because of the low illumination and the large amount of mixed noise in coal mine imagery, denoising with a single method usually cannot achieve good results, so a multi-level image denoising method based on inter-scale wavelet correlation is presented. Firstly, we use a directional median filter in the spatial domain to effectively reduce impulse noise, the main noise source in the mine. Secondly, we use a Wiener filter to reduce mainly the Gaussian noise, and finally we use a multi-wavelet transform in the transform domain to minimize the remaining noise of the low-light images. This multi-level method combines the benefits of spatial and transform domain denoising, effectively reducing impulse and Gaussian noise in a coal mine while retaining the detailed image characteristics of the underground scene, thereby improving the quality of low-light images with mixed noise.
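
The first, spatial-domain stage can be illustrated with a plain 1-D sliding median (the method itself uses a directional 2-D median); the sample values, with two injected impulses, are illustrative.

```python
import statistics

def median_filter_1d(signal, radius=1):
    """Sliding-window median: removes isolated impulse (salt-and-pepper
    style) samples while leaving slowly varying neighbourhoods intact."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(statistics.median(signal[lo:hi]))
    return out

corrupted = [10, 10, 255, 10, 11, 11, 0, 11, 12, 12]   # two impulses
filtered = median_filter_1d(corrupted)
```

Both impulses vanish because the median of each window ignores the single outlier, which is why a median stage precedes the Wiener and wavelet stages for mixed impulse/Gaussian noise.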

  5. Denoising of Medical Images Using Total Variational Method

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2012-05-01

    Full Text Available Feature extraction and object recognition from images acquired by various imaging modalities play a key role in diagnosing various diseases. These operations become difficult if the images are corrupted with noise, so the development of efficient noise-removal algorithms has become an important research area today. Developing image denoising algorithms is a difficult task because fine details in a medical image embedding diagnostic information should not be destroyed during noise removal. In this paper the total variational method, which has had success in computational fluid dynamics, is adopted to denoise medical images. We use the split Bregman method from optimisation theory to solve this non-linear convex optimisation problem. The present approach outperforms traditional spatial domain filtering methods in denoising medical images. The performance metric used to measure the quality of the denoised images is the PSNR (peak signal to noise ratio). The results show that these methods remove noise effectively while preserving edge information in the images.

  6. Denoising ECG signal based on ensemble empirical mode decomposition

    Science.gov (United States)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) has been used extensively for the detection of heart disease. Frequently the signal is corrupted by various kinds of noise such as muscle noise, electromyogram (EMG) interference, instrument noise, etc. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs). The statistically significant information content is built from an empirical energy model of the IMFs. Noisy ECG signals collected from clinical recordings are processed using the method. The results show that, in contrast to traditional methods, the novel denoising method achieves optimal denoising of the ECG signal.

  7. A Novel Super Resolution Algorithm Using Interpolation and LWT Based Denoising Method

    OpenAIRE

    Sapan Naik, Asst. Professor; Viral Borisagar, Asst. Professor

    2012-01-01

    Image capturing techniques have some limitations, and because of them we often get low resolution (LR) images. Super Resolution (SR) is a process by which we can generate a high resolution (HR) image from one or more LR images. Here we propose an SR algorithm which takes three shifted and noisy LR images and generates an HR image using a Lifting Wavelet Transform (LWT) based denoising method and an edge-guided interpolation algorithm based on directional filtering and data fusion.

  8. Denoising Algorithm Based on Generalized Fractional Integral Operator with Two Parameters

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2012-01-01

    Full Text Available In this paper, a novel digital image denoising algorithm called the generalized fractional integral filter is introduced based on the generalized Srivastava-Owa fractional integral operator. The structures of the n×n fractional masks of this algorithm are constructed. The denoising performance is measured through experiments assessing visual perception and PSNR values. The results demonstrate that, apart from enhancing the quality of the filtered image, the proposed algorithm also preserves the textures and edges present in the image. Experiments also show that the improvements achieved are competitive with the Gaussian smoothing filter.

  9. 3D Wavelet Sub-Bands Mixing for Image De-noising and Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Joyjit Patra

    2016-07-01

    Full Text Available A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D block-wise version of the Non Local (NL) Means filter with wavelet sub-band mixing. The proposed wavelet sub-band mixing is based on a multi-resolution approach for improving the quality of the image de-noising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet sub-band mixing outperforms the classical implementation of the NL-means filter in terms of de-noising quality and computation time. Comparison with well established methods, such as the nonlinear diffusion filter and total variation minimization, shows that the proposed NL-means filter produces better de-noising results. Finally, qualitative results on real data are presented. This paper also presents an algorithm for medical 3D image de-noising and segmentation using the redundant discrete wavelet transform. First, we present a two-stage de-noising algorithm using the image fusion concept. The algorithm starts with globally de-noising the brain images (3D volume) using Perona-Malik's algorithm and RDWT-based algorithms, followed by combining the outputs using an entropy based fusion approach. Next, a region segmentation algorithm is proposed using texture information and k-means clustering. The proposed algorithms are evaluated using brain 3D image/volume data. The results suggest that the proposed algorithms provide improved performance compared to existing algorithms.

  10. A Novel Super Resolution Reconstruction of Low Resolution Images Progressively Using DCT and Zonal Filter Based Denoising

    OpenAIRE

    Liyakathunisa; C.N. Ravi Kumar

    2011-01-01

    Due to factors such as processing power limitations and channel capacity, images are often down-sampled and transmitted at low bit rates, resulting in a low resolution compressed image. High resolution images can be reconstructed from several blurred, noisy and down-sampled low resolution images using a computational process known as super resolution reconstruction. Super-resolution is the process of combining multiple aliased low-quality images to produce a high resolution, high-quality im...

  11. GLMdenoise: a fast, automated technique for denoising task-based fMRI data

    Directory of Open Access Journals (Sweden)

    Kendrick Kay

    2013-12-01

    Full Text Available In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested.

  12. GLMdenoise: a fast, automated technique for denoising task-based fMRI data.

    Science.gov (United States)

    Kay, Kendrick N; Rokem, Ariel; Winawer, Jonathan; Dougherty, Robert F; Wandell, Brian A

    2013-01-01

    In task-based functional magnetic resonance imaging (fMRI), researchers seek to measure fMRI signals related to a given task or condition. In many circumstances, measuring this signal of interest is limited by noise. In this study, we present GLMdenoise, a technique that improves signal-to-noise ratio (SNR) by entering noise regressors into a general linear model (GLM) analysis of fMRI data. The noise regressors are derived by conducting an initial model fit to determine voxels unrelated to the experimental paradigm, performing principal components analysis (PCA) on the time-series of these voxels, and using cross-validation to select the optimal number of principal components to use as noise regressors. Due to the use of data resampling, GLMdenoise requires and is best suited for datasets involving multiple runs (where conditions repeat across runs). We show that GLMdenoise consistently improves cross-validation accuracy of GLM estimates on a variety of event-related experimental datasets and is accompanied by substantial gains in SNR. To promote practical application of methods, we provide MATLAB code implementing GLMdenoise. Furthermore, to help compare GLMdenoise to other denoising methods, we present the Denoise Benchmark (DNB), a public database and architecture for evaluating denoising methods. The DNB consists of the datasets described in this paper, a code framework that enables automatic evaluation of a denoising method, and implementations of several denoising methods, including GLMdenoise, the use of motion parameters as noise regressors, ICA-based denoising, and RETROICOR/RVHRCOR. Using the DNB, we find that GLMdenoise performs best out of all of the denoising methods we tested. PMID:24381539
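The noise-regressor idea behind GLMdenoise can be sketched with synthetic data. This is a toy NumPy illustration, not the published MATLAB implementation: the number of principal components is fixed at 2 rather than cross-validated, and the design matrix and noise structure are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
T, V = 120, 50                               # time points, voxels (toy sizes)
design = rng.random((T, 3))                  # hypothetical task regressors
shared_noise = rng.standard_normal((T, 2))   # structured noise common to voxels
data = (design @ rng.random((3, V))
        + shared_noise @ rng.standard_normal((2, V))
        + 0.5 * rng.standard_normal((T, V)))

# 1) initial GLM fit; per-voxel R^2 flags voxels unrelated to the task
beta, *_ = np.linalg.lstsq(design, data, rcond=None)
resid = data - design @ beta
r2 = 1.0 - resid.var(axis=0) / data.var(axis=0)
noise_voxels = data[:, r2 < np.median(r2)]

# 2) PCA on the time series of the noise voxels (the component count is
#    fixed here; in the real technique it is chosen by cross-validation)
centered = noise_voxels - noise_voxels.mean(axis=0)
u, s, _ = np.linalg.svd(centered, full_matrices=False)
pcs = u[:, :2]

# 3) refit the GLM with the PCs appended as noise regressors
aug = np.hstack([design, pcs])
beta2, *_ = np.linalg.lstsq(aug, data, rcond=None)
resid2 = data - aug @ beta2
```

The augmented model soaks up the shared noise, so the residual variance drops relative to the task-only fit.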

  13. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Variable noise appears in video images once the playback device is irradiated by γ-rays, degrading image clarity. In order to eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve the image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the scheme of video de-noising based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)

  14. A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resolution Images using Efficient Denoising and Adaptive Interpolation

    OpenAIRE

    Liyakathunisa; C.N. Ravi Kumar

    2010-01-01

    High resolution images can be reconstructed from several blurred, noisy and aliased low resolution images using a computational process known as super resolution reconstruction. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. In this paper we concentrate on a special case of the super resolution problem where the warp is composed of pure translation and rotation, the blur is space invariant and the noise is additive w...

  15. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    Science.gov (United States)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam computed tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish the selection principle for realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters presents an excellent denoising effect on realistic 3D CBCT images.

  16. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove the streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise covariances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using these images for training. The simulation results showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)

  17. TRANSLATION-INVARIANT BASED ADAPTIVE THRESHOLD DENOISING FOR IMPACT SIGNAL

    Institute of Scientific and Technical Information of China (English)

    Gai Guanghong; Qu Liangsheng

    2004-01-01

    A translation-invariant adaptive threshold denoising method for mechanical impact signals is proposed. Compared with traditional wavelet denoising methods, it suppresses pseudo-Gibbs phenomena in the neighborhood of signal discontinuities. To remedy the drawbacks of conventional threshold functions, a new improved threshold function is introduced, which offers advantages over existing functions. Moreover, based on the characteristics of the signal, an adaptive threshold selection procedure for impact signals is proposed. It is data-driven and level-dependent, and therefore more rational than other threshold estimation methods. The proposed method is compared to existing alternatives, and its superiority is demonstrated on simulated and real data examples.
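The translation-invariant idea can be illustrated with a one-level Haar transform and cycle spinning. This is a minimal 1D sketch with a plain soft threshold, not the paper's improved threshold function or its data-driven threshold selection; the threshold value and number of shifts are arbitrary.

```python
import numpy as np

def haar_soft(x, thr):
    """One-level Haar transform, soft-threshold the detail band, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def ti_denoise(x, thr, shifts=8):
    """Cycle spinning: denoise circular shifts of the signal and average
    the unshifted results, which suppresses the pseudo-Gibbs ringing
    that a single fixed transform grid produces at discontinuities."""
    acc = np.zeros_like(x)
    for s in range(shifts):
        acc += np.roll(haar_soft(np.roll(x, s), thr), -s)
    return acc / shifts

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(64), np.ones(64)])   # one discontinuity
noisy = clean + 0.2 * rng.standard_normal(128)
den = ti_denoise(noisy, thr=0.4)
```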

  18. Vibrator Data Denoising Based on Fractional Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zheng Jing

    2015-06-01

    Full Text Available In this paper, a novel data denoising method is proposed for seismic exploration with a vibrator, which produces a chirp-like signal. The method is based on the fractional wavelet transform (FRWT), which is similar to the fractional Fourier transform (FRFT). It can represent signals in the fractional domain, and it has the multi-resolution analysis advantages of the wavelet transform (WT). The fractional wavelet transform can process the reflected chirp signal as a pulsed seismic signal and decompose it into a multi-resolution domain for denoising. Compared with other methods, FRWT offers a wavelet transform for signal analysis in the time-fractional-frequency plane, which is suitable for processing vibratory seismic data. It can not only achieve better denoising performance, but also improve the quality and continuity of the reflection events.

  19. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  20. Image Pretreatment Tools I: Algorithms for Map Denoising and Background Subtraction Methods.

    Science.gov (United States)

    Cannistraci, Carlo Vittorio; Alessio, Massimo

    2016-01-01

    One of the critical steps in two-dimensional electrophoresis (2-DE) image pre-processing is denoising, which can strongly affect either spot detection or pixel-based methods. The Median Modified Wiener Filter (MMWF), a new nonlinear adaptive spatial filter, proved to be a good denoising approach to use in practice with 2-DE. MMWF is suitable for global denoising and for the simultaneous removal of spikes and Gaussian noise, its best setting being invariant to the type of noise. The second critical step arises because 2-DE gel images may contain high levels of background, generated by the laboratory experimental procedures, that must be subtracted for accurate measurements of the proteomic optical density signals. Here we discuss an efficient mathematical method for background estimation that is suitable to work even before 2-DE image spot detection, and is based on 3D mathematical morphology (3DMM) theory. PMID:26611410
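A minimal sketch of the median-modified Wiener idea, assuming the filter is the classical adaptive Wiener formula with the local mean replaced by the local median; the window size and noise-variance handling here are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def mmwf(img, win=3, noise_var=0.01):
    """Adaptive Wiener filter with the local mean replaced by the local
    median: flat regions collapse toward the (robust) median, while
    high-variance regions such as edges are left largely untouched."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    med = np.empty((rows, cols))
    var = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            w = p[i:i + win, j:j + win]
            med[i, j] = np.median(w)
            var[i, j] = w.var()
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return med + gain * (img - med)

rng = np.random.default_rng(3)
clean = np.zeros((16, 16))
clean[8:, :] = 1.0                        # horizontal step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = mmwf(noisy, noise_var=0.01)         # true noise variance supplied
```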

  1. Denoising of Medical Ultrasound Images Using Spatial Filtering and Multiscale Transforms

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2013-01-01

    Full Text Available Medical imaging has become an integral part of health care, where critical diagnoses such as blocks in the veins, plaques in the carotid arteries, minute fractures in the bones, and blood flow in the brain are carried out without opening the patient's body. There are various imaging modalities for different applications to observe the anatomical and physiological condition of the patient. These modalities introduce noise and artifacts during medical image acquisition. If the noise and artifacts are not minimised, diagnosis becomes difficult. One widely used non-invasive modality is ultrasound imaging, where there is no question of radiation, but which suffers from speckle noise produced by small particles in the tissues whose size is less than the wavelength of the ultrasound. The presence of speckle noise causes low-contrast images, and because of this, low-contrast lesions and tumours cannot be detected in the diagnostic phase. So there is a strong need for developing despeckling techniques to improve the quality of ultrasound images. In this paper we present denoising techniques for speckle reduction in ultrasound imaging. First we present the various spatial filters and their suitability for reducing speckle. Then we develop denoising methods using multiscale transforms such as the Discrete Wavelet Transform (DWT), Undecimated Discrete Wavelet Transform (UDWT), dual-tree complex wavelet transform (DTCDWT) and double-density dual-tree complex wavelet transform (DDDTCDWT). The performance of the filters was evaluated using various pixel-based, correlation-based, edge-based and Human Visual System (HVS)-based metrics, and we found that denoising using the double-density dual-tree complex discrete wavelet transform performed best, with the best edge-preserving features.

  2. APPLICATION OF SUBBAND ADAPTIVE THRESHOLDING TECHNIQUE WITH NEIGHBOURHOOD PIXEL FILTERING FOR DENOISING MRI IMAGES

    Directory of Open Access Journals (Sweden)

    S. KALAVATHY

    2012-02-01

    Full Text Available The de-noising of images naturally corrupted by noise is a classical problem in the field of signal and image processing. Image denoising has become an essential exercise in medical imaging, especially Magnetic Resonance Imaging (MRI). We propose a new method for MRI restoration, because MR magnitude images suffer from a contrast-reducing signal-dependent bias. Also, the noise is often assumed to be white; however, a widely used acquisition technique to decrease the acquisition time gives rise to correlated noise. A subband adaptive thresholding technique based on wavelet coefficients, along with a Neighbourhood Pixel Filtering Algorithm (NPFA), for noise suppression of Magnetic Resonance Images (MRI) is presented in this paper. A statistical model is proposed to estimate the noise variance for each coefficient based on the subband, using a Maximum Likelihood (ML) estimator or a Maximum a Posteriori (MAP) estimator. This model also describes a new method for noise suppression that fuses the wavelet denoising technique with an optimized thresholding function. This is achieved by including a multiplying factor (α) to make the threshold value dependent on the decomposition level. By finding the Neighbourhood Pixel Difference (NPD) and adding NPFA along with subband thresholding, the clarity of the image is improved. The filtered value is generated by minimizing the NPD and the Weighted Mean Square Error (WMSE) using the method of least squares. A reduction in noisy pixels is clearly observed on replacing the noisy value of the current pixel with the optimal weight, namely the NPFA filter solution. Due to this, the NPFA filter gains the effect of both a high-pass and a low-pass filter. Hence the proposed technique yields significantly superior image quality by preserving the edges and producing a better PSNR value. To confirm its efficiency, it is further compared with the median filter, Wiener filter, and the subband thresholding technique along with the NPFA filter.

  3. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may mix various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has been shown to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, called the sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal to noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the proposed method is employed.
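One way to build such a compromise between hard and soft thresholding is with a logistic weight. The sketch below illustrates the general sigmoid-thresholding idea, not necessarily the exact function proposed in the paper; the steepness parameter `k` is an assumption.

```python
import numpy as np

def sigmoid_threshold(w, T, k=10.0):
    """Logistic-weighted shrinkage: coefficients well below T are
    suppressed, coefficients well above T pass almost unchanged, and
    the transition around |w| = T is smooth, so there is no
    hard-threshold jump and little of soft thresholding's fixed bias."""
    return w / (1.0 + np.exp(-k * (np.abs(w) - T)))

w = np.linspace(-2.0, 2.0, 401)
y = sigmoid_threshold(w, T=1.0)
big = sigmoid_threshold(np.array([2.0]), T=1.0)[0]    # far above T
small = sigmoid_threshold(np.array([0.5]), T=1.0)[0]  # below T
```

Large coefficients are nearly preserved (`big` is close to 2.0) while small ones are driven toward zero, mimicking hard thresholding without its discontinuity.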

  4. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux , Bordeaux 33 33076 (France); Visvikis, Dimitris [INSERM, UMR1101, LaTIM, Université de Bretagne Occidentale, Brest 29 29609 (France); Fernandez, Philippe; Lamare, Frederic [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux, Bordeaux 33 33076 (France)

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a

  5. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    International Nuclear Information System (INIS)

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
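The Lucy–Richardson update at the heart of the method can be sketched in 1D. This is a standalone deconvolution demo on noiseless synthetic data, not the list-mode OSEM integration described above, and it omits the wavelet denoising step; the PSF and iteration count are arbitrary.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    """Lucy-Richardson deconvolution (1D): at each iteration, multiply
    the current estimate by the back-projected (mirrored-PSF) ratio of
    the observed data to the reblurred estimate."""
    est = np.full_like(observed, observed.mean())  # flat positive start
    psf_m = psf[::-1]                              # mirrored PSF
    for _ in range(iters):
        reblurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        est = est * np.convolve(ratio, psf_m, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])                  # symmetric blur kernel
clean = np.zeros(64)
clean[30] = 1.0                                    # a point source
blurred = np.convolve(clean, psf, mode="same")     # noiseless blurred data
rec = richardson_lucy(blurred, psf)
```

The multiplicative update keeps the estimate nonnegative, which is why the scheme suits PET-style count data.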

  6. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    Science.gov (United States)

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models still leaves room for improvement. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using a patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as state-of-the-art denoising algorithms. PMID:27045423

  7. Hybrid Denoising Method for Removal of Mixed Noise in Medical Images

    Directory of Open Access Journals (Sweden)

    J. Umamaheswari; Dr. G. Radhamani

    2012-05-01

    Full Text Available Nowadays, digital image acquisition and processing techniques play a very important role in medical diagnosis. During the acquisition process, there can be distortions in the images, which negatively affect diagnosis. In this paper a new technique based on the hybridization of wavelet filters and center weighted median filters is proposed for denoising images corrupted by mixed (Gaussian and impulse) noise. The model is tested on standard Digital Imaging and Communications in Medicine (DICOM) images and the performance is evaluated in terms of peak signal to noise ratio (PSNR), Mean Absolute Error (MAE), Universal Image Quality Index (UQI) and Evaluation Time (ET). Results show how the utilization of center weighted median filters in combination with wavelet thresholding filters affects performance on DICOM images. The proposed filter gives suitable results on the basis of PSNR, MAE, UQI and ET. In addition, the proposed filter gives nearly uniform and consistent results on all the test images.
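The center weighted median component of such a hybrid can be sketched directly. This is an illustrative implementation with an arbitrary window size and center weight, not the paper's full wavelet/CWM pipeline.

```python
import numpy as np

def cwm_filter(img, win=3, center_weight=3):
    """Center weighted median filter: the centre pixel is counted
    `center_weight` times when taking the median, which preserves fine
    detail better than a plain median while still removing impulses."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            vals = p[i:i + win, j:j + win].ravel().tolist()
            vals += [img[i, j]] * (center_weight - 1)  # extra centre copies
            out[i, j] = np.median(vals)
    return out

clean = np.zeros((8, 8))
clean[:, 4:] = 1.0                       # binary step image
noisy = clean.copy()
noisy[2, 2], noisy[5, 6] = 1.0, 0.0      # two isolated impulses
den = cwm_filter(noisy)
```

On this toy image the two impulses are removed and the step edge is left intact.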

  8. Motion-Blurred Image Denoising Algorithm Based on Wavelet Threshold Compression

    Institute of Scientific and Technical Information of China (English)

    李敏; 郭磊

    2015-01-01

    Motion-blurred images acquired under strong interference usually contain a large amount of noise, which makes detailed analysis difficult, so noise-reduction filtering is required. Traditional methods use wavelet-analysis-based detail filtering for noise reduction, but they perform poorly on the corner-offset regions of images captured in moving scenes. A motion-blurred image denoising algorithm based on wavelet threshold compression is proposed. A wavelet analysis model of the motion-blurred image is constructed, corner detection is performed on the motion-blurred image using the wavelet threshold compression method, and a layer-wise wavelet threshold compression library of corner points is formed to realize the noise-reduction filtering. Simulation results show that the algorithm can effectively denoise and filter the image and improve the image quality and peak signal-to-noise ratio.

  9. Semi-implicit Image Denoising Algorithm for Different Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Yuying Shi

    2013-04-01

    Full Text Available In this paper, the Crank-Nicolson semi-implicit difference scheme in matrix form is applied to discretize the Rudin-Osher-Fatemi model. We also consider different boundary conditions: Dirichlet, periodic, Neumann, antireflective, and mean boundary conditions. By comparing the experimental results of the Crank-Nicolson semi-implicit scheme and the explicit scheme under the proposed boundary conditions, we find that the semi-implicit scheme overcomes the instability and the large number of iterations that are shortcomings of the explicit discrete scheme, and that its recovery results are better. In addition, the antireflective and Neumann boundary conditions better maintain the continuity of the boundary in image denoising.
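The stability advantage of semi-implicit time stepping is easy to demonstrate on the diffusion core alone (the Rudin-Osher-Fatemi model adds a total-variation term, omitted here). The sketch below uses a backward-Euler-style solve with Neumann (no-flux) ends and a dense linear solve for clarity; a real implementation would use a tridiagonal solver, and the step size is chosen deliberately large.

```python
import numpy as np

def diffuse(u0, tau, steps, implicit=True):
    """1D diffusion flow u_t = u_xx with no-flux boundaries.
    Explicit Euler (u += tau*L u) is stable only for tau <= 0.5 on this
    grid; the semi-implicit solve (I - tau*L) u_new = u_old is
    unconditionally stable, so large denoising steps are allowed."""
    n = len(u0)
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = -1.0                 # Neumann boundaries
    A = np.eye(n) - tau * L
    u = u0.astype(float)
    for _ in range(steps):
        u = np.linalg.solve(A, u) if implicit else u + tau * (L @ u)
    return u

rng = np.random.default_rng(4)
noisy = np.concatenate([np.zeros(32), np.ones(32)]) \
        + 0.2 * rng.standard_normal(64)
tau = 5.0                                       # far beyond the explicit limit
stable = diffuse(noisy, tau, 20)                # semi-implicit: stays finite
unstable = diffuse(noisy, tau, 20, implicit=False)  # explicit: blows up
```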

  10. Underwater image denoising based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    丁伟; 王国宇; 王宝锋

    2013-01-01

    Images captured by underwater cameras are blurred due to the complex underwater environment. During data collection, a large amount of data containing no useful information may be acquired, and the noise effects can be severe. Compressed sensing theory states that a signal can be reconstructed with high probability from a low sampling rate. To study the effect of compressed sensing on underwater image denoising, the OMP, SP, and CoSaMP greedy reconstruction algorithms are used to reconstruct underwater images at different sampling rates. The experimental results show that choosing an appropriate sampling rate makes it possible both to reconstruct the image from a small amount of data and to suppress underwater noise, with the OMP algorithm giving the best results.
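The OMP reconstruction step mentioned above can be sketched as follows. This is a generic noiseless demo with a random Gaussian sensing matrix and an exactly sparse signal, not the underwater imaging pipeline; all sizes are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most
    correlated with the residual, then least-squares refit on the chosen
    support (the refit keeps the residual orthogonal to those columns,
    so no column is ever picked twice)."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
m, n, k = 40, 100, 4                           # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-2.0, -1.0, 1.0, 2.0], k)
x_hat = omp(A, A @ x_true, k)                  # noiseless measurements
```

With m well above k·log(n/k) and noiseless data, the sparse signal is recovered essentially exactly.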

  11. Multi-focus Image Fusion Using De-noising and Sharpness Criterion

    Directory of Open Access Journals (Sweden)

    Sukhdip Kaur

    2013-01-01

    Full Text Available The concept of multi-focus image fusion is used to combine multiple images, each with different objects in focus, in order to obtain all of the objects in focus and better information about a scene. The challenge is how to evaluate the information in the input images while obtaining better image quality. To solve this problem, a new criterion is proposed to give better image quality using PCA, by de-noising and a bilateral-gradient-based sharpness criterion evaluated from the gradient information of the images. The proposed method is then further exploited to perform weighted aggregation of multi-focus images. The experimental results show that the proposed method is better than other methods in terms of quality metrics such as mutual information, spatial frequency and average difference.

  12. Point Set Denoising Using Bootstrap-Based Radial Basis Function

    Science.gov (United States)

    Ramli, Ahmad; Abd. Majid, Ahmad

    2016-01-01

    This paper examines the application of bootstrap test error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. Noisy data are a common issue in point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thereby achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. PMID:27315105

  13. 3D Wavelet Sub-Bands Mixing for Image De-noising and Segmentation of Brain Images

    OpenAIRE

    Joyjit Patra; Himadri Nath Moulick; Shreyosree Mallick; Arun Kanti Manna

    2016-01-01

    A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D block-wise version of the Non Local (NL) Means filter with wavelet sub-bands mixing. The proposed wavelet sub-bands mixing is based on a multi-resolution approach for improving the quality of the image de-noising filter. Quantitative validation was carried out on synthetic datasets generated with the Brain W...

  14. Blind Analysis of CT Image Noise Using Residual Denoised Images

    CERN Document Server

    Roychowdhury, Sohini; Alessio, Adam

    2016-01-01

    CT protocol design and quality control would benefit from automated tools to estimate the quality of generated CT images. These tools could be used to identify erroneous CT acquisitions or refine protocols to achieve certain signal to noise characteristics. This paper investigates blind estimation methods to determine global signal strength and noise levels in chest CT images. Methods: We propose novel performance metrics corresponding to the accuracy of noise and signal estimation. We implement and evaluate the noise estimation performance of six spatial- and frequency-based methods derived from conventional image filtering algorithms. Algorithms were tested on patient data sets from whole-body repeat CT acquisitions performed with a higher and lower dose technique over the same scan region. Results: The proposed performance metrics can evaluate the relative tradeoff of filter parameters and noise estimation performance. The proposed automated methods tend to underestimate CT image noise at low-flux levels...
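A simple blind noise estimator of the residual-filter kind discussed above is Immerkaer's method: convolve with a second-difference mask that annihilates locally linear image content, so the residual is dominated by noise, whose scale can then be read off analytically. This sketch assumes additive Gaussian noise on a smooth image, which is a simplification of real CT noise.

```python
import numpy as np

def estimate_noise_std(img):
    """Immerkaer-style blind noise estimate. The separable mask
    [1,-2,1] x [1,-2,1] cancels anything locally linear, so the
    remaining residual is (mostly) filtered noise."""
    k = np.array([[1.0, -2.0, 1.0],
                  [-2.0, 4.0, -2.0],
                  [1.0, -2.0, 1.0]])
    r, c = img.shape
    resid = np.zeros((r - 2, c - 2))
    for i in range(3):
        for j in range(3):
            resid += k[i, j] * img[i:i + r - 2, j:j + c - 2]
    # for pure N(0, sigma^2) input, resid ~ N(0, 36*sigma^2) since the
    # mask's squared coefficients sum to 36, so E|resid| = 6*sigma*sqrt(2/pi)
    return np.sqrt(np.pi / 2.0) * np.abs(resid).mean() / 6.0

rng = np.random.default_rng(6)
smooth = np.outer(np.linspace(0, 1, 64), np.ones(64))  # noise-free ramp
sigma = 0.05
est = estimate_noise_std(smooth + sigma * rng.standard_normal((64, 64)))
```

On the ramp the mask response is exactly zero, so the estimate tracks the injected noise level closely.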

  15. Flotation froth image de-noising algorithm based on lifting improved directionlet transform

    Institute of Scientific and Technical Information of China (English)

    李建奇; 阳春华; 朱红求; 曹斌芳

    2013-01-01

    Froth images acquired during mineral flotation are easily corrupted by noise and suffer from blurred texture detail and low grey-level contrast, so a non-linear de-noising method for flotation froth images is proposed. A lifting improved directionlet transform is first constructed, which ensures shift invariance while the lifting scheme reduces the computational cost. The decomposition coefficients are then modelled: the low-frequency subband is processed with a multi-scale Retinex algorithm to improve luminance uniformity and overall contrast, while for each high-pass subband a neighbourhood model of the decomposition coefficients based on Gaussian scale mixtures is built and Bayes least-square (BLS) estimation is used for local de-noising. Finally, the proposed method is applied to a large number of real froth images. The results show that the method highlights the texture detail of froth images, improves their contrast, is superior in terms of PSNR and real-time performance, and lays a foundation for subsequent froth image segmentation and operating-condition recognition.

  16. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard;

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We...

  17. Electrocardiogram de-noising based on forward wavelet transform translation invariant application in bionic wavelet domain

    Indian Academy of Sciences (India)

    Mourad Talbi

    2014-08-01

    In this paper, we propose a new technique of Electrocardiogram (ECG) signal de-noising based on thresholding of the coefficients obtained from the application of the Forward Wavelet Transform Translation Invariant (FWT_TI) to each Bionic Wavelet coefficient. The de-noised ECG is obtained by applying the inverse BWT (BWT−1) to the de-noised bionic wavelet coefficients. For evaluating this new proposed de-noising technique, we compared it to a thresholding technique in the FWT_TI domain. Preliminary tests of the two de-noising techniques were conducted on a number of ECG signals taken from the MIT-BIH database. The obtained results from Signal to Noise Ratio (SNR) and Mean Square Error (MSE) computations showed that our proposed de-noising technique outperforms the second technique. We also compared the proposed technique to the thresholding technique in the bionic wavelet domain, using SNR improvement as the criterion; this evaluation showed that the proposed technique also outperforms de-noising based on bionic wavelet coefficient thresholding.
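The translation-invariant idea behind FWT_TI — denoise every circular shift of the signal and average the unshifted reconstructions (cycle spinning) — can be sketched with a single-level Haar transform and soft thresholding. This is a simplification: the paper applies the transform to bionic wavelet coefficients of an ECG, not to raw samples, and the threshold here is hypothetical:

```python
def haar_forward(x):
    # single-level Haar: approximation and detail coefficients (len(x) even)
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    x = []
    for ai, di in zip(a, d):
        x += [(ai + di) / 2 ** 0.5, (ai - di) / 2 ** 0.5]
    return x

def soft(coeffs, t):
    # soft thresholding: shrink magnitudes toward zero by t
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def ti_denoise(x, t):
    """Cycle spinning: denoise every circular shift, undo the shift, average."""
    n = len(x)
    acc = [0.0] * n
    for s in range(n):
        shifted = x[s:] + x[:s]          # shifted[i] == x[(i + s) % n]
        a, d = haar_forward(shifted)
        rec = haar_inverse(a, soft(d, t))
        for i in range(n):
            acc[(i + s) % n] += rec[i] / n
    return acc
```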

  18. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Full Text Available Abstract Background Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally-invasive surgeries of the kidney, such as Percutaneous Nephrolithotomy (PCNL) and Radio-Frequency Ablation (RFA) of kidney tumors, which refer to surgical procedures where access to a target inside the kidney is gained by a needle puncture of the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods In this paper, we proposed a novel framework for kidney segmentation of ultrasound (US) images combining nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. Firstly, a denoised US image was obtained by NLTV image denoising. Secondly, DRLSE was applied in the kidney segmentation to get a binary image, in which the black and white regions represented the kidney and the background, respectively. In the last stage, the shape prior was applied to obtain a shape with a smooth boundary from the kidney shape space, which was used to optimize the segmentation result of the second step. The alignment model was used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results We applied our segmentation framework on synthetic and real US images to demonstrate the better segmentation results of our method. The qualitative results show that the segmentations are much closer to the manual segmentations.
The sensitivity (SN), specificity (SP) and positive predictive value

  19. Projection Lens Wave-Front Aberration Measurement Method Based on Adaptive Aerial Image Denoising

    Institute of Scientific and Technical Information of China (English)

    杨济硕; 李思坤; 王向朝; 闫观勇; 徐东波

    2013-01-01

    A wave-front aberration measurement method for lithographic projection lenses based on adaptive aerial image denoising is proposed. A noise model of the aerial image and a model of the noise standard deviation are obtained by statistical analysis of measured aerial images. Using the noise standard deviation as a weighting factor, the aerial image is decomposed into principal components by a weighted least-square (WLSQ) method based on principal component analysis (PCA), and multivariate linear regression analysis is used for model generation. Because this decomposition denoises the aerial image adaptively and losslessly, more accurate principal component coefficients and Zernike coefficients are obtained. Simulations with the lithography simulator PROLITH show that, at the same noise level and with aberration amplitudes within 0.1λ, the proposed method (AMAI-WLSQ) improves accuracy by more than 30% compared with the aberration measurement technique based on principal component analysis of aerial images (AMAI-PCA). In experiments measuring Z8 adjustments on a lithographic test platform, the proposed method is also more accurate.

  20. A multi-scale non-local means algorithm for image de-noising

    Science.gov (United States)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is a difficult task to effectively remove noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to smooth the image intelligently while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploited the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
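The baseline NLM idea the paper builds on can be sketched in 1D: each sample is replaced by a weighted average of samples whose surrounding patches look similar, with similarity measured by patch distance. This is a minimal spatial-domain sketch with hypothetical parameters, not the paper's multi-scale variant:

```python
import math

def nlm_1d(x, patch=1, search=5, h=0.5):
    """1D non-local means: weight each candidate sample j by the similarity
    of the patches around i and j, then average. h controls filter strength."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between edge-clamped patches around i and j
            d2 = sum((x[min(max(i + k, 0), n - 1)] - x[min(max(j + k, 0), n - 1)]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

The multi-scale extension would apply such averaging to transform coefficients at each scale rather than to raw samples.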

  1. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal to noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform was proposed. A moving data window realizes real-time wavelet denoising, and a wavelet transform based on the lifting scheme reduces computational complexity. A hyperbolic threshold function and recursive threshold computation preserve the dynamic characteristics of the system and further improve real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs considerably better than the traditional soft-thresholding and hard-thresholding methods, and can therefore address more practical engineering problems.
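Semisoft (also called firm) shrinkage, the rule this record improves on, sits between hard and soft thresholding: coefficients below a lower threshold are killed, coefficients above an upper threshold are kept unchanged, and the region in between is interpolated linearly. A minimal version with two hypothetical thresholds t1 < t2:

```python
def semisoft(c, t1, t2):
    """Semisoft (firm) shrinkage of one coefficient:
    |c| <= t1 -> 0;  |c| >= t2 -> c unchanged;  in between -> linear ramp."""
    a = abs(c)
    if a <= t1:
        return 0.0
    if a >= t2:
        return c
    sign = 1.0 if c >= 0 else -1.0
    return sign * t2 * (a - t1) / (t2 - t1)
```

Unlike hard thresholding it is continuous, and unlike soft thresholding it does not bias large coefficients, which is why shrinkage variants of this shape are popular in wavelet denoising.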

  2. Extreme value analysis of frame coefficients and implications for image denoising

    CERN Document Server

    Haltmeier, Markus

    2012-01-01

    Denoising by frame thresholding is one of the most basic and efficient methods for recovering a discrete signal or image from data that are corrupted by additive Gaussian white noise. The basic idea is to select a frame of analyzing elements that separates the data into few large coefficients due to the signal and many small coefficients mainly due to the noise $\epsilon_n$. Removing all data coefficients being in magnitude below a certain threshold yields an approximation to the original signal. In order that a significant amount of the noise is removed and at the same time relevant information about the original image is kept, a precise understanding of the statistical properties of thresholding is important. For that purpose we compute, for the first time, the asymptotic distribution of $\max_{\omega \in \Omega_n} |\langle \phi_\omega^n, \epsilon_n \rangle|$ …

  3. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    Science.gov (United States)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications.
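The fusion step can be illustrated at the coefficient level: after both input images are transformed, a common rule keeps, at each position, the detail coefficient of larger magnitude, on the assumption that the stronger response carries the feature. This is a generic max-absolute sketch, not necessarily the exact rule of the paper's shift-invariant wavelet fusion:

```python
def fuse_max_abs(coeffs_a, coeffs_b):
    """Max-absolute fusion of detail coefficients from two co-registered
    images: at each position keep whichever coefficient has larger magnitude."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]
```

In the paper's three-step scheme this rule would be applied once to fuse AC with DPC coefficients, and again to fuse the result with DFC coefficients, before the inverse transform.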

  4. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry

    International Nuclear Information System (INIS)

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot–Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications. (paper)

  5. Blind Analysis of CT Image Noise Using Residual Denoised Images

    OpenAIRE

    Roychowdhury, Sohini; Hollraft, Nathan; Alessio, Adam

    2016-01-01

    CT protocol design and quality control would benefit from automated tools to estimate the quality of generated CT images. These tools could be used to identify erroneous CT acquisitions or refine protocols to achieve certain signal to noise characteristics. This paper investigates blind estimation methods to determine global signal strength and noise levels in chest CT images. Methods: We propose novel performance metrics corresponding to the accuracy of noise and signal estimation. We implem...

  6. Experimental wavelet based denoising for indoor infrared wireless communications.

    Science.gov (United States)

    Rajbhandari, Sujan; Ghassemlooy, Zabih; Angelova, Maia

    2013-06-01

    This paper reports the experimental wavelet denoising techniques carried out for the first time for a number of modulation schemes for indoor optical wireless communications in the presence of fluorescent light interference. The experimental results are verified using computer simulations, clearly illustrating the advantage of the wavelet denoising technique in comparison to the high pass filtering for all baseband modulation schemes. PMID:23736631

  7. Improvement of HMT based on uniform discrete curvelet coefficients and application in image denoising

    Institute of Scientific and Technical Information of China (English)

    杨兴明; 陈海燕; 王刚; 王彬彬; 赵银平

    2013-01-01

    Based on the statistical properties of the coefficients of the Uniform Discrete Curvelet Transform (UDCT), and on an analysis of mutual information as a measure of coefficient correlation, a Hidden Markov Tree (HMT) model is chosen for the coefficients, with the EM algorithm used for training. To address the excessive training time, a new method is proposed for initializing the variance and state transition matrix of the algorithm, based on the decay of the coefficients and their persistence across scales. Experimental results show that, with peak signal-to-noise ratio and similarity used to measure denoising quality, the proposed algorithm achieves better real-time performance and denoising results than the Wavelet HMT, Contourlet HMT, and plain UDCT HMT algorithms under the same conditions.

  8. SET OPERATOR-BASED METHOD OF DENOISING MEDICAL VOLUME DATA

    Institute of Scientific and Technical Information of China (English)

    程兵; 郑南宁; 袁泽剑

    2002-01-01

    Objective To investigate impulsive noise suppression of medical volume data. Methods The volume data are represented as level sets, and a special set operator is defined and applied to filter them. The small connected components, which are likely to be produced by impulsive noise, are eliminated after the filtering process. A fast algorithm that uses a heap data structure is also designed. Results Compared with traditional linear filters such as a Gaussian filter, this method preserves the fine structural features of the medical volume data while removing noise, and the fast algorithm reduces memory consumption and improves computing efficiency. The experimental results illustrate the efficiency of the method and the fast algorithm. Conclusion The set operator-based method shows outstanding denoising properties in our experiments, especially for impulsive noise. The method has a wide variety of applications in the areas of volume visualization and high dimensional data processing.

  9. Image denoising algorithm of refuge chamber by combining wavelet transform and bilateral filtering

    Institute of Scientific and Technical Information of China (English)

    Zhang Weipeng

    2013-01-01

    In order to better identify infrared images of a refuge chamber, reduce image noise and retain more image detail, we propose a method combining the two-dimensional discrete wavelet transform with bilateral denoising. First, the wavelet transform is adopted to decompose the image of the refuge chamber, leaving the low-frequency component unchanged. Then, the three high-frequency components are treated by bilateral filtering, and the image is reconstructed. The result shows that this combination of bilateral filtering and wavelet transform retains the details of the image better than using either method alone, while providing a better visual effect. It is useful for perfecting the emergency refuge systems of coal mines.
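The bilateral filter applied to the high-frequency subbands weights each neighbour by both spatial closeness and intensity similarity, so flat regions are smoothed while edges survive. A 1D sketch with hypothetical parameters (the paper filters 2D wavelet subbands, but the weighting is the same idea):

```python
import math

def bilateral_1d(x, radius=2, sigma_s=1.0, sigma_r=0.5):
    """1D bilateral filter: Gaussian weight in position (sigma_s) times
    Gaussian weight in intensity difference (sigma_r)."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out
```

With a small sigma_r, samples across a large step get near-zero weight, which is why the step below is preserved almost exactly.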

  10. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    Science.gov (United States)

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in the West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depend on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), and they are therefore not suited for denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet transform based denoising algorithms is introduced. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478
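The property the scheme exploits — harmonic signals have a slowly decaying autocorrelation while broadband noise decorrelates quickly — is easy to check numerically. A minimal normalized autocorrelation (the test signal below is illustrative, not a manatee call):

```python
def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a 1D signal for lags 0..max_lag.
    Value 1.0 at lag 0; periodic signals stay high at multiples of the period."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    return [sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k)) / var
            for k in range(max_lag + 1)]
```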

  11. Biomedical image and signal de-noising using dual tree complex wavelet transform

    Science.gov (United States)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal to noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a large scope of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.
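Thresholding the magnitude of complex coefficients while preserving their phase — the operation the final part of the record describes — can be sketched independently of any particular complex transform. The soft rule below and its threshold are a generic sketch, not the paper's exact estimator:

```python
def threshold_complex(coeffs, t):
    """Soft-threshold complex coefficients on magnitude: shrink |c| by t,
    keep the phase, and zero out coefficients with |c| <= t."""
    out = []
    for c in coeffs:
        m = abs(c)
        out.append(0j if m <= t else c * (m - t) / m)
    return out
```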

  12. Chebyshev and Conjugate Gradient Filters for Graph Image Denoising

    OpenAIRE

    Tian, Dong; Mansour, Hassan; Knyazev, Andrew; Vetro, Anthony

    2015-01-01

    In 3D image/video acquisition, different views are often captured with varying noise levels across the views. In this paper, we propose a graph-based image enhancement technique that uses a higher quality view to enhance a degraded view. A depth map is utilized as auxiliary information to match the perspectives of the two views. Our method performs graph-based filtering of the noisy image by directly computing a projection of the image to be filtered onto a lower dimensional Krylov subspace o...
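Graph-based filtering of this kind amounts to applying a polynomial of the graph Laplacian to the noisy signal. The sketch below is a far simpler degree-1 polynomial filter applied repeatedly, not the paper's Chebyshev or conjugate-gradient (Krylov subspace) filters, and the toy graph is hypothetical:

```python
def graph_lowpass(adj, x, alpha=0.2, iters=10):
    """Repeated degree-1 graph filter y <- y - alpha * L y, where
    L = D - A is the combinatorial Laplacian of the adjacency matrix adj.
    Repeated application attenuates high graph frequencies (a low-pass)."""
    n = len(x)
    deg = [sum(row) for row in adj]
    y = list(x)
    for _ in range(iters):
        ly = [deg[i] * y[i] - sum(adj[i][j] * y[j] for j in range(n))
              for i in range(n)]
        y = [y[i] - alpha * ly[i] for i in range(n)]
    return y
```

Stability requires alpha times the largest Laplacian eigenvalue to be below 2; Chebyshev and CG constructions reach a target spectral response in far fewer matrix-vector products than this naive iteration.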

  13. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  14. Adaptive nonlocal means filtering based on local noise level for CT denoising

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando, E-mail: manduca.armando@mayo.edu [Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, Minnesota 55905 (United States); Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  15. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    International Nuclear Information System (INIS)

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  16. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Othman, Ahmed E. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Brockmann, Carolin; Afat, Saif; Pjontek, Rastislav; Nikoubashman, Omid; Brockmann, Marc A.; Wiesmann, Martin [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Yang, Zepa; Kim, Changwon [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Nikolaou, Konstantin [Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Kim, Jong Hyo [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Advanced Institute of Convergence Technology, Center for Medical-IT Convergence Technology Research, Suwon (Korea, Republic of); Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2016-01-15

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  17. R-L Method and BLS-GSM Denoising for Penumbra Image Reconstruction

    International Nuclear Information System (INIS)

    When the neutron yield is very low, reconstruction of a coded penumbra image is rather difficult. In this paper, low-yield (10⁹) 14 MeV neutron penumbra imaging was simulated by the Monte Carlo method. The Richardson-Lucy (R-L) iteration method was proposed, incorporated with Bayesian least squares-Gaussian scale mixture (BLS-GSM) wavelet denoising, for the simulated image. The optimal number of R-L iterations was determined through a large number of tests. The results show that, compared with the Wiener method and median-filter denoising, this method is better at restraining background noise, the correlation coefficient Rsr between the reconstructed and the real images is larger, and the reconstruction result is better. (fusion engineering)
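    The R-L iteration at the heart of this approach can be sketched in a few lines. This is a minimal 1-D NumPy illustration, not the authors' BLS-GSM pipeline; the spike signal and PSF are invented for the demo:

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """R-L multiplicative update: estimate <- estimate * corr(psf, blurred / conv(psf, estimate))."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1]          # correlation = convolution with flipped PSF
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic example: two spikes blurred by a normalized 5-tap PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=100)
```

    The multiplicative form keeps the estimate nonnegative, which suits photon/neutron counting data; too many iterations amplify noise, which is why the iteration count is tuned (here by testing, in the paper by systematic trials).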

  18. DART: Denoising Algorithm based on Relevance network Topology improves molecular pathway activity inference

    Directory of Open Access Journals (Sweden)

    Purushotham Arnie

    2011-10-01

    Full Text Available Abstract Background Inferring molecular pathway activity is an important step towards reducing the complexity of genomic data, understanding the heterogeneity in clinical outcome, and obtaining molecular correlates of cancer imaging traits. Increasingly, approaches towards pathway activity inference combine molecular profiles (e.g., gene or protein expression) with independent and highly curated structural interaction data (e.g., protein interaction networks) or, more generally, with prior-knowledge pathway databases. However, it is unclear how best to use the pathway knowledge information in the context of the molecular profiles of any given study. Results We present an algorithm called DART (Denoising Algorithm based on Relevance network Topology) which filters out noise before estimating pathway activity. Using simulated and real multidimensional cancer genomic data, and by comparing DART to other algorithms which do not assess the relevance of the prior pathway information, we demonstrate that substantial improvements in pathway activity predictions can be made if prior pathway information is denoised before predictions are made. We also show that genes encoding hubs in expression correlation networks represent more reliable markers of pathway activity. Using the Netpath resource of signalling pathways in the context of breast cancer gene expression data, we further demonstrate that DART leads to more robust inferences about pathway activity correlations. Finally, we show that DART identifies a hypothesized association between oestrogen signalling and mammographic density in ER+ breast cancer. Conclusions Evaluating the consistency of the prior information of pathway databases in molecular tumour profiles may substantially improve the subsequent inference of pathway activity in clinical tumour specimens. This de-noising strategy should be incorporated in approaches which attempt to infer pathway activity from prior pathway models.

  19. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  20. Denoising method of heart sound signals based on self-construct heart sound wavelet

    Directory of Open Access Journals (Sweden)

    Xiefeng Cheng

    2014-08-01

    Full Text Available In the field of heart sound signal denoising, the wavelet transform has become one of the most effective tools. The wavelet basis is conventionally selected from the well-known orthogonal Daubechies (db) series or the biorthogonal (bior) series. In this paper we present a self-constructed wavelet basis that is suitable for heart sound denoising, and we analyze its construction method and features in detail according to the characteristics of heart sounds and the evaluation criteria of signal denoising. The experimental results show that the heart sound wavelet can effectively filter out the noise in heart sound signals while preserving the main characteristics of the signal. Compared with the traditional wavelets, it achieves a higher signal-to-noise ratio, a lower mean square error and a better denoising effect.

  1. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    OpenAIRE

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; Van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of commonly...

  2. Developing de-noising methods for ultrasonic NDT based on wavelet transform and adaptive filtering

    International Nuclear Information System (INIS)

    Digital signal processing methods based on the advanced wavelet transform and adaptive filtering were developed to deal with the problem of material grain noise in ultrasonic Non-Destructive Testing applications. The developed methods were implemented in the LabVIEW (Laboratory Virtual Instrument Engineering Workbench) programming environment. The experimental ultrasonic signals were obtained by inspecting stainless steel blocks with side-drilled holes, and carbon steel welded plates containing three types of welding flaws: root crack, centerline crack and slag inclusion. The simulations were carried out using the CIVA Non-Destructive Evaluation modeling software. The wavelet transform has introduced innovative changes in different fields of science and engineering; one of its important applications is the de-noising of signals and images. Wavelet packets, a generalization of the discrete wavelet transform, are an efficient de-noising tool and have been used for de-noising ultrasonic Non-Destructive Testing signals. The first part of this research proposes the use of the undecimated wavelet transform in implementing wavelet packets, to overcome the shift variance encountered in the discrete wavelet transform. Simulations and experiments were carried out on flaw echo signals contaminated with material grain noise; various wavelet processing parameters were investigated, including the number of decomposition levels, the analyzing wavelets, and the threshold setting. The results showed a superior de-noising effect of the developed method over the conventional one. In the second part of the research, improvements are proposed to the multi-stage adaptive filter, which has been reported in a previous study as an advanced adaptive noise cancellation system for ultrasonic Non-Destructive Testing applications. The multi-stage adaptive filter is limited by the slow convergence speed of the least-mean-squares algorithm as well as
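    The shift-variance problem that motivates the undecimated transform can be illustrated with a one-level undecimated Haar decomposition plus soft thresholding. This is a NumPy toy on a 1-D signal, not the paper's multi-level wavelet-packet pipeline; the sine signal and noise level are invented:

```python
import numpy as np

def swt_haar_denoise(x, sigma):
    """One-level undecimated (stationary) Haar transform with soft-thresholded
    details. The inverse averages the two shifted reconstructions, which is
    exactly what removes the shift-variance of the decimated transform."""
    s2 = np.sqrt(2.0)
    approx = (x + np.roll(x, -1)) / s2
    detail = (x - np.roll(x, -1)) / s2
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))               # universal threshold
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)  # soft shrink
    rec_a = (approx + detail) / s2            # reconstruction at one alignment
    rec_b = np.roll(approx - detail, 1) / s2  # reconstruction at the other
    return 0.5 * (rec_a + rec_b)

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 512))
sigma = 0.3
noisy = clean + rng.normal(0.0, sigma, 512)
denoised = swt_haar_denoise(noisy, sigma)
```

    Without thresholding the two averaged reconstructions reproduce the input exactly, so any residual distortion comes only from the shrinkage, not from the transform.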

  3. BL_Wiener Denoising Method for Removal of Speckle Noise in Ultrasound Image

    Directory of Open Access Journals (Sweden)

    Suhaila Sari

    2015-02-01

    Full Text Available Medical imaging techniques are extremely important tools in medical diagnosis. One of these important imaging techniques is ultrasound imaging. However, during the ultrasound image acquisition process, image quality can be degraded by speckle noise. Enhancing the quality of images from 2D ultrasound imaging machines is expected to provide medical practitioners with more reliable images for patient diagnosis. However, developing a denoising technique that removes noise effectively without eliminating the image's edges and details remains an open issue. The objective of this paper is to develop a new method capable of removing speckle noise from ultrasound images effectively. We therefore propose the combined Bilateral and Adaptive Wiener Filter (BL_Wiener) denoising method for images corrupted by speckle noise. The Bilateral Filter is a non-linear, edge-preserving noise-removal filter, while the Adaptive Wiener Filter balances the tradeoff between inverse filtering and noise smoothing, removing additive noise while inverting blurring. Our simulation results show that the BL_Wiener method improves PSNR by 0.89 dB to 3.35 dB over conventional methods for test images at different noise levels.
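    The Adaptive Wiener half of the proposed pair can be sketched as a local-statistics (Lee-type) filter. This is a generic NumPy illustration with additive Gaussian noise on an invented test image, not the authors' BL_Wiener implementation for multiplicative speckle:

```python
import numpy as np

def adaptive_wiener(img, win=5, noise_var=None):
    """Local-statistics Wiener (Lee) filter:
    out = mu + max(var - nv, 0) / max(var, nv) * (img - mu),
    i.e. smooth hard in flat regions, pass the signal through near edges."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mu = windows.mean(axis=(-1, -2))        # local mean
    var = windows.var(axis=(-1, -2))        # local variance
    if noise_var is None:
        noise_var = np.median(var)          # crude global noise estimate
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, noise_var)
    return mu + gain * (img - mu)

rng = np.random.default_rng(5)
clean = np.zeros((40, 40)); clean[:, 20:] = 1.0   # one vertical edge
noisy = clean + rng.normal(0, 0.2, clean.shape)
denoised = adaptive_wiener(noisy, win=5, noise_var=0.04)
```

    In flat regions the local variance approaches the noise variance, the gain goes to zero, and the output collapses to the local mean; near edges the variance dominates and the pixel passes through, which is the edge-preservation behavior the paper pairs with the Bilateral Filter.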

  4. Adaptive Threshold Image Denoising Algorithm Based on Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    李俊秀; 姜三平

    2014-01-01

    Principal component analysis (PCA) is a multivariate statistical method that reduces multiple variables to a few important ones through a linear transformation. Exploiting the local similarity of images, a new and effective denoising algorithm is put forward. Similar blocks are found with a block-matching algorithm and used as training samples, the main signal features are extracted by PCA, and then a linear adaptive threshold, constructed from the minimum mean square error criterion of statistical theory, is applied to each block of the noisy image. The experimental results show that the method can effectively remove Gaussian white noise from an image while preserving edges and other detail information.
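    The PCA shrinkage step can be sketched as follows. For brevity this toy operates on non-overlapping patches rather than running the block-matching search described above, and a Wiener-style per-component shrinkage stands in for the paper's MMSE-derived linear adaptive threshold; image and parameters are invented:

```python
import numpy as np

def pca_denoise_patches(img, patch=4, noise_var=0.01):
    """Project image patches onto their principal components and shrink each
    component by (eigenvalue - noise variance) / eigenvalue, i.e. keep
    components that carry signal energy and suppress noise-only ones."""
    h, w = img.shape
    h -= h % patch; w -= w % patch
    blocks = (img[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .swapaxes(1, 2)
              .reshape(-1, patch * patch))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    cov = centered.T @ centered / len(blocks)
    eigval, eigvec = np.linalg.eigh(cov)
    coeffs = centered @ eigvec
    shrink = np.maximum(eigval - noise_var, 0.0) / np.maximum(eigval, 1e-12)
    denoised_blocks = (coeffs * shrink) @ eigvec.T + mean
    out = img.copy()
    out[:h, :w] = (denoised_blocks
                   .reshape(h // patch, w // patch, patch, patch)
                   .swapaxes(1, 2)
                   .reshape(h, w))
    return out

rng = np.random.default_rng(7)
clean = np.outer(np.linspace(0, 1, 40), np.linspace(0, 1, 40))  # smooth ramp
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = pca_denoise_patches(noisy, patch=4, noise_var=0.01)
```

    Grouping genuinely similar blocks (as the paper does) concentrates the signal into even fewer components, which is why block matching improves on the naive whole-image patch ensemble used here.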

  5. Denoising of Nuclear Medicine images using Wavelet Transform

    International Nuclear Information System (INIS)

    Diagnosis using images is widely used in Nuclear Medicine. However, in the case of planar images, problems can appear related to the low detectability of small lesions due to noise contamination. This phenomenon is aggravated by the impossibility of increasing the radiopharmaceutical dose or the exposure time above the established levels. An algorithm to reduce random Gaussian noise in planar images using the Wavelet Transform is described in this paper. Results are compared among a set of filters designed by this procedure, in order to select those that offer the best images according to quantitative image-quality metrics. (au)

  6. A NEW DE-NOISING METHOD BASED ON 3-BAND WAVELET AND NONPARAMETRIC ADAPTIVE ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Li Li; Peng Yuhua; Yang Mingqiang; Xue Peijun

    2007-01-01

    Wavelet de-noising is well known as an important method of signal de-noising. Recently, most research efforts on wavelet de-noising have focused on how to select the threshold, where Donoho's method is widely applied. Compared with the traditional 2-band wavelet, the 3-band wavelet has advantages in many aspects. Based on this theory, an adaptive signal de-noising method in the 3-band wavelet domain, built on nonparametric adaptive estimation, is proposed. The experimental results show that in the 3-band wavelet domain the proposed method outperforms Donoho's method in preserving detail and improving the signal-to-noise ratio of the reconstructed signal.

  7. Raman spectroscopy de-noising based on EEMD combined with VS-LMS algorithm

    Science.gov (United States)

    Yu, Xiao; Xu, Liang; Mo, Jia-qing; Lü, Xiao-yi

    2016-01-01

    This paper proposes a novel de-noising algorithm based on ensemble empirical mode decomposition (EEMD) and the variable step size least mean square (VS-LMS) adaptive filter. The noise in the high-frequency part of the spectrum is removed through EEMD, and then the VS-LMS algorithm is utilized for overall de-noising. EEMD combined with the VS-LMS algorithm not only preserves the detail and envelope of the effective signal, but also improves system stability. When the method is applied to pure R6G, the signal-to-noise ratio (SNR) of the Raman spectrum is lower than 10 dB. The de-noising superiority of the proposed method for Raman spectra is verified by three evaluation standards: SNR, root mean square error (RMSE) and the correlation coefficient ρ.
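    A variable step size LMS filter of the kind referenced here adapts its step to the instantaneous error. This sketch follows the common Kwong-Johnston style update on an invented system-identification toy; the paper's exact VS-LMS rule and its EEMD front end are not reproduced:

```python
import numpy as np

def vs_lms(x, d, taps, mu0=0.05, alpha=0.97, gamma=0.01, mu_max=0.1):
    """LMS with a variable step size: mu grows with the squared error and
    decays geometrically, so adaptation is fast while the error is large and
    misadjustment is low near convergence."""
    w = np.zeros(taps)
    mu = mu0
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                    # a-priori error
        mu = min(alpha * mu + gamma * e * e, mu_max)
        w += 2 * mu * e * u                 # LMS weight update
    return w

# Toy system identification: recover a short FIR response from noisy output.
rng = np.random.default_rng(2)
x = rng.normal(size=4000)
h_true = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, h_true, mode="full")[:len(x)] + rng.normal(0, 0.01, len(x))
w = vs_lms(x, d, taps=3)
```

    The same trade-off (large steps early, small steps at steady state) is what gives VS-LMS its advantage over fixed-step LMS when cleaning spectra whose noise level varies.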

  8. Image Variational Denoising Using Gradient Fidelity on Curvelet Shrinkage

    Directory of Open Access Journals (Sweden)

    Roysam Badrinath

    2010-01-01

    Full Text Available A new variational image model is presented for image restoration using a combination of the curvelet shrinkage method and the total variation (TV) functional. In order to suppress the staircasing effect and curvelet-like artifacts, we use multiscale curvelet shrinkage to compute an initial estimated image, and then we propose a new gradient fidelity term designed to force the gradients of the desired image to be close to the curvelet approximation gradients. We then introduce the Euler-Lagrange equation and investigate the mathematical properties of the model. To improve the preservation of edge and texture details, spatially varying parameters are adaptively estimated in the iterative process of the gradient descent flow algorithm. Numerical experiments demonstrate that our proposed method performs well in alleviating both the staircasing effect and curvelet-like artifacts, while preserving fine details.
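    The gradient descent flow used in such variational models can be illustrated with plain ROF-style TV denoising (smoothed TV norm, explicit time stepping). The curvelet-shrinkage initialization and the gradient fidelity term of the paper are omitted, and all parameters are invented for the demo:

```python
import numpy as np

def tv_denoise(noisy, lam=2.0, step=0.02, iters=300, beta=0.05):
    """Explicit gradient descent on E(u) = sum sqrt(|grad u|^2 + beta)
    + (lam/2) * sum (u - f)^2, with periodic boundary handling via roll."""
    u = noisy.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + beta)        # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # Backward-difference divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - lam * (u - noisy))      # descend the TV energy
    return u

rng = np.random.default_rng(8)
clean = np.zeros((40, 40)); clean[:, 20:] = 1.0    # cartoon image with one edge
noisy = clean + rng.normal(0, 0.15, clean.shape)
denoised = tv_denoise(noisy)
```

    Plain TV flow of this kind is exactly where the staircasing effect mentioned above comes from, which motivates the paper's extra curvelet-based gradient fidelity term.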

  9. Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise.

    Science.gov (United States)

    Zhang, Yi; Hirakawa, Keigo

    2016-09-01

    Low-light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of a spatially varying blur kernel, as well as a denoised and deblurred image, from a single noisy and object-motion-blurred image. The proposed method takes advantage of the sparse representation of the double discrete wavelet transform, a generative model of image blur that simplifies the wavelet analysis of a blurred image, and of a Bayesian perspective that models the prior distribution of the latent sharp wavelet coefficients and a likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderately noisy and severely blurred images using simulated and real camera data. PMID:27337717

  10. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    Science.gov (United States)

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The Non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed, focused on the implementation and the enhancement of the filter's parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. Tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. PMID:27084318
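    For reference, the serial non-local means baseline that such parallel implementations accelerate looks like the following deliberately naive NumPy version on a tiny invented 2-D image (parameter names and values are illustrative):

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.15):
    """Naive non-local means: each output pixel is a weighted average over a
    search window, with weights decaying in the mean squared patch distance.
    Cost is O(H * W * search^2 * patch^2), which is why parallel and GPU
    versions matter in practice."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            pi, pj = i + pr + sr, j + pr + sr        # position in padded image
            ref = padded[pi - pr:pi + pr + 1, pj - pr:pj + pr + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    qi, qj = pi + di, pj + dj
                    cand = padded[qi - pr:qi + pr + 1, qj - pr:qj + pr + 1]
                    wgt = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    acc += wgt * padded[qi, qj]
                    wsum += wgt
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(9)
clean = np.zeros((24, 24)); clean[:, 12:] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = nl_means(noisy)
```

    The per-pixel weight computations are independent, which is precisely the structure the paper exploits when distributing the filter over a hybrid parallel architecture.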

  11. Static Myocardial Perfusion Imaging using denoised dynamic Rb-82 PET/CT scans

    DEFF Research Database (Denmark)

    Petersen, Maiken N.M.; Hoff, Camilla; Harms, Hans;

    2015-01-01

    Introduction: Relative and absolute measures of myocardial perfusion are derived from a single 82Rb PET/CT scan. However, images are inherently noisy due to the short half-life of 82Rb. We have previously shown that denoising techniques can be applied to dynamic 82Rb series with excellent quantitative accuracy. In this study, we examine static images created by summing late frames of denoised dynamic series. Method: 47 random clinical 82Rb stress and rest scans (27 male, age 68 +/- 12 y., BMI 27.9 +/- 5.5 kg/m2) performed on a GE Discovery 690 PET/CT scanner were included in the study ... and Bland-Altman analysis. Results: For HYPR-LR, a good correlation was found for relative segmental perfusion for both stress (y=1.007x+0.313, R2=0.98) and rest (y=1.007x+0.421, R2=0.96) scans, with negative biases of -0.79±1.44 and -0.90±1.63, respectively. Correlations for SSS (R2=0.94), SRS (R2...

  12. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    Science.gov (United States)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data or improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular to address random noise attenuation for seismic sections. Wavelet basis, thresholding function, and threshold value are three key factors of such algorithms, having a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper a general denoising algorithm is developed as a multi-variant (variable) filter which performs in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, thresholding rule, and threshold value, while in the curvelet domain it is only a function of thresholding rule and threshold value. Also, two methods, Stein’s unbiased risk estimate (SURE) and generalized cross validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimum thresholding rule is selected from general thresholding functions which contain the most well known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show high performance of the proposed method in both wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
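    The SURE criterion for soft thresholding has a closed form, SURE(t) = Nσ² − 2σ²·#{|xᵢ| ≤ t} + Σᵢ min(xᵢ², t²), which can be minimized over candidate thresholds directly. A 1-D sketch in the coefficient domain with invented sparse data; the paper additionally optimizes over wavelet type and thresholding rule:

```python
import numpy as np

def sure_soft(x, t, sigma):
    """Stein's unbiased estimate of the risk of soft thresholding at level t."""
    n_small = np.count_nonzero(np.abs(x) <= t)
    return len(x) * sigma**2 - 2 * sigma**2 * n_small + np.sum(np.minimum(x**2, t**2))

def sure_threshold(x, sigma):
    """Pick the threshold minimizing SURE over the candidate set |x_i|."""
    candidates = np.abs(x)
    risks = np.array([sure_soft(x, t, sigma) for t in candidates])
    return candidates[int(np.argmin(risks))]

# Sparse "transform coefficients" observed in unit-variance Gaussian noise.
rng = np.random.default_rng(3)
theta = np.zeros(2000); theta[:100] = 5.0
x = theta + rng.normal(0, 1.0, theta.shape)
t_star = sure_threshold(x, sigma=1.0)
shrunk = np.sign(x) * np.maximum(np.abs(x) - t_star, 0.0)
```

    Because SURE estimates the true risk without access to the clean signal, it is a natural objective for the data-driven parameter selection described above; GCV plays the same role when σ is unknown.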

  13. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    Science.gov (United States)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for the identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases the heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. An evaluation of wavelet-based denoising methods was performed to find the optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. The application of a wavelet-based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA is detected.

  14. De-noising of Raman spectrum signal based on stationary wavelet transform

    Institute of Scientific and Technical Information of China (English)

    Qingwei Gao(高清维); Zhaoqi Sun(孙兆奇); Zhuoliang Cao(曹卓良); Pu Cheng(程蒲)

    2004-01-01

    In this paper, Raman spectrum signal de-noising based on the stationary wavelet transform is discussed. The Haar wavelet is selected to decompose the Raman spectrum signal over several levels of the stationary wavelet transform. The noise level σj is estimated from the wavelet details at every level, the wavelet details are shrunk toward 0 with a threshold of σj·√(2 ln n), where n is the length of the detail, and then the recovered signal is reconstructed. Experimental results show that this method not only suppresses noise effectively, but also preserves as many target characteristics of the original signal as possible. This de-noising method offers a very attractive alternative for Raman spectrum noise suppression.

  15. Application of Set Pair Analysis-Based Similarity Forecast Model and Wavelet Denoising for Runoff Forecasting

    OpenAIRE

    Chien-Ming Chou

    2014-01-01

    This study presents the application of a set pair analysis-based similarity forecast (SPA-SF) model and wavelet denoising to forecast annual runoff. The SPA-SF model was built from identical, discrepant and contrary viewpoints. The similarity between estimated and historical data can be obtained. The weighted average of the annual runoff values characterized by the highest connection coefficients was regarded as the predicted value of the estimated annual runoff. In addition, runoff time seri...

  16. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  17. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    Science.gov (United States)

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes a limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained.

  18. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound collected directly in an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.

  19. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    Science.gov (United States)

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. We decomposed noisy ECG signals with EEMD and calculated a series of intrinsic mode functions (IMFs). We then selected IMFs and reconstructed them to realize de-noising of the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used for evaluating the performance of the proposed method, contrasting it with de-noising methods based on EEMD alone and on wavelet transform with an improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and that the amplitudes of the ECG features did not attenuate. In conclusion, the method discussed in this paper can de-noise ECG signals while keeping the characteristics of the original ECG signal. PMID:25219236
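    The hard and soft threshold functions, and a continuous compromise between them, can be written down directly. Since the paper's improved function is not given here, the non-negative garrote serves as a representative example of a continuous rule whose bias vanishes for large coefficients:

```python
import numpy as np

def hard_threshold(x, t):
    """Keep-or-kill: unbiased for large |x| but discontinuous at |x| = t."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    """Continuous, but biased by t even for arbitrarily large coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def garrote_threshold(x, t):
    """Non-negative garrote: continuous like soft thresholding, with a bias
    t^2/|x| that vanishes for large coefficients, like hard thresholding."""
    with np.errstate(divide="ignore", invalid="ignore"):
        y = x - t**2 / x
    return np.where(np.abs(x) > t, y, 0.0)
```

    Improved threshold functions of the kind the paper uses aim at exactly this behavior: no discontinuity at the threshold (which causes artifacts) and little attenuation of large coefficients (which preserves ECG feature amplitudes).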

  20. Optimization of Wavelet-Based De-noising in MRI

    Directory of Open Access Journals (Sweden)

    K. Bartusek

    2011-04-01

    Full Text Available In the paper, a method for MR image enhancement using wavelet analysis is described. The analysis concentrates on the influence of the threshold level and the choice of mother wavelet on the resultant MR image. The influence is expressed by the measurement and mutual comparison of three MR image parameters: signal-to-noise ratio, image contrast, and linear slope edge approximation. Unlike most standard methods, which work exclusively with the MR image magnitude, in our case both the MR image magnitude and the MR image phase were used in the enhancement process. Some recommendations are given in the conclusion, such as which combinations of mother wavelets and threshold levels to use for various types of MR images.

  1. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool for obtaining spatially resolved chemical information on the molecular composition of biological samples, and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using unsupervised clustering, which relies on spectral-domain information only; its main drawback is a high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation, allowing us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523
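    The clustering stage of such a pipeline reduces to k-means over per-pixel spectra. A self-contained NumPy sketch on invented two-class synthetic spectra (the EPD preprocessing step is not reproduced):

```python
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    """Plain k-means on row vectors (here, one spectrum per pixel)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]  # init from data
    for _ in range(iters):
        # Squared distances of every spectrum to every center.
        d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic spectral classes, 10 "wavenumber" channels each.
rng = np.random.default_rng(6)
class_a = rng.normal(0.0, 0.3, (50, 10))
class_b = rng.normal(5.0, 0.3, (50, 10))
spectra = np.vstack([class_a, class_b])
labels, centers = kmeans(spectra, k=2)
```

    Because k-means assigns each pixel independently of its neighbors, noisy spectra produce speckled label maps; this is the sensitivity to noise that motivates running EPD in the spatial domain before clustering.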

  2. Performance comparison of wavelet denoising based fast DOA estimation of MIMO OFDM system over Rayleigh Fading Channel

    Directory of Open Access Journals (Sweden)

    A.V.Meenakshi

    2012-09-01

    Full Text Available This paper presents a tool for the analysis and simulation of direction-of-arrival estimation for MIMO OFDM signals over the Rayleigh fading channel. The performance of the proposed technique is tested with a wavelet-denoising-based cyclic MUSIC algorithm. Simulation results demonstrate that the proposed system not only has a good ability to suppress interference, but also significantly improves the DOA estimation of the system. In this paper, it is proposed to find the DOA of the received MIMO OFDM signal, and the performance is analyzed using Matlab simulation with Monte Carlo iterations. This paper provides a fairly complete picture of the performance and statistical efficiency with a QPSK signal model for a coherent system at low SNR (18 dB) and in an interference environment.
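    The subspace step at the core of (cyclic) MUSIC can be sketched for a plain uniform linear array. This is standard narrowband MUSIC on invented synthetic data, not the paper's wavelet-denoised cyclic variant for MIMO OFDM:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Narrowband MUSIC for a uniform linear array with element spacing d
    (in wavelengths): 1 / ||E_n^H a(theta)||^2 over a grid of angles, where
    E_n spans the noise subspace of the sample covariance."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvec[:, :M - n_sources]           # noise subspace
    m = np.arange(M)
    P = np.empty(len(angles_deg))
    for idx, th in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(2j * np.pi * d * m * np.sin(th))    # steering vector
        P[idx] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2)
    return P

# Two uncorrelated sources at -20 and +25 degrees, 8-element half-wavelength ULA.
rng = np.random.default_rng(4)
M, N = 8, 400
true_angles = (-20.0, 25.0)
m = np.arange(M)
A = np.stack([np.exp(2j * np.pi * 0.5 * m * np.sin(np.deg2rad(t)))
              for t in true_angles], axis=1)
S = rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
angles = np.arange(-90.0, 90.0, 0.5)
P = music_spectrum(X, n_sources=2, angles_deg=angles)
```

    Peaks of the pseudo-spectrum mark the source bearings; denoising the received signal (as done with wavelets in the paper) sharpens the separation between signal and noise subspaces at low SNR.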

  3. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation. (paper)

  4. Optimization of Wavelet-Based De-noising in MRI

    Czech Academy of Sciences Publication Activity Database

    Bartušek, Karel; Přinosil, J.; Smékal, Z.

    2011-01-01

    Roč. 20, č. 1 (2011), s. 85-93. ISSN 1210-2512 R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : wavelet transformation * filtering technique * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.739, year: 2011

  5. Method and application of wavelet shrinkage denoising based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A genetic algorithm (GA) is introduced into noise reduction based on wavelet transform threshold shrinkage (WTS) and translation-invariant threshold shrinkage (TIS), where the parameters used in WTS and TIS, such as the wavelet function, the number of decomposition levels, hard or soft thresholding, and the threshold value, can be selected automatically. The paper ends by comparing the two noise reduction methods on the basis of their denoising performance, computation time, etc. The effectiveness of the methods introduced in this paper is validated by the results of analysis of simulated and real signals.
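
    The two shrinkage rules a GA candidate chooses between can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; a full GA candidate would additionally encode the wavelet function, the decomposition depth, and the threshold value.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        # soft shrinkage: move every coefficient toward zero by t
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def hard_threshold(x, t):
        # hard shrinkage: zero coefficients whose magnitude is below t
        return np.where(np.abs(x) > t, x, 0.0)

    coeffs = np.array([3.0, -0.5, 1.2, -2.0])
    soft = soft_threshold(coeffs, 1.0)   # large coefficients shrink, small ones vanish
    hard = hard_threshold(coeffs, 1.0)   # large coefficients survive unchanged
    ```

    A GA fitness function would score each (wavelet, levels, rule, threshold) tuple by the resulting denoising quality and evolve the population toward the best combination.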

  6. Quantification of GABAA receptors in the rat brain with [123I]Iomazenil SPECT from factor analysis-denoised images

    International Nuclear Information System (INIS)

    Purpose: In vivo imaging of GABAA receptors is essential for the comprehension of psychiatric disorders in which the GABAergic system is implicated. Small animal SPECT provides a modality for in vivo imaging of the GABAergic system in rodents using [123I]Iomazenil, an antagonist of the GABAA receptor. The goal of this work is to describe and evaluate different quantitative reference tissue methods that enable reliable binding potential (BP) estimations in the rat brain to be obtained. Methods: Five male Sprague–Dawley rats were used for [123I]Iomazenil brain SPECT scans. Binding parameters were obtained with a one-tissue compartment model (1TC), a constrained two-tissue compartment model (2TCc), the two-step Simplified Reference Tissue Model (SRTM2), Logan graphical analysis and analysis of delayed-activity images. In addition, we employed factor analysis (FA) to deal with noise in data. Results: BPND obtained with SRTM2, Logan graphical analysis and delayed-activity analysis was highly correlated with BPF values obtained with 2TCc (r = 0.954 and 0.945 respectively, p c and SRTM2 in raw and FA-denoised images (r = 0.961 and 0.909 respectively, p ND values from raw images, while scans of only 70 min are sufficient from FA-denoised images. These images are also associated with significantly lower standard errors of 2TCc and SRTM2 BP values. Conclusion: Reference tissue methods such as SRTM2 and Logan graphical analysis can provide equally reliable BPND values from rat brain [123I]Iomazenil SPECT. Acquisitions, however, can be much less time-consuming either with analysis of delayed activity obtained from a 20-minute scan 50 min after tracer injection or with FA-denoising of images.

  7. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
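
    The diffusion step alternated with interpolation above can be illustrated on a flat 2D grid. The following is a minimal NumPy sketch of one explicit Perona-Malik time step with periodic boundaries via `np.roll`; the paper's contribution, running such steps in a small 3D volume around a general surface via the Closest Point Method, is not reproduced here.

    ```python
    import numpy as np

    def perona_malik_step(u, kappa=0.1, dt=0.2):
        # differences to the four grid neighbours (periodic boundaries)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance g = exp(-(|grad|/kappa)^2): small across strong edges,
        # so edges diffuse little while flat noisy regions are smoothed
        g = lambda d: np.exp(-(d / kappa) ** 2)
        return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

    rng = np.random.default_rng(0)
    noisy = rng.normal(0.0, 0.05, (32, 32))
    smoothed = perona_malik_step(noisy)
    ```

    Setting `g = 1` everywhere recovers a step of plain Gaussian (linear) diffusion, the other equation the authors formulate in a surface-intrinsic way.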

  8. A Denoising Based Autoassociative Model for Robust Sensor Monitoring in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Ahmad Shaheryar

    2016-01-01

    Full Text Available Sensor health monitoring is essential for the reliable functioning of safety-critical chemical and nuclear power plants. Autoassociative neural network (AANN) based empirical sensor models have been widely reported for sensor calibration monitoring. However, such ill-posed data-driven models may suffer from poor generalization and robustness. To address these issues, several regularization heuristics such as training with jitter, weight decay, and cross-validation have been suggested in the literature. Apart from these regularization heuristics, traditional error-gradient-based supervised learning algorithms for multilayered AANN models are highly susceptible to being trapped in local optima. In order to address the poor regularization and robust learning issues, we propose here a denoised autoassociative sensor model (DAASM) based on a deep learning framework. The proposed DAASM comprises multiple hidden layers which are pretrained greedily in an unsupervised fashion under a denoising autoencoder architecture. To improve robustness, a dropout heuristic and domain-specific data corruption processes are exercised during the unsupervised pretraining phase. The proposed sensor model is trained and tested on sensor data from a PWR-type nuclear power plant. Accuracy, autosensitivity, spillover, and sequential probability ratio test (SPRT) based fault detectability metrics are used for performance assessment and comparison with the extensively reported five-layer AANN model by Kramer.

  9. Simultaneous de-noising in phase contrast tomography

    Science.gov (United States)

    Koehler, Thomas; Roessl, Ewald

    2012-07-01

    In this work, we investigate methods for de-noising tomographic differential phase contrast and absorption contrast images. We exploit the fact that in grating-based differential phase contrast imaging (DPCI), first, several images are acquired simultaneously in exactly the same geometry, and second, these different images can show very different contrast-to-noise ratios. These features of grating-based DPCI are used to generalize the conventional bilateral filter. Simulation experiments show a superior de-noising performance of the generalized algorithm compared with the conventional one.
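
    The generalization described above, taking the range weights from a second, simultaneously acquired image with a better contrast-to-noise ratio, resembles a joint (cross) bilateral filter. The following NumPy sketch is an illustrative reconstruction under that assumption, not the authors' code; the `radius` and sigma values are arbitrary.

    ```python
    import numpy as np

    def joint_bilateral(img, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
        # bilateral filter whose range weights come from a 'guide' image
        # acquired in the same geometry but with less noise
        h, w = img.shape
        out = np.zeros_like(img, dtype=float)
        pad_i = np.pad(img, radius, mode='edge')
        pad_g = np.pad(guide, radius, mode='edge')
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
        for y in range(h):
            for x in range(w):
                win_i = pad_i[y:y + 2*radius + 1, x:x + 2*radius + 1]
                win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
                # range weights from the guide, not from the noisy image itself
                rng = np.exp(-(win_g - guide[y, x])**2 / (2 * sigma_r**2))
                wgt = spatial * rng
                out[y, x] = np.sum(wgt * win_i) / np.sum(wgt)
        return out
    ```

    With `guide = img` this reduces to the conventional bilateral filter, which makes the generalization easy to compare against.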

  10. An Imbalanced Data Classification Algorithm of De-noising Auto-Encoder Neural Network Based on SMOTE

    Directory of Open Access Journals (Sweden)

    Zhang Chenggang

    2016-01-01

    Full Text Available The imbalanced data classification problem has always been one of the hot issues in the field of machine learning. The synthetic minority over-sampling technique (SMOTE) is a classical approach to balancing datasets, but it may give rise to problems such as noise. A stacked de-noising auto-encoder neural network (SDAE) can effectively reduce data redundancy and noise through unsupervised layer-wise greedy learning. Aiming at the shortcomings of the SMOTE algorithm when synthesizing new minority-class samples, this paper proposes a stacked de-noising auto-encoder neural network algorithm based on SMOTE, SMOTE-SDAE, to deal with imbalanced data classification. The proposed algorithm is not only able to synthesize new minority-class samples, but can also de-noise and classify the sampled data. Experimental results show that, compared with traditional algorithms, SMOTE-SDAE significantly improves the minority-class classification accuracy on imbalanced datasets.

  11. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    Science.gov (United States)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines and, due to their critical role, it is of great importance to monitor their condition under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, a Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to ball bearing time-domain vibration signals, as well as to their spectra, for the elimination of background noise and the improvement of the reliability of the fault detection process. Test cases conducted using experimental as well as simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for ball bearing fault detection.

  12. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    Science.gov (United States)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time-invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity-independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare the performance of different wavelets, such as Daubechies, biorthogonal, and reverse biorthogonal wavelets of different degrees and orders, for pre-processing of images.

  13. Fractional domain varying-order differential denoising method

    Science.gov (United States)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove the noise from the corrupted image, while retaining the edges and other detailed features as much as possible. Recently, denoising in the fractional domain is a hot research topic. The fractional-order anisotropic diffusion method can bring a less blocky effect and preserve edges in image denoising, a method that has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which fractional-varying-order differential, rather than constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well, while quickly removing noise, and yields good visual effects and better peak signal-to-noise ratio.

  14. Binary Codes for Tagging X-Ray Images via Deep De-Noising Autoencoders

    OpenAIRE

    Sze-To, Antonio; Tizhoosh, Hamid R; Wong, Andrew K. C.

    2016-01-01

    A Content-Based Image Retrieval (CBIR) system which identifies similar medical images based on a query image can assist clinicians for more accurate diagnosis. The recent CBIR research trend favors the construction and use of binary codes to represent images. Deep architectures could learn the non-linear relationship among image pixels adaptively, allowing the automatic learning of high-level features from raw pixels. However, most of them require class labels, which are expensive to obtain, ...

  15. Implemented Wavelet Packet Tree based Denoising Algorithm in Bus Signals of a Wearable Sensorarray

    Science.gov (United States)

    Schimmack, M.; Nguyen, S.; Mercorelli, P.

    2015-11-01

    This paper introduces a thermosensing embedded system with a sensor bus that uses wavelets for the purposes of noise location and denoising. Following the filter-bank principle, the measured signal is separated into two bands, low- and high-frequency. The proposed algorithm identifies the defined noise in these two bands. With the Wavelet Packet Transform, a form of the Discrete Wavelet Transform, the system is able to decompose and reconstruct bus input signals of a sensor network. Using a seminorm, the noise of a sequence can be detected and located, so that the wavelet basis can be rearranged. This in particular allows for the elimination of any incoherent parts that make up the unavoidable measurement noise of the bus signals. The proposed method was built on wavelet algorithms from the WaveLab 850 library of Stanford University (USA). This work gives an insight into the workings of the Wavelet Transform.
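
    The filter-bank idea above — split the signal into low and high bands, suppress the incoherent detail, then reconstruct — can be illustrated with a single-level Haar transform. This is a much-simplified stand-in for the WaveLab-based wavelet packet machinery, intended only to show the analysis/threshold/synthesis cycle.

    ```python
    import numpy as np

    def haar_denoise(signal, t):
        # one-level Haar analysis, hard-threshold the detail band, synthesis
        s = np.asarray(signal, dtype=float)
        a = (s[0::2] + s[1::2]) / np.sqrt(2)   # low (approximation) band
        d = (s[0::2] - s[1::2]) / np.sqrt(2)   # high (detail) band
        d = np.where(np.abs(d) > t, d, 0.0)    # suppress small detail coefficients
        out = np.empty_like(s)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        return out
    ```

    A wavelet packet transform applies this split recursively to both bands; the sketch stops after one level to stay readable.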

  16. Denoising autoencoder with modulated lateral connections learns invariant representations of natural images

    OpenAIRE

    Rasmus, Antti; Raiko, Tapani; Valpola, Harri

    2014-01-01

    Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations. In regular autoencoders, detailed information needs to be carried through the highest layers but lateral connections from encoder to decoder relieve this pressure. It is shown that abstract invariant features can be translated to detailed reconstructions when invariant features are allowed to modulate the strength of the lateral conn...

  17. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
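
    The first stage described above — stacking projections into a 3D image and gathering overlapping 3D blocks for clustering — can be sketched as follows. The block size and stride are illustrative choices; the clustering, dictionary learning, and joint-sparse coding steps of the proposed algorithm are beyond this snippet.

    ```python
    import numpy as np

    def extract_blocks(vol, size=4, stride=2):
        # slide a cubic window over a 3D stack of projections and collect
        # overlapping blocks, one vectorized block per row
        blocks = []
        z, y, x = vol.shape
        for k in range(0, z - size + 1, stride):
            for i in range(0, y - size + 1, stride):
                for j in range(0, x - size + 1, stride):
                    blocks.append(vol[k:k + size, i:i + size, j:j + size].ravel())
        return np.array(blocks)
    ```

    The resulting block matrix is what gets clustered; each cluster is then assumed to share a joint-sparse representation in the learned dictionary.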

  18. Decoding Stacked Denoising Autoencoders

    OpenAIRE

    Sonoda, Sho; Murata, Noboru

    2016-01-01

    Data representation in a stacked denoising autoencoder is investigated. Decoding is a simple technique for translating a stacked denoising autoencoder into a composition of denoising autoencoders in the ground space. In the infinitesimal limit, a composition of denoising autoencoders is reduced to a continuous denoising autoencoder, which is rich in analytic properties and geometric interpretation. For example, the continuous denoising autoencoder solves the backward heat equation and transpo...

  19. Noise Removal From Microarray Images Using Maximum a Posteriori Based Bivariate Estimator

    Directory of Open Access Journals (Sweden)

    A.Sharmila Agnal

    2013-01-01

    Full Text Available A microarray image contains information about thousands of genes in an organism, and these images are affected by several types of noise. The noise affects the circular edges of spots and thus degrades image quality. Hence noise removal is the first step of cDNA microarray image analysis for obtaining gene expression levels and identifying infected cells. The Dual-Tree Complex Wavelet Transform (DT-CWT) is preferred for denoising microarray images due to its properties such as improved directional selectivity and near shift-invariance. In this paper, bivariate estimators, namely Linear Minimum Mean Squared Error (LMMSE) and Maximum A Posteriori (MAP), derived by applying the DT-CWT, are used for denoising microarray images. Experimental results show that the MAP-based denoising method outperforms existing denoising techniques for microarray images.

  20. Total Variation Denoising using Iterated Conditional Expectation

    OpenAIRE

    Louchet, Cécile; Moisan, Lionel

    2014-01-01

    We propose a new variant of the celebrated Total Variation image denoising model of Rudin, Osher and Fatemi, which provides results very similar to the Bayesian posterior mean variant (TV-LSE) while showing a much better computational efficiency. This variant is based on an iterative procedure which is proved to converge linearly to a fixed point satisfying a marginal conditional mean property. The implementation is simple, provided numerical precision issues are correctly handled. Experiment...

  1. An Integrated Denoising Method for Sensor Mixed Noises Based on Wavelet Packet Transform and Energy-Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Chao Tan

    2014-01-01

    Full Text Available In order to solve the problem of industrial sensor signal denoising, an integrated denoising method for sensor mixed noises based on wavelet packet transform and energy-correlation analysis is proposed. The architecture of the proposed method is designed and the key technologies, such as the wavelet packet transform, energy-correlation analysis, and the processing of wavelet packet coefficients based on energy-correlation analysis, are presented. Finally, a simulation example on a specific signal and an application to a shearer cutting current signal, both of which mainly contain white Gaussian noise and impact noise, are carried out; the results show that the proposed method is effective and outperforms others.

  2. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    Science.gov (United States)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions; then, inspired by the thresholding scheme of wavelet analysis, an adaptive interval thresholding is conducted to set to zero all components of the intrinsic mode functions that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.

  3. Effect of denoising on supervised lung parenchymal clusters

    Science.gov (United States)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures was used to assess the quality of supervised clusters in the original and filtered spaces. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that yields the best cluster quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  4. Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring

    Institute of Scientific and Technical Information of China (English)

    王立欣; 诸定秋; 蔡惟铮

    2002-01-01

    It is an important step in the online monitoring of partial discharge (PD) to extract PD pulses from various background noises. An adaptive de-noising method is introduced for noise reduction during the detection of PD pulses. This method is based on the Wavelet Transform (WT), and in the wavelet domain the noise decomposed at each level is reduced with an independent threshold. Instead of the standard hard-thresholding function, a new type of hard-thresholding function with a continuous derivative is employed by this method. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal noise-reduction threshold, which is selected when the minimum MSE is reached. Results on simulated signals and on-site experimental data processed by this method show that background noises such as narrowband noise can be reduced efficiently. Furthermore, in comparison with the conventional wavelet de-noising method, the adaptive de-noising method performs better at preserving pulses and is more adaptive when suppressing the background noises of PD signals.
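
    The threshold search can be illustrated with a simple stand-in: given a clean reference signal, score each candidate threshold by the MSE of the thresholded coefficients and keep the best. The paper uses a gradient-based unsupervised search rather than this exhaustive scan, and its smooth hard-thresholding function is replaced here by the plain hard rule.

    ```python
    import numpy as np

    def best_threshold(noisy_coeffs, clean_coeffs, candidates):
        # pick the candidate threshold minimizing MSE against a clean reference
        def mse(t):
            den = np.where(np.abs(noisy_coeffs) > t, noisy_coeffs, 0.0)
            return np.mean((den - clean_coeffs) ** 2)
        return min(candidates, key=mse)
    ```

    In the unsupervised setting of the paper no clean reference exists, so the MSE gradient is estimated from the data itself; the selection principle (stop at the MSE minimum) is the same.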

  5. Image Denoising And Enhancement Using Multiwavelet With Hard Threshold In Digital Mammographic Images

    OpenAIRE

    Kother Mohideen; Arumuga Perumal; Nallaperumal Krishnan; Mohamed Sathik

    2011-01-01

    Breast cancer continues to be a significant public health problem in the world. Diagnostic mammography is the most effective technology for early detection of breast cancer. However, in some cases it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications, on the mammograms. Dense regions in digital mammographic images are usually noisy and have low contrast, and their visual screening is difficult for physicians. This pap...

  6. De-noising and retrieving algorithm of Mie lidar data based on the particle filter and the Fernald method.

    Science.gov (United States)

    Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong

    2015-10-01

    The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly as range increases, so that maintaining high accuracy when retrieving lidar data at the far end is difficult. To avoid this problem, many de-noising algorithms have been developed; in particular, an effective de-noising algorithm has been proposed to simultaneously retrieve lidar data and obtain a de-noised signal by combining the ensemble Kalman filter (EnKF) and the Fernald method. This algorithm enhances the retrieval accuracy and effective measurement range of a lidar based on the Fernald method, but sometimes leads to a shift (bias) in the near range as a result of over-smoothing caused by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods: the root mean square errors of the PF are 52.55% and 38.14% of those of the Fernald and EnKF methods, respectively, and the PF increases the SNR by 44.36% and 11.57% over the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF is 5.72%, which is reduced to 2.15% by the PF in the near range. Furthermore, the PF also significantly suppresses random noise in the far range. An extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164

  7. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on the feature analysis of the vibration signals measured for the diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise, so the useful fault features are hidden in heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method of adaptively selecting an appropriate multiwavelet threshold from the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitivity of each SP for condition diagnosis. An MPSO algorithm with adaptive inertia-weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm and can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  8. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least-squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at the previous scale. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms. PMID:24565791
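
    The multiscale split at the heart of the method, a Laplacian pyramid whose levels are recovered coarse to fine, can be sketched in NumPy. A 2x2 box average stands in for the usual Gaussian reduce, and the graph Laplacian regression at each scale is not shown.

    ```python
    import numpy as np

    def downsample(img):
        # 2x2 box average (simplified stand-in for a Gaussian reduce)
        return (img[0::2, 0::2] + img[1::2, 0::2]
                + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

    def upsample(img):
        # nearest-neighbour expand back to double resolution
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    def laplacian_pyramid(img, levels=2):
        # each level stores the detail lost by downsampling; the last entry
        # is the coarsest residual image
        pyr, cur = [], img.astype(float)
        for _ in range(levels):
            small = downsample(cur)
            pyr.append(cur - upsample(small))
            cur = small
        pyr.append(cur)
        return pyr
    ```

    Reconstruction runs coarse to fine, mirroring the paper's recovery order: start from the coarsest residual, repeatedly upsample and add back the detail level.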

  9. A Novel Method of Neutron Radiography Image Denoising Using the Contourlet Transform

    Institute of Scientific and Technical Information of China (English)

    金炜; 魏彪; 潘英俊; 冯鹏; 唐彬

    2006-01-01

    Images acquired by digital neutron radiography systems are often contaminated by noise arising from the CCD camera, neutron scattering, the control electronics and other factors, so noise suppression is important for improving the image quality of such systems. Exploiting the ability of multiscale geometric analysis to capture the intrinsic geometrical structure of an image, a novel image denoising scheme based on the contourlet transform is presented. By calculating a variance homogeneity measurement (VHM), a locally adaptive window is determined in order to optimally estimate the shrinkage factor for the contourlet coefficients, which are then shrunk to achieve denoising. The scheme combines threshold denoising with subband-correlation-based denoising, fully utilizing the correlation of contourlet coefficients along the edges or contours within the same directional subband, and thereby achieves a balance between noise removal and signal preservation. In numerical comparisons with various methods, the presented scheme outperforms the traditional contourlet denoising method based on hard thresholding as well as Wiener filtering in terms of PSNR. Experiments also show that the scheme not only removes noise effectively but is also well suited to processing neutron radiography images.

  10. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, the process of acquiring it can be interfered with by many external factors. The heart sound is a weak electrical signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. As a result, removing the noise mixed with the heart sound is key. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The study first uses the signal processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain through the wavelet transform and decompose them over multiple levels. Then, for the detail coefficients, soft thresholding is applied using wavelet threshold denoising to eliminate noise, so that signal quality is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency interference and 35 Hz electromechanical interference are eliminated using a notch filter.
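
    The 50 Hz mains removal mentioned at the end can be sketched with a standard second-order IIR notch, using the well-known RBJ audio-EQ-cookbook coefficients. This is a hypothetical stand-in for the MATLAB notch filter used in the paper; the sample rate and Q are illustrative.

    ```python
    import numpy as np

    def notch_filter(x, f0, fs, q=30.0):
        # second-order IIR notch (RBJ cookbook): attenuates a narrow band at f0
        x = np.asarray(x, dtype=float)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1.0, -2 * np.cos(w0), 1.0])
        a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
        b, a = b / a[0], a / a[0]          # normalize so a[0] == 1
        y = np.zeros_like(x)
        for n in range(len(x)):            # direct-form I recursion
            y[n] = b[0] * x[n]
            if n >= 1:
                y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
            if n >= 2:
                y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
        return y
    ```

    A second notch at 35 Hz, applied in cascade, would handle the electromechanical interference the same way.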

  11. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    Science.gov (United States)

    Díaz, Felipe

    2015-09-01

    Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio may also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing complexity and memory requirements, but also for reducing noise, even allowing spurious components to be eliminated. This article proposes a system based on low-order singular value decomposition for reconstruction and noise reduction in MR imaging. The proposed method is evaluated using in vivo MRI data. Images rebuilt from less than 20% of the original data, with similar quality in terms of visual inspection, are presented, together with a quantitative evaluation of the method.
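    The low-order SVD idea can be sketched as a rank-k truncation of an image matrix. The rank parameter and the application in the Fourier domain are details of the paper; this standalone NumPy sketch illustrates only the truncation step:

```python
import numpy as np

def svd_truncate(img, k):
    """Rank-k approximation: keep the k largest singular values/vectors."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

    Storing U[:, :k], s[:k] and Vt[:k, :] requires k(m + n + 1) numbers instead of m·n, which is where the compression comes from; since small singular values mostly carry noise, truncation also denoises.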

  12. Data-adaptive image-denoising for detecting and quantifying nanoparticle entry in mucosal tissues through intravital 2-photon microscopy

    Directory of Open Access Journals (Sweden)

    Torsten Bölke

    2014-11-01

    Full Text Available Intravital 2-photon microscopy of mucosal membranes across which nanoparticles enter the organism typically generates noisy images. Because the noise results from the random statistics of only very few photons detected per pixel, it cannot be avoided by technical means. Fluorescent nanoparticles contained in the tissue may be represented by a few bright pixels which closely resemble the noise structure. We here present a data-adaptive method for digital denoising of datasets obtained by 2-photon microscopy. The algorithm exploits both local and non-local redundancy of the underlying ground-truth signal to reduce noise. Our approach automatically adapts the strength of noise suppression in a data-adaptive way by using a Bayesian network. The results show that the specific adaptation to both signal and noise characteristics improves the preservation of fine structures such as nanoparticles while producing fewer artefacts than reference algorithms. Our method is applicable to other imaging modalities as well, provided the specific noise characteristics are known and taken into account.

  13. Angle Based Orthogonal FRIT and Its Application in Image Denoising

    Institute of Scientific and Technical Information of China (English)

    刘云霞; 彭玉华; 孟庆芳; 尹勇

    2007-01-01

    The Finite Ridgelet Transform (FRIT) proposed by M. N. Do and M. Vetterli is widely used for its efficient representation of linear singularities, but its performance in image compression and denoising is severely degraded by the "wrap-around" phenomenon. Based on an analysis of the relation between the wrap-around phenomenon and the FRAT-domain coefficients, this paper proposes an angle-based orthogonal FRIT scheme (AFRIT). The scheme concentrates energy more compactly and effectively reduces the wrap-around effect. Furthermore, the orthogonal FRIT denoising problem is modelled and an improved AFRIT-based threshold is proposed. Denoising experiments on various images at different noise levels show that AFRIT with the improved threshold clearly outperforms commonly used FRIT denoising methods.

  14. Photogrammetric DSM denoising

    Science.gov (United States)

    Nex, F.; Gerke, M.

    2014-08-01

    Image matching techniques can nowadays provide very dense point clouds and they are often considered a valid alternative to LiDAR point clouds. However, photogrammetric point clouds are often characterized by a higher level of random noise compared to LiDAR data and by the presence of large outliers. These problems constitute a limitation in the practical use of photogrammetric data for many applications, and an effective way to enhance the generated point cloud has still to be found. In this paper we concentrate on the restoration of Digital Surface Models (DSM) computed from dense image matching point clouds. A photogrammetric DSM, i.e. a 2.5D representation of the surface, is still one of the major products derived from point clouds. Four different algorithms devoted to DSM denoising are presented: a standard median filter approach, a bilateral filter, a variational approach (TGV: Total Generalized Variation), as well as a newly developed algorithm, which is embedded into a Markov Random Field (MRF) framework and optimized through graph-cuts. The ability of each algorithm to recover the original DSM has been quantitatively evaluated: a synthetic DSM has been generated and different typologies of noise have been added to mimic the typical errors of photogrammetric DSMs. The evaluation reveals that standard filters like the median and edge-preserving smoothing through a bilateral filter cannot sufficiently remove the typical errors occurring in a photogrammetric DSM. The TGV-based approach removes random noise much better, but large areas with outliers remain. Our own method, which explicitly models the degradation properties of those DSMs, outperforms the others in all aspects.

  15. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Sophisticated data of the experimental DCV (discharging/charging voltage) of a lithium-ion battery is required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when sensing noisy DCV signals, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noise-ridden DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, An and Dn). Specifically, signal processing of the high-frequency component Dn that focuses on a short-time interval is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detail coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data of the experimental DCV is required for high-accuracy SOC. • DWT (discrete wavelet transform)-based denoising technique is newly investigated. • Three steps for denoising a noisy DCV measurement in this work are implemented. • Experimental results indicate the robustness of the proposed work for reliable SOC
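    The hard-thresholding rule applied to the detail coefficients Dn can be sketched as follows; this is an illustrative NumPy fragment, and the threshold selection used in the paper is not reproduced here:

```python
import numpy as np

def hard_threshold(d, t):
    """Hard thresholding: coefficients with magnitude below t are set to
    zero, the rest are kept unchanged (unlike soft thresholding, the
    surviving coefficients are not shrunk)."""
    return np.where(np.abs(d) >= t, d, 0.0)
```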

  16. Integration of speckle de-noising and image segmentation using synthetic aperture radar image for flood extent extraction

    OpenAIRE

    J Senthilnath; Shenoy, Vikram H; Rajendra, Ritwik; Omkar, SN; Mani, V.; Diwakar, PG

    2013-01-01

    Flood is one of the detrimental hydro-meteorological threats to mankind. This compels very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) image because of its imperviousness to unfavourable weather conditions. However, they suffer from the speckle noise. Hence, the processing of SAR image is applied in two stages: speckle removal filters and image segmentation methods for flood mapping. The speckle noise ...

  17. Classical low-pass filter and real-time wavelet-based denoising technique implemented on a DSP: a comparison study

    Science.gov (United States)

    Dolabdjian, Ch.; Fadili, J.; Huertas Leyva, E.

    2002-11-01

    We have implemented a real-time numerical denoising algorithm, using the Discrete Wavelet Transform (DWT), on a TMS320C3x Digital Signal Processor (DSP). We also compared, from theoretical and practical viewpoints, this post-processing approach to a more classical low-pass filter. The comparison was carried out using an ECG-type signal (ElectroCardiogram). The denoising approach is an elegant and extremely fast alternative to the class of classical linear filters. It is particularly suited to non-stationary signals such as those encountered in biological applications, and substantially improves detection of such signals over Fourier-based techniques. This processing step is a vital element in our acquisition chain using high-sensitivity magnetic sensors, and should enhance detection of cardiac-type magnetic signals or of magnetic particles in movement.
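    For reference, the "classical low-pass filter" baseline can be as simple as an FIR moving average; a minimal sketch (not the DSP implementation from the paper):

```python
import numpy as np

def moving_average(x, width):
    """FIR low-pass filtering by convolution with a normalized boxcar window."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")
```

    Such a linear filter smears sharp transients (e.g. the QRS complex of an ECG), which is exactly where wavelet thresholding has the advantage for non-stationary signals.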

  18. Real-time Dynamic MRI Reconstruction using Stacked Denoising Autoencoder

    OpenAIRE

    Majumdar, Angshul

    2015-01-01

    In this work we address the problem of real-time dynamic MRI reconstruction. There are a handful of studies on this topic; these techniques are either based on compressed sensing or employ Kalman Filtering. These techniques cannot achieve the reconstruction speed necessary for real-time reconstruction. In this work, we propose a new approach to MRI reconstruction. We learn a non-linear mapping from the unstructured aliased images to the corresponding clean images using a stacked denoising aut...

  19. Neural Networks with Wavelet Based Denoising Layer: Application to Central European Stock Market Forecasting

    Czech Academy of Sciences Publication Activity Database

    Baruník, Jozef; Vácha, Lukáš

    Liberec : Technical University of Liberec, 2008 - (Řehořová, P.; Maršíková, K.), s. 1-6 ISBN 978-80-7372-387-3. [Mathematical Methods in Economics 2008. Liberec (CZ), 17.09.2008-19.09.2008] R&D Projects: GA ČR(CZ) GA402/06/1417; GA ČR GP402/08/P207 Grant ostatní: GAUK(CZ) 46108 Institutional research plan: CEZ:AV0Z10750506 Keywords : neural networks * hard threshold denoising * time series prediction * wavelets Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/barunik-0314935.pdf

  20. Foetal phonocardiographic signal denoising based on non-negative matrix factorization.

    Science.gov (United States)

    Chourasia, V S; Tiwari, A K; Gangopadhyay, R; Akant, K A

    2012-01-01

    Foetal phonocardiography (fPCG) is a non-invasive, cost-effective and simple technique for antenatal care. The fPCG signals contain vital information of diagnostic importance regarding the foetal health. However, the fPCG signal is usually contaminated by various noises and thus requires robust signal processing to denoise the signal. The main aim of this paper is to develop a methodology for removal of unwanted noise from the fPCG signal. The proposed methodology utilizes the non-negative matrix factorization (NMF) algorithm. The developed methodology is tested on both simulated and real-time fPCG signals. The performance of the developed methodology has been evaluated in terms of the gain in signal-to-noise ratio (SNR) achieved through the process of denoising. In particular, using the NMF algorithm, a substantial improvement in SNR of the fPCG signals in the range of 12-30 dB has been achieved, providing a high quality assessment of foetal well-being. PMID:22136609
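    The NMF factorization at the heart of the method can be sketched with the classical Lee-Seung multiplicative updates; this is an illustrative toy implementation, and the paper's exact variant, initialization and rank are not specified here:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.
    V must be nonnegative; returns nonnegative factors W (m x r), H (r x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                      # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    In a denoising setting, the signal is reconstructed from the subset of components attributed to the foetal heart sound, discarding the components that capture noise.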

  1. Segmentation of Confocal Raman Microspectroscopic Imaging Data Using Edge-Preserving Denoising and Clustering

    OpenAIRE

    Alexandrov, Theodore; Lasch, Peter

    2013-01-01

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. U...

  2. Denoising of Medical Ultrasound Images Using Spatial Filtering and Multiscale Transforms

    OpenAIRE

    V N Prudhvi Raj; T Venkateswarlu

    2013-01-01

    Medical imaging has become an integral part of health care, where critical diagnoses such as blocks in the veins, plaques in the carotid arteries, minute fractures in the bones, blood flow in the brain, etc. are carried out without opening the patient's body. There are various imaging modalities for different applications to observe the anatomical and physiological conditions of the patient. These modalities will introduce noise and artifacts during medical image acquisition. If the noise and a...

  3. Integration of speckle de-noising and image segmentation using Synthetic Aperture Radar image for flood extent extraction

    Indian Academy of Sciences (India)

    J Senthilnath; H Vikram Shenoy; Ritwik Rajendra; S N Omkar; V Mani; P G Diwakar

    2013-06-01

    Flood is one of the most detrimental hydro-meteorological threats to mankind, which compels very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise. Hence, the processing of the SAR image is carried out in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these filters is presented; from the results obtained, we deduce that the Gamma MAP filter is the most reliable. The Gamma MAP filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is the more efficient for flood mapping.
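    Of the three speckle filters compared, the Lee filter is the simplest to sketch. Below is a minimal additive-noise form in NumPy; the window size and noise variance are illustrative assumptions, and real SAR speckle is usually modelled as multiplicative:

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.01):
    """Lee filter (additive-noise form): blend each pixel with its local
    mean according to the estimated local signal variance."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            mu, var = patch.mean(), patch.var()
            # gain k -> 0 in flat regions (strong smoothing),
            # k -> 1 near edges (pixel kept almost unchanged)
            k = max(var - noise_var, 0.0) / (var + 1e-12)
            out[i, j] = mu + k * (img[i, j] - mu)
    return out
```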

  4. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft enjoys considerable prospecting depth, lateral resolution and detection efficiency, etc. In recent years it has become an important technical means of rapid resources exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human electromagnetic noises). These noises will cause degradation of the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and major noises, we propose a de-noising algorithm utilizing wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise is filtered in the measured data using the wavelet threshold method. Then, the data are segmented using data window whose step length is even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified by the synthetic and real GREATEM signals. The results show that in GREATEM signal, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
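    The exponential window-fitting step can be illustrated by fitting A·exp(b·t) to decay data via linear least squares on the logarithm; this is a simplified sketch, whereas the paper fits within adaptive windows selected by energy detection:

```python
import numpy as np

def fit_exponential(t, y):
    """Fit y ~ A * exp(b * t) by linear least squares on log(y);
    requires y > 0 (true for a decaying transient above the noise floor)."""
    slope, intercept = np.polyfit(t, np.log(y), 1)
    return np.exp(intercept), slope   # A, b
```

    Samples flagged as polluted by non-stationary electromagnetic noise would then be replaced by the fitted curve's values.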

  5. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    Science.gov (United States)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme without channel state information for wireless optical communication (WOC) systems in turbulence induced fading channel. The proposed scheme can effectively diminish the additive noise caused by background radiation and photodetector, as well as the intensity scintillation caused by turbulence. The additive noise can be mitigated significantly using the modified wavelet threshold denoising algorithm, and then, the intensity scintillation can be attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve the performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with conventional blind detection algorithm, simulation results show that the proposed detection scheme can improve the signal-to-noise ratio (SNR) performance about 4.38 dB while the bit error rate and scintillation index (SI) are 1×10-6 and 0.02, respectively.

  6. Denoising of T Wave Using Wavelet Transform

    Directory of Open Access Journals (Sweden)

    K. Srinivas

    2014-03-01

    Full Text Available A wavelet transform based denoising of the T wave is proposed in this work. The T wave is denoised with the fixed-form, rigorous SURE and heuristic SURE threshold methods. Daubechies wavelets at different levels are used. The simulated results are obtained from data collected from a MATLAB based ECG simulator and from data recorded with an 8-channel physiograph. It is observed that for hard thresholding the standard deviation is greatly decreased with scaled noise
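    The "fixed form" (universal) threshold mentioned above is conventionally computed as sigma·sqrt(2 ln N), with sigma estimated from the median absolute deviation of the finest detail coefficients; a sketch:

```python
import numpy as np

def fixed_form_threshold(detail):
    """Universal ('fixed form') threshold sigma * sqrt(2 ln N), with sigma
    estimated robustly via the median absolute deviation (MAD / 0.6745)."""
    sigma = np.median(np.abs(detail)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(detail.size))
```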

  7. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Ke, E-mail: ksheng@mednet.ucla.edu; Qi, Sharon X. [Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States); Gou, Shuiping [Department of Radiation Oncology, University of California, Los Angeles, California 90095 and Xidian University, Xi’An 710071 (China); Wu, Jiaolong [Xidian University, Xi’An 710071 (China)

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies but its usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion
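    The contrast-to-noise ratio used for the quantitative evaluation can be computed between two regions of interest. CNR definitions vary; this sketch uses the pooled standard deviation of the two regions as the noise term, which is an assumption rather than the paper's stated formula:

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest:
    absolute mean difference divided by the pooled standard deviation."""
    noise = np.sqrt((roi_a.var() + roi_b.var()) / 2.0)
    return abs(roi_a.mean() - roi_b.mean()) / noise
```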

  8. A New Matlab De-noising Algorithm for Signal Extraction

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fu-ming; WU Song-lin

    2007-01-01

    The goal of a de-noising algorithm is to reconstruct a signal from its noise-corrupted observations. Perfect reconstruction is seldom possible, and performance is measured under a given fidelity criterion. In a recent work, the authors presented a new Matlab algorithm for de-noising. A key step of the algorithm is selecting an optimal basis from a library of wavelet bases for ideal de-noising. The algorithm was implemented using Matlab's Wavelet Toolbox. The experimental results show that the new algorithm is efficient in signal de-noising.

  9. ECG De-noising Based on Hilbert-Huang Transform

    Institute of Scientific and Technical Information of China (English)

    杨向林; 严洪; 许志; 任兆瑞; 宋晋忠; 姚宇华; 李延军

    2011-01-01

    A method for ECG de-noising based on the Hilbert-Huang transform is proposed. The noisy ECG is decomposed by empirical mode decomposition (EMD), and the Hilbert spectrum of the resulting intrinsic mode functions (IMFs) is analysed; the three main noise types are then removed separately according to their characteristics. Power-line interference and high-frequency noise are mainly mixed into the low-order IMFs of the ECG, whereas baseline wander is mainly contained in the high-order IMFs. A morphological filter based on an adaptive threshold is applied to the low-order IMFs for de-noising, while the baseline wander is estimated from the high-order IMFs by smoothing filters. The results of simulation experiments and practical application demonstrate that the method performs significantly better than wavelet de-noising: it suppresses the three main noise types effectively while preserving the characteristic features of the ECG waveform.

  10. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising

    International Nuclear Information System (INIS)

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in 137Cs and 90Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods, viz. Mann–Kendall and Spearman's rho, along with linear regression, when applied for detecting a linear trend in the time-series data, do not yield results conclusive for trend detection at 95% confidence for most of the samples. The stationary wavelet based hard-thresholding data pruning method, with Haar as the analysing wavelet, was applied to remove the noise present in the same data. Results indicate that the confidence level of the established trend improves significantly after pre-processing, to more than 98%, compared to the conventional non-parametric methods applied to the direct measurements. -- Highlights: ► Environmental trend analysis with wavelet pre-processing was carried out. ► Removal of local fluctuations to obtain the trend in a time series with various mother wavelets. ► Theoretical validation of the methodology with model outputs. ► Efficient detection of trend for 137Cs, 90Sr in bore-hole water samples improves the associated confidence level to more than 98%. ► Wavelet based pre-processing reduces the indecisive nature of the detected trend
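    The Mann-Kendall test referred to above is non-parametric: its S statistic counts concordant minus discordant pairs, and a normal approximation gives the test statistic. A minimal sketch, without the correction for tied values:

```python
import math

def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs."""
    s, n = 0, len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

def mann_kendall_z(x):
    """Normal approximation of the test statistic (no tie correction);
    |z| > 1.96 indicates a trend at the 95% confidence level."""
    n = len(x)
    s = mann_kendall_s(x)
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var)
    if s < 0:
        return (s + 1) / math.sqrt(var)
    return 0.0
```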

  11. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-01-01

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy of single noise is 93.2%. Based on the improved AR(3) model, the denoising method of SHAKF is more effective than traditional methods, and its effect is better than 30%. The random drift error of FOG is reduced effectively, and the precision of the FOG is improved. PMID:27420062
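    The filtering stage can be illustrated by a scalar random-walk Kalman filter with a simplified Sage-Husa-style adaptation of the measurement-noise variance from the innovation sequence. This is an illustrative sketch, not the paper's filter; the process noise `q`, initial `r0` and forgetting factor `b` are assumed values:

```python
import numpy as np

def kalman_denoise(z, q=1e-5, r0=0.01, adapt=True, b=0.98):
    """Scalar random-walk Kalman filter; optionally adapts the measurement
    noise variance R from the innovations (simplified Sage-Husa style)."""
    x, p, r = z[0], 1.0, r0
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p = p + q                      # predict (random-walk state model)
        innov = zk - x                 # innovation
        if adapt and k > 0:
            dk = (1 - b) / (1 - b ** (k + 1))   # fading-memory weight
            r = max((1 - dk) * r + dk * (innov ** 2 - p), 1e-8)
        K = p / (p + r)                # Kalman gain
        x = x + K * innov              # update state estimate
        p = (1 - K) * p                # update error covariance
        out[k] = x
    return out
```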


  13. Identification of radionuclides present in trace amount in a high resolution gamma spectrum using wavelet transformation based denoising

    International Nuclear Information System (INIS)

    For fast and non-destructive qualitative and quantitative estimation of gamma-emitting radionuclides routinely released from nuclear facilities, gamma spectrometry is carried out. In the presence of instrumental and statistical noise, and of Compton continuum contributions from higher-energy radionuclides, proper estimation of activity becomes difficult. Filtering of the spectrum is therefore very important for identification and for evaluation of the area under a gamma peak. In this work, a method based on the wavelet transform (WT) is implemented for denoising high-resolution gamma spectra obtained from plant-leaf samples using an HPGe detector. The work describes the details of the WT, its application to filtering the noise in a gamma spectrum, and the identification of 208Tl, 137Cs, 60Co and 40K present in trace amounts, with an enhanced signal-to-noise ratio (SNR). For a low-activity environmental sample (plant leaves), the WT-based methodology provides a significant (about twofold) improvement in SNR. With the WT methodology, the presence of 60Co (1173 and 1332 keV) and 208Tl (2612 keV) was detected with 99.7% confidence; these were not detected in the original spectrum

  14. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach to image denoising in the wavelet domain. Recent research has used the wavelet transform as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation image to obtain a better denoised estimate, since a naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favourably to more typical methods of denoising and compression in the wavelet domain. On 100 images of the LIVE dataset, comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.
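    The SNR figure of merit used in the comparison can be computed against a clean reference as:

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio in dB of an estimate against a clean reference."""
    err = clean - estimate
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```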

  15. Scheduled denoising autoencoders

    OpenAIRE

    Geras, Krzysztof J.; Sutton, Charles

    2014-01-01

    We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of noise that lower...

  16. Scheduled Denoising Autoencoders

    OpenAIRE

    Geras, Krzysztof; Sutton, Charles

    2015-01-01

    We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of noise that lower...

  17. A connection between score matching and denoising autoencoders.

    Science.gov (United States)

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from them or rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models. PMID:21492012
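    The denoising autoencoder training criterion in question (reconstruct the clean input from a corrupted copy, with tied encoder/decoder weights) can be sketched in a linear toy form. The hidden size, learning rate and noise level below are illustrative assumptions:

```python
import numpy as np

def train_dae(X, hidden=2, noise_std=0.5, lr=0.01, epochs=200, seed=0):
    """Tiny tied-weight *linear* denoising autoencoder: corrupt the input
    with Gaussian noise, encode/decode with W and W.T, and descend the
    gradient of the mean squared error against the *clean* input."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, hidden))
    losses = []
    for _ in range(epochs):
        Xn = X + noise_std * rng.standard_normal(X.shape)  # corruption
        E = Xn @ W @ W.T - X                               # reconstruction error
        losses.append(float(np.mean(E ** 2)))
        G = (2.0 / E.size) * (Xn.T @ E + E.T @ Xn) @ W     # gradient wrt tied W
        W -= lr * G
    return W, losses
```

    The tied weights (decoder = transpose of encoder) are exactly the setting the paper's score-matching analysis justifies.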

  18. Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelets with Gaussian Filter

    Directory of Open Access Journals (Sweden)

    Naga Sravanthi Kota, G.Umamaheswara Reddy

    2011-10-01

    Full Text Available Denoising images using the curvelet transform has been widely applied in many fields for its ability to obtain high-quality images. The curvelet transform is superior to wavelets in representing image edges, such as the geometric characteristics of curves, and has already obtained good results in image denoising. However, artifacts that appear in the results of the curvelet approach prevent its application in fields such as medical imaging. This paper puts forward a fusion-based method using both the wavelet and curvelet transforms, because certain regions of the image exhibit ringing and radial stripes after the curvelet transform. The experimental results indicate that the fusion method has a broad future for eliminating image noise. The results of the algorithm applied to ultrasonic medical images indicate that it can also be used efficiently in medical imaging.

  19. New denoising method based on dual-tree complex wavelet transform and nonlinear time series%基于双树复小波与非线性时间序列的降噪方法

    Institute of Scientific and Technical Information of China (English)

    胥永刚; 赵国亮; 马朝永; 张建宇

    2015-01-01

    A new denoising method based on the dual-tree complex wavelet transform and nonlinear time series analysis was proposed, considering weaknesses, such as phase distortion, of the wavelet soft-threshold denoising method in which the real and imaginary parts of the coefficients are processed individually. The new method processes the magnitude of the complex coefficients instead, exploiting the fact that the magnitude does not oscillate between positive and negative values, which makes it more suitable for threshold denoising, and the fact that the coefficients of a fault signal are always periodic. The nonlinear time series method can be used to strengthen the periodicity of the coefficients caused by the fault signal while restraining the noise. In the proposed method, the fault signal is decomposed by the dual-tree complex wavelet transform to obtain the coefficients of different layers, the nonlinear time series method is used to strengthen the periodicity of the coefficients, and soft-threshold denoising is then carried out to remove the DC component. Finally, the fault characteristic signal is obtained by coefficient reconstruction. The simulation and experimental results show the effectiveness of the method, providing a new efficient denoising approach.

  20. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper the degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of information in images and processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The written code for our video deblurring algorithm in MATLAB contains eight "for" loops. These eight "for" loops utilize 60% of the total execution time of the entire program and so the runtime should be

  1. An Imbalanced Data Classification Algorithm of De-noising Auto-Encoder Neural Network Based on SMOTE

    OpenAIRE

    Zhang Chenggang; Song Jiazhi; Pei Zhili; Jiang Jingqing

    2016-01-01

    Imbalanced data classification problem has always been one of the hot issues in the field of machine learning. Synthetic minority over-sampling technique (SMOTE) is a classical approach to balance datasets, but it may give rise to such problem as noise. Stacked De-noising Auto-Encoder neural network (SDAE), can effectively reduce data redundancy and noise through unsupervised layer-wise greedy learning. Aiming at the shortcomings of SMOTE algorithm when synthesizing new minority class samples...
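As background for this record, the SMOTE balancing step it builds on synthesizes new minority-class samples by interpolating between a sample and one of its k nearest minority-class neighbours. A minimal sketch (function name, parameters, and defaults are my own, not the paper's):

```python
import numpy as np

def smote(X_min, k=3, n_new=20, rng=None):
    """Generate synthetic minority samples by interpolating toward
    one of the k nearest minority-class neighbours (basic SMOTE)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]   # k nearest neighbours per sample
    out = []
    for _ in range(n_new):
        i = rng.integers(n)             # pick a minority sample
        j = nn[i, rng.integers(k)]      # pick one of its neighbours
        lam = rng.random()              # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(1).normal(size=(10, 2))
X_syn = smote(X_min)
```

Because each synthetic point is a convex combination of two real minority samples, it always lies inside the minority class's bounding box, which is also why SMOTE can propagate noise when the seed samples are themselves noisy, the shortcoming the record's SDAE stage addresses.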

  2. Random Correlation Matrix and De-Noising

    OpenAIRE

    Ken-ichi Mitsui; Yoshio Tabata

    2006-01-01

    In Finance, the modeling of a correlation matrix is one of the important problems. In particular, the correlation matrix obtained from market data has the noise. Here we apply the de-noising processing based on the wavelet analysis to the noisy correlation matrix, which is generated by a parametric function with random parameters. First of all, we show that two properties, i.e. symmetry and ones of all diagonal elements, of the correlation matrix preserve via the de-noising processing and the...

  3. Total Variation based Multivariate Shearlet Shrinkage for Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Shengqian Wang

    2013-01-01

    Full Text Available Shearlet as a new multidirectional and multiscale transform is optimally efficient in representing images containing edges. In this paper, a total variation based multivariate shearlet adaptive shrinkage is proposed for discontinuity-preserving image denoising. The multivariate adaptive threshold is employed to reduce the noise. Projected total variation diffusion is used to suppress the pseudo-Gibbs and shearlet-like artifacts. Numerical experiments from piecewise-smooth to textured images demonstrate that the proposed method can effectively suppress noise and nonsmooth artifacts caused by the shearlet transform. Furthermore, it outperforms several existing techniques in terms of structural similarity (SSIM) index, peak signal-to-noise ratio (PSNR), and visual quality.

  4. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    Science.gov (United States)

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noises, which degrade many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, the noise is signal-dependent, and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm shows the following improvements compared with Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate one Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filtering is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms others in terms of both efficiency and robustness. PMID:27238055

  5. Lidar signal de-noising based on wavelet trimmed thresholding technique

    Institute of Scientific and Technical Information of China (English)

    Haitao Fang(方海涛); Deshuang Huang(黄德双)

    2004-01-01

    Lidar is an efficient tool for remote monitoring, but the effective range is often limited by signal-to-noise ratio (SNR). By the power spectral estimation, we find that digital filters are not fit for processing lidar signals buried in noise. In this paper, we present a new method of the lidar signal acquisition based on the wavelet trimmed thresholding technique to increase the effective range of lidar measurements. The performance of our method is investigated by detecting the real signals in noise. The experiment results show that our approach is superior to the traditional methods such as Butterworth filter.

  6. Mammograms Enhancement and Denoising Using Generalized Gaussian Mixture Model in Nonsubsampled Contourlet Transform

    Directory of Open Access Journals (Sweden)

    Xinsheng Zhang

    2009-12-01

    Full Text Available In this paper, a novel algorithm for mammographic image enhancement and denoising based on Multiscale Geometric Analysis (MGA) is proposed. Firstly, mammograms are decomposed into different scales and directional subbands using the Nonsubsampled Contourlet Transform (NSCT). After modeling the coefficients of each directional subband using a Generalized Gaussian Mixture Model (GGMM) according to their statistical properties, they are categorized into strong edges, weak edges and noise by a Bayesian classifier. To enhance suspicious lesions and suppress the noise, a nonlinear mapping function is designed to adjust the coefficients adaptively so as to obtain a good enhancement result with significant features. Finally, the resulting mammographic images are obtained by reconstructing with the modified coefficients using NSCT. Experimental results illustrate that the proposed approach is practicable and robust, and outperforms spatial filters and other wavelet-based methods in terms of mass and microcalcification denoising and enhancement.

  7. Denoising and Back Ground Clutter of Video Sequence using Adaptive Gaussian Mixture Model Based Segmentation for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shanmugapriya. K

    2014-01-01

    Full Text Available The human action recognition system first gathers images by simply querying the name of the action on a web image search engine like Google or Yahoo. Based on the assumption that the set of retrieved images contains relevant images of the queried action, we construct a dataset of action images in an incremental manner. This yields a large image set, which includes images of actions taken from multiple viewpoints in a range of environments, performed by people who have varying body proportions and different clothing. The images mostly present the "key poses", since these images try to convey the action with a single pose. To support this, the existing system first used an incremental image retrieval procedure to collect and clean up the necessary training set for building the human pose classifiers. There are challenges that come at the expense of this broad and representative data. First, the retrieved images are very noisy, since the Web is very diverse. Second, detecting and estimating the pose of humans in still images is more difficult than in videos, partly due to background clutter and the lack of a foreground mask. In videos, foreground segmentation can exploit motion cues to great benefit. In still images, the only cue at hand is appearance information, and therefore the model must address the various challenges associated with different forms of appearance. Therefore, for robust separation, the proposed work introduces a segmentation algorithm based on Gaussian Mixture Models that is adaptive to light illumination, shadow, and white balance. This segmentation algorithm processes the video with or without noise and sets up adaptive background models based on its characteristics; it is a very effective technique for background modeling, classifying each pixel of a video frame as either background or foreground based on a probability distribution.

  8. Discrete Denoising with Shifts

    CERN Document Server

    Moon, Taesup

    2007-01-01

    We introduce S-DUDE, a new algorithm for denoising DMC-corrupted data. The algorithm, which generalizes the recently introduced DUDE (Discrete Universal DEnoiser) of Weissman et al., aims to compete with a genie that has access, in addition to the noisy data, also to the underlying clean data, and can choose to switch, up to $m$ times, between sliding window denoisers in a way that minimizes the overall loss. When the underlying data form an individual sequence, we show that the S-DUDE performs essentially as well as this genie, provided that $m$ is sub-linear in the size of the data. When the clean data is emitted by a piecewise stationary process, we show that the S-DUDE achieves the optimum distribution-dependent performance, provided that the same sub-linearity condition is imposed on the number of switches. To further substantiate the universal optimality of the S-DUDE, we show that when the number of switches is allowed to grow linearly with the size of the data, \\emph{any} (sequence of) scheme(s) fails...

  9. Denoising, deblurring, and superresoluton in mobile phones

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Kamenický, Jan; Flusser, Jan

    Bellingham : SPIE, 2011, 78730I/1-78730I/11. ISBN 978-0-8194-8410-9. ISSN 0277-786X. [IS&T/SPIE Electronic Imaging 2011. San Francisco (US), 24.01.2011-25.01.2011] R&D Projects: GA MŠk 1M0572; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * superresolution * denoising Subject RIV: JD - Computer Applications, Robotics

  10. Nonlocal two dimensional denoising of frequency specific chirp evoked ABR single trials.

    Science.gov (United States)

    Schubert, J Kristof; Teuber, Tanja; Steidl, Gabriele; Strauss, Daniel J; Corona-Strauss, Farah I

    2012-01-01

    Recently, we have shown that denoising evoked potential (EP) images is possible using two dimensional diffusion filtering methods. This restoration allows for an integration of regularities over multiple stimulations into the denoising process. In the present work we propose the nonlocal means (NLM) method for EP image denoising. The EP images were constructed using auditory brainstem responses (ABR) collected in young healthy subjects using frequency specific and broadband chirp stimulations. It is concluded that the NLM method is more efficient than conventional approaches in EP imaging denoising, specially in the case of ABRs, where the relevant information can be easily masked by the ongoing EEG activity, i.e., signals suffer from rather low signal-to-noise ratio SNR. The proposed approach is for the a posteriori denoising of single trials after the experiment and not for real time applications. PMID:23366439
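The nonlocal means filter proposed here for EP images averages pixels weighted by *patch* similarity rather than spatial proximity. A direct (unoptimized) 2-D sketch; the patch size, search window, and smoothing parameter h are chosen purely for illustration:

```python
import numpy as np

def nlm(img, patch=3, search=7, h=0.1):
    """Nonlocal means: each pixel becomes a weighted average of pixels in a
    search window, weighted by exp(-d2/h^2) where d2 is the mean squared
    difference between the two pixels' surrounding patches."""
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    qi, qj = ci + di, cj + dj
                    cand = pad[qi - p:qi + p + 1, qj - p:qj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    w = np.exp(-d2 / h ** 2)
                    num += w * pad[qi, qj]
                    den += w
            out[i, j] = num / den
    return out
```

In the EP-imaging setting of this record, `img` would be the trial-by-time image, so similar responses across repeated stimulations reinforce each other even when they are far apart in the image.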

  11. A method for improving wavelet threshold denoising in laser-induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Bo, E-mail: zhangbo@sia.cn [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Sun, Lanxiang, E-mail: sunlanxiang@sia.cn [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); Yu, Haibin; Xin, Yong; Cong, Zhibo [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China)

    2015-05-01

    The wavelet threshold denoising method is an effective noise suppression approach for noisy laser-induced breakdown spectroscopy signal. In this paper, firstly, the noise sources of LIBS system are summarized. Secondly, wavelet multi-resolution analysis and wavelet threshold denoising method are introduced briefly. As one of the major factors influencing the denoising results in the process of wavelet threshold denoising, the optimal decomposition level selection is studied. Based on the entropy analysis of noisy LIBS signal and noise, a method of choosing optimal decomposition level is presented. Thirdly, the performance of the proposed method is verified by analyzing some synthetic signals. Not only the denoising results of the synthetic signals are analyzed, but also the ultimate denoising capacity of the wavelet threshold denoising method with the optimal decomposition level is explored. Finally, the experimental data analysis implies that the fluctuation of the noisy LIBS signals can be decreased and the weak LIBS signals can be recovered. The optimal decomposition level is able to improve the performance of the denoising results obtained by wavelet threshold denoising with non-optimal wavelet functions. The signal to noise ratios of the elements are improved and the limit of detection values are reduced by more than 50% by using the proposed method. - Highlights: • The noise sources of LIBS system are summarized. • The optimal decomposition level selection method in wavelet threshold denoising is obtained by entropy analysis.
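The wavelet threshold denoising step that this record optimizes can be illustrated with a single-level orthonormal Haar transform plus soft thresholding of the detail band. This sketch is generic and does not reproduce the paper's entropy-based decomposition-level selection:

```python
import numpy as np

def haar_fwd(x):
    """One level of an orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft threshold: shrink coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    a, d = haar_fwd(x)
    return haar_inv(a, soft(d, t))   # threshold the detail band only
```

With `t = 0` the transform reconstructs the signal exactly; increasing `t` suppresses small (noise-dominated) detail coefficients, which is the mechanism whose decomposition depth the paper tunes.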

  12. A method for improving wavelet threshold denoising in laser-induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    The wavelet threshold denoising method is an effective noise suppression approach for noisy laser-induced breakdown spectroscopy signal. In this paper, firstly, the noise sources of LIBS system are summarized. Secondly, wavelet multi-resolution analysis and wavelet threshold denoising method are introduced briefly. As one of the major factors influencing the denoising results in the process of wavelet threshold denoising, the optimal decomposition level selection is studied. Based on the entropy analysis of noisy LIBS signal and noise, a method of choosing optimal decomposition level is presented. Thirdly, the performance of the proposed method is verified by analyzing some synthetic signals. Not only the denoising results of the synthetic signals are analyzed, but also the ultimate denoising capacity of the wavelet threshold denoising method with the optimal decomposition level is explored. Finally, the experimental data analysis implies that the fluctuation of the noisy LIBS signals can be decreased and the weak LIBS signals can be recovered. The optimal decomposition level is able to improve the performance of the denoising results obtained by wavelet threshold denoising with non-optimal wavelet functions. The signal to noise ratios of the elements are improved and the limit of detection values are reduced by more than 50% by using the proposed method. - Highlights: • The noise sources of LIBS system are summarized. • The optimal decomposition level selection method in wavelet threshold denoising is obtained by entropy analysis

  13. Wavelet denoising for P300 single-trial detection

    OpenAIRE

    Saavedra, Carolina; Bougrain, Laurent

    2010-01-01

    Template-based analysis techniques are good candidates to robustly detect transient temporal graphic elements (e.g. event-related potentials, k-complexes, sleep spindles, vertex waves, spikes) in noisy and multi-source electro-encephalographic signals. More specifically, we present the impact on a large dataset of wavelet denoising for detecting evoked potentials in a single-trial P300 speller. Using coiflets in the denoising process allows more stable accuracies to be obtained for all subjects.

  14. Airborne Gravity Data Denoising Based on Empirical Mode Decomposition: A Case Study for SGA-WZ Greenland Test Data

    DEFF Research Database (Denmark)

    Zhao, Lei; Wu, Meiping; Forsberg, René;

    2015-01-01

    Surveying the Earth's gravity field refers to an important domain of Geodesy, involving deep connections with Earth Sciences and Geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting gravity disturbance from the extremely low signal to noise ratio measuring data. In general, the power of noise concentrates on the higher frequency of measuring data, and a low pass filter can be used to eliminate it. However, the noise could distribute in a broad range of frequency while a low pass filter cannot deal with it in the pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measuring data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland.

  15. Speech signal denoising with wavelet-transforms and the mean opinion score characterizing the filtering quality

    Science.gov (United States)

    Yaseen, Alauldeen S.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-03-01

    Speech signal processing is widely used to reduce noise impact in acquired data. During the last decades, wavelet-based filtering techniques have often been applied in communication systems due to their advantages in signal denoising as compared with Fourier-based methods. In this study we consider applications of a 1-D double density complex wavelet transform (1D-DDCWT) and compare the results with the standard 1-D discrete wavelet transform (1D-DWT). The performances of the considered techniques are compared using the mean opinion score (MOS), the primary metric for the quality of the processed signals. A two-dimensional extension of this approach can be used for effective image denoising.

  16. Evaluation of 10 mAs and low-contrast CT image optimization based on the multifractal spectrum in brain of infant

    International Nuclear Information System (INIS)

    Objective: To analyze scanned image optimization based on the multifractal spectrum and image fractal algorithm of 64-slice spiral CT in the infant brain. Methods: The image data of Toshiba Aquilion 64-slice CT scanning using 10 mAs were imported into the image processing toolboxes of Matlab 7.1. The evaluation of the multifractal spectrum and image denoising were performed, and compared with the image quality of conventional low-dose CT using 50 mAs. Results: The low-contrast image scanned at 10 mAs is of no diagnostic value because of serious noise. Image denoising based on the fractal model had the superior characteristic of preserving image detail and a better contrast-to-noise ratio (CNR). There was a group difference in the score of image quality between the raw noisy image and the image optimized by the multifractal spectrum algorithm, though the score was still significantly lower than that of the normal-dose scanned image (F=38.85, P<0.01). The group difference also showed that the image quality for infants can basically meet the requirements of clinical diagnosis with a suitable model denoising algorithm. Conclusions: Image denoising based on the multifractal spectrum model can be used for low-dose and low-contrast CT image optimization. It improved the CNR of the pathological region. The radiation dose of CT scanning in infants could be declined significantly by its further application in the future. (authors)

  17. A Wavelet-Based Approach for Ultrasound Image Restoration

    Directory of Open Access Journals (Sweden)

    Mohammed Tarek GadAllah

    2014-08-01

    Full Text Available Ultrasound images are generally affected by speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle filtration is accompanied by loss of diagnostic features. In this paper a modest new approach is introduced to remove speckle while keeping the fine features of the tissue under diagnosis, by enhancing the image's edges via Curvelet denoising and Wavelet based image fusion. Performance evaluation of our work is done by four quantitative measures: the peak signal to noise ratio (PSNR), the square root of the mean square error (RMSE), a universal image quality index (Q), and Pratt's figure of merit (FOM) as a quantitative measure for edge preservation. In addition, a Canny edge map is extracted as a qualitative measure of edge preservation. The measurements of the proposed approach assured its qualitative and quantitative success in image denoising while maintaining edges as far as possible. A gray phantom is designed to test our proposed enhancement method. The phantom results assure the success and applicability of the proposed approach not only for this research but also for gray-scale diagnostic scan images, including ultrasound B-scans.

  18. D3PO - Denoising, Deconvolving, and Decomposing Photon Observations

    CERN Document Server

    Selig, Marco

    2013-01-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. The primary goal is the simultaneous reconstruction of the diffuse and point-like photon flux from a given photon count image. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution does not depend on the underlying position space, the implementation of the D3PO algorithm uses the NIFTY package to ens...

  19. Fast DOA estimation using wavelet denoising on MIMO fading channel

    Directory of Open Access Journals (Sweden)

    A.V. Meenakshi

    2011-12-01

    Full Text Available This paper presents a tool for the analysis and simulation of direction-of-arrival (DOA) estimation in wireless mobile communication systems over the fading channel. It reviews two methods of DOA estimation. The standard Multiple Signal Classification (MUSIC) can be obtained from the subspace based methods. An improved MUSIC procedure, called Cyclic MUSIC, can automatically classify the signals as desired and undesired based on the known spectral correlation property and estimate only the desired signal's DOA. In this paper, a DOA estimation algorithm using de-noising pre-processing based on time-frequency conversion analysis is proposed, and its performance is analyzed. The focus is on the improvement of DOA estimation at lower SNR and in interference environments. This paper provides a fairly complete image of the performance and statistical efficiency of each of the above two methods with QPSK signals.

  20. Fast DOA estimation using wavelet denoising on MIMO fading channel

    CERN Document Server

    Meenakshi, A V; Kayalvizhi, R; Asha, S

    2011-01-01

    This paper presents a tool for the analysis, and simulation of direction-of-arrival (DOA) estimation in wireless mobile communication systems over the fading channel. It reviews two methods of Direction of arrival (DOA) estimation algorithm. The standard Multiple Signal Classification (MUSIC) can be obtained from the subspace based methods. In improved MUSIC procedure called Cyclic MUSIC, it can automatically classify the signals as desired and undesired based on the known spectral correlation property and estimate only the desired signal's DOA. In this paper, the DOA estimation algorithm using the de-noising pre-processing based on time-frequency conversion analysis was proposed, and the performances were analyzed. This is focused on the improvement of DOA estimation at a lower SNR and interference environment. This paper provides a fairly complete image of the performance and statistical efficiency of each of above two methods with QPSK signal.
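The standard MUSIC algorithm reviewed in this record projects candidate steering vectors onto the noise subspace of the array covariance matrix and searches for angles where that projection vanishes. A minimal noise-free simulation for a half-wavelength uniform linear array (the array size, source angle, and scan grid are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, true_deg = 8, 200, 20.0           # sensors, snapshots, source DOA

def steer(deg):
    # Steering vector of a half-wavelength spaced uniform linear array.
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))

# One narrowband source, noise-free snapshots (keeps the example exact).
s = rng.normal(size=N) + 1j * rng.normal(size=N)
X = np.outer(steer(true_deg), s)        # M x N data matrix

R = X @ X.conj().T / N                  # sample covariance
w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
En = V[:, :-1]                          # noise subspace: M-1 smallest modes

grid = np.arange(-90.0, 90.0, 0.5)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(g)) ** 2
                 for g in grid])        # MUSIC pseudo-spectrum
est = grid[np.argmax(spec)]             # peak location = DOA estimate
```

The de-noising pre-processing the record proposes would be applied to `X` before the covariance estimate, which is where low SNR degrades the subspace split.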

  1. Airborne Gravity Data Denoising Based on Empirical Mode Decomposition: A Case Study for SGA-WZ Greenland Test Data

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2015-10-01

    Full Text Available Surveying the Earth’s gravity field refers to an important domain of Geodesy, involving deep connections with Earth Sciences and Geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting gravity disturbance from the extremely low signal to noise ratio measuring data. In general, the power of noise concentrates on the higher frequency of measuring data, and a low pass filter can be used to eliminate it. However, the noise could distribute in a broad range of frequency while a low pass filter cannot deal with it in the pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measuring data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland. Compared to the solutions using a finite impulse response filter (FIR), the new results are improved by 40% and 10% in root mean square (RMS) of internal consistency and external accuracy, respectively.
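EMD denoising of the kind used here decomposes the signal into intrinsic mode functions (IMFs) by sifting and then drops or filters the noisiest modes. A deliberately crude sketch using linear-interpolated envelopes and a fixed sift count (real EMD uses cubic-spline envelopes and proper stopping criteria):

```python
import numpy as np

def extrema(x):
    i = np.arange(1, len(x) - 1)
    mx = i[(x[i] > x[i - 1]) & (x[i] > x[i + 1])]   # local maxima
    mn = i[(x[i] < x[i - 1]) & (x[i] < x[i + 1])]   # local minima
    return mx, mn

def sift(x, n_sift=10):
    """Extract one IMF by repeatedly subtracting the mean of the upper and
    lower envelopes (linear interpolation stands in for splines here)."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        mx, mn = extrema(h)
        if len(mx) < 2 or len(mn) < 2:
            break
        upper = np.interp(t, mx, h[mx])
        lower = np.interp(t, mn, h[mn])
        h = h - (upper + lower) / 2.0
    return h

def emd(x, n_imfs=3):
    imfs, res = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(res)
        imfs.append(imf)
        res = res - imf
    return imfs, res   # by construction x == sum(imfs) + res

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)
imfs, res = emd(x)
```

Denoising then amounts to reconstructing from the residual and the lower-frequency IMFs while discarding the first (highest-frequency, most noise-like) modes; unlike a FIR low-pass filter, the cut adapts to the signal's own oscillation scales.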

  2. Denoising by Higher Order Statistics

    OpenAIRE

    Teuber, Tanja; Remmele, Steffen; Hesser, Jürgen; Steidl, Gabriele

    2011-01-01

    A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an addit...

  3. Multi-focus image fusion algorithm based on shearlets

    Institute of Scientific and Technical Information of China (English)

    Qiguang Miao; Cheng Shi; Pengfei Xu; Mei Yang; Yaobo Shi

    2011-01-01

    Shearlets not only possess all properties that other transforms have, but also are equipped with a rich mathematical structure similar to wavelets, which is associated with a multi-resolution analysis. Recently, shearlets have been used in image denoising, sparse image representation, and edge detection. However, their application in image fusion is still under study. In this letter, we study the feasibility of image fusion using shearlets. Fusion rules of larger high-frequency coefficients based on regional energy, regional variance, and absolute value are proposed because the shearlet transform can catch detailed information in any scale and any direction. The fusion accuracy is further improved by a region consistency check. Several different experiments are adopted to prove that fusion results based on the shearlet transform can acquire better fusion quality than other methods.

  4. ECG signal processing by wavelet decomposition and reconstruction denoising based on the Mallat algorithm

    Institute of Scientific and Technical Information of China (English)

    钟丽辉; 魏贯军

    2012-01-01

    In order to extract weak, low-SNR ECG signals, wavelet decomposition and reconstruction denoising based on the Mallat algorithm was applied to ECG denoising. First, the wavelet basis function for decomposition and reconstruction was chosen; second, the number of decomposition levels was determined; third, the ECG was reconstructed directly from the frequency bands in which the useful signal dominates. Finally, Matlab simulation experiments on ECG records from the MIT-BIH standard database showed that wavelet decomposition and reconstruction can effectively filter out baseline drift and high-frequency EMG interference, and is more convenient to apply than traditional filters.
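    The decompose-discard-reconstruct scheme described above can be sketched in a few lines of NumPy. The abstract does not name the wavelet basis or the number of levels, so the Haar wavelet, the two-level setting, and the `denoise` helper below are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Mallat analysis step with the Haar wavelet:
    split an even-length signal into approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse (synthesis) step: perfectly reconstructs the signal."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=2):
    """Decompose `levels` times, discard the detail (high-frequency)
    bands, and reconstruct from the approximation only, removing
    high-frequency interference such as EMG noise."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.zeros_like(d))   # zero out high-frequency bands
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

In practice one would keep (or merely threshold) the detail bands that still contain QRS energy; discarding them wholesale, as here, is the simplest form of band selection.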

  5. Algorithm of Spatio-temporal Combined Real-time Video Denoising Based on Non-local Means

    Institute of Scientific and Technical Information of China (English)

    叶庆伟; 谢永昌; 狄红卫

    2012-01-01

    An efficient real-time video denoising algorithm is proposed for video surveillance systems. By applying non-local-means based motion detection to multi-frame images, the algorithm adaptively distinguishes the still and moving regions of the video image. A temporal weighted average filter is applied to the still regions and a spatial ANL filter to the moving regions. Experimental results show that, by fully exploiting the spatio-temporal information of the video sequence, the proposed algorithm significantly improves the signal-to-noise ratio and the subjective image quality without introducing motion ghosting.
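    The still/moving split can be illustrated with a minimal sketch. The paper's actual method uses a non-local-means motion test and an ANL spatial filter; the plain frame difference and the 3x3 mean below are hypothetical stand-ins chosen only to show the control flow:

```python
import numpy as np

def spatiotemporal_denoise(prev, curr, motion_thresh):
    """Minimal sketch of spatio-temporal video denoising: pixels whose
    frame-to-frame change exceeds `motion_thresh` are treated as moving
    and filtered spatially (avoiding motion ghosting); still pixels are
    averaged over time, which suppresses noise without blurring."""
    prev, curr = np.asarray(prev, float), np.asarray(curr, float)
    moving = np.abs(curr - prev) > motion_thresh
    temporal = 0.5 * (prev + curr)          # temporal weighted average
    padded = np.pad(curr, 1, mode='edge')   # 3x3 spatial mean filter
    h, w = curr.shape
    spatial = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return np.where(moving, spatial, temporal)
```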

  6. Variational denoising method for electronic speckle pattern interferometry

    Institute of Scientific and Technical Information of China (English)

    Fang Zhang; Wenyao Liu; Chen Tang; Jinjiang Wang; Li Ren

    2008-01-01

    Traditional speckle fringe patterns obtained by electronic speckle pattern interferometry (ESPI) are inherently noisy and of limited visibility, so denoising is the key problem in ESPI. We present a variational denoising method for ESPI. This method transforms image denoising into minimizing an appropriate penalized energy function and solving a partial differential equation. We test the proposed method on computer-simulated and experimental speckle correlation fringes, respectively. The results show that this technique is capable of significantly improving the quality of fringe patterns, and it works well as a pre-processing step for ESPI fringe patterns.

  7. Wavelet-domain de-noising of optical coherent tomography data for biomedical applications

    International Nuclear Information System (INIS)

    Optical coherent tomography (OCT) is a rapidly developing method of fundamental and applied research. Detection and processing of OCT images is a very important problem of applied physics and optical signal processing. In the present paper we demonstrate effective wavelet-domain de-noising of OCT images. We implement an algorithm for wavelet-domain de-noising of OCT data and apply it to the study of test samples and to in vivo nail tomography. High de-noising efficiency with no significant loss of information about the internal sample structure is observed.

  8. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre...... Amsterdam Library of Object Images (Geusebroek et al., 2005) [7]....

  9. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    Science.gov (United States)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid, and non-destructive. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. However, during acquisition the glucose photoacoustic signals are inevitably polluted by factors such as the pulsed laser, electronic noise, and environmental noise. These disturbances impair the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved, and a novel wavelet threshold function is proposed. The novel function uses two threshold values and two different factors; it is continuous with high-order derivatives and can be regarded as a compromise between wavelet soft and hard threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising methods, the improved shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root-mean-square error (RMSE), and has a better overall denoising effect. Therefore, the improved method has potential value for denoising glucose photoacoustic signals.
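    The abstract does not give the authors' exact two-threshold function, but the standard "firm" (semi-soft) threshold is the classic compromise of this kind and makes the idea concrete; the sketch below is that textbook function, not the paper's:

```python
import numpy as np

def firm_threshold(x, t1, t2):
    """Firm (semi-soft) thresholding with two thresholds t1 < t2:
    coefficients below t1 are zeroed, coefficients above t2 are kept
    unchanged, and values in between are shrunk linearly.  The result
    is continuous (unlike hard thresholding) and does not bias large
    coefficients (unlike soft thresholding)."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x)
    return np.where(mag <= t1, 0.0,
           np.where(mag >= t2, x,
                    np.sign(x) * t2 * (mag - t1) / (t2 - t1)))
```

Applied to the wavelet coefficients of each shift of a shift-invariant (cycle-spinning) transform and averaged over shifts, this yields the kind of soft/hard compromise the entry describes.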

  10. Multi-Scale Patch-Based Image Restoration.

    Science.gov (United States)

    Papyan, Vardan; Elad, Michael

    2016-01-01

    Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by plain averaging. This concept has been demonstrated to be highly effective, often leading to state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived for addressing this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on patches extracted from the target image at different scales. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme also alleviates another shortcoming of patch-based restoration algorithms: the fact that a local (patch-based) prior serves as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage for the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively. PMID:26571527

  11. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    OpenAIRE

    Chen Xing; Li Ma; Xiaoquan Yang

    2016-01-01

    Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and super...

  12. FMI Signal De-noising Based on Wavelet Transform and Median Filtering

    Institute of Scientific and Technical Information of China (English)

    张小涛; 张烈辉; 冯国庆; 魏伟

    2009-01-01

    The algorithm of two-dimensional discrete wavelet decomposition and reconstruction is introduced. On this basis, the application of the traditional median filtering method combined with the two-dimensional discrete wavelet transform to noise elimination is analyzed, and a corresponding application example is given: the new method was applied to an FMI image from a cavernous reservoir well, and the resulting image was compared with those obtained by each de-noising method used alone. The results showed that preprocessing FMI images with the two-dimensional discrete wavelet transform combined with the traditional method effectively eliminates random noise and enhances the automatic identification of fractures and caverns, performing much better than either the median filter or the wavelet transform used alone.
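    The median-filter half of the combination is simple enough to sketch directly. The entry does not state the window size, so the 3x3 window and edge padding below are assumptions; in the paper's pipeline this step would be paired with a 2-D wavelet thresholding pass:

```python
import numpy as np

def median_filter_3x3(img):
    """Classic 3x3 median filter: each pixel is replaced by the median
    of its neighbourhood, which suppresses impulse (salt-and-pepper
    style) noise while preserving edges better than mean filtering."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # stack the 9 shifted views and take the median along the new axis
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```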

  13. Contractive De-noising Auto-encoder

    OpenAIRE

    Chen, Fu-qiang; Wu, Yan; Zhao, Guo-dong; Zhang, Jun-Ming; Zhu, Ming; Bai, Jing

    2013-01-01

    Auto-encoder is a special kind of neural network based on reconstruction. De-noising auto-encoder (DAE) is an improved auto-encoder which is robust to the input by corrupting the original data first and then reconstructing the original input by minimizing the reconstruction error function. And contractive auto-encoder (CAE) is another kind of improved auto-encoder to learn robust feature by introducing the Frobenius norm of the Jacobean matrix of the learned feature with respect to the origin...

  14. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate the cross-corpora performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can provide discrimination between different emotions in visual expressions beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.

  15. About Classification Methods Based on Tensor Modelling for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Salah Bourennane

    2010-03-01

    Full Text Available Denoising and dimensionality reduction (DR) are key issues in improving classifier efficiency for hyperspectral images (HSI). The recently developed multi-way Wiener filtering is used, and principal component analysis (PCA), independent component analysis (ICA) and projection pursuit (PP) approaches to DR have been investigated. These matrix algebra methods are applied to vectorized images; thereby, the spatial arrangement is lost. To jointly take advantage of the spatial and spectral information, HSI has recently been represented as a tensor. Offering multiple ways to decompose data orthogonally, we introduce filtering and DR methods based on multilinear algebra tools. The DR is performed on the spectral way using PCA, or PP, jointly with an orthogonal projection onto a lower-dimensional subspace of the spatial ways. We show the classification improvement of the introduced methods relative to existing methods. This experiment is exemplified using real-world HYDICE data. Keywords: multi-way filtering, dimensionality reduction, matrix and multilinear algebra tools, tensor processing.

  16. Imaging Liver Lesions Using Grating-Based Phase-Contrast Computed Tomography with Bi-Lateral Filter Post-Processing

    OpenAIRE

    Herzen, Julia; Marian S Willner; Fingerle, Alexander A.; Peter B Noël; Köhler, Thomas; Drecoll, Enken; Rummeny, Ernst J.; Pfeiffer, Franz

    2014-01-01

    X-ray phase-contrast imaging shows improved soft-tissue contrast compared to standard absorption-based X-ray imaging. Especially the grating-based method seems to be one promising candidate for clinical implementation due to its extendibility to standard laboratory X-ray sources. Therefore the purpose of our study was to evaluate the potential of grating-based phase-contrast computed tomography in combination with a novel bi-lateral denoising method for imaging of focal liver lesions in an ex...

  17. Research of image enhancement of dental cast based on wavelet transformation

    Science.gov (United States)

    Zhao, Jing; Li, Zhongke; Liu, Xingmiao

    2010-10-01

    This paper describes a 3D laser scanner for dental casts that realizes non-contact depth measurement. The scanner and the control PC make up a 3D scanning system that accomplishes real-time digitization of the dental cast. Owing to the complex shape of the dental cast and the random nature of the scanned points, the detected feature curves are generally not smooth or not accurate enough for subsequent applications. The purpose of this paper is to present an algorithm for enhancing the useful points and eliminating the noise. An image enhancement algorithm based on the wavelet transform and fuzzy set theory is therefore presented. First, the multi-scale wavelet transform is adopted to decompose the input image, which extracts the multi-scale characteristics of the image. Second, wavelet thresholding is used for image de-noising, and then traditional fuzzy set theory is improved and applied to enhance the low-frequency wavelet coefficients and the high-frequency wavelet coefficients in different directions at each scale. Finally, the inverse wavelet transform is applied to synthesize the image. A group of experimental results demonstrates that the proposed algorithm is effective for dental cast image de-noising and enhancement; the edges of the enhanced image are distinct, which benefits subsequent image processing.

  18. Targets Separation and Imaging Method in Sparse Scene Based on Cluster Result of Range Profile Peaks

    OpenAIRE

    Yang, Qiu; Qun ZHANG; Wang, Min; Sun, Li

    2015-01-01

    This paper focuses on synthetic aperture radar (SAR) imaging of spatially sparse targets such as ships on the sea, and proposes a method for target separation and imaging of sparse scenes based on clustering of range profile peaks. First, a wavelet de-noising algorithm is used to preprocess the original echo, and then the range profiles at different viewing positions can be obtained by range compression and range migration correction. Peaks of the range profiles can be detected by the fast ...

  19. LPI Radar Signal De-noising Based on Frequency-Domain SVD

    Institute of Scientific and Technical Information of China (English)

    赵凯凯; 张柏林; 刘璘; 罗旭蒙

    2015-01-01

    With the extensive application of low probability of intercept (LPI) technology on the battlefield, the LPI radar signals received by reconnaissance receivers are mostly buried in noise. In order to detect threats accurately, an LPI radar signal de-noising method based on frequency-domain singular value decomposition (SVD) is proposed, building on a study of time-domain SVD de-noising. Simulation shows that the proposed method can improve the signal-to-noise ratio (SNR) of the linear frequency modulated continuous wave (LFMCW) and binary phase shift keying (BPSK) signals widely used by LPI radars from 0 dB to at least 6 dB. Frequency-domain SVD de-noising greatly increases the probability that reconnaissance receivers detect LPI radars.
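    The time-domain baseline the paper builds on is rank-truncated Hankel SVD, which can be sketched compactly. The embedding dimension and rank below are illustrative choices, and the paper applies the same idea to the signal's spectrum rather than to raw samples as done here:

```python
import numpy as np

def svd_denoise(x, rank):
    """Rank-truncated SVD denoising: embed the 1-D signal in a Hankel
    matrix, keep the largest `rank` singular values (the structured
    signal subspace), and average the anti-diagonals to map the
    low-rank matrix back to a signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = n // 2
    # Hankel embedding: rows are overlapping windows of the signal
    H = np.array([x[i:i + m] for i in range(n - m + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0                       # discard noise-dominated components
    Hd = (U * s) @ Vt
    # anti-diagonal averaging (Hankelisation) back to a 1-D signal
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(Hd.shape[0]):
        out[i:i + m] += Hd[i]
        cnt[i:i + m] += 1
    return out / cnt
```

A single sinusoid yields a rank-2 Hankel matrix, so with `rank=2` the sketch reconstructs it essentially exactly; noise spreads across the remaining singular values and is suppressed by the truncation.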

  20. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Science.gov (United States)

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  1. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  2. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as ”the pre-image problem”. Since the feature space mapping is...... typically not bijective, pre-image estimation is inherently illposed. In many applications, including functional magnetic resonance imaging (fMRI) data which is the application used for illustration in the present work, it is of interest to denoise a sparse signal. To meet this objective we investigate...... sparse pre-image reconstruction by Lasso regularization. We find that sparse estimation provides better brain state decoding accuracy and a more reproducible pre-image. These two important metrics are combined in an evaluation framework which allow us to optimize both the degree of sparsity and the non...

  3. Bayesian Inference for Neighborhood Filters With Application in Denoising.

    Science.gov (United States)

    Huang, Chao-Tsung

    2015-11-01

    Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they are originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and the range variance can be estimated successfully and efficiently. PMID:26259244
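    The range-weighted neighbourhood filters the entry analyzes are easy to state concretely. The 1-D bilateral sketch below is the textbook form (spatial Gaussian times range Gaussian), not the paper's Bayesian fitting machinery; the parameter names `sigma_s` and `sigma_r` are the usual spatial and range bandwidths:

```python
import numpy as np

def bilateral_1d(x, radius, sigma_s, sigma_r):
    """Range-weighted neighbourhood (bilateral) filter on a 1-D signal:
    each sample becomes a weighted average of its neighbours, with
    weights decaying both with spatial distance (sigma_s) and with
    intensity difference (sigma_r).  The range term is what makes the
    filter edge-preserving: samples across an edge get near-zero weight."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        nb = x[lo:hi]
        dist = np.arange(lo, hi) - i
        w = np.exp(-dist**2 / (2 * sigma_s**2)) \
          * np.exp(-(nb - x[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * nb) / np.sum(w)
    return out
```

Setting `sigma_r` very large recovers a plain Gaussian smoother; the paper's contribution is, in effect, a principled way to estimate `sigma_r` from the noisy data itself.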

  4. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    Directory of Open Access Journals (Sweden)

    Kotnik Bojan

    2007-01-01

    Full Text Available This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise robust speech signal parameterization algorithm. The proposed procedure is based on time-frequency speech signal representation using wavelet packet decomposition. A new modified soft thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. The adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to separately analyze voiced and unvoiced segments of the speech signal. The main feature vector consists of a combination of log-root compressed wavelet packet parameters and autoregressive parameters. The final output feature vector is produced using a two-staged feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.

  5. WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis

    Directory of Open Access Journals (Sweden)

    Li Song

    2010-04-01

    Full Text Available Abstract Background Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification. Results We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program in the Trans-Proteomic Pipeline (TPP), a commonly used open source proteomics analysis pipeline. Conclusions We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than the ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.

  6. Real-Time Wavelet-Based Coordinated Control of Hybrid Energy Storage Systems for Denoising and Flattening Wind Power Output

    Directory of Open Access Journals (Sweden)

    Tran Thai Trung

    2014-10-01

    Full Text Available Since the penetration level of wind energy is continuously increasing, the negative impact caused by the fluctuation of wind power output needs to be carefully managed. This paper proposes a novel real-time coordinated control algorithm based on a wavelet transform to mitigate both short-term and long-term fluctuations by using a hybrid energy storage system (HESS). The short-term fluctuation is eliminated by using an electric double-layer capacitor (EDLC), while the wind-HESS system output is kept constant during each 10-min period by a Ni-MH battery (NB). State-of-charge (SOC) control strategies for both the EDLC and the NB are proposed to maintain the SOC level of storage within safe operating limits. A ramp rate limitation (RRL) requirement is also considered in the proposed algorithm. The effectiveness of the proposed algorithm has been tested by using real-time simulation. The simulation model of the wind-HESS system is developed in the real-time digital simulator (RTDS/RSCAD) environment. The proposed algorithm is also implemented as a user defined model of the RSCAD. The simulation results demonstrate that the HESS with the proposed control algorithm can indeed assist in dealing with the variation of wind power generation. Moreover, the proposed method shows better performance in smoothing out the fluctuation and managing the SOC of the battery and EDLC than the simple moving average (SMA) based method.

  7. Wavelet filter based de-noising of weak neutron flux signal for dynamic control rod reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon Ghu; Bae Sung Man; Lee, Chang Sup [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2002-10-01

    The measurement and validation of control rod bank (group) worths are typically required by the start-up physics test programs for pressurized water reactors (PWR). Recently, the DCRM™ (Dynamic Control rod Reactivity Measurement) technique was developed by KEPRI and will be implemented in the near future. The method is based on a fast and complete bank insertion within a short period of time, which makes the range of the reactivity variation very large, from below the background gamma level to the vicinity of the nuclear heating point. The weak flux signal below the background gamma level is highly contaminated by noise, which causes large reactivity fluctuations. This paper describes an efficient noise filtering method using wavelet filters. The performance of the developed method is demonstrated with measurement data from YGN-3 cycle 7.

  8. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    Science.gov (United States)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space- and ground-based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has significant limitations, as it considerably reduces spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; the accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.

  9. Denoising of Ultrasonic Testing Signal Based on Wavelet Footprint and Matching Pursuit%基于小波迹和匹配追踪算法的超声波检测信号消噪

    Institute of Scientific and Technical Information of China (English)

    何明格; 殷鹰; 林丽君; 赵秀粉; 殷国富

    2011-01-01

    In order to improve the accuracy and precision of online ultrasonic nondestructive testing of parts in a large state-owned enterprise, wavelet footprints and matching pursuit were employed to denoise the ultrasonic testing signal. First, a wavelet footprint dictionary is constructed from the wavelet basis functions. Second, a threshold is applied in the footprint domain to remove the noise from the noisy signal. Finally, using the matching pursuit algorithm, a sparse representation of the testing signal is obtained as a combination of a certain number of wavelet footprints from the dictionary after a finite number of iterations. Wavelet footprints efficiently characterize the singular structures of a signal, and denoising in the footprint domain better exploits the dependency of wavelet coefficients across scales than traditional wavelet denoising, which simply shrinks the coefficients without considering inter-scale correlation. Simulation experiments comparing this method with traditional wavelet hard- and soft-threshold denoising showed that it outperforms the traditional wavelet methods, and results on real ultrasonic testing signals also demonstrate its superiority.

  10. Electrocardiogram Denoised Signal by Discrete Wavelet Transform and Continuous Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Akram Aouinet

    2014-01-01

    Full Text Available One of the commonest problems in electrocardiogram (ECG) signal processing is denoising. In this paper a denoising technique based on the discrete wavelet transform (DWT) has been developed. To evaluate the proposed technique, we compare it to the continuous wavelet transform (CWT). Performance evaluation uses parameters such as the mean square error (MSE) and the signal-to-noise ratio (SNR); computations show that the proposed technique outperforms the CWT.
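    The two evaluation metrics named above have standard definitions, sketched here against a clean reference signal (the convention of measuring output SNR as signal power over residual power is the usual one, though papers vary):

```python
import numpy as np

def mse(clean, denoised):
    """Mean square error between the clean reference and the output."""
    clean, denoised = np.asarray(clean, float), np.asarray(denoised, float)
    return np.mean((clean - denoised) ** 2)

def snr_db(clean, denoised):
    """Output signal-to-noise ratio in dB: clean-signal power over the
    power of the residual error after denoising."""
    clean, denoised = np.asarray(clean, float), np.asarray(denoised, float)
    return 10 * np.log10(np.sum(clean ** 2) / np.sum((clean - denoised) ** 2))
```

For example, a constant unit signal with a uniform offset of 0.1 gives MSE 0.01 and an output SNR of 20 dB.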

  11. Independent Component Analysis and Decision Trees for ECG Holter Recording De-Noising

    OpenAIRE

    Jakub Kuzilek; Vaclav Kremen; Filip Soucek; Lenka Lhotska

    2014-01-01

    We have developed a method focusing on ECG signal de-noising using Independent Component Analysis (ICA). This approach combines JADE source separation and a binary decision tree for identification and subsequent removal of ECG noise. In order to test the efficiency of this method, a wavelet-based de-noising method was used as a comparison to standard filtering. Data freely available at the Physionet medical data storage were evaluated. The evaluation criterion was the root mean square error (RMSE) between origin...

  12. Nonlinear denoising of transient signals with application to event related potentials

    CERN Document Server

    Effern, A; Schreiber, T; Grunwald, T; David, P; Elger, C E

    2000-01-01

    We present a new wavelet-based method for the denoising of event related potentials (ERPs), employing techniques recently developed for the paradigm of deterministic chaotic systems. The denoising scheme has been constructed to be appropriate for short and transient time sequences using circular state space embedding. Its effectiveness was successfully tested on simulated signals as well as on ERPs recorded from within a human brain. The method enables the study of individual ERPs against strong ongoing brain electrical activity.

  13. Denoising and Frequency Analysis of Noninvasive Magnetoencephalography Sensor Signals for Functional Brain Mapping

    CERN Document Server

    Ukil, A

    2015-01-01

    Magnetoencephalography (MEG) is an important noninvasive, nonhazardous technology for functional brain mapping, measuring the magnetic fields due to the intracellular neuronal current flow in the brain. However, most often, the inherent level of noise in the MEG sensor data collection process is large enough to obscure the signal(s) of interest. In this paper, a denoising technique based on the wavelet transform and the multiresolution signal decomposition technique along with thresholding is presented, substantiated by application results. Thereafter, different frequency analyses are performed on the denoised MEG signals to identify the major frequencies of the brain oscillations present in the denoised signals. Time-frequency plots (spectrograms) of the denoised signals are also provided.

  14. Feedwater flowrate estimation based on the two-step de-noising using the wavelet analysis and an autoassociative neural network

    International Nuclear Information System (INIS)

    This paper proposes an improved signal processing strategy for accurate feedwater flowrate estimation in nuclear power plants. It is generally known that ~2% thermal power errors occur due to fouling phenomena in feedwater flowmeters. In the proposed strategy, the noises included in the feedwater flowrate signal are classified into rapidly varying noises and gradually varying noises according to their characteristics in the frequency domain. The estimation precision is enhanced by introducing a low pass filter with the wavelet analysis against rapidly varying noises, and an autoassociative neural network which takes charge of the correction of only the gradually varying noises. The modified multivariate stratification sampling using the concept of time stratification and the MAXIMIN criteria is developed to overcome the shortcoming of a general random sampling. In addition, the multi-stage robust training method is developed to increase the quality and reliability of training signals. Some validations using the simulated data from a micro-simulator were carried out. In the validation tests, the proposed methodology removed both rapidly varying noises and gradually varying noises in each respective de-noising step, and the 5.54% root mean square errors of the initial noisy signals were decreased to 0.674% after denoising. These results indicate that it is possible to estimate the reactor thermal power more elaborately by adopting this strategy. (author). 16 refs., 6 figs., 2 tabs

  15. Feedwater flowrate estimation based on the two-step de-noising using the wavelet analysis and an autoassociative neural network

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Gyun Young; Choi, Seong Soo; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1999-04-01

    This paper proposes an improved signal processing strategy for accurate feedwater flowrate estimation in nuclear power plants. It is generally known that ~2% thermal power errors occur due to fouling phenomena in feedwater flowmeters. In the proposed strategy, the noises included in the feedwater flowrate signal are classified into rapidly varying noises and gradually varying noises according to their characteristics in the frequency domain. The estimation precision is enhanced by introducing a low pass filter with the wavelet analysis against rapidly varying noises, and an autoassociative neural network which takes charge of the correction of only the gradually varying noises. The modified multivariate stratification sampling using the concept of time stratification and the MAXIMIN criteria is developed to overcome the shortcoming of a general random sampling. In addition, the multi-stage robust training method is developed to increase the quality and reliability of training signals. Some validations using the simulated data from a micro-simulator were carried out. In the validation tests, the proposed methodology removed both rapidly varying noises and gradually varying noises in each respective de-noising step, and the 5.54% root mean square errors of the initial noisy signals were decreased to 0.674% after denoising. These results indicate that it is possible to estimate the reactor thermal power more elaborately by adopting this strategy. (author). 16 refs., 6 figs., 2 tabs.

  16. ECG Signal Denoising By Wavelet Transform Thresholding

    Directory of Open Access Journals (Sweden)

    Mikhled Alfaouri

    2008-01-01

    Full Text Available In recent years, the ECG signal has played an important role in the primary diagnosis, prognosis and survival analysis of heart diseases. In this paper a new approach to determining the threshold value for ECG signal denoising is proposed using wavelet transform coefficients. Electrocardiography has had a profound influence on the practice of medicine. The electrocardiogram signal contains an important amount of information that can be exploited in different manners. The ECG signal allows for the analysis of anatomic and physiologic aspects of the whole cardiac muscle. Different ECG signals are used to verify the proposed method using MATLAB software. The method presented in this paper is compared with Donoho's method for signal denoising, and better results are obtained for ECG signals by the proposed algorithm.
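    Threshold determination can be illustrated with Donoho's classical universal threshold, the baseline this paper compares against: the noise level is estimated robustly from the finest-scale detail coefficients via the median absolute deviation, and the threshold is sigma*sqrt(2 ln n). Below is a sketch with a hand-rolled one-level Haar transform in plain numpy; the names and the test signal are illustrative, not from the paper.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform: approximation + detail."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def universal_denoise(x):
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745             # robust noise estimate (MAD)
    thr = sigma * np.sqrt(2.0 * np.log(x.size))       # Donoho's universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft shrinkage
    return haar_idwt(a, d)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.2 * rng.standard_normal(t.size)
out = universal_denoise(noisy)
```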

  17. Semantic-based high resolution remote sensing image retrieval

    Science.gov (United States)

    Guo, Dihua

    High Resolution Remote Sensing (HRRS) imagery has been experiencing extraordinary development in the past decade. Technology development means increased-resolution imagery is available at lower cost, making it a precious resource for planners, environmental scientists, as well as others who can learn from the ground truth. Image retrieval plays an important role in managing and accessing huge image databases. Current image retrieval techniques cannot satisfy users' requests to retrieve remote sensing images based on semantics. In this dissertation, we make two fundamental contributions to the area of content based image retrieval. First, we propose a novel unsupervised texture-based segmentation approach suitable for accurately segmenting HRRS images. The results of existing segmentation algorithms dramatically deteriorate if simply applied to HRRS images. This is primarily due to the multiple texture scales and the high level of noise present in these images. Therefore, we propose an effective and efficient segmentation model, which is a two-step process. At the high level, we improve the unsupervised segmentation algorithm by coping with two special features possessed by HRRS images. By preprocessing images with the wavelet transform, we not only obtain multi-resolution images but also denoise the original images. By optimizing the splitting results, we solve the problem of textons in HRRS images existing at different scales. At the fine level, we employ fuzzy classification segmentation techniques with adjusted parameters for different land cover. We implement our algorithm using real world 1-foot resolution aerial images. Second, we devise methodologies to automatically annotate HRRS images based on semantics. In this, we address the issue of semantic feature selection, the major challenge faced by semantic-based image retrieval. To discover and make use of hidden semantics of images is application dependent.
    One type of the semantics in HRRS images is conveyed by composite

  18. Dictionary Based Image Segmentation

    OpenAIRE

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets representation of the curve, which makes our method topologically adaptive. In addition, we suggest a multi-label version of the method. Finally, we improve upon a similar texture representation, by formulating...

  19. 基于LabVIEW的宽频时变信号阈值去噪系统设计%Design of threshold denoising system for broadband time-varying signal based on LabVIEW

    Institute of Scientific and Technical Information of China (English)

    张俊涛; 曹梦娜; 张涛

    2014-01-01

    A fast and efficient virtual denoising system for broadband time-varying signals containing Gaussian noise was designed using LabVIEW's powerful signal processing function library. The power spectrum of the broadband time-varying signal containing Gaussian noise is analyzed, and wavelet threshold denoising is applied according to its spectral features. The optimal parameters of the thresholding method are found by analyzing and comparing the results of different thresholding criteria and of hard and soft threshold denoising. The results show that the LabVIEW-based thresholding method achieves a good effect on broadband time-varying signals containing Gaussian noise, with excellent characteristics in both the time and frequency domains.
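    The hard and soft thresholding rules compared in this system differ only in how coefficients above the threshold are treated; a minimal numpy sketch:

```python
import numpy as np

def hard_threshold(c, thr):
    return np.where(np.abs(c) > thr, c, 0.0)               # keep-or-kill

def soft_threshold(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)   # shrink toward zero

coeffs = np.array([-3.0, -0.4, 0.1, 0.9, 2.5])
print(hard_threshold(coeffs, 1.0))   # large coefficients kept intact
print(soft_threshold(coeffs, 1.0))   # large coefficients also shrunk by thr
```

    Hard thresholding keeps large coefficients intact but leaves a discontinuity at the threshold; soft thresholding shrinks every surviving coefficient, trading bias for smoothness.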

  20. Adaptive Non-Linear Bayesian Filter for ECG Denoising

    Directory of Open Access Journals (Sweden)

    Mitesh Kumar Sao

    2014-06-01

    Full Text Available The cycles of an electrocardiogram (ECG) signal contain three components: the P-wave, the QRS complex and the T-wave. Noise is present in the cardiograph signals being measured, as biological sources (muscle contraction, baseline drift, motion noise) and environmental sources (power line interference, electrode contact noise, instrumentation noise) normally pollute the ECG signal detected at the electrode. VisuShrink thresholding and Bayesian thresholding are two wavelet-based filtering techniques for denoising the ECG signal corrupted by power line interference (PLI). These thresholding techniques are applied over the effective ECG interval and the results are compared with the wavelet soft and hard thresholding methods. The outputs are evaluated by calculating the root mean square (RMS) error, signal-to-noise ratio (SNR), correlation coefficient (CC) and power spectral density (PSD) using MATLAB software. The denoised ECG signal shows that the Bayesian thresholding technique is the more powerful denoising algorithm.
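    Two of the evaluation measures mentioned, the correlation coefficient and the power spectral density, can be sketched in numpy as follows. The simple periodogram PSD, the 250 Hz sampling rate and the 50 Hz interference tone are illustrative assumptions, not values from the paper.

```python
import numpy as np

def correlation_coefficient(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def power_spectral_density(x, fs):
    """Simple periodogram estimate of the PSD."""
    x = np.asarray(x, float)
    spectrum = np.fft.rfft(x)
    psd = (np.abs(spectrum) ** 2) / (fs * x.size)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs, psd

fs = 250.0                                   # a common ECG sampling rate
t = np.arange(0.0, 4.0, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)       # ~72 bpm fundamental
pli = 0.3 * np.sin(2 * np.pi * 50 * t)       # power line interference tone
freqs, psd = power_spectral_density(ecg_like + pli, fs)
# locate the dominant spectral peak above 10 Hz (the interference)
peak_hz = freqs[np.argmax(psd[freqs > 10]) + np.sum(freqs <= 10)]
```

    The PSD makes the interference visible as an isolated peak at the mains frequency, which is what a PLI-removal filter is judged on.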

  1. A Self-adaptive Learning Rate Principle for Stacked Denoising Autoencoders

    Institute of Scientific and Technical Information of China (English)

    HAO Qian-qian; DING Jin-kou; WANG Jian-fei

    2015-01-01

    Existing research on image classification mainly used artificially defined features as the pre-training of the original image, which costs a lot of time in adjusting parameters. Deep learning algorithms, by contrast, let the computer automatically choose the most suitable features in the training process. The essence of deep learning is to train on massive data and obtain an accurate classification or prediction without any manual feature engineering by constructing a multi-hidden-layer model. However, current deep learning models suffer from local minima when a constant learning rate is chosen to optimize the non-convex objective cost function during training. This paper proposes an algorithm based on Stacked Denoising Autoencoders (SDA) to solve this problem, and compares different layer designs to test the performance. The MNIST database of handwritten digits is used to verify the effectiveness of this model.
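    For reference, a denoising autoencoder in its simplest form corrupts the input and is trained to reconstruct the clean version. The sketch below is a single-layer numpy toy with a fixed learning rate and masking corruption, not the paper's stacked, self-adaptive-rate model; the data and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: random binary patterns to be restored from corrupted copies
X = rng.integers(0, 2, size=(256, 8)).astype(float)

n_in, n_hid = 8, 32
W1 = rng.standard_normal((n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.standard_normal((n_hid, n_in)) * 0.1
b2 = np.zeros(n_in)

def forward(Xc):
    H = sigmoid(Xc @ W1 + b1)
    return H, sigmoid(H @ W2 + b2)

lr, first_loss = 0.5, None
for epoch in range(500):
    Xc = X * (rng.random(X.shape) > 0.3)      # masking corruption
    H, Y = forward(Xc)
    err = Y - X                               # reconstruct the *clean* input
    loss = float(np.mean(err ** 2))
    if first_loss is None:
        first_loss = loss
    # backpropagation through the two sigmoid layers
    dZ2 = 2.0 * err * Y * (1.0 - Y) / len(X)
    dZ1 = (dZ2 @ W2.T) * H * (1.0 - H)
    W2 -= lr * (H.T @ dZ2); b2 -= lr * dZ2.sum(axis=0)
    W1 -= lr * (Xc.T @ dZ1); b1 -= lr * dZ1.sum(axis=0)

_, Y_eval = forward(X)                        # reconstruction of clean inputs
final_loss = float(np.mean((Y_eval - X) ** 2))
```

    Stacking such layers and pre-training them greedily is what turns this toy into an SDA; the paper's contribution concerns how the learning rate above is adapted rather than held constant.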

  2. 基于小波提升的ECG去噪和QRS波识别快速算法%Fast algorithm of ECG denoising and QRS wave identification based on wavelet lifting

    Institute of Scientific and Technical Information of China (English)

    姚成; 司玉娟; 郎六琪; 朴德慧; 徐海峰; 李贺佳

    2012-01-01

    This paper proposes a fast algorithm for ECG denoising and QRS wave identification based on wavelet lifting. On the basis of wavelet lifting, a weighted threshold shrinkage method is introduced to ensure that useful ECG information is not lost and to improve the denoising effect. Using the intermediate result of denoising and reconstruction together with a simple finite difference method, the proposed algorithm uses the first derivative of the smoothing function to process the signals by the lifting wavelet transform. This avoids a second lifting wavelet transform, significantly reducing the computational complexity while maintaining the identification precision. Experimental results demonstrate that the proposed algorithm achieves a relatively high SNR and low MSE, and the accuracy rate of QRS wave identification is above 99.5%. Moreover, the algorithm can be realized on an FPGA hardware platform, which is convenient for integration into ECG monitoring equipment.
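    The lifting scheme underlying this kind of algorithm factors a wavelet transform into split, predict and update steps computed in place. A sketch of the simplest (Haar) case, with hypothetical names and assuming an even-length input:

```python
import numpy as np

def haar_lift(x):
    """One Haar lifting step: split into even/odd, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict odd samples from even neighbours
    approx = even + detail / 2.0   # update so the approximation keeps the mean
    return approx, detail

def haar_unlift(approx, detail):
    even = approx - detail / 2.0   # undo the update
    odd = detail + even            # undo the predict
    x = np.empty(2 * approx.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 6.0, 1.0, 3.0, 5.0, 9.0])
a, d = haar_lift(x)                # a: pairwise means, d: pairwise differences
```

    Because every lifting step is trivially invertible by running it backwards, the transform needs no extra memory, which is what makes it attractive for FPGA implementations.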

  3. Anti-electromagnetic Interference Method Based on Wavelet De-noising and Morphology Compensation Algorithm%基于小波降噪和形态补偿的抗电磁干扰方法

    Institute of Scientific and Technical Information of China (English)

    杨云涛; 石志勇; 关贞珍; 许杨

    2011-01-01

    Electromagnetic noise produced by the electric equipment on a navigation carrier is one of the main factors degrading geomagnetic measurement precision. The characteristics of this electromagnetic interference are studied and analyzed in depth, and the noise is classified into high-frequency alternating magnetic disturbance and equivalent-current magnetic disturbance, with different de-noising methods applied according to their different characteristics. First, wavelet threshold de-noising is adopted to eliminate the high-frequency alternating electromagnetic noise. Then a compensation algorithm, modeled on the energy ridge envelope of the alternating magnetic disturbance, is put forward to eliminate the equivalent-current magnetic disturbance. An experiment shows that the method not only achieves good de-noising and smoothing but also keeps the abrupt details of the true signal, improving the precision of geomagnetic measurement.

  4. An Ultrasound Image Despeckling Approach Based on Principle Component Analysis

    Directory of Open Access Journals (Sweden)

    Jawad F. Al-Asad

    2014-07-01

    Full Text Available An approach based on principal component analysis (PCA) to filter out multiplicative noise from ultrasound images is presented in this paper. An image with speckle noise is segmented into small dyadic lengths, depending on the original size of the image, and the global covariance matrix is found. A projection matrix is then formed by selecting the maximum eigenvectors of the global covariance matrix. This projection matrix is used to filter speckle noise by projecting each segment into the signal subspace. The approach is based on the assumption that the signal and noise are independent and that the signal subspace is spanned by a subset of few principal eigenvectors. When applied to simulated and real ultrasound images, the proposed approach outperformed some popular nonlinear denoising techniques such as 2D wavelets, 2D total variation filtering, and 2D anisotropic diffusion filtering in terms of edge preservation and maximum cleaning of speckle noise. It also showed lower sensitivity to outliers resulting from the log transformation of the multiplicative noise.
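    The core projection step can be sketched as follows: estimate a global covariance over segments, keep the top eigenvectors as the signal subspace, and project each segment onto it. The segment size, the rank-2 signal and the noise model below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def pca_project(segments, k):
    """Project each segment onto the span of the top-k eigenvectors of the
    global covariance matrix (the assumed signal subspace)."""
    mean = segments.mean(axis=0)
    centred = segments - mean
    cov = centred.T @ centred / len(segments)
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    P = eigvecs[:, -k:]                      # top-k principal directions
    return centred @ P @ P.T + mean

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 16)
# rank-2 signal segments plus multiplicative (speckle-like) noise
base = (np.outer(rng.standard_normal(200), np.sin(2 * np.pi * t))
        + np.outer(rng.standard_normal(200), np.cos(2 * np.pi * t)))
noisy = base * (1.0 + 0.2 * rng.standard_normal(base.shape))
cleaned = pca_project(noisy, k=2)
```

    The noise energy outside the retained subspace is discarded by the projection, which is why the method depends on the signal occupying only a few principal directions.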

  5. Patch-based anisotropic diffusion scheme for fluorescence diffuse optical tomography—part 2: image reconstruction

    Science.gov (United States)

    Correia, Teresa; Koch, Maximilian; Ale, Angelique; Ntziachristos, Vasilis; Arridge, Simon

    2016-02-01

    Fluorescence diffuse optical tomography (fDOT) provides 3D images of fluorescence distributions in biological tissue, which represent molecular and cellular processes. The image reconstruction problem is highly ill-posed and requires regularisation techniques to stabilise and find meaningful solutions. Quadratic regularisation tends to either oversmooth or generate very noisy reconstructions, depending on the regularisation strength. Edge preserving methods, such as anisotropic diffusion regularisation (AD), can preserve important features in the fluorescence image and smooth out noise. However, AD has limited ability to distinguish an edge from noise. We propose a patch-based anisotropic diffusion regularisation (PAD), where regularisation strength is determined by a weighted average according to the similarity between patches around voxels within a search window, instead of a simple local neighbourhood strategy. However, this method has higher computational complexity and, hence, we wavelet compress the patches (PAD-WT) to speed it up, while simultaneously taking advantage of the denoising properties of wavelet thresholding. Furthermore, structural information can be incorporated into the image reconstruction with PAD-WT to improve image quality and resolution. In this case, the weights used to average voxels in the image are calculated using the structural image, instead of the fluorescence image. The regularisation strength depends on both structural and fluorescence images, which guarantees that the method can preserve fluorescence information even when it is not structurally visible in the anatomical images. In part 1, we tested the method using a denoising problem. Here, we use simulated and in vivo mouse fDOT data to assess the algorithm performance. Our results show that the proposed PAD-WT method provides high quality and noise free images, superior to those obtained using AD.

  6. Denoising of mechanical vibration signals based on quantum superposition inspired parametric estimation%基于量子叠加态参数估计的机械振动信号降噪方法

    Institute of Scientific and Technical Information of China (English)

    陈彦龙; 张培林; 王怀光

    2014-01-01

    A novel denoising method for mechanical vibration signals is proposed based on quantum-superposition-inspired parametric estimation. Considering the relation between the real and imaginary coefficients of the dual-tree complex wavelet transform, a new two-dimensional probability density function model with an adaptive parameter is built. By investigating the inter-scale dependency between coefficients and their parents, the probability of quantum-superposition-inspired signal and noise occurrence is presented. Combined with Bayesian estimation theory, an adaptive shrinkage function is deduced based on quantum-superposition-inspired parametric estimation. Finally, simulated signals and rolling bearing fault vibration signals are analyzed. The results show that the proposed method reduces noise effectively and achieves much better performance than traditional soft- and hard-threshold denoising algorithms.
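    As a point of reference for parent-child shrinkage, the classical bivariate rule of Şendur and Selesnick shrinks a coefficient according to its joint magnitude with its parent at the next coarser scale. The paper derives a different, quantum-inspired adaptive rule, so the sketch below only illustrates the inter-scale idea; all values are illustrative.

```python
import numpy as np

def bivariate_shrink(child, parent, sigma_n, sigma_s):
    """Bivariate MAP shrinkage: a child coefficient is shrunk according to
    its joint magnitude with the parent coefficient one scale coarser."""
    mag = np.sqrt(child ** 2 + parent ** 2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma_s, 0.0)
    return gain / np.maximum(mag, 1e-12) * child

child = np.array([0.05, 0.4, 1.2, -0.9])    # fine-scale coefficients
parent = np.array([0.02, 0.5, 1.0, -1.1])   # their coarse-scale parents
shrunk = bivariate_shrink(child, parent, sigma_n=0.1, sigma_s=0.3)
# an isolated small coefficient is killed; one backed by a strong parent survives
```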

  8. 基于熵的自适应门限小波去噪股市价格预测%Stock Price Forecasting Based on Adaptive Threshold Wavelet Entropy Denoising

    Institute of Scientific and Technical Information of China (English)

    张晶; 张建文; 李宁

    2011-01-01

    This paper addresses the over-smoothing problem of traditional soft-threshold wavelet de-noising, which is caused by using a uniform threshold. Based on the entropy characteristics of stock price time series, an improved wavelet de-noising algorithm is proposed that adaptively adjusts the de-noising threshold at each level, using the Hurst index and the box dimension as decision criteria to suppress over-smoothing. The algorithm is applied to de-noising stock price time series, and a BP neural network is used to forecast, in segments, the de-noised closing price of Shenzhen Development Bank (Shenfazhan A) over the past 20 years. Simulation shows that, compared with the traditional method, the proposed method clearly reduces errors and gives more satisfactory prediction results.

  9. Research on the measuring technology of minute part's geometrical parameter based on image processing

    Science.gov (United States)

    Jia, Xiao-yan; Xiao, Ze-xin

    2008-03-01

    The measuring technology for the geometrical parameters of minute parts based on image processing is an integration of optics, mechanics, electronics, computation and control. A measuring microscope was adapted for video output, images were gathered in real time with a CCD, and automatic measuring software was developed in the Visual C++ 6.0 environment. Image processing, including denoising filters, illumination non-uniformity adjustment and image enhancement, is carried out first, followed by on-line automatic measurement of the geometrical parameters. Measurements of the geometrical parameters of minute mechanical and integrated-circuit parts in this system indicate that the measuring accuracy can reach 1 micron, and that the system's stability and usability are good.

  10. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT)-based de-noising method with applications to noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA)-based platform so that its ability in noise reduction could be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  11. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    Full Text Available In this paper, a discrete wavelet transform (DWT)-based de-noising method with applications to noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA)-based platform so that its ability in noise reduction could be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.

  12. Identity Based Color Image Cryptography

    OpenAIRE

    Gopi Krishnan S; Loganathan D

    2011-01-01

    An identity-based cryptography scheme built on visual cryptography was proposed for protecting color images. A color image to be protected and authentic entities such as an account number, password and signature image are given as input. The binary key image is obtained by distributing the digital signature of the obtained authentic entities. A secret color image which needs to be communicated is decomposed into three grayscale tones of the Y-Cb-Cr color components. Then these grayscale images are half-toned...

  13. Fourth-order partial differential equation noise removal on welding images

    International Nuclear Information System (INIS)

    Partial differential equation (PDE) has become one of the important topics in mathematics and is widely used in various fields. It can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model at different levels of Gaussian noise in order to get the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is measured based on the highest PSNR value. Results obtained show that the fourth-order PDE model produced promising results as an image denoising tool compared with the median filter

  14. Fourth-order partial differential equation noise removal on welding images

    Energy Technology Data Exchange (ETDEWEB)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan [Center of Mathematics Studies, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor. Malaysia (Malaysia); Manurung, Yupiter HP [Advanced Manufacturing Technology Center, Faculty of Mechanical Engineering, Universiti TEknologi MARA, 40450 Shah Alam, Selangor. Malaysia (Malaysia)

    2015-10-22

    Partial differential equation (PDE) has become one of the important topics in mathematics and is widely used in various fields. It can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model at different levels of Gaussian noise in order to get the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is measured based on the highest PSNR value. Results obtained show that the fourth-order PDE model produced promising results as an image denoising tool compared with the median filter.

  15. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book presents new results in wavelet filter-bank based feature extraction, and in classifier design, for the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art schemes. It contains a compilation of basic material on the design of wavelets that avoids the need to read many different books, providing an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc.  that will...

  16. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    Directory of Open Access Journals (Sweden)

    Chu Zhang

    2016-02-01

    Full Text Available Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noises of the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for the biomass energy producing process by hyperspectral imaging.

  17. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection.

    Science.gov (United States)

    Zhang, Chu; Ye, Hui; Liu, Fei; He, Yong; Kong, Wenwen; Sheng, Kuichuan

    2016-01-01

    Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874-1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noises of the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for biomass energy producing process by hyperspectral imaging. PMID:26901202

  18. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains.

    Science.gov (United States)

    Lahmiri, Salim

    2014-09-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387

  19. MRA-based wavelet frames and applications: image segmentation and surface reconstruction

    Science.gov (United States)

    Dong, Bin; Shen, Zuowei

    2012-06-01

    Theory of wavelet frames and their applications to image restoration problems have been extensively studied for the past two decades. The success of wavelet frames in solving image restoration problems, which include denoising, deblurring, inpainting, computed tomography, etc., is mainly due to their capability of sparsely approximating piecewise smooth functions such as images. However, in contrast to the wide applications of wavelet frame based approaches to image restoration problems, they are rarely used for image/data analysis tasks such as image segmentation, registration and surface reconstruction from unorganized point clouds. The main reason for this is the lack of geometric interpretations of wavelet frames and their associated transforms. Recently, geometric meanings of wavelet frames have been discovered and connections between the wavelet frame based approach and the differential operator based variational model were established [1]. This discovery enabled us to extend the wavelet frame based approach to image/data analysis tasks that have not been studied before. In this paper, we provide a unified survey of the wavelet frame based models for image segmentation and surface reconstruction from unorganized point clouds. Advantages of the wavelet frame based approach are illustrated by numerical experiments.

  20. PCA-based visualization of terahertz time-domain spectroscopy image

    Science.gov (United States)

    Pei, Jihong; Hu, Yong; Xie, Weixin

    2007-11-01

    A novel visualization method for terahertz time-domain spectroscopy (THz-TDS) images is presented, based on the principal component analysis (PCA) technique. The proposed method includes three processing steps. Firstly, the THz-TDS image is preprocessed using a spatial vector filtering technique to denoise it. Secondly, the THz-TDS image is transformed from the spatio-temporal domain to the spatio-spectral domain; the transformed image can be viewed as a multispectral image whose spectral dimensionality D equals the number of samples of the THz-TDS pulse at each pixel. Thirdly, the spectrum vector at each pixel is viewed as a point in D-dimensional space, the covariance matrix of the pixels is computed, and the three eigenvectors corresponding to the three largest eigenvalues are found by the PCA technique; the THz-TDS image is then projected along these three eigenvectors. By normalizing the three principal component images and mapping them into RGB space, we obtain a synthetic color image as the visualization result of the THz-TDS image. Due to its vector-based dimensionality reduction, the proposed method can provide more visual information about the THz-TDS image than scalar-based visualization techniques. Finally, experimental results are provided to demonstrate the performance of the proposed method.
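The projection onto the three leading eigenvectors can be sketched with plain NumPy; the toy spectral cube and the per-channel normalization to [0, 1] below are illustrative assumptions, not the paper's data or exact mapping:

```python
import numpy as np

def pca_rgb_visualization(cube):
    """Project a (H, W, D) spectral cube onto its first three principal
    components and map them to RGB values in [0, 1]."""
    h, w, d = cube.shape
    pixels = cube.reshape(-1, d).astype(float)
    pixels -= pixels.mean(axis=0)              # center each spectral band
    cov = np.cov(pixels, rowvar=False)         # D x D covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top3 = eigvecs[:, ::-1][:, :3]             # eigenvectors of the 3 largest
    scores = pixels @ top3                     # project onto the 3 components
    rgb = scores.reshape(h, w, 3)
    # normalize each principal-component image independently to [0, 1]
    mins = rgb.min(axis=(0, 1), keepdims=True)
    maxs = rgb.max(axis=(0, 1), keepdims=True)
    return (rgb - mins) / (maxs - mins + 1e-12)

rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 16))             # toy THz-TDS-like cube
rgb = pca_rgb_visualization(cube)
print(rgb.shape)                               # (8, 8, 3)
```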

  1. The research and application of double mean weighting denoising algorithm

    Science.gov (United States)

    Fang, Hao; Xiong, Feng

    2015-12-01

    In image processing and pattern recognition applications, the precision of image preprocessing has a great influence on subsequent image processing and analysis. This paper describes a novel local double mean weighted algorithm (hereinafter referred to as the D-M algorithm) for image denoising. Firstly, the absolute differences between the current pixel and the pixels in its neighborhood are computed; then the absolute values are sorted and the mean is taken over half of these pixels; finally the mean is combined with a weighting coefficient. A large number of experiments show that the algorithm not only introduces a certain robustness, but also yields a significant improvement.
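One possible reading of these steps, sketched in NumPy. The abstract does not specify which half of the sorted pixels is averaged or the value of the weighting coefficient, so the closer-half rule and `weight=0.7` below are assumptions:

```python
import numpy as np

def double_mean_weighted(img, radius=1, weight=0.7):
    """Assumed reading of the D-M step: rank the neighbourhood by absolute
    difference to the centre pixel, average the closer half, then blend
    that local mean with the centre pixel using `weight`."""
    h, w = img.shape
    out = img.astype(float).copy()
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i-radius:i+radius+1, j-radius:j+radius+1].astype(float).ravel()
            center = float(img[i, j])
            order = np.argsort(np.abs(patch - center))   # sort by |difference|
            half = patch[order[:patch.size // 2]]        # closer half of pixels
            out[i, j] = weight * half.mean() + (1 - weight) * center
    return out

rng = np.random.default_rng(1)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0           # toy step image
noisy = clean + rng.normal(scale=0.2, size=clean.shape)
den = double_mean_weighted(noisy)
```

Because the averaged pixels are those closest in value to the centre, edges tend to be averaged only with their own side, which is presumably the source of the robustness the abstract mentions.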

  2. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    Science.gov (United States)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling of details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which are corrupted by different levels of noise for the second analysis. With such controlled noise level experiments it is possible to evaluate the methods' performance for the different amounts of noise that might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
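The soft and hard thresholding modes compared above have standard closed forms; a minimal NumPy sketch (the sample coefficients are illustrative):

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Shrink every coefficient toward zero by t, zeroing the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
print(hard_threshold(c, 1.0), soft_threshold(c, 1.0))
```

The fixed-form (universal) threshold referred to in the abstract is t = σ√(2 ln N), with σ the noise standard deviation and N the number of coefficients; hard thresholding preserves the magnitude of retained coefficients, while soft thresholding trades a small bias for smoother results.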

  3. MSPA BASED ON PROCESS INFORMATION DENOISED WITH WAVELET TRANSFORM AND ITS APPLICATION TO CHEMICAL PROCESS MONITORING

    Institute of Scientific and Technical Information of China (English)

    陈国金; 梁军; 钱积新

    2003-01-01

    In industrial processes, measured data are often contaminated by noise, which causes poor performance of data-driven techniques. Wavelet transform is a useful tool to de-noise process information, but the conventional practice of applying the wavelet transform directly to the measured variables becomes less effective and more cumbersome when there are many process variables with collinear relationships. In this paper, a novel multivariate statistical projection analysis (MSPA) based on data de-noised with wavelet transform and blind signal analysis is presented, which can detect faults more quickly and improve the monitoring performance of the process. Simulation results for a double-effect evaporator verify the higher effectiveness and better performance of the new MSPA compared with classical multivariate statistical process control (MSPC).

  4. Effectiveness of Wavelet Denoising on Electroencephalogram Signals

    Directory of Open Access Journals (Sweden)

    Md. Mamun

    2013-01-01

    Full Text Available Analyzing the Electroencephalogram (EEG) signal is a challenge due to the various artifacts caused by electromyogram, eye blinks and electrooculogram. The present de-noising techniques based on frequency-selective filtering suffer from a substantial loss of EEG data. Noise removal using wavelets has the characteristic of preserving signal uniqueness even while noise is minimized. To remove noise from the EEG signal, this research employed the discrete wavelet transform. The root mean square difference has been used to measure the usefulness of the noise elimination. In this research, four different discrete wavelet functions have been used to remove noise from Electroencephalogram signals obtained from two different types of patients (healthy and epileptic) to show the effectiveness of the DWT on EEG noise removal. The results show that the orthogonal Meyer wavelet function is the best one for noise elimination from the EEG signals of epileptic subjects, and the Daubechies 8 (db8) wavelet function is the best one for noise elimination from the EEG signals of healthy subjects.

  5. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    International Nuclear Information System (INIS)

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision

  6. A multiscale products technique for denoising of DNA capillary electrophoresis signals

    Science.gov (United States)

    Gao, Qingwei; Lu, Yixiang; Sun, Dong; Zhang, Dexiang

    2013-06-01

    Since noise degrades the accuracy and precision of DNA capillary electrophoresis (CE) analysis, signal denoising is thus important to facilitate the postprocessing of CE data. In this paper, a new denoising algorithm based on dyadic wavelet transform using multiscale products is applied for the removal of the noise in the DNA CE signal. The adjacent scale wavelet coefficients are first multiplied to amplify the significant features of the CE signal while diluting noise. Then, noise is suppressed by applying a multiscale threshold to the multiscale products instead of directly to the wavelet coefficients. Finally, the noise-free CE signal is recovered from the thresholded coefficients by using inverse dyadic wavelet transform. We compare the performance of the proposed algorithm with other denoising methods applied to the synthetic CE and real CE signals. Experimental results show that the new scheme achieves better removal of noise while preserving the shape of peaks corresponding to the analytes in the sample.
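The three steps (multiply adjacent-scale coefficients, threshold the products, invert) can be sketched with an undecimated à trous decomposition in NumPy; the dilated [1, 2, 1]/4 kernel and the threshold value are assumptions standing in for the paper's dyadic wavelet transform:

```python
import numpy as np

def atrous_decompose(x, levels=3):
    """Undecimated (a trous) decomposition with a dilated [1,2,1]/4 kernel.
    Returns detail layers d_1..d_J and the final smooth s_J; the telescoping
    construction guarantees x == sum(details) + smooth exactly."""
    smooth = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        kernel = np.zeros(2 * 2 ** j + 1)
        kernel[0], kernel[2 ** j], kernel[-1] = 0.25, 0.5, 0.25
        nxt = np.convolve(smooth, kernel, mode='same')
        details.append(smooth - nxt)
        smooth = nxt
    return details, smooth

def multiscale_product_denoise(x, levels=3, thresh=0.01):
    """Keep a detail coefficient only where its product with the adjacent
    scale is large: signal features align across scales, noise does not."""
    d, s = atrous_decompose(x, levels)
    out = s.copy()
    for j in range(levels):
        partner = d[j + 1] if j + 1 < levels else d[j - 1]
        keep = d[j] * partner > thresh          # adjacent-scale product test
        out += np.where(keep, d[j], 0.0)
    return out

rng = np.random.default_rng(2)
x = np.exp(-((np.arange(256) - 128) / 4.0) ** 2)   # a single sharp peak
noisy = x + rng.normal(scale=0.05, size=x.size)
den = multiscale_product_denoise(noisy)
```

Away from the peak the products are tiny and the details are discarded, while at the peak the coefficients keep the same sign across scales and survive, which is how the scheme preserves peak shape.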

  7. Lifting transform via Savitsky-Golay filter predictor and application of denoising

    Institute of Scientific and Technical Information of China (English)

    ZHOU Guang-zhu; YANG Feng-jie; WANG Cui-zhen

    2006-01-01

    The Savitsky-Golay filter is a smoothing filter based on polynomial regression. It employs the regression fitting capacity to improve the smoothing results. But the Savitsky-Golay filter uses a fixed-size window, so it shares the shortcoming of the windowed Fourier transform; wavelet multiresolution analysis can deal with this problem. In this paper, taking advantage of the Savitsky-Golay filter's fitting ability and the wavelet transform's multiscale analysis ability, we developed a new lifting transform that uses the Savitsky-Golay smoothing filter as the lifting predictor, and then processed signals in comparison with the ordinary Savitsky-Golay smoothing method. We used the new lifting transform to denoise a noisy heavy sine signal. The new transform has obviously better denoising ability than the ordinary Savitsky-Golay smoothing method, while singular points are well retained in the denoised signal. Singularity analysis, multiscale interpolation, estimation, chemical data smoothing and other potential signal processing applications of this new lifting transform are in prospect.
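A lifting step with a local least-squares polynomial predictor (a stand-in for the Savitsky-Golay predictor described above) might look as follows; the window of four even samples and the quadratic degree are assumptions:

```python
import numpy as np

def _predict(even, k, deg):
    # fit a polynomial to four nearby even samples and evaluate at the
    # odd sample's position (k + 0.5 in even-index coordinates)
    n = len(even)
    lo = min(max(k - 1, 0), n - 4)
    idx = np.arange(lo, lo + 4)
    coef = np.polyfit(idx, even[idx], deg)
    return np.polyval(coef, k + 0.5)

def lifting_forward(x, deg=2):
    """Split into even/odd samples, predict each odd sample from nearby even
    samples by a local polynomial fit, and keep the residual as the detail."""
    even, odd = x[::2].astype(float), x[1::2].astype(float)
    detail = np.array([odd[k] - _predict(even, k, deg) for k in range(len(odd))])
    return even, detail

def lifting_inverse(even, detail, deg=2):
    # the inverse repeats the same prediction, so reconstruction is exact
    odd = np.array([detail[k] + _predict(even, k, deg) for k in range(len(detail))])
    x = np.empty(len(even) + len(detail))
    x[::2], x[1::2] = even, odd
    return x

x = (0.5 * np.arange(32)) ** 2          # an exact quadratic
even, detail = lifting_forward(x)
print(np.abs(detail).max())             # near zero: the predictor fits exactly
```

On smooth (locally polynomial) stretches the detail coefficients vanish, while at singular points the prediction residual stays large, which matches the abstract's observation that singularities are retained.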

  8. A multiscale products technique for denoising of DNA capillary electrophoresis signals

    International Nuclear Information System (INIS)

    Since noise degrades the accuracy and precision of DNA capillary electrophoresis (CE) analysis, signal denoising is thus important to facilitate the postprocessing of CE data. In this paper, a new denoising algorithm based on dyadic wavelet transform using multiscale products is applied for the removal of the noise in the DNA CE signal. The adjacent scale wavelet coefficients are first multiplied to amplify the significant features of the CE signal while diluting noise. Then, noise is suppressed by applying a multiscale threshold to the multiscale products instead of directly to the wavelet coefficients. Finally, the noise-free CE signal is recovered from the thresholded coefficients by using inverse dyadic wavelet transform. We compare the performance of the proposed algorithm with other denoising methods applied to the synthetic CE and real CE signals. Experimental results show that the new scheme achieves better removal of noise while preserving the shape of peaks corresponding to the analytes in the sample. (paper)

  9. Post-processing noise removal algorithm for magnetic resonance imaging based on edge detection and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, Giuseppe; Alecci, Marcello; Sotgiu, Antonello [INFM, c/o Centro di Risonanza Magnetica and Dipartimento di Scienze e Tecnologie Biomediche, Universita dell' Aquila, Via Vetoio 10, 67010 Coppito, L' Aquila (Italy)

    2003-07-07

    A post-processing noise suppression technique for biomedical MRI images is presented. The described procedure recovers both sharp edges and smooth surfaces from a given noisy MRI image; it does not blur the edges and does not introduce spikes or other artefacts. The fine details of the image are also preserved. The proposed algorithm first extracts the edges from the original image and then performs noise reduction by using a wavelet de-noise method. After the application of the wavelet method, the edges are restored to the filtered image. The result is the original image with less noise, fine detail and sharp edges. Edge extraction is performed by using an algorithm based on Sobel operators. The wavelet de-noise method is based on the calculation of the correlation factor between wavelet coefficients belonging to different scales. The algorithm was tested on several MRI images and, as an example of its application, we report the results obtained from a spin echo (multi echo) MRI image of a human wrist collected with a low field experimental scanner (the signal-to-noise ratio, SNR, of the experimental image was 12). Other filtering operations have been performed after the addition of white noise on both channels of the experimental image, before the magnitude calculation. The results at SNR = 7, SNR = 5 and SNR = 3 are also reported. For SNR values between 5 and 12, the improvement in SNR was substantial and the fine details were preserved, the edges were not blurred and no spikes or other artefacts were evident, demonstrating the good performance of our method. At very low SNR (SNR = 3) our result is worse than that obtained by a simpler filtering procedure.

  10. Identity Based Color Image Cryptography

    Directory of Open Access Journals (Sweden)

    Gopi Krishnan S

    2011-05-01

    Full Text Available An identity-based cryptography scheme based on visual cryptography was proposed for protecting color images. A color image to be protected and authentic entities such as an account number, password and signature image are given as input. The binary key image is obtained by distributing the digital signature of the obtained authentic entities. The secret color image which needs to be communicated is decomposed into three grayscale tones of the Y-Cb-Cr color components. These grayscale images are half-toned to binary images, and finally the obtained binary images are encrypted using the binary key image to obtain binary cipher images. To encrypt, an Exclusive-OR operation is applied to the binary key image and each of the three half-tones of the secret color image separately; the resulting binary images are combined to obtain the cipher. In decryption, the shares are decrypted by applying the Exclusive-OR operation to the cipher and the key; the recovered binary images are then inverse half-toned and combined to recover the secret color image. This scheme is efficient for communicating natural images across different channels.
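The XOR encryption/decryption step is straightforward to sketch; the random key and planes below merely stand in for the distributed key image and the three half-toned Y-Cb-Cr planes:

```python
import numpy as np

rng = np.random.default_rng(42)
key = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)   # binary key image

# three half-toned binary planes standing in for the Y, Cb, Cr components
planes = [rng.integers(0, 2, size=(8, 8), dtype=np.uint8) for _ in range(3)]

ciphers = [p ^ key for p in planes]       # encrypt: XOR each plane with the key
recovered = [c ^ key for c in ciphers]    # decrypt: XOR again with the same key

print(all((r == p).all() for r, p in zip(recovered, planes)))  # True
```

Because XOR is its own inverse, applying the key twice recovers each plane exactly; the security of the scheme therefore rests entirely on how the key image is derived and distributed.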

  11. Detection of the Concentration and Size Distribution of Indoor Inhalable Particle Based on Mathematical Morphology

    OpenAIRE

    Hongli Liu; Xiong Zhou; Lei Xiao

    2012-01-01

    Adopting microscopic observation imaging and digital image processing technologies, this paper investigates a new method for measuring the concentration and size distribution of indoor inhalable particulate matter. It performs denoising, binarization, filtering and edge detection based on mathematical morphology, together with region filling and calibration detection, on the particulate matter image, and designs a parameter recognition algorithm for particulate matter size, fractal dimension, shape factors and so ...

  12. Multi-scale image fusion for x-ray grating-based mammography

    Science.gov (United States)

    Jiang, Xiaolei; Zhang, Li; Wang, Zhentian; Stampanoni, Marco

    2012-10-01

    X-ray phase contrast imaging (PCI) can provide high sensitivity to weakly absorbing low-Z objects in medical and biological fields, especially in mammography. The grating-based differential phase contrast (DPC) method is the most promising PCI method for clinical applications because it works well with a conventional X-ray tube and can retrieve attenuation, DPC and dark-field information of the sample in a single scan. The three kinds of information have different details and contrast, representing different physical interactions of X-rays with matter. Hence, image fusion can show the most desirable characteristics of each image. In this paper, we propose a multi-scale image fusion method for X-ray grating-based DPC mammography. Firstly, the non-local means method is adopted for denoising because of the strong noise, especially in the DPC and dark-field images. Then, a Laplacian pyramid is used for multi-scale image fusion: the principal component analysis (PCA) method is used on the high-frequency part and the spatial frequency method on the low-frequency part. Finally, the fused image is obtained by the inverse Laplacian pyramid transform. Our algorithm was validated by experiments performed on the mammoDPC instrumentation at the Paul Scherrer Institut in Villigen, Switzerland. The results show that our algorithm can clearly combine the advantages of the three kinds of information in the fused image, which is very helpful for breast cancer diagnosis.
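A Laplacian pyramid fusion along these lines can be sketched in NumPy; the 2x2 mean reduce, nearest-neighbour expand, max-magnitude detail rule and averaged base are simplifications standing in for the PCA and spatial-frequency rules used in the paper:

```python
import numpy as np

def build_lap_pyramid(img, levels=3):
    """Laplacian pyramid with a 2x2 mean reduce and nearest expand; exact
    reconstruction holds because each level stores G_i - expand(G_{i+1})."""
    gauss = [img.astype(float)]
    for _ in range(levels):
        g = gauss[-1]
        gauss.append(0.25 * (g[::2, ::2] + g[1::2, ::2] + g[::2, 1::2] + g[1::2, 1::2]))
    lap = [gauss[i] - np.kron(gauss[i + 1], np.ones((2, 2))) for i in range(levels)]
    return lap, gauss[-1]

def collapse(lap, base):
    g = base
    for l in reversed(lap):
        g = l + np.kron(g, np.ones((2, 2)))    # expand and add details back
    return g

def fuse(img_a, img_b, levels=3):
    la, ba = build_lap_pyramid(img_a, levels)
    lb, bb = build_lap_pyramid(img_b, levels)
    # detail layers: keep the stronger response; base layer: plain average
    lf = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    return collapse(lf, 0.5 * (ba + bb))

rng = np.random.default_rng(0)
img_a = rng.random((16, 16))    # stand-ins for e.g. absorption and DPC images
img_b = rng.random((16, 16))
lap, base = build_lap_pyramid(img_a)
fused = fuse(img_a, img_b)
```

Image dimensions are assumed divisible by 2^levels; selecting the stronger detail at each scale is the simplest way to carry the sharpest structures from either modality into the fused result.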

  13. IMAGE DENOISING BASED ON FINITE RIDGELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    唐永茂; 施鹏飞

    2006-01-01

    The Ridgelet transform is a new multiscale geometric analysis (MGA) method for images that can effectively describe an image at multiple scales and in multiple directions. M. N. Do proposed an invertible, orthonormal implementation of the Ridgelet transform with perfect reconstruction, the finite Ridgelet transform (FRIT). This paper applies the finite Ridgelet transform to the denoising of images with prominent line-like edges; experimental results show that it achieves better results than wavelet denoising.

  14. Sparse representation for color image restoration.

    Science.gov (United States)

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804
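Sparse decomposition over a redundant dictionary, as used by K-SVD-based denoising, is typically computed with a greedy pursuit; a minimal orthogonal matching pursuit sketch in NumPy (the random dictionary and 3-sparse signal are illustrative, not the paper's learned dictionary):

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick the atom most correlated
    with the residual, re-fit the support by least squares, repeat k times."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                      # never pick an atom twice
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
x_true = np.zeros(64); x_true[[5, 20, 41]] = [1.5, -2.0, 1.0]
y = D @ x_true                                    # 3-sparse synthetic signal
x_hat = omp(D, y, k=3)
```

In K-SVD denoising such a pursuit is run on every image patch; the dictionary update step then refines the atoms, but the sparse-coding stage is the part sketched here.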

  15. Imaging liver lesions using grating-based phase-contrast computed tomography with bi-lateral filter post-processing.

    Directory of Open Access Journals (Sweden)

    Julia Herzen

    Full Text Available X-ray phase-contrast imaging shows improved soft-tissue contrast compared to standard absorption-based X-ray imaging. Especially the grating-based method seems to be one promising candidate for clinical implementation due to its extendibility to standard laboratory X-ray sources. Therefore the purpose of our study was to evaluate the potential of grating-based phase-contrast computed tomography in combination with a novel bi-lateral denoising method for imaging of focal liver lesions in an ex vivo feasibility study. Our study shows that grating-based phase-contrast CT (PCCT significantly increases the soft-tissue contrast in the ex vivo liver specimens. Combining the information of both signals--absorption and phase-contrast--the bi-lateral filtering leads to an improvement of lesion detectability and higher contrast-to-noise ratios. The normal and the pathological tissue can be clearly delineated and even internal structures of the pathological tissue can be visualized, being invisible in the absorption-based CT alone. Histopathology confirmed the presence of the corresponding findings in the analyzed tissue. The results give strong evidence for a sufficiently high contrast for different liver lesions using non-contrast-enhanced PCCT. Thus, ex vivo imaging of liver lesions is possible with a polychromatic X-ray source and at a spatial resolution of ∼100 µm. The post-processing with the novel bi-lateral denoising method improves the image quality by combining the information from the absorption and the phase-contrast images.

  16. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    Science.gov (United States)

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI) in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-, 140-kVp, and mixed images. An iterative three-dimensional edge-preserved smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using contrast-to-noise ratio (CNR), and their diagnostic performance in comparison with MRI and infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images (P LIE-CT. PMID:26202159

  17. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  18. Efficient Image Retrieval Using Region Based Image Retrieval

    OpenAIRE

    Niket Amoda; Ramesh K Kulkarni

    2013-01-01

    Early image retrieval techniques were based on textual annotation of images. Manual annotation of images is a burdensome and expensive work for a huge image database. It is often introspective, context-sensitive and crude. Content based image retrieval is implemented using the optical constituents of an image such as shape, colour, spatial layout, and texture to exhibit and index the image. The Region Based Image Retrieval (RBIR) system us...

  19. Visual-Based Transmedia Events Detection

    OpenAIRE

    Joly, Alexis; Champ, Julien; Letessier, Pierre; Hervé, Nicolas; Buisson, Olivier; Viaud, Marie-Luce

    2012-01-01

    This paper presents a visual-based media event detection system based on the automatic discovery of the most circulated images across the main news media (news websites, press agencies, TV news and newspapers). Its main originality is to rely on the transmedia contextual information to denoise the raw visual detections and consequently focus on the most salient transmedia events.

  20. Weak periodical signal detection based on wavelet threshold de-noising and chaos theory

    Institute of Scientific and Technical Information of China (English)

    邓宏贵; 曹文晖; 杨兵初; 梅卫平; 敖邦乾

    2012-01-01

    Based on the multi-resolution property of the wavelet transform and the strong noise immunity and sensitivity to weak periodic signals of chaotic systems, a new weak periodic signal detection method combining wavelet threshold de-noising with a chaotic system is proposed, obtained by improving the wavelet threshold de-noising method and the chaotic Duffing oscillator equation. The method uses the smoothing effect of the wavelet transform for finite discrete processing of the noisy signal and determines the de-noising depth from the wavelet decomposition scale; the reconstructed signal is then fed into the chaotic system as a perturbation of the periodic driving force. A chaotic oscillator array is used to detect the weak signal in the noisy background, with the Melnikov method adopted as the chaos criterion. This detection method overcomes the blindness in choosing the decomposition scale and the irrationality of threshold selection in previous wavelet decompositions, as well as the ambiguity in distinguishing the chaotic critical state from the periodic state; it can also detect signals at several frequencies. Simulation tests show that the method is intuitive and efficient with high detection precision: the lowest detectable signal-to-noise ratio reaches -100 dB with a frequency error of about 0.04%, improving the detection of weak signals submerged in strong noise.

  1. Electronic speckle pattern interferometry for fracture expansion in nuclear graphite based on PDE image processing methods

    Science.gov (United States)

    Tang, Chen; Zhang, Junjiang; Sun, Chen; Su, Yonggang; Su, Kai Leung

    2015-05-01

    Nuclear graphite has been widely used as a moderating and reflecting material. However, due to severe neutron irradiation at high temperature, nuclear graphite is prone to deterioration, resulting in massive microscopic flaws and even cracks under large stress in the later period of its service life. It is indispensable, therefore, to understand the fracture behavior of nuclear graphite to provide a reference for structural integrity and safety analysis of nuclear graphite members in reactors. In this paper, we investigated fracture expansion in nuclear graphite based on PDE image processing methods. We used the second-order oriented partial differential equation filtering model (SOOPDE) to remove speckle noise, and then used oriented gradient vector fields to obtain skeletons. The full-field displacement of the fractured nuclear graphite and the location of the crack tip were then measured under various loading conditions.

  2. Correction of defective pixels for medical and space imagers based on Ising Theory

    Science.gov (United States)

    Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer

    2014-09-01

    We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation; (ii) ultrasonic medical devices degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM compared to other filters.
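An Ising-like denoiser in this spirit can be sketched with iterated conditional modes, a greedy stand-in for the simulated annealing the paper uses; the coupling weights `beta` and `eta` below are assumptions:

```python
import numpy as np

def icm_denoise(noisy, beta=2.0, eta=1.0, sweeps=5):
    """Iterated conditional modes on an Ising prior: each +/-1 pixel is set
    to the state that best agrees with its 4 neighbours (weight beta) and
    with the observed pixel (weight eta)."""
    x = noisy.copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0                          # sum of neighbour states
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                field = beta * nb + eta * noisy[i, j]
                x[i, j] = 1.0 if field >= 0 else -1.0
    return x

rng = np.random.default_rng(7)
clean = -np.ones((16, 16)); clean[4:12, 4:12] = 1.0   # +/-1 binary image
flip = rng.random(clean.shape) < 0.1                  # 10% salt-and-pepper flips
noisy = np.where(flip, -clean, clean)
den = icm_denoise(noisy)
print(int((noisy != clean).sum()), int((den != clean).sum()))
```

ICM descends the same Ising energy that simulated annealing samples, but greedily; annealing with a decreasing temperature can escape the local minima ICM gets stuck in, at higher cost.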

  3. Blind Source Separation Based on Wavelet Semi-soft Threshold Denoising

    Institute of Scientific and Technical Information of China (English)

    孟宗; 马钊; 刘东; 李晶

    2016-01-01

    In order to extract fault feature information from noisy mechanical fault signals, a blind source separation method based on wavelet semi-soft threshold denoising is studied. First, the wavelet semi-soft threshold is used to denoise the fault signals. Then, joint approximate diagonalization is applied as the blind source separation method to separate the signals. Since pre-denoising under noise interference is often insufficient to remove all noise, an additional denoising step is applied after blind source separation to improve the separation performance. Finally, the feasibility and validity of the proposed method are verified by experiments.
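
    The semi-soft threshold at the heart of the method can be sketched under its common two-threshold ("firm") definition; the exact thresholds used in the paper are not given, so t1 and t2 here are free parameters: coefficients below t1 are zeroed, coefficients above t2 are kept, and those in between are shrunk linearly.

```python
import numpy as np

def semisoft_threshold(x, t1, t2):
    """Semi-soft (firm) thresholding, a compromise between hard and soft:
    zero below t1, identity above t2, linear interpolation in between.
    Assumes t2 > t1 > 0."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    shrunk = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
    return np.where(ax <= t1, 0.0, np.where(ax >= t2, x, shrunk))
```

    Applied to wavelet detail coefficients, this suppresses small (noise-dominated) coefficients while avoiding the constant bias that soft thresholding introduces on large coefficients.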

  4. Targets Separation and Imaging Method in Sparse Scene Based on Cluster Result of Range Profile Peaks

    Directory of Open Access Journals (Sweden)

    YANG Qiu

    2015-08-01

    Full Text Available This paper focuses on synthetic aperture radar (SAR) imaging of spatially sparse targets such as ships on the sea, and proposes a method for target separation and imaging of a sparse scene based on clustering of range profile peaks. Firstly, a wavelet denoising algorithm is used to preprocess the original echo, and the range profiles at different viewing positions are obtained by range compression and range migration correction. Peaks of the range profiles are then detected by a fast peak detection algorithm based on a second-order difference operator. Targets with sparse energy intervals can be imaged through azimuth compression after clustering the peaks in the range dimension. Moreover, targets without coupling in the range energy interval and azimuth synthetic aperture time can be imaged through azimuth compression after clustering the peaks in both the range and azimuth dimensions. Finally, the effectiveness of the proposed method is validated by simulations. Experimental results demonstrate that spatially sparse targets such as ships can be imaged separately and completely with little computation in azimuth compression, and the resulting images are more beneficial for target recognition.
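
    The peak detection step can be sketched generically (a stand-in, not the paper's fast algorithm): a discrete peak is where the first difference changes sign from positive to non-positive, i.e. where the second-order difference at that sample is negative.

```python
import numpy as np

def detect_peaks(profile, min_height=0.0):
    """Find interior local maxima of a 1-D range profile via the sign
    change of consecutive first differences (+ followed by <= 0)."""
    p = np.asarray(profile, dtype=float)
    d = np.diff(p)
    idx = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    return idx[p[idx] >= min_height]
```

    The detected peak indices would then be clustered in the range (and, where needed, azimuth) dimension before azimuth compression.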

  5. A Denoising Autoencoder that Guides Stochastic Search

    OpenAIRE

    Churchill, Alexander W.; Sigtia, Siddharth; Fernando, Chrisantha

    2014-01-01

    An algorithm is described that adaptively learns a non-linear mutation distribution. It works by training a denoising autoencoder (DA) online at each generation of a genetic algorithm to reconstruct a slowly decaying memory of the best genotypes so far. A compressed hidden layer forces the autoencoder to learn hidden features in the training set that can be used to accelerate search on novel problems with similar structure. Its output neurons define a probability distribution that we sample f...

  6. Imaging based refractometers

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Justin S.

    2015-11-24

    Refractometers for simultaneously measuring the refractive index of a sample over a range of wavelengths of light include dispersive and focusing optical systems. An optical beam including the range of wavelengths is spectrally spread along a first axis and focused along a second axis so as to be incident to an interface between the sample and a prism at a range of angles of incidence including a critical angle for at least one wavelength. In some cases, the prism can have a triangle, parallelogram, trapezoid, or other shape. In some cases, the optical beam can be reflected off of multiple interfaces between the prism and the sample. An imaging detector is situated to receive the spectrally spread and focused light from the interface and form an image corresponding to angle of incidence as a function of wavelength. One or more critical angles are identified and corresponding refractive indices are determined.

  7. Photography-based image generator

    Science.gov (United States)

    Dalton, Nicholas M.; Deering, Charles S.

    1989-09-01

    A two-channel Photography Based Image Generator system was developed to drive the Helmet Mounted Laser Projector at the Naval Training System Center at Orlando, Florida. This projector is a two-channel system that displays a wide field-of-view color image with a high-resolution inset to efficiently match the pilot's visual capability. The image generator is a derivative of the LTV-developed visual system installed in the A-7E Weapon System Trainer at NAS Cecil Field. The Photography Based Image Generator is based on patented LTV technology for high resolution, multi-channel, real world visual simulation. Special provisions were developed for driving the NTSC-developed and patented Helmet Mounted Laser Projector. These include a special 1023-line raster format, an electronic image blending technique, spherical lens mapping for dome projection, a special computer interface for head/eye tracking and flight parameters, special software, and a number of data bases. Good gaze angle tracking is critical to the use of the NTSC projector in a flight simulation environment. The Photography Based Image Generator provides superior dynamic response by performing a relatively simple perspective transformation on stored, high-detail photography instead of generating this detail by "brute force" computer image generation methods. With this approach, high detail can be displayed and updated at the television field rate (60 Hz).

  8. SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAIN

    Directory of Open Access Journals (Sweden)

    Sapan Naik

    2013-08-01

    Full Text Available Recently, single image super resolution has become a very important research area, aiming to generate a high-resolution image from a given low-resolution image. Single image super resolution algorithms are mainly based on the wavelet domain and the spatial domain. Filters that model the regularity of natural images are exploited in the wavelet domain, while image edges are sharpened during upsampling in the spatial domain. Here a single image super resolution algorithm is presented which is based on both the spatial and wavelet domains and takes advantage of both. The algorithm is iterative and uses back projection to minimize the reconstruction error. A wavelet based denoising method is also introduced to remove noise.
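
    The back projection loop can be sketched under a simple block-mean downsampling model; the blur/decimation operators and function names here are assumptions, not the paper's exact choices.

```python
import numpy as np

def downsample(img, s):
    """Block-mean decimation by factor s (a simple LR observation model)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def back_project_sr(lr, s=2, n_iter=20):
    """Iterative back projection: refine an upsampled estimate so that its
    simulated low-resolution version matches the observed LR image."""
    hr = np.kron(lr, np.ones((s, s)))        # nearest-neighbour initial guess
    for _ in range(n_iter):
        err = lr - downsample(hr, s)         # reconstruction error in LR space
        hr += np.kron(err, np.ones((s, s)))  # back-project the error
    return hr

lr = np.array([[1.0, 2.0], [3.0, 4.0]])
hr = back_project_sr(lr, s=2, n_iter=5)
```

    In the combined algorithm, a wavelet denoising pass between iterations would regularize the estimate and suppress amplified noise.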

  9. Medical Image Registration Based Retrieval

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-02-01

    Full Text Available This paper presents a quantitative evaluation of state-of-the-art intensity based image registration combined with retrieval methods applied to medical images. The purpose of this study is to assess the stability of these methods for medical image analysis. The accuracy of medical image retrieval with affine based registration and without registration is evaluated using an observer study. For retrieval with and without registration, we examine the performance of various transforms for the retrieval of medical images by extracting features; this aids early diagnosis. The techniques used for retrieval of medical images were the 2-D discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete wavelet transform (DWT), complex wavelet transform (CWT), and rotated complex wavelet filters (RCWF), implemented and examined for MRI imaging modalities. RCWF in particular gives texture information strongly oriented in six different directions (45° apart) beyond the complex wavelet transform. Experimental results indicate that the DWT method performs well in retrieval of medical images while retaining comparable computational complexity. The experimental evaluation is carried out by calculating precision and recall values. It is found that DWT performs well for retrieval without registration, while CWT with affine registration performs best for registration based retrieval, with an efficiency of 92% compared to 83% for DWT without registration. This supports classification before and after registration, especially for clinical treatment and diagnosis.

  10. Detector Based Radio Tomographic Imaging

    OpenAIRE

    Yiğitler, Hüseyin; Jäntti, Riku; Kaltiokallio, Ossi; Patwari, Neal

    2016-01-01

    Received signal strength based radio tomographic imaging is a popular device-free indoor localization method which reconstructs the spatial loss field of the environment using measurements from a dense wireless network. Existing methods solve an associated inverse problem using algebraic or compressed sensing reconstruction algorithms. We propose an alternative imaging method that reconstructs spatial field of occupancy using a back-projection based reconstruction algorithm. The introduced sy...

  11. Image Data Bases on Campus.

    Science.gov (United States)

    Kaplan, Reid; Mathieson, Gordon

    1989-01-01

    A description of how image database technology was used to develop two prototypes for academic and administrative applications at Yale University, one using a video data base integration and the other using document-scanning data base technology, is presented. Technical underpinnings for the creation of data bases are described. (Author/MLW)

  12. Biogeography based Satellite Image Classification

    CERN Document Server

    Panchal, V K; Kaur, Navdeep; Kundra, Harish

    2009-01-01

    Biogeography is the study of the geographical distribution of biological organisms. The mindset of the engineer is that we can learn from nature. Biogeography Based Optimization is a burgeoning nature inspired technique for finding the optimal solution of a problem. Satellite image classification is an important task because it is the only way we can learn the land cover map of inaccessible areas. Though satellite images have been classified in the past using various techniques, researchers are always seeking alternative strategies for satellite image classification so that they may be prepared to select the most appropriate technique for the feature extraction task at hand. This paper is focused on classification of the satellite image of a particular land cover using the theory of Biogeography Based Optimization. The original BBO algorithm does not have the inbuilt property of clustering which is required during image classification. Hence modifications have been proposed to the original algorithm and...

  13. Dictionary learning method for joint sparse representation-based image fusion

    Science.gov (United States)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR assumes that different signals from the various sensors of the same scene form an ensemble: these signals share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries; MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.
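
    For reference, the plain MOD update that MODJSR builds on can be sketched as a least-squares solve followed by column renormalisation (this is generic MOD, not the JSR-specific derivation with SVD):

```python
import numpy as np

def mod_update(Y, A):
    """Method-of-optimal-directions dictionary update: given signals Y (n x N)
    and fixed sparse codes A (k x N), solve min_D ||Y - D A||_F by least
    squares, then renormalise the dictionary columns."""
    # D^T is the least-squares solution of A^T D^T = Y^T
    D = np.linalg.lstsq(A.T, Y.T, rcond=None)[0].T
    norms = np.linalg.norm(D, axis=0)
    norms[norms == 0] = 1.0
    return D / norms
```

    A full dictionary learning loop would alternate this update with a sparse coding step (e.g. orthogonal matching pursuit) until convergence.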

  14. Content based Image Retrieval from Forensic Image Databases

    OpenAIRE

    Swati A. Gulhane; Dr. Ajay. A. Gurjar

    2015-01-01

    Due to the proliferation of video and image data in digital form, content based image retrieval has become a prominent research topic. In forensic science, digital data such as criminal images, fingerprints and scene images have been widely used. Therefore, the arrangement of such large image data becomes a big issue, such as how to retrieve an image of interest quickly. There is a great need for an efficient technique for finding images. In order to find an image, im...

  15. CONTENT BASED BATIK IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    A. Haris Rangkuti

    2014-01-01

    Full Text Available Content Based Batik Image Retrieval (CBBIR) is an area of research that focuses on image processing based on the characteristic motifs of batik. A batik image has a unique motif compared with other images; its uniqueness lies in the texture and shape characteristics, which are distinct from those of other images. Studying a batik image starts with a preprocessing stage in which color is removed through grayscale conversion. Feature extraction then captures the characteristic motifs of every kind of batik using edge detection. After the visually apparent motif characteristics are obtained, four texture characteristic functions are computed: mean, energy, entropy and standard deviation. Characteristic functions can be added as needed. The results of these calculations are made more specific using the Daubechies type 2 wavelet transform and invariant moments, yielding an index value for every type of batik. Because motifs of the same kind occur in different sizes, each motif is divided into three sizes: small, medium and large. The performance of batik image similarity retrieval using this method is about 90-92%.
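
    The four texture statistics can be sketched from the grayscale histogram; the bin count and normalisation below are assumptions, not taken from the paper.

```python
import numpy as np

def texture_features(gray):
    """Mean, energy, entropy and standard deviation of a grayscale image
    (intensities assumed in [0, 1]); energy and entropy are computed from
    the normalised 256-bin intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    nz = p[p > 0]  # ignore empty bins in the entropy sum
    return {
        "mean": float(gray.mean()),
        "energy": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(nz * np.log2(nz))),
        "std": float(gray.std()),
    }
```

    These per-motif feature vectors would then be refined with wavelet and invariant-moment descriptors before indexing.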

  16. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  17. Metadata for Content-Based Image Retrieval

    OpenAIRE

    Adrian Sterca; Daniela Miron

    2010-01-01

    This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  18. REVIEW OF PHASE BASED IMAGE MATCHING

    OpenAIRE

    Jaydeep Kale*

    2016-01-01

    This paper reviews phase based image matching methods. A major approach to image matching is to extract feature vectors corresponding to given images and perform matching based on some distance metric. One difficulty with such feature based image matching is that matching performance depends on many parameters of the feature extraction process. This paper therefore reviews phase based image matching methods, in which 2D DFTs of given images are used to determine resemblance ...

  19. Wavelength conversion based spectral imaging

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin

    resolution for this spectral region. Today, an increasing number of applications exists outside the spectral region covered by Si-based devices, e.g. within cleantech, medical or food imaging. We present a technology based on wavelength conversion which will extend the spectral coverage of state of the art...

  20. Fovea based image quality assessment

    Science.gov (United States)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods of full reference IQA, shows clear performance improvements over traditional metrics, but it does not perform very well when the image structure is seriously degraded or masked by noise. In this paper, a new efficient fovea based structural similarity image quality assessment (FSSIM) is proposed. It enlarges the distortions at the positions of interest adaptively and changes the relative importance of the three components in SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and the mean opinion score (MOS) are both clearly better than for SSIM and PSNR.
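
    For reference, the SSIM score that FSSIM reweights can be sketched in single-window form (standard SSIM pools many local windows; one global window keeps the sketch short):

```python
import numpy as np

def global_ssim(x, y, L=1.0):
    """SSIM over one global window: combined luminance, contrast and
    structure comparison with the usual stabilising constants C1, C2."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

    FSSIM would weight local versions of these terms by a fovea-derived saliency map before pooling.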

  1. The Analysis of Spectral Characteristics of Plant Electrical Signals Based on Wavelet De-Noising

    Institute of Scientific and Technical Information of China (English)

    张晓辉; 余宁梅; 习岗; 孟晓丽

    2011-01-01

    The paper studies the basic characteristics and changing laws of aloe electrical signals under different temperatures based on the wavelet soft threshold de-noising method and the fast Fourier transform. The spectral edge frequency (SEF), spectral gravity frequency (SGF) and power spectral entropy (PSE) of plant electrical signals are used to study the changes in the power spectrum of aloe (Aloe vera L.) electrical signals under different temperatures. The results show that the aloe electrical signal is of millivolt magnitude, with frequency content below 5 Hz. The SEF and SGF in aloe leaves shift toward higher frequencies as the temperature increases, cell activity is stimulated, and the PSE of the electrical signal increases dramatically. The study reveals that the SEF, SGF and PSE change with a consistent trend during warming, and that there is a strong correlation between the changes of PSE and SGF. It is therefore considered that changes of PSE or SGF in aloe leaves can be used as sensitive indices of leaf cells' response to external environmental change, enabling scientific regulation of the physiological and biochemical processes of plant growth and development.

  2. Blind restoration method of three-dimensional microscope image based on RL algorithm

    Science.gov (United States)

    Yao, Jin-li; Tian, Si; Wang, Xiang-rong; Wang, Jing-li

    2013-08-01

    Thin specimens of biological tissue appear three-dimensional and transparent under a microscope. Optical slice images can be captured by moving the focal plane to different locations in the specimen. The captured images have low resolution due to out-of-focus information from the planes adjacent to the focal plane. Traditional methods can remove the blur in the images to a certain degree, but they require accurate knowledge of the point spread function (PSF) of the imaging system; the accuracy of the PSF greatly influences the restoration result, and in practice it is difficult to obtain an accurate PSF. In order to restore the original appearance of the specimen when the imaging system parameters are unknown, or when noise and spherical aberration are present in the system, a blind restoration method for three-dimensional microscopy based on the R-L algorithm is proposed in this paper. On the basis of an exhaustive study of the two-dimensional R-L algorithm, and drawing on microscopy imaging theory and wavelet transform denoising pretreatment, we extend the R-L algorithm to three-dimensional space. It is a nonlinear restoration method with a maximum entropy constraint, and it does not need precise knowledge of the PSF of the microscopy imaging system to recover the blurred image. The image and PSF converge to the optimal solutions through many alternating iterations and corrections. MATLAB simulations and experimental results show that the extended algorithm is better in visual quality, peak signal to noise ratio and improved signal to noise ratio when compared with the PML algorithm, and that the proposed algorithm can suppress noise, restore more details of the target, and increase image resolution.
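
    The classical non-blind R-L iteration that the blind method alternates can be sketched in one dimension with FFT-based circular convolution; the blind 3-D variant applies a similar multiplicative update to the PSF as well. All names and parameters here are illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy deconvolution (1-D, circular convolution):
    x <- x * [ (y / (x * h)) correlated with h ], which keeps the estimate
    non-negative when y and the initial guess are non-negative."""
    psf = psf / psf.sum()
    H = np.fft.fft(psf)
    conv = lambda v, F: np.real(np.fft.ifft(np.fft.fft(v) * F))
    x = np.full_like(blurred, max(blurred.mean(), eps))
    for _ in range(n_iter):
        ratio = blurred / (conv(x, H) + eps)
        x = x * conv(ratio, np.conj(H))  # conj(H) = correlation with the PSF
    return x

# Demo: two spikes blurred by a small symmetric PSF (wrap-around indexing).
n = 64
signal = np.zeros(n); signal[20] = 1.0; signal[40] = 2.0
psf = np.zeros(n); psf[0], psf[1], psf[-1] = 0.6, 0.2, 0.2
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = richardson_lucy(blurred, psf)
```

    The blind variant would interleave an analogous update for the PSF, with the wavelet denoising pretreatment suppressing noise amplification between iterations.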

  3. Atomic norm denoising with applications to line spectral estimation

    CERN Document Server

    Bhaskar, Badri Narayan; Recht, Benjamin

    2012-01-01

    The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error performance in the presence of noise and without advance knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared error. We show that the associated convex optimization problem, called "Atomic norm Soft Thresholding" (AST), can be solved in polynomial time via semidefinite programming. For very large scale problems we provide an alternative, efficient algorithm, called "Discretiz...

  4. Compressed Sensing for Denoising in Adaptive System Identification

    CERN Document Server

    Hosseini, Seyed Hossein

    2012-01-01

    We propose a new technique for adaptive identification of sparse systems based on the compressed sensing (CS) theory. We manipulate the transmitted pilot (input signal) and the received signal such that the weights of adaptive filter approach the compressed version of the sparse system instead of the original system. To this end, we use random filter structure at the transmitter to form the measurement matrix according to the CS framework. The original sparse system can be reconstructed by the conventional recovery algorithms. As a result, the denoising property of CS can be deployed in the proposed method at the recovery stage. The experiments indicate significant performance improvement of proposed method compared to the conventional LMS method which directly identifies the sparse system. Furthermore, at low levels of sparsity, our method outperforms a specialized identification algorithm that promotes sparsity.

  5. Denoising Message Passing for X-ray Computed Tomography Reconstruction

    CERN Document Server

    Perelli, Alessandro; Can, Ali; Davies, Mike E

    2016-01-01

    X-ray Computed Tomography (CT) reconstruction from a sparse number of views is becoming a powerful way to reduce either the radiation dose or the acquisition time in CT systems, but it still requires a huge computational time. This paper introduces an approximate Bayesian inference framework for CT reconstruction based on a family of denoising approximate message passing (DCT-AMP) algorithms able to improve both the convergence speed and the reconstruction quality. Approximate Message Passing for Compressed Sensing has been extensively analysed for random linear measurements, but there are still no clear solutions on how AMP should be modified and how it performs on real world problems. In particular, to overcome the convergence issues of DCT-AMP with structured measurement matrices, we propose a disjoint preconditioned version of the algorithm tailored for both the geometric system model and the noise model. In addition, the Bayesian DCT-AMP formulation allows to measure how the current estimate is close to the pr...

  6. A PSEUDO RELEVANCE BASED IMAGE RETRIEVAL MODEL

    OpenAIRE

    Kamini Thakur; Preetika Saxena

    2015-01-01

    Image retrieval is a basic requirement nowadays. Content based image retrieval is a popular image retrieval approach in which the target image is retrieved based on useful features of the given image. CBIR is an active and fast growing research area in both image processing and data mining. In marine ecosystems the captured images have lower resolution and require transformation- and translation-invariant capabilities. Therefore, accurate image extraction according to the u...

  7. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    Science.gov (United States)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations but instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, an analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.

  8. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
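
    The two stages can be sketched together, assuming integer circular shifts and simple difference-based edge maps (function names are illustrative, not from the patent):

```python
import numpy as np

def edge_filter(img):
    """Gradient-magnitude edge map via circular forward differences."""
    gx = np.roll(img, -1, axis=1) - img
    gy = np.roll(img, -1, axis=0) - img
    return np.hypot(gx, gy)

def phase_correlate(a, b):
    """Estimate the integer circular shift taking a onto b: the inverse FFT
    of the normalized cross-power spectrum peaks at the translation."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    return np.unravel_index(np.argmax(corr), corr.shape)

# Demo: recover a known shift from the edge maps of two "bands".
rng = np.random.default_rng(2)
a = rng.random((32, 32))
b = np.roll(a, (5, 7), axis=(0, 1))
dy, dx = phase_correlate(edge_filter(a), edge_filter(b))
```

    Edge filtering first makes the correlation robust to the large intensity differences between spectral bands, since only structural gradients are compared.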

  9. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Pita-Machado, Reinado [Centro de Ingeniería Clínica. Guacalote y Circunvalación, Santa Clara 50200 (Cuba); Perez-Diaz, Marlen, E-mail: mperez@uclv.edu.cu; Lorenzo-Ginori, Juan V., E-mail: mperez@uclv.edu.cu; Bravo-Pino, Rolando, E-mail: mperez@uclv.edu.cu [Centro de Estudios de Electrónica y Tecnologías de la Información (CEETI), Universidad Central Marta Abreu de las Villas, Carretera a Camajuaní, km. 5 1/2 Santa Clara 54830 (Cuba)

    2014-11-07

    Wavelet transform based de-noising, like wavelet shrinkage, gives good results in CT and affects spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods which work in sinogram space do not have this problem, because there they always work with a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure; this can lead to false positive evaluations. The purpose of our present work is to compare different wavelet shrinkage de-noising filters for reducing noise in sinogram space, particularly for images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.

  10. Object-Based Image Compression

    Science.gov (United States)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  11. Performance Evaluation of Image Fusion for Impulse Noise Reduction in Digital Images Using an Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    M PremKumar

    2011-07-01

    Full Text Available Image fusion is the process of combining two or more images into a single image while retaining the important features of each image. Multi-image fusion is an important technique used in military, remote sensing and medical applications. In this paper, image fusion based on local area variance is used to combine the de-noised images from two different filtering algorithms, the Vector Median Filter (VMF) and the Spatial Median Filter (SMF). The performance of the image fusion is evaluated using a new no-reference image quality assessment, the Gradient based Image Quality Index (GIQI), to estimate how well the important information in the source images is represented by the fused image. Experimental results show that GIQI is better for no-reference image fusion performance assessment than the universal image quality index (UIQI).
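
    The local-area-variance rule can be sketched as a per-pixel "keep the more detailed input" selection; the window size and the tie-breaking choice below are assumptions.

```python
import numpy as np

def fuse_by_local_variance(a, b, win=3):
    """Per-pixel fusion: pick the input whose local neighbourhood has the
    higher variance, a simple proxy for 'carries more detail here'."""
    def local_var(img):
        pad = win // 2
        p = np.pad(img, pad, mode='edge')
        h, w = img.shape
        out = np.empty_like(img, dtype=float)
        for i in range(h):
            for j in range(w):
                out[i, j] = p[i:i + win, j:j + win].var()
        return out
    # ties go to the first input
    return np.where(local_var(a) >= local_var(b), a, b)
```

    In the paper's setting, a and b would be the VMF- and SMF-denoised versions of the same noisy image.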

  12. A multiple de-noising method based on energy-zero-product

    Institute of Scientific and Technical Information of China (English)

    冯纪强; 徐晨; 张维强

    2007-01-01

    An improved speech de-noising method, which modifies the spectral subtraction technique based on the short-time energy-zero-product, is presented. First, the noise threshold and the noise power spectrum are adjusted adaptively via the short-time energy-zero-product, yielding accurate speech end points and a reliable estimate of the noise power spectrum. Second, during spectral subtraction, different weights are assigned to the noise power spectrum according to the different characteristics of the noise in speech and non-speech segments, giving better spectral-subtraction de-noising. Finally, residual noise reduction is applied over every three consecutive frames to remove the remaining noise. Simulation experiments show that this method is more effective than conventional spectral subtraction using a fixed noise power-spectrum estimate.
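
    The spectral-subtraction core that the method above refines can be sketched in plain NumPy. This version assumes non-overlapping frames, a noise power spectrum estimated from a noise-only segment, and the noisy phase retained at resynthesis; the paper's energy-zero-product end-point detection and segment-dependent weighting are not reproduced:

```python
import numpy as np

def spectral_subtract(signal, noise_est, frame=256, alpha=1.0):
    """Frame-wise magnitude spectral subtraction.

    noise_est: a noise-only segment used to estimate the noise power spectrum.
    alpha: over-subtraction factor (1.0 = plain subtraction).
    """
    noise_psd = np.abs(np.fft.rfft(noise_est[:frame])) ** 2
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame + 1, frame):
        seg = signal[start:start + frame]
        spec = np.fft.rfft(seg)
        power = np.abs(spec) ** 2
        clean_power = np.maximum(power - alpha * noise_psd, 0.0)  # half-wave rectify
        clean = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))  # keep noisy phase
        out[start:start + frame] = np.fft.irfft(clean, n=frame)
    return out
```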

  13. A Review of Decision Based Impulse Noise Removing Algorithms

    Directory of Open Access Journals (Sweden)

    SnehalAmbulkar

    2014-04-01

    Full Text Available Noise is an unwanted factor in digital images and videos, hiding details and destroying image information. Hence denoising is of great importance for restoring details and improving quality measures. This paper looks at the different types of noise found in digital images, denoising domains, and the classification of denoising filters. Some denoising filters, such as the median filter (MF), adaptive median filter (AMF) and simple adaptive median filter (SAMF), are described and compared briefly. A new approach is proposed for video denoising using a combination of median filters with multiple views.
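
    As a reference point for the filters compared above, the plain median filter (the MF baseline) is a few lines of NumPy; the adaptive variants differ mainly in growing the window until the median is not itself an impulse:

```python
import numpy as np

def median_filter(img, k=3):
    """Plain k x k median filter (edge-padded), effective on salt-and-pepper noise."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return np.median(windows, axis=(-1, -2))
```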

  14. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel-flipping method for invoice images with a huge watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and fitter for human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves the watermarking capacity.

  15. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel-flipping method for invoice images with a huge watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and fitter for human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves the watermarking capacity. PMID:25489606

  16. A Novel and Efficient Lifting Scheme based Super Resolution Reconstruction for Early Detection of Cancer in Low Resolution Mammogram Images

    Directory of Open Access Journals (Sweden)

    Liyakathunisa

    2011-05-01

    Full Text Available Mammography is the most effective method for early detection of breast diseases. However, the typical diagnostic signs, such as masses and microcalcifications, are difficult to detect because mammograms are low-contrast and noisy images. We concentrate on a special case of super resolution reconstruction for early detection of cancer from low resolution mammogram images. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. This paper describes a novel approach for enhancing the resolution of mammographic images. We propose an efficient lifting-wavelet-based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the digitized low resolution mammographic images are decomposed into many levels to obtain different frequency bands. We use Daubechies (D4) lifting schemes to decompose low resolution mammogram images into multilevel scale and wavelet coefficients. Then our proposed novel soft thresholding technique is used to remove the noisy coefficients by fixing an optimum threshold value. In order to obtain an image of higher resolution, adaptive interpolation is applied. Our proposed lifting wavelet transform based restoration and adaptive interpolation preserves the edges as well as smoothens the image without introducing artifacts. The proposed algorithm avoids iterative methods, reduces the complexity of calculation and applies to large-dimension low-resolution images. Experimental results show that the proposed approach succeeds in obtaining a high-resolution mammogram image with high PSNR and ISNR ratios and good visual quality.

  17. Denoising solar radiation data using coiflet wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia); Hasan, Mohammad Khatim, E-mail: khatim@ftsm.ukm.my [Jabatan Komputeran Industri, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Sulaiman, Jumat, E-mail: jumat@ums.edu.my [Program Matematik dengan Ekonomi, Universiti Malaysia Sabah, Beg Berkunci 2073, 88999 Kota Kinabalu, Sabah (Malaysia); Ismail, Mohd Tahir [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM Minden, Penang (Malaysia)

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or data collection through observations. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e. noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply wavelet-transform (WT) denoising, the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.
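
    The thresholding step being tuned here can be illustrated with a one-level orthonormal wavelet decomposition. For brevity the sketch uses the Haar wavelet rather than coiflet2, and a user-supplied threshold rather than the paper's new rule:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(x, t):
    """Threshold the detail coefficients, keep the approximation, invert."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

    With `t = 0` the transform is inverted exactly; a common generic choice is the universal threshold `sigma * sqrt(2 * log(N))`, which the paper's approach aims to improve upon.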

  18. Denoising solar radiation data using coiflet wavelets

    International Nuclear Information System (INIS)

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or data collection through observations. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e. noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply wavelet-transform (WT) denoising, the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.

  19. Graph-based Image Inpainting

    OpenAIRE

    Defferrard, Michaël

    2014-01-01

    The project goal was to explore applications of spectral graph theory to the problem of inpainting large missing chunks. We used a non-local patch-graph representation of the image and proposed a structure detector which leverages the graph representation and influences the fill order of our exemplar-based algorithm. Our method achieved state-of-the-art performance.

  20. Denoising technique based on cascaded filtering of particle filter and ANFIS

    Institute of Scientific and Technical Information of China (English)

    刘宇; 曾燎燎; 路永乐; 黎蕾蕾; 潘英俊

    2012-01-01

    To realize state estimation for the nonlinear, non-Gaussian systems encountered in practical applications, an ANFIS-particle-filter model is established, combining the strength of the particle filter in nonlinear estimation with the nonlinear approximation capability of the adaptive neuro-fuzzy inference system (ANFIS). The model first uses ANFIS to eliminate the influence of colored noise in the measurement signal, and then applies a particle filter to obtain the optimal state estimate, further improving the estimation accuracy. Simulation results show that, compared with the particle filter alone, the cascaded ANFIS-particle filter reduces the mean by 65% and the variance by 74.4%. The cascaded filter cancels noise effectively for strongly nonlinear systems and greatly improves the state estimation accuracy, which verifies the effectiveness of the cascaded filtering model.

  1. Independent component analysis and decision trees for ECG holter recording de-noising.

    Directory of Open Access Journals (Sweden)

    Jakub Kuzilek

    Full Text Available We have developed a method for ECG signal de-noising using independent component analysis (ICA). The approach combines JADE source separation with a binary decision tree for identification and subsequent removal of ECG noise. In order to test the efficiency of this method, a comparison with standard filtering using a wavelet-based de-noising method was carried out. Freely available data from the PhysioNet medical data storage were evaluated. The evaluation criterion was the root mean square error (RMSE) between the original ECG and the filtered data contaminated with artificial noise. The proposed algorithm achieved comparable results for standard noises (power-line interference, baseline wander, EMG), but significantly better results when an uncommon noise (electrode cable movement artefact) was compared.

  2. A New Denoising Technique for Capillary Electrophoresis Signals

    Institute of Scientific and Technical Information of China (English)

    王瑛; 莫金垣

    2002-01-01

    Capillary electrophoresis (CE) is a powerful analytical tool in chemistry; thus, it is valuable to solve the denoising of CE signals. A new denoising method called MWDA, which employs the Mexican Hat wavelet, is presented. It is an efficient chemometrics technique and has been applied successfully in processing CE signals. Useful information can be extracted even from signals with S/N = 1. After denoising, the peak positions are unchanged and the relative errors of peak height are less than 3%.

  3. A Miniature-Based Image Retrieval System

    OpenAIRE

    Islam, Md Saiful; Ali, Md. Haider

    2010-01-01

    Due to the rapid development of the World Wide Web (WWW) and imaging technology, more and more images are available on the Internet and stored in databases. Searching for related images using a query image is becoming tedious and difficult. Most of the images on the web are compressed by methods based on the discrete cosine transform (DCT), including Joint Photographic Experts Group (JPEG) and H.261. This paper presents an efficient content-based image indexing technique for searching similar images ...

  4. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    Energy Technology Data Exchange (ETDEWEB)

    Du, Hongchu, E-mail: h.du@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Jülich Research Centre, Jülich, 52425 (Germany); Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Aachen 52074 (Germany); Peter Grünberg Institute, Jülich Research Centre, Jülich 52425 (Germany)

    2015-04-15

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM) either for a higher visual quality or for a more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviation at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on widely used techniques of low-pass filter and Wiener filter, which can efficiently reduce noise without noticeable artifacts even in HR(S)TEM micrographs with contrast of variation of background and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM.
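
    The two ingredients named above, a low-pass filter and a Wiener filter, can be combined in the Fourier domain as a crude stand-in for the paper's nonlinear algorithm (the `cutoff` and `noise_power` parameters are illustrative assumptions, not values from the paper):

```python
import numpy as np

def wiener_lowpass(img, noise_power, cutoff=0.25):
    """Low-pass plus Wiener-style attenuation in the Fourier domain.

    Each frequency coefficient is scaled by S / (S + noise_power), where S is
    the observed power at that frequency, after zeroing frequencies beyond
    `cutoff` (in cycles/pixel).
    """
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    lowpass = (np.hypot(fy, fx) <= cutoff).astype(float)
    S = np.abs(F) ** 2 / F.size            # observed power spectrum
    gain = S / (S + noise_power)           # Wiener-style gain
    return np.fft.ifft2(F * gain * lowpass).real
```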

  5. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM) either for a higher visual quality or for a more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviation at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on widely used techniques of low-pass filter and Wiener filter, which can efficiently reduce noise without noticeable artifacts even in HR(S)TEM micrographs with contrast of variation of background and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM

  6. Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM

    OpenAIRE

    Zhao, Zhizhen; Singer, Amit

    2014-01-01

    We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of Cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of...

  7. Image Retrieval Based on Fractal Dictionary Parameters

    OpenAIRE

    Yuanyuan Sun; Rudan Xu; Lina Chen; Xiaopeng Hu

    2013-01-01

    Content-based image retrieval is a branch of computer vision. It is important for efficient management of a visual database. In most cases, image retrieval is based on image compression. In this paper, we use a fractal dictionary to encode images. Based on this technique, we propose a set of statistical indices for efficient image retrieval. Experimental results on a database of 416 texture images indicate that the proposed method provides a competitive retrieval rate, compared to the existi...

  8. A 1D wavelet filtering for ultrasound images despeckling

    Science.gov (United States)

    Dahdouh, Sonia; Dubois, Mathieu; Frenoux, Emmanuelle; Osorio, Angel

    2010-03-01

    The appearance of ultrasound images is characterized by speckle, shadows, signal dropout and low contrast, which make them difficult to process and lead to a very poor signal-to-noise ratio. Therefore, for most imaging applications, a denoising step is necessary before medical imaging algorithms can be applied successfully to such images. However, due to speckle statistics, denoising and enhancing edges in these images without inducing additional blurring is a real challenge on which usual filters often fail. To deal with such problems, a large number of papers work on B-mode images, considering that the noise is purely multiplicative. Making such an assertion could be misleading, because of internal pre-processing such as the log compression done in the ultrasound device. To address these questions, we designed a novel filtering method based on the 1D radiofrequency signal. Indeed, since B-mode images are initially composed of 1D signals and since the log compression made by ultrasound devices modifies the noise statistics, we decided to filter the envelope of the 1D radiofrequency signal directly, before log compression and image reconstitution, in order to conserve as much information as possible. A bi-orthogonal wavelet transform is applied to the log transform of each signal and an adaptive 1D split-and-merge-like algorithm is used to denoise the wavelet coefficients. Experiments were carried out on synthetic data sets simulated with the Field II simulator, and the results show that our filter outperforms classical speckle filtering methods such as the Lee, non-local means or SRAD filters.

  9. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  10. Restoration of images with rotated shapes

    Science.gov (United States)

    Setzer, S.; Steidl, G.; Teuber, T.

    2008-07-01

    Methods for image restoration which respect edges and other important features are of fundamental importance in digital image processing. In this paper, we present a novel technique for the restoration of images containing rotated (linearly transformed) rectangular shapes which avoids the round-off effects at vertices produced by known edge-preserving denoising techniques. Following an idea of Berkels et al., our approach is also based on two steps: the determination of the angles related to the rotated shapes and a subsequent restoration step which incorporates the knowledge of the angles. However, in contrast to Berkels et al., we find the smoothed rotation angles of the shapes by minimizing a simple quadratic functional without constraints which involves only first order derivatives, so that we finally have to solve only a linear system of equations. Moreover, we propose to perform the restoration step either by quadratic programming or by solving an anisotropic diffusion equation. We focus on a discrete approach which approximates derivatives by finite differences. Particular attention is paid to the choice of the difference filters. We prove some relations concerning the preservation of rectangular shapes for our discrete setting. Finally, we present numerical examples for the denoising of artificial images with rotated rectangles and parallelograms and for the denoising of a real-world image.

  11. Performance of Various Order Statistics Filters in Impulse and Mixed Noise Removal for RS Images

    Directory of Open Access Journals (Sweden)

    Mrs V.Radhika

    2010-12-01

    Full Text Available Remote sensing images (ranging from satellite to seismic) are affected by a number of noise types, such as interference, impulse and speckle noise. Image denoising is one of the traditional problems in digital image processing and plays a vital role as a pre-processing step in a number of image and video applications. Image denoising still remains a challenging research area because noise removal introduces artifacts and causes blurring of the images. This study is done with the intention of designing the best algorithm for impulsive noise reduction in an industrial environment. A review of typical impulsive noise reduction systems based on order statistics is given and particularized for the described situation. Finally, computational aspects are analyzed in terms of PSNR values and some solutions are proposed.
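
    The PSNR figure of merit used in this comparison is, for reference:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```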

  12. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    Science.gov (United States)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signals and construct multidimensional input vectors, based on EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs compose the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results proved that the proposed method is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).

  13. Content based Image Retrieval from Forensic Image Databases

    Directory of Open Access Journals (Sweden)

    Swati A. Gulhane

    2015-03-01

    Full Text Available Due to the proliferation of video and image data in digital form, content-based image retrieval has become a prominent research topic. In the forensic sciences, digital data such as criminal images, fingerprints and scene images are widely used. The management of such large image collections therefore becomes a big issue, for instance how to retrieve an image of interest quickly. There is a great need for an efficient technique for finding images. In order to find an image, the image has to be represented by certain features. Color, texture and shape are three important visual features of an image, and searching for images using these features has attracted much attention. There are many content-based image retrieval techniques in the literature. This paper gives an overview of different existing methods for content-based image retrieval and also suggests an efficient image retrieval method for a digital image database of criminal photos, using dynamic dominant color, texture and shape features of an image, which gives an effective retrieval result.

  14. Digital image-based titrations.

    Science.gov (United States)

    Gaiao, Edvaldo da Nobrega; Martins, Valdomiro Lacerda; Lyra, Wellington da Silva; de Almeida, Luciano Farias; da Silva, Edvan Cirino; Araújo, Mário César Ugulino

    2006-06-16

    The exploitation of digital images obtained from a CCD camera (WebCam) as a novel instrumental detection technique for titration is proposed for the first time. Named digital image-based (DIB) titration, it requires, as does a traditional titration (for example, spectrophotometric, potentiometric, conductimetric), a discontinuity in the titration curve at the end point, which is associated with the chemical equivalence condition. The monitored signal in DIB titration is an RGB-based value that is calculated, for each digital image, by using a proposed procedure based on the red, green, and blue colour system. DIB titration was applied to determine HCl and H3PO4 in aqueous solutions and total alkalinity in mineral and tap waters. Its results were compared to spectrophotometric (SPEC) titration and, by applying the paired t-test, no statistical difference between the results of both methods was verified at the 95% confidence level. Identical standard deviations were obtained by both titrations in the determinations of HCl and H3PO4, with slightly better precision for DIB titration in the determinations of total alkalinity. DIB titration shows itself to be an efficient and promising tool for quantitative chemical analysis and, as it employs an inexpensive device (WebCam) as the analytical detector, it offers an economically viable alternative to titrations that need instrumental detection. PMID:17723410
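
    The paper's exact RGB-based formula is not reproduced in the abstract; one plausible reading is a scalar response per frame, e.g. the norm of the mean (R, G, B) vector, with the end point located at the steepest change of that value. Both helpers below are hypothetical illustrations, not the authors' procedure:

```python
import numpy as np

def rgb_value(image):
    """Scalar detector response for one frame: Euclidean norm of the mean
    (R, G, B) vector over the region of interest. image: (H, W, 3) array."""
    means = np.asarray(image, float).reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(means))

def end_point(volumes, values):
    """Locate the titration end point at the steepest change of the RGB value."""
    i = int(np.argmax(np.abs(np.diff(values))))
    return (volumes[i] + volumes[i + 1]) / 2
```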

  15. Low rank approximation (LRA) based noise reduction in spectral-resolved x-ray imaging using photon counting detector

    Science.gov (United States)

    Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong

    2015-03-01

    Spectral imaging with photon counting detectors has recently attracted a lot of interest in X-ray and CT imaging due to its potential to enable ultra low radiation dose x-ray imaging. However, when radiation exposure level is low, quantum noise may be prohibitively high to hinder applications. Therefore, it is desirable to develop new methods to reduce quantum noise in the acquired data from photon counting detectors. In this paper, we propose a new denoising algorithm to reduce quantum noise in data acquired using an ideal photon counting detector. The proposed method exploits the intrinsic low dimensionality of acquired spectral data to decompose the acquired data in a series of orthonormal spectral bases. The first few spectral bases contain object information while the rest of the bases contain primarily quantum noise. The separation of image content and noise in these orthogonal spatial bases provides a means to reject noise without losing image content. Numerical simulations were conducted to validate and evaluate the proposed noise reduction algorithm. The results demonstrated that the proposed method can effectively reduce quantum noise while maintaining both spatial and spectral fidelity.
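
    The decomposition into a few signal-bearing orthonormal spectral bases plus a noise-dominated remainder is exactly what a truncated SVD provides; a minimal sketch, assuming the spectral data are arranged as one row per energy bin:

```python
import numpy as np

def lowrank_denoise(stack, rank):
    """Truncated-SVD denoising of a spectral image stack.

    stack: (n_bins, n_pixels) matrix, one row per energy bin. Keeping the top
    `rank` singular components retains the low-dimensional object signal and
    discards most of the quantum noise spread over the remaining components.
    """
    U, s, Vt = np.linalg.svd(stack, full_matrices=False)
    s[rank:] = 0.0                 # reject noise-dominated components
    return (U * s) @ Vt
```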

  16. Rapid Feature Learning with Stacked Linear Denoisers

    CERN Document Server

    Xu, Zhixiang Eddie; Sha, Fei

    2011-01-01

    We investigate unsupervised pre-training of deep architectures as feature generators for "shallow" classifiers. Stacked Denoising Autoencoders (SdA), when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy - however, at the price of a substantial increase in computational cost. In this paper we create a simple algorithm which mimics the layer by layer training of SdAs. However, in contrast to SdAs, our algorithm requires no training through gradient descent as the parameters can be computed in closed-form. It can be implemented in less than 20 lines of MATLAB™ and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably improves the results of SVM classification significantly on all our data sets - often outperforming SdAs and even deep neural networks in three out of four deep learning benchmarks.
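
    The closed-form training the abstract refers to can be illustrated with a simplified one-layer linear denoiser: instead of marginalizing over corruptions as the paper does, this sketch draws a few explicit corrupted copies and solves the reconstruction map via the normal equations (all names and parameters here are illustrative):

```python
import numpy as np

def linear_denoiser(X, p=0.3, lam=1e-5, n_copies=5, seed=0):
    """Closed-form one-layer linear denoiser (in the spirit of stacked
    linear denoisers).

    X: (d, n) data matrix. Features of a few copies are zeroed with
    probability p, and the ridge-regularized least-squares map W that
    reconstructs the originals is solved in closed form.
    """
    rng = np.random.default_rng(seed)
    Xc = np.hstack([X * (rng.random(X.shape) > p) for _ in range(n_copies)])
    Xt = np.hstack([X] * n_copies)                  # reconstruction targets
    d = X.shape[0]
    W = (Xt @ Xc.T) @ np.linalg.inv(Xc @ Xc.T + lam * np.eye(d))
    return W  # layer features: np.tanh(W @ X)
```

    Stacking layers then amounts to repeatedly computing `np.tanh(W @ X)` on the previous layer's output and re-solving for a new W.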

  17. Covariance Eigenvector Sparsity for Compression and Denoising

    CERN Document Server

    Schizas, Ioannis D

    2012-01-01

    Sparsity in the eigenvectors of signal covariance matrices is exploited in this paper for compression and denoising. Dimensionality reduction (DR) and quantization modules present in many practical compression schemes, such as transform codecs, are designed to capitalize on this form of sparsity and achieve improved reconstruction performance compared to existing sparsity-agnostic codecs. Using training data that may be noisy, a novel sparsity-aware linear DR scheme is developed to fully exploit sparsity in the covariance eigenvectors and form noise-resilient estimates of the principal covariance eigenbasis. Sparsity is effected via norm-one regularization, and the associated minimization problems are solved using computationally efficient coordinate descent iterations. The resulting eigenspace estimator is shown capable of identifying a subset of the unknown support of the eigenspace basis vectors even when the observation noise covariance matrix is unknown, as long as the noise power is sufficiently low. It i...

  18. Denoising of ECG -- A discrete time approach using DWT.

    OpenAIRE

    Munzaleen Rashid Bhat; Virk Rana

    2015-01-01

    This paper is about denoising of ECG signals using the discrete wavelet transform (DWT). ECG signals are taken, and noise at different frequencies is generated and superimposed on the original ECG signal. The high-frequency noise is at 4000 Hz and the power-line interference is at 50 Hz. Decomposition of the noisy signal is achieved through wavelet packets; the wavelet packets are reconstructed and appropriate wavelet packets are combined to obtain a ...
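A minimal version of wavelet-threshold denoising can be written with a hand-rolled Haar transform. The record uses wavelet packets; this simplified sketch uses a plain multi-level DWT with soft thresholding, and the test signal and threshold are invented:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(x, levels=4, thresh=0.5):
    """Multi-level Haar decomposition, soft-threshold the details, rebuild."""
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0))
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

# 2 Hz "ECG-like" wave plus 50 Hz power-line interference, 1024 Hz sampling
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.3 * np.sin(2 * np.pi * 50 * t)
out = wavelet_denoise(noisy)
```

The slow signal concentrates in the approximation coefficients while the 50 Hz interference falls mostly into the detail bands, which the thresholding suppresses.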

  19. Denoising without access to clean data using a partitioned autoencoder

    OpenAIRE

    Stowell, Dan; Turner, Richard E

    2015-01-01

    Training a denoising autoencoder neural network requires access to truly clean data, a requirement which is often impractical. To remedy this, we introduce a method to train an autoencoder using only noisy data, having examples with and without the signal class of interest. The autoencoder learns a partitioned representation of signal and noise, learning to reconstruct each separately. We illustrate the method by denoising birdsong audio (available abundantly in uncontrolled noisy datasets) u...

  20. Feasibility of RFID signal denoising using neural network

    OpenAIRE

    Vojtěch, Lukáš

    2010-01-01

    Radio Frequency Identification (RFID) signal denoising is a promising method for future intelligent RFID readers with long reading distances. This paper deals with experiments on Group Method of Data Handling neural network denoising filters. The probability-learning capability of Group Method of Data Handling filters makes them an effective instrument in more demanding applications in comparison with classical Finite Impulse Respo...

  1. Content Base Image Retrieval Using Phong Shading

    OpenAIRE

    Uday Pratap Singh; Sanjeev Jain; Gulfishan Firdose Ahmed

    2010-01-01

    Digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet users' demands, so there is a need to develop an efficient system for content-based image retrieval. Content-based image retrieval means retrieving images from a database on the basis of visual features of the image, such as color and texture. In our proposed method, features are extracted after applying Phong shading to the input image. Phong shading, flattering ou...

  2. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image content. CBIR uses visual content, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can become time consuming as the number of image features grows large, which limits the usability of such systems. In this presentation, we propose a novel framework which uses a high dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.

  3. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full-reference Image quality assessment (IQA). An efficient image-signature-based descriptor is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance difference on visual quality at locations with larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on LIVE database release 2 show that by integrating the effects of image-signature-based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics while having lower complexity.
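The metric can be sketched as a saliency-weighted MSE, with the saliency map obtained from the sign of the DCT spectrum (the "image signature"). The descriptor here is a simplified approximation of the one in the abstract, and all names and parameters are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are DCT basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def signature_saliency(img):
    """Saliency from the image signature: keep only the sign of the DCT
    spectrum, transform back, and square the reconstruction."""
    D = dct_matrix(img.shape[0])
    E = dct_matrix(img.shape[1])
    spec = D @ img @ E.T
    recon = D.T @ np.sign(spec) @ E
    sal = recon ** 2
    return sal / (sal.max() + 1e-12)   # normalize to [0, 1]

def ismse(ref, dist):
    """Saliency-weighted mean square error between reference and distorted."""
    w = signature_saliency(ref)
    return float(np.sum(w * (ref - dist) ** 2) / np.sum(w))

ref = np.outer(np.linspace(0.0, 1.0, 16), np.ones(16))   # toy reference
dist = ref + 0.05 * np.random.default_rng(0).standard_normal(ref.shape)
score = ismse(ref, dist)
```

Errors in highly salient (typically foreground) regions are weighted more heavily than errors in smooth background.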

  4. Three-dimensional sparse-aperture moving-target imaging

    Science.gov (United States)

    Ferrara, Matthew; Jackson, Julie; Stuff, Mark

    2008-04-01

    If a target's motion can be determined, the problem of reconstructing a 3D target image becomes a sparse-aperture imaging problem. That is, the data lie on a random trajectory in k-space, which constitutes a sparse data collection that yields very low-resolution images if backprojection or other standard imaging techniques are used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are compared for a realistic moving-target motion history applied to an Xpatch-generated backhoe data set.

  5. Study of Denoising in TEOAE Signals Using an Appropriate Mother Wavelet Function

    Directory of Open Access Journals (Sweden)

    Habib Alizadeh Dizaji

    2007-06-01

    Full Text Available Background and Aim: Matching a mother wavelet to a class of signals can be of interest in signal analysis and denoising based on wavelet multiresolution analysis and decomposition. As transient evoked otoacoustic emissions (TEOAEs) are contaminated with noise, the aim of this work was to provide a quantitative approach to the problem of matching a mother wavelet to TEOAE signals by using tuning curves, and to use it for analysis and denoising of TEOAE signals. An approximated mother wavelet for TEOAE signals was calculated using an algorithm for designing a wavelet to match a specified signal. Materials and Methods: In this paper a tuning curve is used as a template for designing a mother wavelet that has maximum matching to the tuning curve. The mother wavelet matching was performed on the tuning curve's spectrum magnitude and phase independently of one another. The scaling function was calculated from the matched mother wavelet, and using these functions, lowpass and highpass filters were designed for a filter bank and otoacoustic emission signal analysis and synthesis. After signal analysis, denoising was performed by time-windowing the signal's time-frequency components. Results: Analysis indicated more signal reconstruction improvement in comparison with the coiflet mother wavelet, and by using the proposed denoising algorithm it is possible to enhance the signal-to-noise ratio by up to dB. Conclusion: The wavelet generated from this algorithm was remarkably similar to the biorthogonal wavelets. Therefore, by matching a biorthogonal wavelet to the tuning curve and using wavelet packet analysis, a high-resolution time-frequency analysis of otoacoustic emission signals is possible.

  6. Ultrasound Contrast Plane Wave Imaging Based on Bubble Wavelet Transform: In Vitro and In Vivo Validations.

    Science.gov (United States)

    Wang, Diya; Zong, Yujin; Yang, Xuan; Hu, Hong; Wan, Jinjin; Zhang, Lei; Bouakaz, Ayache; Wan, Mingxi

    2016-07-01

    The aim of the study described here was to develop an ultrasound contrast plane wave imaging (PWI) method based on pulse-inversion bubble wavelet transform imaging (PIWI) to improve the contrast-to-tissue ratio of contrast images. A pair of inverted "bubble wavelets" with plane waves was constructed according to the modified Herring equation. The original echoes were replaced by the maximum wavelet correlation coefficients obtained from bubble wavelet correlation analysis. The echoes were then summed to distinguish microbubbles from tissues. In in vivo experiments on rabbit kidney, PIWI improved the contrast-to-tissue ratio of contrast images by up to 4.5 ± 1.5 dB, compared with that obtained in B-mode (p area under the curve and half transit time estimated from time-intensity curves, respectively. After the time-intensity curves were denoised by detrended fluctuation analysis, the average area under the curve and half transit time of PIWI-based PWI were 55.94% (p 51% (p < 0.05) higher than those of conventional focused imaging, respectively. Because of its high contrast-to-tissue ratio and low disruption of microbubbles, PIWI-based PWI has a long infusion time and is therefore beneficial for transient monitoring and perfusion assessment of microbubbles circulating in vessels. PMID:27067280

  7. Graph Cuts based Image Segmentation using Fuzzy Rule Based System

    OpenAIRE

    Khokher, M. R.; A. Ghafoor; A.M. Siddiqui

    2012-01-01

    This work deals with the segmentation of gray-scale, color and texture images using graph cuts. From the input image, a graph is constructed using the intensity, color and texture profiles of the image simultaneously. Based on the nature of the image, a fuzzy rule based system is designed to find the weight that should be given to each specific image feature during graph development. The graph obtained from the fuzzy rule based weighted average of the different image features is further used in normalized graph...

  8. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    OpenAIRE

    Lei Guo; Weina Zhang; Wei Quan

    2012-01-01

    The precision of star point location is critical for identifying the star map and acquiring the aircraft attitude in star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. For different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet thresholding and a restoration method based on the large ang...

  9. WE-D-9A-04: Improving Multi-Modality Image Registration Using Edge-Based Transformations

    International Nuclear Information System (INIS)

    Purpose: Multi-modality deformable image registration (DIR) for head and neck (HN) radiotherapy is difficult, particularly when matching computed tomography (CT) scans with magnetic resonance imaging (MRI) scans. We hypothesized that the ‘shared information’ between images of different modalities is to be found in some form of edge-based transformation, and that novel edge-based DIR methods might outperform standard DIR methods. Methods: We propose a novel method that combines gray-scale edge-based morphology and mutual information (MI) in two stages. In the first step, we applied a modification of a previously published mathematical morphology method as an efficient gray-scale edge estimator with a denoising function. The results were fed into an MI-based solver (plastimatch). The method was tested on 5 HN patients with pretreatment CT and MR datasets and associated weekly follow-up MR scans. The follow-up MRs showed significant regression in tumor and normal structure volumes as compared to the pretreatment MRs. The MR images used in this study were obtained using fast spin echo based T2w imaging with a 1 mm isotropic resolution and FOV matching the CT scan. Results: In all cases, the novel edge-based registration method provided better registration quality than MI-based DIR using the original CT and MRI images. For example, the mismatch in the carotid arteries was reduced from 3–5 mm to within 2 mm. The novel edge-based method with different registration regulation parameters did not show any distorted deformations, in contrast to the non-realistic deformations resulting from MI on the original images. Processing time was 1.3 to 2 times shorter (edge vs. non-edge). In general, we observed quality improvement and significant calculation time reduction with the new method. Conclusion: Transforming images to an ‘edge-space,’ if designed appropriately, greatly increases the speed and accuracy of DIR

  10. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on image statistical information. One aspect neglected by all these methods is that there is much fuzzy information contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory can describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good performance. Experimental results using the fuzzy-information-based matching algorithm also demonstrate its reliability and practicability.

  11. CONTENT BASED IMAGE RETRIEVAL : A REVIEW

    OpenAIRE

    Shereena V.B; Julie M.David

    2014-01-01

    In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color corr...

  12. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

    Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant to transformations like (i) translation; (ii) rotation; (iii) scaling. The key feature of the system is that it graphically shows the percentage mismatch of the uploaded image with respect to the images already existing in the database, while the integrity of the system lies in the unique matching techniques used for optimum results. This increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.

  13. Image transformation based on learning dictionaries across image spaces.

    Science.gov (United States)

    Jia, Kui; Wang, Xiaogang; Tang, Xiaoou

    2013-02-01

    In this paper, we propose a framework of transforming images from a source image space to a target image space, based on learning coupled dictionaries from a training set of paired images. The framework can be used for applications such as image super-resolution and estimation of image intrinsic components (shading and albedo). It is based on a local parametric regression approach, using sparse feature representations over learned coupled dictionaries across the source and target image spaces. After coupled dictionary learning, sparse coefficient vectors of training image patch pairs are partitioned into easily retrievable local clusters. For any test image patch, we can fast index into its closest local cluster and perform a local parametric regression between the learned sparse feature spaces. The obtained sparse representation (together with the learned target space dictionary) provides multiple constraints for each pixel of the target image to be estimated. The final target image is reconstructed based on these constraints. The contributions of our proposed framework are three-fold. 1) We propose a concept of coupled dictionary learning based on coupled sparse coding which requires the sparse coefficient vectors of a pair of corresponding source and target image patches to have the same support, i.e., the same indices of nonzero elements. 2) We devise a space partitioning scheme to divide the high-dimensional but sparse feature space into local clusters. The partitioning facilitates extremely fast retrieval of closest local clusters for query patches. 3) Benefiting from sparse feature-based image transformation, our method is more robust to corrupted input data, and can be considered as a simultaneous image restoration and transformation process. Experiments on intrinsic image estimation and super-resolution demonstrate the effectiveness and efficiency of our proposed method. PMID:22529324

  14. A Miniature-Based Image Retrieval System

    CERN Document Server

    Islam, Md Saiful

    2010-01-01

    Due to the rapid development of the World Wide Web (WWW) and imaging technology, more and more images are available on the Internet and stored in databases. Searching for related images given a query image is becoming tedious and difficult. Most of the images on the web are compressed by methods based on the discrete cosine transform (DCT), including Joint Photographic Experts Group (JPEG) and H.261. This paper presents an efficient content-based image indexing technique for searching similar images using discrete cosine transform features. Experimental results demonstrate its superiority over existing techniques.

  15. A Visual Attention Model Based Image Fusion

    Directory of Open Access Journals (Sweden)

    Rishabh Gupta

    2013-12-01

    Full Text Available The aim is to develop an efficient image fusion algorithm based on a visual attention model for images with distinct objects. Image fusion is a process of combining complementary information from multiple images of the same scene into one image, so that the resultant image contains a more accurate description of the scene than any of the individual source images. The two basic fusion techniques are pixel-level and region-level fusion. Pixel-level fusion deals with operations on each and every pixel separately. The various pixel-level techniques are averaging, stationary wavelet transform, discrete wavelet transform, and Principal Component Analysis (PCA). Because of its lower sensitivity to noise and mis-registration, region-level image fusion is an emerging approach in the field of multifocus image fusion. The most appreciated approaches among region-based methods are multifocus image fusion using the concepts of focal connectivity and spatial frequency. These two methods work well on still images as well as on video frames as inputs. A new region-based technique is proposed for multifocus images having distinct objects. The method is based on visual attention models, and the results obtained are promising for input images with distinct objects. The proposed method's results are evaluated using Tenengrad and extended spatial frequency as performance parameters, on several pairs of multi-focus input images such as microscopic images, forensic images and video frames.

  16. SURVEY ON CONTENT BASED IMAGE RETRIEVAL

    OpenAIRE

    S.R.Surya; G. Sasikala

    2011-01-01

    Digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet users' demands, so there is a need to develop an efficient system for content-based image retrieval. Content-based image retrieval is becoming a source of exact and fast retrieval. In this paper the techniques of content-based image retrieval are discussed, analysed and compared. Features compared include color correlogram, texture, shape, and edge den...

  17. Imaging based, patient specific dosimetry

    International Nuclear Information System (INIS)

    that this can be performed is either by sequential planar scintillation camera measurements or by SPECT methods. Scintillation cameras generally have a low spatial resolution and sensitivity (cps/MBq) due to the collimator. The resolution is on the order of 1-2 cm depending on the source location and radionuclide characteristics. Image noise is also a major problem, since only a small activity is given for pre-planning, which can degrade the image quality. Dosimetry using 2D planar imaging and the conjugate-view activity quantitation method has been used for many years. The quantification of the activity includes several approximations. In a planar acquisition the source depth in the patient is not resolved, which makes the correction for photon attenuation and for unwanted contributions from scattered photons less accurate and consistent. Furthermore, contributions from activity uptakes that overlap the volume of interest in the image are a major problem. For calculation of the absorbed dose, the organ mass also needs to be determined, which can be done using patient CT images or, less accurately, estimated from standardized phantom geometries. The energy deposition and transport calculation is based on pre-calculated dose factors from standardized phantom geometries. Despite these problems, the conjugate-view method has been the major choice for many dosimetric studies. SPECT provides a possibility for 3D activity measurements. In this method, corrections for non-homogeneous photon attenuation, scatter and loss of spatial resolution due to the collimator are today quite accurate when incorporated in iterative reconstruction methods. SPECT also allows for an accurate 3D absorbed dose calculation, in that the patient's geometry can be taken into consideration if a co-registered CT study of the patient is available. Modern hybrid SPECT/CT cameras make such calculations relatively straightforward. A major advantage of using SPECT imaging is also that the absorbed dose
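The conjugate-view quantification mentioned above reduces, in its simplest form, to a geometric mean of anterior and posterior counts with an attenuation correction. A minimal sketch, where the attenuation coefficient, thickness and sensitivity values are illustrative rather than taken from the record:

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, mu, thickness, sensitivity):
    """Conjugate-view estimate: A = sqrt(I_A * I_P) * exp(mu * T / 2) / C,
    with anterior/posterior count rates I_A and I_P, effective linear
    attenuation coefficient mu (1/cm), patient thickness T (cm) and
    system sensitivity C (cps/MBq)."""
    return np.sqrt(counts_ant * counts_post) * np.exp(mu * thickness / 2.0) / sensitivity

# Round trip: for a single source, the geometric mean cancels the (unknown)
# source depth d, so the estimate recovers the true activity.
A_true, C, mu, T, d = 50.0, 20.0, 0.11, 22.0, 6.5
I_ant = C * A_true * np.exp(-mu * d)
I_post = C * A_true * np.exp(-mu * (T - d))
est = conjugate_view_activity(I_ant, I_post, mu, T, C)
```

The depth-independence of the estimate is exactly why the two opposed views are combined; scatter, overlapping uptakes and background corrections (which the record discusses) are omitted here.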

  18. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

    Full Text Available Content-based image retrieval (CBIR) technology benefits not only large image collection management, but also clinical care, biomedical research, and education. Digital images are found in X-rays, MRI and CT, which are used for diagnosis and treatment planning. Thus, visual information management is challenging, as the quantity of available data is huge. Currently, the utilization of available medical databases is limited by image retrieval issues. Retrieval of archived digital medical images remains challenging and is being researched further, as images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.

  19. Wavelet Image Encryption Algorithm Based on AES

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Traditional encryption techniques have limits for multimedia information, especially images and video, since they treat it simply as ordinary data. In this paper, we propose a wavelet-based image encryption algorithm based on the Advanced Encryption Standard, which encrypts only the low-frequency coefficients of the image's wavelet decomposition. The experimental results are satisfactory.

  20. Using Invariant Translation to Denoise Electroencephalogram Signals

    Directory of Open Access Journals (Sweden)

    Janett Walters-Williams

    2011-01-01

    Full Text Available Problem statement: Because of the distance between the skull and the brain and their different resistivities, Electroencephalogram (EEG) recordings are usually mixed with activities generated within the area, called noise. EEG signals have been used to diagnose major brain diseases such as epilepsy, narcolepsy and dementia. The presence of these noises, however, can result in misdiagnosis, so it is necessary to remove them before further analysis and processing can be done. Denoising is often done with Independent Component Analysis (ICA) algorithms, but of late the Wavelet Transform has been utilized. Approach: In this study we utilized one of the newer Wavelet Transform methods, Translation-Invariant, to denoise EEG signals. Different EEG signals were used to verify the method using the MATLAB software. Results were then compared with those of the renowned ICA algorithms Fast ICA and Radical, and evaluated using the performance measures Mean Square Error (MSE), Percentage Root Mean Square Difference (PRD) and Signal to Noise Ratio (SNR). Results: Experiments revealed that the Translation-Invariant Wavelet Transform had the smallest MSE and PRD while having the largest SNR. Conclusion/Recommendations: This indicates that it performed better than the ICA algorithms, producing cleaner EEG signals which can influence diagnosis as well as clinical studies of the brain.
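Translation-invariant wavelet denoising is commonly implemented by "cycle spinning": denoise circularly shifted copies of the signal and average the unshifted results, which suppresses the shift-dependent artifacts of ordinary thresholding. A minimal sketch using a single-level Haar soft-threshold denoiser; the threshold and test signal are invented, and this is not the record's exact pipeline:

```python
import numpy as np

def soft_haar_denoise(x, thresh):
    """Single-level Haar transform with soft-thresholded details (helper)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def cycle_spin_denoise(x, thresh=0.3, shifts=8):
    """Translation-invariant denoising: average the denoised signal over
    circular shifts, cancelling shift-dependent artifacts."""
    acc = np.zeros_like(x)
    for s in range(shifts):
        shifted = np.roll(x, s)
        acc += np.roll(soft_haar_denoise(shifted, thresh), -s)
    return acc / shifts

# Noisy step signal (a stand-in for an EEG trace)
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(128), np.ones(128)])
noisy = clean + 0.2 * rng.standard_normal(256)
den = cycle_spin_denoise(noisy)
```

Averaging over shifts reduces the ringing that a fixed wavelet grid produces around sharp transitions.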

  1. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with the Chen's hyper-chaotic system to control the control-NOT operation, which is used to encode gray-level information. The initial conditions of the Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has larger key space, higher key sensitivity, stronger resistance of statistical analysis and lower computational complexity than its classical counterparts.

  2. Content Based Image Indexing and Retrieval

    OpenAIRE

    Bhute, Avinash N; B B Meshram

    2014-01-01

    In this paper, we present an efficient content-based image retrieval system which employs the color, texture and shape information of images to facilitate the retrieval process. For efficient feature extraction, we extract the color, texture and shape features of images automatically using edge detection, which is widely used in signal processing and image compression. To facilitate speedy retrieval, we implement the antipole-tree algorithm for indexing the images.

  3. SPOT Controlled Image Base 10 meter

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — SPOT Controlled Image Base 10 meter (CIB-10) is a collection of orthorectified panchromatic (grayscale) images. The data were acquired between 1986 and 1993 by the...

  4. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    OpenAIRE

    Jaykrishna Joshi; Dattatray Bade

    2015-01-01

    The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective retrieval techniques for images are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In this proposed system we investigate a method for describing the contents of images which characterizes images by global des...

  5. ECG signals denoising using wavelet transform and independent component analysis

    Science.gov (United States)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First of all, two-channel exercise ECG signals are acquired. We decompose these two channel ECG signals into eight layers and sum the useful wavelet coefficients separately, obtaining two channel ECG signals free of baseline drift and other interference components. However, they still contain electrode movement noise, power frequency interference and other interference. Secondly, we take these two processed channels, together with one manually constructed channel, and apply independent component analysis to obtain the separated ECG signal; the residual noises are removed effectively. Finally, a comparative experiment is made between the same two channels of exercise ECG signals processed directly with independent component analysis and the method proposed in this paper, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error (MSE) decreases by 2.522, proving that the proposed method has high reliability.

  6. Non-local MRI denoising using random sampling.

    Science.gov (United States)

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local mean (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method can achieve a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
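The random-sampling idea can be sketched in 2D. The record works on 3D MRI and adds a structure-tensor sampling pattern; here the subset is drawn uniformly at random, and all parameters are illustrative:

```python
import numpy as np

def snlm_denoise(img, patch=3, search=7, h=0.2, ratio=0.2, seed=0):
    """Sampling-based non-local means: each pixel's patch is compared
    against a random subset of patches in its search window rather than
    all of them."""
    rng = np.random.default_rng(seed)
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode="reflect")
    out = np.zeros_like(img)
    n_samples = max(1, int(ratio * search ** 2))   # e.g. ~20% of the window
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + r + s, j + r + s
            ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
            # draw a random subset of offsets inside the search window
            offs = rng.integers(-s, s + 1, size=(n_samples, 2))
            num = den = 0.0
            for di, dj in offs:
                cand = pad[ci + di - r:ci + di + r + 1,
                           cj + dj - r:cj + dj + r + 1]
                w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                num += w * pad[ci + di, cj + dj]
                den += w
            out[i, j] = num / den
    return out

# Piecewise-constant test image with additive Gaussian noise
rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = snlm_denoise(noisy)
```

The inner loop touches only `n_samples` patches instead of all `search**2`, which is where the claimed speedup comes from.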

  7. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    P Manipoonchelvi; K Muneeswaran

    2014-04-01

    Multimedia information retrieval systems continue to be an active research area in the world of huge and voluminous data. The paramount challenge is to translate or convert a visual query from a human and find similar images or videos in large digital collections. In this paper, a technique of region-based image retrieval, a branch of content-based image retrieval, is proposed. The proposed model does not need prior knowledge or full semantic understanding of image content. It identifies significant regions in an image based on a feature-based attention model which mimics the viewer's attention. The Curvelet Transform, in combination with colour descriptors, is used to represent each significant region in an image. Experimental results are analysed and compared with a state-of-the-art region-based image retrieval technique.

  8. Close Clustering Based Automated Color Image Annotation

    OpenAIRE

    Garg, Ankit; Dwivedi, Rahul; Asawa, Krishna

    2010-01-01

    Most image-search approaches today are based on text tags associated with the images, which are mostly human generated and subject to various kinds of errors. The results of a query to the image database can thus often be misleading and may not satisfy the requirements of the user. In this work we propose an approach to automate this tagging process, where generated image results can be finely filtered based on a probabilistic tagging mechanism. We implement a tool which...

  9. Optical image hiding based on computational ghost imaging

    Science.gov (United States)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the era of big data, providing copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose a secret key. The least significant bit algorithm is adopted to embed the watermark, and both a second-order correlation algorithm and a compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image cannot be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image cropping degradations.
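    A minimal sketch of the least-significant-bit embedding step the abstract mentions; the function names and bit layout are illustrative, and the encryption and ghost-imaging stages of the actual scheme are omitted:

```python
import numpy as np

def lsb_embed(host, bits):
    """Write a flat bit sequence into the least significant bits of the host
    image, in row-major order (a generic LSB sketch, not the paper's scheme)."""
    flat = np.asarray(host, dtype=np.uint8).ravel().copy()
    bits = np.asarray(bits, dtype=np.uint8)
    if bits.size > flat.size:
        raise ValueError("watermark too large for host image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | (bits & 1)
    return flat.reshape(np.asarray(host).shape)

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return np.asarray(stego, dtype=np.uint8).ravel()[:n_bits] & 1
```

    Because only the lowest bit of each pixel changes, the stego image differs from the host by at most one gray level per pixel.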

  10. Close Clustering Based Automated Color Image Annotation

    CERN Document Server

    Garg, Ankit; Asawa, Krishna

    2010-01-01

    Most image-search approaches today are based on the text based tags associated with the images which are mostly human generated and are subject to various kinds of errors. The results of a query to the image database thus can often be misleading and may not satisfy the requirements of the user. In this work we propose our approach to automate this tagging process of images, where image results generated can be fine filtered based on a probabilistic tagging mechanism. We implement a tool which helps to automate the tagging process by maintaining a training database, wherein the system is trained to identify certain set of input images, the results generated from which are used to create a probabilistic tagging mechanism. Given a certain set of segments in an image it calculates the probability of presence of particular keywords. This probability table is further used to generate the candidate tags for input images.

  11. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video-teleconference applications is presented. Semantic-based coding using human image features is realized, with the features adopted as coding parameters. Model-based coding and the concept of vector coding are combined with image feature extraction to obtain the result.

  12. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of an important biomedical signal, the EEG, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the stationary wavelet transform (SWT).

  13. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected-components algorithm, which we call parallel components. In the modern world, many doctors need image segmentation as a service for various purposes, and they expect the system to run fast and securely. Conventional segmentation algorithms, despite several ongoing research efforts, often do not run fast enough. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data, as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process with a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis: the method provides faster execution times for segmentation than the conventional approach. Our test data are CT scan images from a medical database. More efficient implementations of image segmentation will likely yield even faster execution times.

  14. Survey paper on Sketch Based and Content Based Image Retrieval

    OpenAIRE

    Gaidhani, Prachi A.; S. B. Bagal

    2015-01-01

    This survey paper presents an overview of the development of Sketch Based Image Retrieval (SBIR) and Content Based Image Retrieval (CBIR) over the past few years. The volume of images has grown enormously, as have their far-flung applications across many fields. The main attributes used to represent and index images are color, shape, texture and spatial layout. These features are extracted to check similarity among images. Generation of a special query is the main problem of content based ...

  15. Image Segmentation Using Parametric Contours With Free Endpoints

    Science.gov (United States)

    Benninghoff, Heike; Garcke, Harald

    2016-04-01

    In this paper, we introduce a novel approach for active contours with free endpoints. A scheme is presented for image segmentation and restoration based on a discrete version of the Mumford-Shah functional where the contours can be both closed and open curves. Additional to a flow of the curves in normal direction, evolution laws for the tangential flow of the endpoints are derived. Using a parametric approach to describe the evolving contours together with an edge-preserving denoising, we obtain a fast method for image segmentation and restoration. The analytical and numerical schemes are presented followed by numerical experiments with artificial test images and with a real medical image.
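    The energy that such schemes discretize is, in its classical form, the Mumford-Shah functional; the notation below is generic ($u_0$ the given image on domain $\Omega$, $u$ the piecewise-smooth approximation, $\Gamma$ the contour set, $\mu,\nu$ weighting parameters), not the paper's exact discrete version:

```latex
E(u,\Gamma) \;=\; \int_{\Omega\setminus\Gamma} \lvert\nabla u\rvert^{2}\,\mathrm{d}x
\;+\; \nu \int_{\Omega} \bigl(u-u_{0}\bigr)^{2}\,\mathrm{d}x
\;+\; \mu\,\mathrm{Length}(\Gamma)
```

    Minimizing over both $u$ and $\Gamma$ simultaneously performs restoration (the first two terms) and segmentation (the length penalty on the contours); allowing $\Gamma$ to be an open curve is what requires the additional evolution laws for the endpoints.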

  16. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Image processing is applied in many fields and for many purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but have few applications in control systems. A stereo image contains pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  17. Developing stereo image based robot control system

    International Nuclear Information System (INIS)

    Image processing is applied in many fields and for many purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but have few applications in control systems. A stereo image contains pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  18. Image processing technique based on image understanding architecture

    Science.gov (United States)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications rests directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the solution of the image understanding problem. This article presents a generic computational framework necessary for the solution of the image understanding problem -- the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. Dual representation provides natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and are able to perform graph and diagrammatic operations, which are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  19. Application of Wavelet Transform in Speech Denoising

    Institute of Scientific and Technical Information of China (English)

    马建芬

    2001-01-01

    A new speech denoising method based on the wavelet transform is presented. By filtering the signal in the wavelet domain, the frequency components acceptable to the human ear can be extracted from the noisy speech, yielding a simple and effective speech denoising method.
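    Wavelet-domain filtering of this kind can be illustrated with a one-level Haar decomposition and soft thresholding of the detail coefficients; this is a generic minimal sketch (real speech denoising would use deeper decompositions and perceptually motivated thresholds), not the paper's algorithm:

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (even-length input)."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    # soft threshold: shrink small detail coefficients toward zero
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    # inverse Haar transform
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y
```

    With threshold 0 the transform is perfectly invertible; with a large threshold, each output pair collapses to its local average, i.e. the high-frequency noise component is suppressed.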

  20. Non Local Spatial and Angular Matching: Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising.

    Science.gov (United States)

    St-Jean, Samuel; Coupé, Pierrick; Descoteaux, Maxime

    2016-08-01

    Diffusion magnetic resonance imaging (MRI) datasets suffer from low Signal-to-Noise Ratio (SNR), especially at high b-values. Data acquired at high b-values contain relevant information and are now of great interest for microstructural and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to a false and biased estimation of the diffusion parameters. Additionally, the use of in-plane acceleration techniques during the acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data, without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small 4D overlapping patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in-vivo high resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks and improves the reproducibility of tractography on the synthetic dataset. On the 1.2 mm high resolution in-vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition. Our

  1. Multiple-image encryption based on computational ghost imaging

    Science.gov (United States)

    Wu, Jingjing; Xie, Zhenwei; Liu, Zhengjun; Liu, Wei; Zhang, Yan; Liu, Shutian

    2016-01-01

    We propose an optical multiple-image encryption scheme based on computational ghost imaging with the position multiplexing. In the encryption process, each plain image is encrypted into an intensity vector by using the computational ghost imaging with a different diffraction distance. The final ciphertext is generated by superposing all the intensity vectors together. Different from common multiple-image cryptosystems, the ciphertext in the proposed scheme is simply an intensity vector instead of a complex amplitude. Simulation results are presented to demonstrate the validity and security of the proposed multiple-image encryption method. The multiplexing capacity of the proposed method is also investigated. Optical experiment is presented to verify the validity of the proposed scheme in practical application.

  2. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach.

    Directory of Open Access Journals (Sweden)

    Pasquale Borrelli

    Full Text Available In susceptibility-weighted imaging (SWI), the high resolution required to obtain proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR while still preserving small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potentially multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoration pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also improved the contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase-mask multiplication than the one suggested in the literature for SWI vessel imaging. We conclude that proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.

  3. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available To display high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new method for detail enhancement of infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts introduced by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then we refine the base layer by using modified histogram projection for compression, while adaptive weights derived from the layer decomposition serve as strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
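    The base/detail split with detail gain can be illustrated with a simple box filter standing in for the propagated image filter; the function name and gain value are illustrative assumptions, and the histogram-projection compression of the base layer is omitted:

```python
import numpy as np

def enhance_detail(image, gain=2.0):
    """Split an image into base + detail with a 3x3 box filter (a stand-in
    for the propagated image filter) and amplify the detail layer."""
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    # 3x3 box filter: average the nine shifted copies of the padded image
    base = sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    detail = img - base           # high-frequency residual
    return base + gain * detail   # recombine with amplified detail
```

    Flat regions pass through unchanged (the detail layer is zero there), while edges and fine structure are boosted by the gain.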

  4. Statistical x-ray computed tomography imaging from photon-starved measurements

    Science.gov (United States)

    Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-03-01

    Dose reduction in clinical X-ray computed tomography (CT) causes low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they reach the limits of practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise leads to measured photon counts taking on negative values, posing a problem for the log() operation in data preprocessing. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.
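    The problem with the log() step, and the simplest (non-Bayesian) fix of clamping non-positive counts, can be sketched as follows; the function name and `floor` value are illustrative assumptions, not the paper's correction methods:

```python
import numpy as np

def safe_line_integral(counts, blank_scan, floor=0.1):
    """Beer-Lambert line integral p = ln(I0 / I). Electronic noise can push
    measured counts I to zero or below, where the log is undefined, so the
    counts are clamped to a small positive floor first (a crude stand-in for
    the paper's adaptive-filter or Bayesian corrections)."""
    clamped = np.maximum(np.asarray(counts, dtype=float), floor)
    return np.log(blank_scan / clamped)
```

    The clamp keeps every projection value finite, at the cost of a bias in the starved channels, which is exactly what motivates the more principled corrections the abstract proposes.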

  5. Image edge detection based on beamlet transform

    Institute of Scientific and Technical Information of China (English)

    Li Jing; Huang Peikang; Wang Xiaohu; Pan Xudong

    2009-01-01

    Combining the beamlet transform with steerable filters, a new edge detection method based on line gradient is proposed. Compared with operators based on point local properties, the edge-detection results with this method achieve higher SNR and position accuracy, and are quite helpful for image registration, object identification, etc. Edge-detection experiments on optical and SAR images that demonstrate the significant improvement over classical edge operators are also presented. Moreover, a template matching result based on edge information from an optical reference image and a SAR image further proves the validity of the method.

  6. Depth-based selective image reconstruction using spatiotemporal image analysis

    Science.gov (United States)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that obviates physical tour inspections is often considered desirable. However, the image sequence obtained from a mobile inspection robot is hard to interpret, because the objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique that removes needless objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis, which enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real-time sequence obtained from the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.

  7. Path planning for image based visual servoing

    OpenAIRE

    Fioravanti, Duccio

    2008-01-01

    This thesis deals with Visual Servoing and its closely connected disciplines: projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven by an on-line feedback control loop that is closed directly in the 2D space of the c...

  8. Image Based Authentication Using Steganography Technique

    OpenAIRE

    Satish Kumar Sonker; Sanjeev Kumar; Amit Kumar; Dr. Pragya Singh

    2013-01-01

    In the world of information security, we generally use traditional (text based) or multi-factor authentication approaches, which suffer from many problems and are also less secure. With these conventional methods, attacks such as brute-force and dictionary attacks are possible. This paper proposes image based authentication using a steganography technique, taking advantage of steganography together with the image. Including steganography in image...

  9. Agent Based Image Segmentation Method : A Review

    OpenAIRE

    Pooja Mishra; Navita Srivastava; Shukla, K. K.; Achintya Singlal

    2011-01-01

    Image segmentation is an important research area in computer vision and many segmentation methods have been proposed. This paper attempts to provide a brief overview of elemental segmentation techniques based on boundary or regional approaches, focusing mainly on agent based image segmentation techniques.

  10. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of Image Based Rendering approaches and their use in Virtual Reality, including Virtual Photography and Cinematography, and Mobile Robot Navigation.

  11. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low-level features, such as color, texture, shape and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining information from the image features color, shape and texture. The color feature is extracted using color histograms for image blocks; for the shape feature the Canny edge detection algorithm is used; and block-wise HSB extraction is used for the texture feature. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that fusing multiple features gives better retrieval results than the alternative approach used by Rao et al. This paper presents a comparative study of the performance of these two CBIR approaches, both using the image features color, shape and texture.
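    Block-wise histogram features compared by Euclidean distance, the first ingredient described above, might look like the following sketch; it uses grayscale rather than color histograms, and the block and bin counts are illustrative assumptions:

```python
import numpy as np

def block_histograms(image, blocks=2, bins=8):
    """Concatenate normalized per-block grayscale histograms into one
    feature vector (a simplified stand-in for block color histograms)."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            patch = img[by * h // blocks:(by + 1) * h // blocks,
                        bx * w // blocks:(bx + 1) * w // blocks]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            feats.append(hist / max(patch.size, 1))  # normalize per block
    return np.concatenate(feats)

def feature_distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(f1 - f2))
```

    In a full multi-feature system, distances from several such descriptors (color, shape, texture) would be computed separately and then fused, e.g. by averaging.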

  12. An ECG Denoising Algorithm Based on Morphology and Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    陈刚; 唐明浩; 程晖; 戈曼

    2012-01-01

    The wavelet transform is widely used to remove baseline drift, power-line interference and EMG interference from ECG signals. Because the wavelet algorithm alone has shortcomings in dealing with ECG signals, a hybrid algorithm combining wavelet thresholding and mathematical morphology is proposed. The algorithm removes baseline drift with a nonlinear morphological filter; the resulting ECG signal, still containing high-frequency interference, is then processed by wavelet thresholding, finally yielding a nearly noise-free ECG signal. The algorithm was validated on the MIT/BIH Arrhythmia Database: the three main types of noise were removed effectively, the algorithm performed well, and it lays the foundation for subsequent recognition of ECG feature points.
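    The morphological baseline-removal step can be sketched with grayscale opening and closing using a flat structuring element; the function names and the element width are illustrative assumptions, not the paper's exact filter:

```python
import numpy as np

def grey_open_close_baseline(signal, width=5):
    """Estimate slow baseline drift by grayscale morphological opening
    followed by closing (flat structuring element of `width` samples),
    then subtract the estimate from the signal."""
    x = np.asarray(signal, dtype=float)

    def erode(s):
        p = np.pad(s, width // 2, mode="edge")
        return np.min([p[i:i + len(s)] for i in range(width)], axis=0)

    def dilate(s):
        p = np.pad(s, width // 2, mode="edge")
        return np.max([p[i:i + len(s)] for i in range(width)], axis=0)

    opened = dilate(erode(x))         # opening removes narrow positive peaks
    baseline = erode(dilate(opened))  # closing removes narrow negative dips
    return x - baseline
```

    Narrow features such as QRS complexes are excluded from the baseline estimate (opening flattens them), so they survive the subtraction while slow drift is removed.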

  13. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are often corrupted by noise, which makes data postprocessing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filtering algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. First, we use the classical Shearlet transform to decompose the noisy image into many subbands across multiple scales and orientations. Second, we assign a weighting factor to each subband obtained. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of the weighted subbands. After that, we design a fast, rough evaluation method to estimate the noise level of the new image; using this measure as the fitness function, we apply PSO to find the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).

  14. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

    Full Text Available There are a number of prevailing methods for image mining. This paper combines the features of four techniques: color histogram, color moments, texture, and the Edge Histogram Descriptor. The nature of an image is basically based on human perception of the image, whereas machine interpretation is based on the contours and surfaces of the image. The study of image mining is a very challenging task because it involves pattern recognition, an important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moments, texture, and the Edge Histogram Descriptor, with provision to add new features in the future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances for each feature are calculated, summed and averaged. The user interface is provided by Matlab. The image properties analyzed in this work use computer vision and image processing algorithms: for color, image histograms are computed; for texture, co-occurrence-matrix-based entropy, energy, etc., are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval, images are ranked by the average of the four feature distances.

  15. Content-Based Image Retrial Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval method based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Extensive experimental results show that the method retrieves images by content effectively on large-scale clusters and image sets.

  16. Graph Cuts based Image Segmentation using Fuzzy Rule Based System

    Directory of Open Access Journals (Sweden)

    M. R. Khokher

    2012-12-01

    Full Text Available This work deals with the segmentation of gray scale, color and texture images using graph cuts. From the input image, a graph is constructed using the intensity, color and texture profiles of the image simultaneously. Based on the nature of the image, a fuzzy rule based system is designed to find the weight that should be given to each image feature during graph construction. The graph obtained from the fuzzy rule based weighted average of the different image features is then used in the normalized graph cuts framework. The graph is iteratively bi-partitioned through the normalized graph cuts algorithm to obtain optimal partitions, resulting in the segmented image. The Berkeley segmentation database is used to test our algorithm, and the segmentation results are evaluated through the probabilistic Rand index, global consistency error, sensitivity, positive predictive value and the Dice similarity coefficient. It is shown that the presented segmentation method provides effective results for most types of images.

  17. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten;

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains, demonstrating the superiority of the kernel-based MAF method.

  18. Iris image Segmentation Based on Phase Congruency

    Institute of Scientific and Technical Information of China (English)

    GAO Chao; JIANG Da-qin; Guo Yong-cai

    2006-01-01

    Iris image segmentation is very important for an iris recognition system. There are always iris noises such as eyelashes, eyelids, reflections and the pupil in iris images. The paper proposes a method of segmentation. By locating and normalizing iris images with Gabor filters, we can acquire information on image texture at a series of frequencies and orientations. Iris noise regions are determined based on phase congruency by a group of Gabor filters whose kernels are suitable for edge detection. These regions are filled according to the characteristics of iris noise. The experimental results show that the proposed method can segment iris images effectively.

  19. A gray-natural logarithm ratio bilateral filtering method for image processing

    Institute of Scientific and Technical Information of China (English)

    Guanan Chen; Kuntao Yang; Rong Chen; Zhiming Xie

    2008-01-01

    A new method based on gray-natural logarithm ratio bilateral filtering is presented for image smoothing in this work. A new gray-natural logarithm ratio range filter kernel, whose magnitude adapts to local gray-level differences in the image, is proposed for the bilateral filtering. The new method not only restrains noise well but also keeps more of the weak edges and details of an image, and preserves the original color transitions of color images. Experimental results show the effectiveness of our method for image denoising.
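The abstract does not give the exact form of the gray-natural logarithm ratio kernel, so the sketch below is an assumption: a standard bilateral filter for grayscale images whose range weight is computed from the natural-log ratio of gray levels rather than their difference. The function name and parameter values are illustrative only.

```python
import numpy as np

def bilateral_log_ratio(img, radius=2, sigma_s=2.0, sigma_r=0.5):
    """Bilateral filter whose range weight uses a gray log-ratio.
    Illustrative sketch; assumes a non-negative 2-D grayscale array."""
    img = img.astype(np.float64) + 1.0          # shift to avoid log(0)
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed domain kernel
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight from the natural-log ratio of gray levels
            rng = np.exp(-np.log(win / img[i, j]) ** 2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out - 1.0
```

Because the range weight depends on the ratio of gray levels rather than their difference, equal relative contrasts are treated alike in dark and bright regions.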

  20. Self-alignment algorithm without latitude for SINS based on gravitational apparent motion and wavelet denoising

    Institute of Scientific and Technical Information of China (English)

    刘锡祥; 杨燕; 黄永江; 宋清

    2016-01-01

    The double-vector attitude determination algorithm in the inertial frame takes two gravitational apparent motion vectors as non-collinear vectors. Although this method solves the traditional algorithm's problem that the information is susceptible to angular motion disturbance on a swaying base, it still needs accurate latitude information to participate in the alignment calculation. To fulfill the alignment of a strapdown inertial navigation system without aided latitude information, a self-alignment method using three gravitational apparent motion vectors is designed. In this method, the alignment problem is reduced to solving the attitude matrix between the current navigation frame and the initial body frame, and is solved with vector operations. Simulation results indicate that random noise in the accelerometer is projected into the gravitational apparent motion vectors and decreases the alignment accuracy, even causing alignment failure when the noise is large. For denoising, the Daubechies (db4) wavelet is introduced to decompose the gravitational apparent motions into 5 layers, and three denoised apparent motion vectors are selected to participate in the alignment. Simulation results indicate that db4 offers excellent denoising effects and that the alignment method with three apparent motion vectors and db4 in the inertial frame can fulfill the alignment without aided latitude information.

  1. Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment.

    Science.gov (United States)

    Ithapu, Vamsi K; Singh, Vikas; Okonkwo, Ozioma C; Chappell, Richard J; Dowling, N Maritza; Johnson, Sterling C

    2015-12-01

    The mild cognitive impairment (MCI) stage of Alzheimer's disease (AD) may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial and some may not have underlying AD pathology. Identifying those MCIs who are most likely to decline during a trial and thus most likely to benefit from treatment will improve trial efficiency and power to detect treatment effects. To this end, using multimodal, imaging-derived, inclusion criteria may be especially beneficial. Here, we present a novel multimodal imaging marker that predicts future cognitive and neural decline from [F-18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging, based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that using rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared with the no-enrichment regime and leads to smaller trials with high statistical power, compared with existing methods. PMID:26093156

  2. Image Based Rendering under Varying Illumination

    Institute of Scientific and Technical Information of China (English)

    Wang Chengfeng (王城峰); Hu Zhanyi

    2003-01-01

    A new approach for photorealistic rendering of a class of objects at arbitrary illumination is presented. The approach of the authors relies entirely on image based rendering techniques. A scheme is utilized for re-illumination of objects based on linear combination of low dimensional image representations. The minimum rendering condition of the authors' technique is three sample images under varying illumination of a reference object and a single input image of the object of interest. Important properties of this approach are its simplicity, robustness and speed. Experimental results validate the proposed rendering approach.

  3. A Survey: Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Javeria Ami

    2014-05-01

    Full Text Available The field of image processing is addressed significantly by the role of CBIR. Content based image retrieval depends primarily on the query: relevant information is supplied by submitting sketches, drawings, or images with similar features. Many algorithms are used for the extraction of features of a similar nature, and the process can be optimized by using feedback from the retrieval step. Analysis of colour and shape can be done on the visual contents of the image. Here, neural network and relevance feedback techniques for image retrieval are discussed.

  4. Color –Based Image Retrieval in Image Database System

    Directory of Open Access Journals (Sweden)

    Gunja Varshney

    2011-11-01

    Full Text Available Image Databases (IDBs) are a special kind of Spatial Databases where a large number of images are stored and queried. IDBs find a plethora of applications in modern life, e.g. in Medical, Multimedia, Educational Applications, etc. Data in an IDB may be stored in raster or vector format. Each of these data formats has certain properties and, in several cases, the choice between them is a challenge. Raster data lead to fast computing of several operations and they are well suited to remote sensing. On the other hand, they have a fixed resolution, leading to limited detail. In this article, we focus on raster data. We present the design and architecture of an Image Database System where several query types are supported. These include: queries about the additional properties (descriptive information) that have been recorded for each image (e.g. which images have been used as covers of children's books), queries about the color characteristics (color features) of the images (e.g. find the images that depict vivid blue), and queries by example or sketch (e.g. a sample image is chosen, or drawn by the user, and images color-similar to this sample are sought). Color retrieval is achieved by utilizing color histograms. The development of our system is based on non-specialized tools: a relational database, Visual Basic and the computer's file system. The user interface of the system aims at increased ease of use. It permits the management of the collection of images and the effective querying of the images by all the above query types and their combinations.

  5. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates the malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded with convolution codes for tamper detection and localization, and the authentication messages are hidden in the low bits of the image in order to keep the authentication invisible. All communications channels are subject to errors introduced by additive Gaussian noise in their environment. Data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and by decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolution encoded with the encoder; after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can thus be detected and restored without accessing the original image.

  6. Image File Security using Base-64 Algorithm

    Directory of Open Access Journals (Sweden)

    Pooja Guwalani

    2014-12-01

    Full Text Available Information security is becoming a vital component of any data storage and transmission operation. Since visual representation of data is gaining importance, data in the form of images is used to exchange and convey information between entities. As the use of digital techniques for transmitting and storing images increases, it becomes important to protect the confidentiality, integrity and authenticity of images. Various techniques have been devised to encrypt images and make them more secure. The primary goal of this paper is security management: a mechanism to provide authentication of users and to ensure the integrity, accuracy and safety of images. Moreover, image-based data requires more effort during encryption and decryption. In this paper, we describe how the Base64 algorithm can be used to achieve this purpose.
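The encoding step described above can be sketched with Python's standard `base64` module. Note that Base64 is an encoding, not a cipher, so on its own it provides transport safety rather than confidentiality; the function names below are illustrative.

```python
import base64

def encode_image_bytes(raw: bytes) -> str:
    """Encode raw image bytes to a Base64 text string for safe storage/transport."""
    return base64.b64encode(raw).decode("ascii")

def decode_image_bytes(text: str) -> bytes:
    """Recover the original image bytes from the Base64 text."""
    return base64.b64decode(text.encode("ascii"))
```

The round trip is lossless: decoding the encoded text yields the original bytes exactly.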

  7. ROV Based Underwater Blurred Image Restoration

    Institute of Scientific and Technical Information of China (English)

    LIU Zhishen; DING Tianfu; WANG Gang

    2003-01-01

    In this paper, we present a method of ROV based image processing to restore underwater blurred images, starting from the theory of light and image transmission in the sea. A computer is used to simulate the maximum detection range of the ROV under different water body conditions. The receiving irradiance of the video camera at different detection ranges is also calculated, and the ROV's detection performance under different water body conditions is given by simulation. We restore the underwater blurred images using the Wiener filter, based on the simulation. The Wiener filter is shown to be a simple, useful method for underwater image restoration in the ROV underwater experiments. We also present examples of restored images of an underwater standard target taken by the video camera in these experiments.

  8. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    Directory of Open Access Journals (Sweden)

    Jaykrishna Joshi

    2015-02-01

    Full Text Available The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In the proposed system we investigate a method for describing image content that characterizes images by global descriptor attributes: global features are extracted to make the system more efficient, using the color features of color expectancy, color variance and skewness, together with the texture feature of correlation.
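The global color features named above (color expectancy, variance and skewness) correspond to the first three color moments computed per channel. A minimal sketch, assuming a numeric H×W×C image array (the texture-correlation feature from the abstract is omitted):

```python
import numpy as np

def color_moments(img):
    """First three color moments per channel: mean (expectancy),
    variance, and skewness, concatenated into one feature vector."""
    feats = []
    for c in range(img.shape[2]):
        ch = img[..., c].astype(np.float64).ravel()
        mean = ch.mean()
        var = ch.var()
        std = np.sqrt(var)
        # skewness of a constant channel is defined here as 0
        skew = 0.0 if std == 0 else np.mean(((ch - mean) / std) ** 3)
        feats.extend([mean, var, skew])
    return np.array(feats)
```

For retrieval, such vectors are typically compared with a simple distance (e.g. Euclidean) between the query image and each database image.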

  9. Pumilio-based RNA in vivo imaging.

    Science.gov (United States)

    Tilsner, Jens

    2015-01-01

    Subcellular, sequence-specific detection of RNA in vivo is a powerful tool to study the macromolecular transport that occurs through plasmodesmata. The RNA-binding domain of Pumilio proteins can be engineered to bind RNA sequences of choice and fused to fluorescent proteins for RNA imaging. This chapter describes the construction of a Pumilio-based imaging system to track the RNA of Tobacco mosaic virus in vivo, and practical aspects of RNA live-cell imaging. PMID:25287212

  10. Pumilio-based RNA in vivo imaging

    OpenAIRE

    Tilsner, Jens

    2015-01-01

    Subcellular, sequence-specific detection of RNA in vivo is a powerful tool to study the macromolecular transport that occurs through plasmodesmata. The RNA-binding domain of Pumilio proteins can be engineered to bind RNA sequences of choice and fused to fluorescent proteins for RNA imaging. This chapter describes the construction of a Pumilio-based imaging system to track the RNA of Tobacco mosaic virus in vivo, and practical aspects of RNA live-cell imaging.

  11. Curvelet Based Offline Analysis of SEM Images

    OpenAIRE

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis, of a scanning electron microscopy (SEM) image, is a time consuming process and requires continuous human intervention and efforts. This paper presents an image processing based method for automated offline analyses of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at the noise removal, in order to avoid false edges. For texture analysis, the proposed method ...

  12. Particle Pollution Estimation Based on Image Analysis

    OpenAIRE

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic...

  13. Sparse Image and Signal Processing: Wavelets and Related Geometric Multiscale Analysis

    OpenAIRE

    Starck, J-L.; Murtagh, Fionn; Fadili, J.

    2010-01-01

    This book presents the state of the art in sparse and multiscale image and signal processing, covering linear multiscale transforms, such as wavelet, ridgelet, or curvelet transforms, and non-linear multiscale transforms based on the median and mathematical morphology operators. Recent concepts of sparsity and morphological diversity are described and exploited for various problems such as denoising, inverse problem regularization, sparse signal decomposition, blind source separation, and com...

  14. Microphone Array Speech Enhancement Algorithm Based on Wavelet Threshold Denoising

    Institute of Scientific and Technical Information of China (English)

    高婉贞; 张玲华

    2014-01-01

    The delay-and-sum beamforming method, a typical microphone array speech enhancement algorithm, is only effective against incoherent noise and weakly coherent noise; if the speech is mixed with strongly coherent noise, the conventional method cannot remove it. To address this limitation, in this paper wavelet threshold denoising is combined with the conventional delay-and-sum beamforming speech enhancement algorithm, making it effective for removing coherent noise as well. It also reduces the time-delay estimation error caused by the noise, improving the accuracy of the delay estimation and making the final summation result better. Simulation results show that the improved method is effective, improving speech clarity and acceptability to the human ear.
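The delay-and-sum stage described above can be sketched as follows, assuming integer sample delays; the wavelet threshold denoising step and fractional-delay handling from the paper are omitted, and the function name is illustrative.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each microphone channel by its integer sample delay and average.
    `channels` is a list of 1-D arrays, `delays` the per-channel delay in
    samples. Averaging reinforces the aligned speech and attenuates
    incoherent noise."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    aligned = np.stack([np.asarray(ch, dtype=np.float64)[d:d + n]
                        for ch, d in zip(channels, delays)])
    return aligned.mean(axis=0)
```

In the paper's scheme, a wavelet threshold denoising pass would follow (or precede) this summation to suppress the coherent noise that averaging alone cannot remove.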

  15. Denoising of X-ray pulsar observed profile in the undecimated wavelet domain

    Science.gov (United States)

    Xue, Meng-fan; Li, Xiao-ping; Fu, Ling-zhong; Liu, Xiu-ping; Sun, Hai-feng; Shen, Li-rong

    2016-01-01

    The low intensity of the X-ray pulsar signal and the strong X-ray background radiation lead to a low signal-to-noise ratio (SNR) of the X-ray pulsar observed profile obtained through epoch folding, especially when the observation time is not long enough. This signifies the necessity of denoising the observed profile. In this paper, the statistical characteristics of the X-ray pulsar signal are studied, and a signal-dependent noise model is established for the observed profile. Based on this, a profile noise reduction method is developed that performs local linear minimum mean square error filtering in the undecimated wavelet domain. The detail wavelet coefficients are rescaled by multiplying their amplitudes by a locally adaptive factor, which is the local variance ratio of the noiseless coefficients to the noisy ones. All the nonstationary statistics needed in the algorithm are calculated from the observed profile, without a priori information. The results of experiments, carried out on simulated data obtained by the ground-based simulation system and real data obtained by the Rossi X-Ray Timing Explorer satellite, indicate that the proposed method is excellent in both noise suppression and preservation of peak sharpness, and it also clearly outperforms four widely accepted and used wavelet denoising methods in terms of SNR, Pearson correlation coefficient and root mean square error.

  16. Multi-wavelet image denoising based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    李万高; 赵雪梅

    2013-01-01

    To address the difficulty of threshold selection in multi-wavelet image denoising, this paper proposes optimizing multi-wavelet thresholds with a swarm intelligence method, the artificial bee colony (ABC) algorithm. The development and classification of swarm intelligence algorithms are reviewed in detail; the basic principle and workflow of the ABC algorithm are described, together with the concrete steps of multi-wavelet threshold optimization for image denoising; and the respective advantages and disadvantages of the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO) and ABC are compared. The proposed method is compared with GA-based and PSO-based multi-wavelet threshold optimization. Simulations demonstrate that the proposed algorithm can effectively remove Gaussian white noise and improve the peak signal-to-noise ratio (PSNR) of the image, achieving a good denoising effect.

  17. Tidal Flat LIDAR Image Denoising Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    谢璧霞; 史照良; 曹敏

    2009-01-01

    The accuracy of LIDAR data is affected by many factors; choosing a suitable model to optimize LIDAR imagery and separate the LIDAR observations from noise can effectively improve LIDAR data accuracy. This paper studies a wavelet-transform-based denoising model for LIDAR imagery, compares wavelet denoising with various conventional methods, and evaluates the denoising effect of each method with objective metrics. The results suggest that the wavelet transform method is better suited to LIDAR image denoising and retains more of the useful information.

  18. Research of an Improved Wavelet Threshold Denoising Method for Transformer Partial Discharge Signal

    Directory of Open Access Journals (Sweden)

    Fucheng You

    2013-02-01

    Full Text Available In order to overcome the discontinuity of the hard thresholding function and the tendency of the soft thresholding function to severely attenuate signal singularities, and thereby improve the denoising effect and detect transformer partial discharge signals more accurately, this paper puts forward an improved wavelet threshold denoising method, based on an analysis of the interference noise in transformer partial discharge signals and a study of various wavelet threshold denoising methods, in particular one that overcomes the shortcomings of the hard and soft thresholds. Simulation results show that the denoising effect of this method is greatly improved over the traditional hard and soft threshold methods. This method can be widely used in practical transformer partial discharge signal denoising.
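The hard and soft thresholding functions discussed above can be written down directly; the abstract does not give the paper's own improved function, so the third function below shows one well-known compromise (the firm, or semi-soft, threshold) purely as an illustrative example.

```python
import numpy as np

def hard_threshold(w, t):
    """Zero coefficients with |w| <= t, keep the rest unchanged
    (discontinuous at +-t)."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink all surviving coefficients toward zero by t
    (continuous, but biases large coefficients)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def firm_threshold(w, t1, t2):
    """Firm (semi-soft) threshold: zero below t1, identity above t2,
    linear interpolation in between -- continuous, yet it attenuates
    large coefficients less than soft thresholding does."""
    aw = np.abs(w)
    return np.where(aw <= t1, 0.0,
                    np.where(aw >= t2, w,
                             np.sign(w) * t2 * (aw - t1) / (t2 - t1)))
```

Improved threshold functions of this kind are exactly what the paper targets: continuity (unlike hard thresholding) without the constant bias on large coefficients (unlike soft thresholding).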

  19. Content-Based Image Retrial Based on Hadoop

    OpenAIRE

    DongSheng Yin; DeBo Liu

    2013-01-01

    Generally, time complexity of algorithms for content-based image retrial is extremely high. In order to retrieve images on large-scale databases efficiently, a new way for retrieving based on Hadoop distributed framework is proposed. Firstly, a database of images features is built by using Speeded Up Robust Features algorithm and Locality-Sensitive Hashing and then perform the search on Hadoop platform in a parallel way specially designed. Considerable experimental results show that it is abl...

  20. Region-based multimodal image fusion using ICA bases

    OpenAIRE

    Cvejic, N; Lewis, J.; Bull, DR; Canagarajah, CN

    2006-01-01

    In this paper, we present a novel region-based multimodal image fusion algorithm in the ICA domain. It uses segmentation to determine the most important regions in the input images and consequently fuses the ICA coefficients from the given regions. The proposed method exhibits considerably higher performance than the basic ICA algorithm and shows improvement over other state-of-the-art algorithms.

  1. Spin scan tomographic array-based imager.

    Science.gov (United States)

    Hovland, Harald

    2014-12-29

    This work presents a novel imaging device based on tomographic reconstruction. Similar in certain aspects to the earlier presented tomographic scanning (TOSCA) principle, it provides several important enhancements. The device described generates a stream of one-dimensional projections from a linear array of thin stripe detectors onto which the (circular) image of the scene is rotated. A two-dimensional image is then reproduced from the one-dimensional signals using tomographic processing techniques. A demonstrator is presented. Various aspects of the design and construction are discussed, and resulting images and movies are presented. PMID:25607168

  2. Phase Correlation Based Iris Image Registration Model

    Institute of Scientific and Technical Information of China (English)

    Jun-Zhou Huang; Tie-Niu Tan; Li Ma; Yun-Hong Wang

    2005-01-01

    Iris recognition is one of the most reliable personal identification methods. In iris recognition systems, image registration is an important component. Accurately registering iris images leads to higher recognition rate for an iris recognition system. This paper proposes a phase correlation based method for iris image registration with sub-pixel accuracy.Compared with existing methods, it is insensitive to image intensity and can compensate to a certain extent the non-linear iris deformation caused by pupil movement. Experimental results show that the proposed algorithm has an encouraging performance.
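The pixel-accuracy core of phase correlation registration can be sketched as follows; the sub-pixel refinement and iris-specific handling described in the abstract are omitted, and the function name is illustrative.

```python
import numpy as np

def phase_correlation_shift(img, ref):
    """Estimate the integer (dy, dx) circular translation of `img`
    relative to `ref` via the normalized cross-power spectrum."""
    R = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12               # keep only the phase
    corr = np.fft.ifft2(R).real          # a sharp peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img.shape
    # map peaks beyond the midpoint to negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because only the phase of the cross-power spectrum is kept, the peak location is largely insensitive to global intensity changes between the two images, which matches the insensitivity to image intensity claimed above.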

  3. Research of an Improved Wavelet Threshold Denoising Method for Transformer Partial Discharge Signal

    OpenAIRE

    Fucheng You; Ying Zhang

    2013-01-01

    In order to overcome the discontinuity of the hard thresholding function and the defect that the soft thresholding function severely attenuates singularities, improve the denoising effect and detect the transformer partial discharge signal more accurately, in this paper an improved wavelet threshold denoising method is put forward through analyzing the interference noise of transformer partial discharge signals and studying various wavelet threshold denoising meth...

  4. A Novel Technique for Transmission of M-Ary Signal through Wireless Fading Channel Using Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Md. Zahangir Alam

    2011-01-01

    Full Text Available The paper proposes a novel technique for reducing noise in M-ary signal transmission through a wireless fading channel, in which wavelet denoising plays the key role. The paper also explains that the conventional threshold-based technique is not capable of denoising M-ary quadrature amplitude modulated (M-QAM) signals, which have multilevel wavelet coefficients, through wireless fading channels. Detailed step-by-step wavelet decomposition and reconstruction processes for transforming a signal into wavelet coefficients are discussed using simulation software such as MATLAB. A 16-QAM modulated symbol transmitted through a Rician fading channel is weighted by a complex control variable that forces the mean of each detail coefficient, except the low-frequency component, to zero to enhance the noiseless property. Bit error rate (BER) simulation results are furnished to show the effectiveness of the proposed technique, and the root mean square deviation of the reconstructed signal from the original signal is used to express that effectiveness. Traditional denoising yields a very high percentage root mean square difference (PRD), above 90%, for a symbol transmitted through a noisy channel, while the proposed technique yields a PRD of only 10%. The simulation study reveals that the BER performance can be improved by using an appropriate control variable to force the mean of each detail coefficient to zero.

  5. Spatial chaos-based image encryption design

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In recent years, the chaos based cryptographic algorithms have suggested some new and efficient ways to develop secure image encryption techniques, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. In this paper, permutation and substitution methods are incorporated to present a stronger image encryption algorithm. Spatial chaotic maps are used to realize the position permutation, and to confuse the relationship between the cipher-image and the plain-image. The experimental results demonstrate that the suggested image encryption scheme has the advantages of large key space and high security; moreover, the distribution of grey values of the encrypted image has a random-like behavior.

  6. Spatial chaos-based image encryption design

    Institute of Scientific and Technical Information of China (English)

    LIU ShuTang; SUN FuYan

    2009-01-01

    In recent years, the chaos based cryptographic algorithms have suggested some new and efficient ways to develop secure image encryption techniques, but the drawbacks of small key space and weak security in one-dimensional chaotic cryptosystems are obvious. In this paper, permutation and substitution methods are incorporated to present a stronger image encryption algorithm. Spatial chaotic maps are used to realize the position permutation, and to confuse the relationship between the cipher-image and the plain-image. The experimental results demonstrate that the suggested image encryption scheme has the advantages of large key space and high security; moreover, the distribution of grey values of the encrypted image has a random-like behavior.
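The permutation stage of a chaos-based image cipher can be sketched as follows. This is an assumption-laden illustration only: it uses a one-dimensional logistic map to derive the permutation, whereas the paper uses spatial chaotic maps (and also adds a substitution stage), and the key and map-parameter values below are arbitrary examples.

```python
import numpy as np

def logistic_permutation(n, key=0.3567, mu=3.99, burn=100):
    """Derive a pixel permutation from a logistic-map orbit
    x_{k+1} = mu * x_k * (1 - x_k); the initial value acts as the key."""
    x = key
    for _ in range(burn):          # discard the transient
        x = mu * x * (1 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit)       # chaotic sequence -> permutation

def permute_image(img, perm):
    """Scramble pixel positions according to the permutation."""
    return img.ravel()[perm].reshape(img.shape)

def unpermute_image(img, perm):
    """Invert the scrambling with the same key-derived permutation."""
    flat = np.empty(img.size, dtype=img.dtype)
    flat[perm] = img.ravel()
    return flat.reshape(img.shape)
```

Sensitivity to the initial value `key` is what gives such schemes their key space; a receiver with a slightly different key derives an entirely different permutation.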

  7. Pixel-clarity-based multifocus image fusion

    Institute of Scientific and Technical Information of China (English)

    Zhenhua Li(李振华); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛)

    2004-01-01

    Due to the limited depth-of-field of optical lenses, it is difficult to get an image with all objects in focus. One way to overcome this problem is to take several images with different focus points and combine them into a single composite which contains all the regions fully focused. This paper describes a pixel-clarity-based multifocus image fusion algorithm. The characteristic of this approach is that the pixels of the fused image are selected from the clearest pixels in the input images according to pixel clarity criteria. For each pixel in the source images, the pixel clarity is calculated. The fusion procedure is performed by a selection mode according to the magnitude of pixel clarity. Consistency verification is performed on the selected pixels. Experiments show that the proposed algorithm works well in multifocus image fusion.

  8. Comic image understanding based on polygon detection

    Science.gov (United States)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique to produce digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.

  9. An image mosaic method based on corner

    Science.gov (United States)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of the traditional image mosaic, this paper describes a new algorithm for image mosaicking based on Harris corners. Firstly, the Harris operator, combined with a constructed low-pass smoothing filter based on spline functions and a circular search window, is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find matching pairs, and false matches are removed using random sample consensus. Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. The experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
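The Harris corner response underlying the first step can be sketched as follows. A simple 3x3 box filter stands in for the spline-based low-pass filter described in the abstract, so this is an illustrative baseline rather than the paper's exact operator.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor built from smoothed products of image gradients."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)

    def box(a):
        # 3x3 box smoothing (a stand-in for the paper's spline filter)
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    ixx, iyy, ixy = box(gx * gx), box(gy * gy), box(gx * gy)
    det = ixx * iyy - ixy ** 2
    tr = ixx + iyy
    return det - k * tr ** 2
```

Corners are then taken as local maxima of the response above a threshold; the registration and RANSAC steps operate on those points.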

  10. Edge Based CNN Image Segmentation Methods for Medical Imaging

    OpenAIRE

    ŢEPELEA Laviniu; GAVRILUŢ Ioan; GACSÁDI Alexandru

    2010-01-01

    This paper presents an analysis of CNN (Cellular Neural Networks) segmentation methods in medical imaging, based on edge detection. To increase the efficiency of segmentation in the case of medical images with noise, optimization of CNN templates is proposed. Due to parallel processing, the CNN methods are considered an advantageous solution for image processing.

  11. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF). For this purpose, images taken at various places of interest are being compared to an existing database containing images of these places at different angles and zoom. The time required for the matching progress in such an application is an important element. To this goal, the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background “noise” is minimized. The application is then able to classify these images, helping the user to better...

  12. Information Audit Based on Image Content Filtering

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    At present, network information audit systems are almost all based on text information filtering, but harmful information can be embedded directly into an image or image file by its provider in order to evade monitoring. This paper realizes an information audit system based on image content filtering. Taking pornographic program identification as an example, the system can monitor video containing abnormal human body information by matching texture characteristics against those defined in advance, which consist of contrast, energy, correlation and entropy measures, and so on.
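
The texture measures listed at the end (contrast, energy, correlation, entropy) are the classical Haralick features of a gray-level co-occurrence matrix (GLCM). A minimal sketch, assuming a single horizontal pixel offset and a pre-quantised image:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast, energy, correlation and entropy from a horizontal-offset
    gray-level co-occurrence matrix. `img` is a 2-D integer array already
    quantised to `levels` gray levels."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    P /= P.sum()  # joint probability of neighbouring gray-level pairs
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(P * (ii - jj) ** 2)
    energy = np.sum(P ** 2)
    mu_i, mu_j = np.sum(ii * P), np.sum(jj * P)
    sd_i = np.sqrt(np.sum(P * (ii - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(P * (jj - mu_j) ** 2))
    corr = np.sum(P * (ii - mu_i) * (jj - mu_j)) / (sd_i * sd_j + 1e-12)
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    return contrast, energy, corr, entropy

# A perfectly flat texture: zero contrast and entropy, maximal energy.
flat = np.zeros((8, 8), dtype=int)
c, e, r, h = glcm_features(flat)
```

A classifier for the audit system would then compare such feature vectors against those defined in advance for the monitored content classes.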

  13. Gradient-based compressive image fusion

    Institute of Scientific and Technical Information of China (English)

    Yang CHEN‡; Zheng QIN

    2015-01-01

    We present a novel image fusion scheme based on gradient and scrambled block Hadamard ensemble (SBHE) sampling for compressive sensing imaging. First, source images are compressed by compressive sensing, to facilitate the transmission of the sensor. In the fusion phase, the image gradient is calculated to reflect the abundance of its contour information. By compositing the gradient of each image, gradient-based weights are obtained, with which compressive sensing coefficients are achieved. Finally, inverse transformation is applied to the coefficients derived from fusion, and the fused image is obtained. Information entropy (IE), Xydeas’s and Piella’s metrics are applied as non-reference objective metrics to evaluate the fusion quality in line with different fusion schemes. In addition, different image fusion application scenarios are applied to explore the scenario adaptability of the proposed scheme. Simulation results demonstrate that the gradient-based scheme has the best performance, in terms of both subjective judgment and objective metrics. Furthermore, the gradient-based fusion scheme proposed in this paper can be applied in different fusion scenarios.
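
The gradient-based weighting idea can be sketched as follows. This toy version weights whole images by their total gradient magnitude; the actual scheme applies such weights to compressive-sensing coefficients rather than pixels.

```python
import numpy as np

def gradient_weights(imgs):
    """Per-image fusion weights proportional to total gradient magnitude,
    i.e. to the abundance of contour information in each source image."""
    totals = []
    for im in imgs:
        gy, gx = np.gradient(im.astype(float))
        totals.append(np.sqrt(gx ** 2 + gy ** 2).sum())
    g = np.array(totals)
    return g / g.sum()

def fuse(imgs):
    """Weighted average of the source images using gradient-based weights."""
    w = gradient_weights(imgs)
    return sum(wi * im for wi, im in zip(w, imgs))

# The image with more contour information receives the larger weight.
smooth = np.full((16, 16), 0.5)                      # no edges at all
detailed = np.zeros((16, 16)); detailed[::2] = 1.0   # strong horizontal edges
w = gradient_weights([smooth, detailed])
```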

  14. Coefficients de-noising with wavelet transform for magnetic flux leakage data obtained from oil pipeline

    Institute of Scientific and Technical Information of China (English)

    Han Wenhua; Que Peiwen

    2005-01-01

    This paper considers the problem of noise cancellation for magnetic flux leakage (MFL) data obtained from the inspection of oil pipelines. MFL data is contaminated by various sources of noise, which can considerably reduce the detectability of flaw signals. This paper presents a new approach for removing the system noise contained in MFL data by de-noising the wavelet-transform coefficients. Experimental results are presented to demonstrate the advantages of this approach over the conventional wavelet de-noising method.
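
The underlying coefficient de-noising step (transform, threshold the detail coefficients, invert) can be sketched with a one-level Haar transform. The abstract does not specify the wavelet or the threshold rule, so both are assumptions here:

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar wavelet transform of an even-length signal."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def denoise(x, thresh):
    """Zero small detail coefficients, keep the approximation: the basic
    coefficient de-noising step that wavelet methods build on."""
    a, d = haar_dwt(x)
    d = np.where(np.abs(d) > thresh, d, 0.0)  # hard thresholding
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.1 * rng.standard_normal(256)
restored = denoise(noisy, thresh=0.2)
```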

  15. Single Channel Speech Enhancement by De-noising Using Stationary Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A method of single channel speech enhancement is proposed by de-noising using the stationary wavelet transform. The approach developed herein processes multi-resolution wavelet coefficients individually and then reconstructs the recovered signal. The time-invariant characteristic of the stationary wavelet transform is particularly useful in speech de-noising. Experimental results show that the proposed speech enhancement by de-noising algorithm achieves an excellent balance between suppressing noise effectively and preserving as many characteristics of the original signal as possible. This de-noising algorithm offers superior performance in speech noise suppression.

  16. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often suffer from faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts the assumption that the degradation level caused by haze is the same within each region, which is similar to the Retinex theory, and uses a simple Gaussian filter to obtain the coarse medium transmission. Then, pixel-level fusion is performed between the initial and coarse medium transmissions. The proposed method can recover a high-quality haze-free image based on the physical model, and its complexity is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity than some state-of-the-art methods.
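
The first step, estimating the initial transmission from the dark channel prior, can be sketched as follows. The patch size and `omega` below are conventional illustrative values, not taken from the paper:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image (H x W x 3, values in [0, 1]):
    per-pixel minimum over color channels and a local patch."""
    mins = img.min(axis=2)
    r = patch // 2
    H, W = mins.shape
    out = np.empty_like(mins)
    for y in range(H):
        for x in range(W):
            out[y, x] = mins[max(0, y - r):y + r + 1,
                             max(0, x - r):x + r + 1].min()
    return out

def transmission(img, A, omega=0.95, patch=3):
    """Initial medium transmission from the dark channel prior:
    t = 1 - omega * dark_channel(I / A), where A is the atmospheric
    light (a scalar here for simplicity; usually per-channel)."""
    return 1.0 - omega * dark_channel(img / A, patch)

# A haze-free scene (some channel near zero everywhere) gives t close to 1.
scene = np.zeros((8, 8, 3)); scene[..., 0] = 0.8   # pure red scene
t = transmission(scene, A=1.0)

# A uniformly gray, hazy scene gives a correspondingly low transmission.
hazy = np.full((8, 8, 3), 0.7)
t_hazy = transmission(hazy, A=1.0)
```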

  17. Statistical denoising of signals in the S-transform domain

    Science.gov (United States)

    Weishi, Man; Jinghuai, Gao

    2009-06-01

    In this paper, the denoising of stochastic noise in the S-transform (ST) and generalized S-transform (GST) domains is discussed. First, the mean power spectrum (MPS) of white noise is derived in the ST and GST domains. The results show that the MPS varies linearly with the frequency in the ST and GST domains (with a Gaussian window). Second, the local power spectrum (LPS) of red noise is studied by employing the Monte Carlo method in the two domains. The results suggest that the LPS of Gaussian red noise can be transformed into a chi-square distribution with two degrees of freedom. On the basis of the difference between the LPS distribution of signals and noise, a denoising method is presented through hypothesis testing. The effectiveness of the method is confirmed by testing synthetic seismic data and a chirp signal.
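
The chi-square result yields a simple closed-form detection threshold: for two degrees of freedom, P(X > t) = exp(-t/2), so t = -2 ln(alpha). The following is a simplified sketch of the resulting hypothesis-test denoiser; the paper applies the test to the local power spectrum in the ST/GST domain, which is not reproduced here.

```python
import numpy as np

def chi2_2dof_threshold(alpha):
    """Detection threshold t for a chi-square variable with 2 degrees of
    freedom, so that P(X > t) = alpha (closed form: exp(-t/2) = alpha)."""
    return -2.0 * np.log(alpha)

def denoise_spectrum(power, noise_level, alpha=0.01):
    """Keep only samples whose power exceeds what noise alone would explain
    at significance alpha. Noise power is modelled as
    (noise_level / 2) * chi2(2), i.e. mean power equals noise_level."""
    t = chi2_2dof_threshold(alpha) * noise_level / 2.0
    return np.where(power > t, power, 0.0)

rng = np.random.default_rng(1)
# |z|^2 for complex Gaussian noise: sum of two squared normals, mean 2.
noise_power = rng.standard_normal(10000) ** 2 + rng.standard_normal(10000) ** 2
kept = np.mean(denoise_spectrum(noise_power, noise_level=2.0, alpha=0.01) > 0)
```

On pure noise, about a fraction `alpha` of the samples survives by construction, so any larger surviving region signals the presence of a deterministic component.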

  18. ENAS-RIF algorithm for image restoration

    Science.gov (United States)

    Yang, Yang; Yang, Zhen-wen; Shen, Tian-shuang; Chen, Bo

    2012-11-01

    Images of objects acquired by space-based systems working in an atmospheric turbulence environment, such as those used in astronomy and remote sensing, are inevitably and seriously blurred, and restoration is required to reconstruct the turbulence-degraded images. In order to enhance the performance of image restoration, a novel enhanced nonnegativity and support constraints recursive inverse filtering (ENAS-RIF) algorithm is presented, based on a reliable support region and an enhanced cost function. Firstly, the Curvelet denoising algorithm is used to weaken image noise. Secondly, reliable object support region estimation is used to accelerate the convergence of the algorithm. Then, the average gray level is set as the gray level of the image background pixels. Finally, an object construction limit and the logarithm function are added to enhance the stability of the algorithm. The experimental results prove that the novel ENAS-RIF algorithm converges faster than the NAS-RIF algorithm and performs better in image restoration.

  19. Introduction to the Restoration of Astrophysical Images by Multiscale Transforms and Bayesian Methods

    Science.gov (United States)

    Bijaoui, A.

    2013-03-01

    Image restoration is today an important part of astrophysical data analysis. Denoising and deblurring can be performed efficiently using multiscale transforms, for which multiresolution analysis constitutes the fundamental pillar. The discrete wavelet transform is introduced from the theory of approximation by translated functions. The continuous wavelet transform generalizes multiscale representations to translated and dilated wavelets, and the à trous algorithm furnishes its discrete redundant transform. Image denoising is first considered without any hypothesis on the signal distribution, on the basis of a contrario detection. Different softening functions are introduced, and the introduction of a regularization constraint may improve the results. The application of Bayesian methods leads to an automated adaptation of the softening function to the signal distribution. The MAP principle leads to basis pursuit, a sparse decomposition on redundant dictionaries, whereas the posterior expectation minimizes, scale by scale, the quadratic error. The proposed deconvolution algorithm is based on coupling wavelet denoising with an iterative inversion algorithm. The different methods are illustrated by numerical experiments on a simulated image similar to images of the deep sky, to which white stationary Gaussian noise was added at three levels. In the conclusion, several important related problems are tackled.
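
The à trous ("with holes") redundant transform mentioned above can be sketched in one dimension with the common B3-spline kernel. The wavelet planes plus the final smooth array sum back to the input exactly, which is what makes the decomposition convenient for denoising by thresholding individual planes:

```python
import numpy as np

def a_trous(signal, levels=3):
    """Redundant 1-D a trous wavelet decomposition with the B3-spline
    kernel (1, 4, 6, 4, 1)/16; returns the detail planes followed by the
    final smooth array. Boundaries are handled periodically via np.roll."""
    h = np.array([1, 4, 6, 4, 1], float) / 16.0
    c = np.asarray(signal, float)
    planes = []
    for j in range(levels):
        step = 2 ** j  # at level j, 2^j - 1 "holes" between kernel taps
        smooth = np.zeros_like(c)
        for k, hk in enumerate(h):
            smooth += hk * np.roll(c, (k - 2) * step)
        planes.append(c - smooth)  # wavelet plane w_j
        c = smooth
    planes.append(c)               # residual smooth array
    return planes

x = np.sin(np.linspace(0, 2 * np.pi, 64))
planes = a_trous(x, levels=3)
recon = sum(planes)  # telescoping sum restores the input exactly
```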

  20. Subband-Adaptive Shrinkage for Denoising of ECG Signals

    Directory of Open Access Journals (Sweden)

    Kumaravel N

    2006-01-01

    Full Text Available This paper describes a subband-dependent adaptive shrinkage function that generalizes the hard and soft shrinkages proposed by Donoho and Johnstone (1994). The proposed new class of shrinkage function has a continuous derivative; it has been simulated and tested with normal and abnormal ECG signals with added standard Gaussian noise using MATLAB. The recovered signal is visually pleasing compared with those of other existing shrinkage functions. The implications of the proposed shrinkage function for denoising and data compression are discussed.
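
Hard shrinkage, soft shrinkage, and one possible smooth generalization with a continuous derivative can be sketched as follows. The blend function here is purely illustrative; the paper's exact subband-adaptive function is not reproduced in the abstract.

```python
import numpy as np

def hard_shrink(x, t):
    """Hard thresholding: keep coefficients above t, zero the rest."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft_shrink(x, t):
    """Soft thresholding: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smooth_shrink(x, t, a=10.0):
    """An illustrative shrinkage with a continuous derivative that tends
    to hard shrinkage as the slope parameter `a` grows; a stand-in for
    the paper's subband-adaptive function."""
    return x / (1.0 + np.exp(-a * (np.abs(x) - t)))

coeffs = np.linspace(-3.0, 3.0, 7)
shrunk = smooth_shrink(coeffs, t=1.0)
```

In a subband-adaptive scheme, the threshold `t` (and here also `a`) would be chosen separately for each wavelet subband of the ECG signal.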