WorldWideScience

Sample records for based image denoising

  1. Wavelet Based Image Denoising Technique

    Directory of Open Access Journals (Sweden)

    Sachin D Ruikar

    2011-03-01

    Full Text Available This paper proposes different approaches to wavelet based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing tasks such as image compression and denoising, and multiwavelets can be considered an extension of scalar wavelets. The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results for different noise types, such as Gaussian, Poisson, salt-and-pepper, and speckle noise, are reported in this paper. The signal-to-noise ratio was chosen as the measure of denoising quality.
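
    As context for the coefficient-modification step described above, here is a minimal sketch of wavelet-shrinkage denoising using PyWavelets with the universal (VisuShrink) soft threshold; the wavelet choice, decomposition level, and MAD-based noise estimate are illustrative defaults, not the exact scheme evaluated in the paper.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db4", level=3):
        # Decompose: [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # Robust noise estimate from the finest diagonal subband (median absolute deviation).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        # Universal (VisuShrink) threshold.
        thresh = sigma * np.sqrt(2.0 * np.log(image.size))
        new_coeffs = [coeffs[0]]  # the approximation subband is left unchanged
        for details in coeffs[1:]:
            new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                    for d in details))
        return pywt.waverec2(new_coeffs, wavelet)
    ```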

  2. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    Science.gov (United States)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. These methods, however, do not give good image quality since, due to their thresholds, they modify and remove too many small wavelet coefficients. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
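
    For reference, the neighbourhood-shrinkage idea that NeighShrink and its variants share can be sketched per detail subband as below; this uses the universal threshold rather than the adaptive threshold proposed in the paper, and the window size and SciPy-based implementation are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def neighshrink_subband(subband, sigma, win=3):
        # Universal threshold squared, T^2 = 2 * sigma^2 * log(N).
        thresh2 = 2.0 * sigma ** 2 * np.log(subband.size)
        # Neighbourhood energy S^2: sum of squared coefficients in a win x win window.
        s2 = uniform_filter(subband ** 2, size=win) * win * win
        # Shrinkage factor max(1 - T^2 / S^2, 0) applied to every coefficient.
        factor = np.clip(1.0 - thresh2 / np.maximum(s2, 1e-12), 0.0, None)
        return subband * factor
    ```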

  3. A Method Based on Geodesic Distance for Image Segmentation and Denoising

    OpenAIRE

    Liu Cuiyun; Zhang Caiming; Gao Shanshan

    2014-01-01

    This study introduces an image segmentation and denoising method based on a geodesic framework and the k-means algorithm. Our method combines geodesic distance with the k-means algorithm, and a denoising step is applied as well. We optimize the distance function of the k-means algorithm to achieve these goals. The method can effectively segment and denoise images that contain a large amount of noise.

  4. A nonlinear Stein based estimator for multichannel image denoising

    CERN Document Server

    Chaux, Caroline; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2007-01-01

    The use of multicomponent images has become widespread with the improvement of multisensor systems having increased spatial and spectral resolutions. However, the observed images are often corrupted by additive Gaussian noise. In this paper, we are interested in multichannel image denoising based on a multiscale representation of the images. A multivariate statistical approach is adopted to take into account both the spatial and the inter-component correlations existing between the different wavelet subbands. More precisely, we propose a new parametric nonlinear estimator which generalizes many reported denoising methods. The derivation of the optimal parameters is achieved by applying Stein's principle in the multivariate case. Experiments performed on multispectral remote sensing images clearly indicate that our method outperforms conventional wavelet denoising techniques.

  5. A New Algorithm for Total Variation Based Image Denoising

    Institute of Scientific and Technical Information of China (English)

    Yi-ping XU

    2012-01-01

    We propose a new algorithm for the total variation based image denoising problem. The split Bregman method is used to convert the unconstrained minimization problem to a linear system in the outer iteration. An algebraic multigrid method is applied to solve the linear system in the inner iteration. Furthermore, Krylov subspace acceleration is adopted to improve convergence in the outer iteration. Numerical experiments demonstrate that this algorithm is efficient even for images with large signal-to-noise ratio.
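
    A minimal sketch of the split Bregman iteration for anisotropic TV denoising is given below. To keep it short, the inner linear system is solved exactly in the Fourier domain under a periodic-boundary assumption instead of with the algebraic multigrid and Krylov acceleration used in the paper; the parameter defaults are illustrative and depend on the image intensity scale.

    ```python
    import numpy as np

    def tv_denoise_split_bregman(f, mu=0.05, lam=0.1, n_iter=50):
        f = np.asarray(f, dtype=float)
        ny, nx = f.shape
        wx = 2.0 * np.pi * np.fft.fftfreq(nx)
        wy = 2.0 * np.pi * np.fft.fftfreq(ny)
        # Fourier symbol of (mu*I - lam*Laplacian) for the periodic 5-point Laplacian.
        denom = mu + lam * (4.0 - 2.0 * np.cos(wx)[None, :] - 2.0 * np.cos(wy)[:, None])

        grad = lambda u: (np.roll(u, -1, 1) - u, np.roll(u, -1, 0) - u)
        div = lambda px, py: px - np.roll(px, 1, 1) + py - np.roll(py, 1, 0)
        shrink = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        u = f.copy()
        dx = dy = bx = by = np.zeros_like(f)
        for _ in range(n_iter):
            # Outer iteration: quadratic u-subproblem (solved with AMG in the paper).
            rhs = mu * f - lam * div(dx - bx, dy - by)
            u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
            # d-subproblem: soft shrinkage, followed by the Bregman update of b.
            ux, uy = grad(u)
            dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
            bx, by = bx + ux - dx, by + uy - dy
        return u
    ```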

  6. Dictionary-based image denoising for dual energy computed tomography

    Science.gov (United States)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while noise is neglected. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2D histograms of the high- and low-energy reconstructions which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.

  7. Wavelet-Based Denoising Attack on Image Watermarking

    Institute of Scientific and Technical Information of China (English)

    XUAN Jian-hui; WANG Li-na; ZHANG Huan-guo

    2005-01-01

    In this paper, we propose wavelet-based denoising attack methods on image watermarking in the discrete cosine transform (DCT), discrete Fourier transform (DFT), or discrete wavelet transform (DWT) domain. Wiener filtering based on the wavelet transform is performed in the approximation subband to remove DCT- or DFT-domain watermarks, and adaptive wavelet soft thresholding is employed to remove watermarks residing in the detail subbands of the DWT domain.

  8. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available The interest in using fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR for different types of images. Experiments based on visual perception and peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and Wiener filter.

  9. An Edge-Preserved Image Denoising Algorithm Based on Local Adaptive Regularization

    Directory of Open Access Journals (Sweden)

    Li Guo

    2016-01-01

    Full Text Available Image denoising methods are often based on the minimization of an appropriately defined energy function. Many gradient-dependent energy functions, such as the Potts model and total variation denoising, regard the image as a piecewise constant function. In these methods, some important information such as edge sharpness and location is well preserved, but detailed image features like texture are often compromised in the process of denoising. For this reason, an image denoising method based on local adaptive regularization is proposed in this paper, which can adaptively adjust the degree of denoising of the noisy image by adding a spatially variable fidelity term, so as to better preserve fine-scale image features. Experimental results show that the proposed denoising method can achieve state-of-the-art subjective visual quality, and the signal-to-noise ratio (SNR) is also objectively improved by 0.3–0.6 dB.
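
    The spatially variable fidelity term described above can be illustrated with a simple TV-type gradient descent in which the fidelity weight is a per-pixel map; the sketch below is only an illustration of that idea (the explicit time stepping and the defaults are assumptions, not the paper's regularization scheme).

    ```python
    import numpy as np

    def tv_denoise_local_fidelity(f, lam_map, n_iter=100, dt=0.1, eps=1e-6):
        # Gradient descent on  integral |grad u| + lam(x)/2 * (u - f)^2,
        # where lam_map holds the spatially varying fidelity weight lam(x).
        f = np.asarray(f, dtype=float)
        u = f.copy()
        for _ in range(n_iter):
            ux = np.gradient(u, axis=1)
            uy = np.gradient(u, axis=0)
            mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
            # Curvature term div(grad u / |grad u|) drives the TV smoothing.
            curv = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
            u = u + dt * (curv - lam_map * (u - f))
        return u
    ```

    A constant lam_map recovers ordinary TV denoising; making lam_map larger in textured regions and smaller in flat regions reduces the smoothing applied to fine-scale features.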

  10. Contourlet Transform Based Method For Medical Image Denoising

    Directory of Open Access Journals (Sweden)

    Abbas H. Hassin AlAsadi

    2015-02-01

    Full Text Available Noise is an important factor in medical image quality, because high noise in medical imaging obscures the information useful for medical diagnosis. Basically, a medical diagnosis is based on the normal or abnormal information that the image provides. In this paper, we propose a denoising algorithm based on the contourlet transform for medical images. The contourlet transform is an extension of the wavelet transform in two dimensions using multiscale and directional filter banks. It has the multiscale and time-frequency localization properties of wavelets, but also provides a high degree of directionality. For verifying the denoising performance of the contourlet transform, two kinds of noise are added to our samples: Gaussian noise and speckle noise. A soft-thresholding value for the contourlet coefficients of the noisy image is computed. Finally, the experimental results of the proposed algorithm are compared with those of the wavelet transform. We found that the proposed algorithm achieves acceptable results compared with those achieved by the wavelet transform.

  11. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    Science.gov (United States)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution which we use for numerical result generation, and a second, integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate a denoising performance improvement when using the enhanced modeling over the performance obtained with the baseline SSM model.
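
    For concreteness, the baseline SSM bivariate shrinkage rule that the deadzone modeling refines operates on a wavelet coefficient and its parent as sketched below; the guard constants against division by zero and the way sigma_signal is estimated are illustrative choices, not the paper's.

    ```python
    import numpy as np

    def bishrink(child, parent, sigma_n, sigma_signal):
        # Bivariate shrinkage of Sendur & Selesnick: the child coefficient is
        # attenuated according to the joint magnitude of (child, parent), where
        # parent is the parent subband upsampled to the child's size.
        r = np.sqrt(child ** 2 + parent ** 2)
        gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2
                          / np.maximum(sigma_signal, 1e-12), 0.0)
        return child * gain / np.maximum(r, 1e-12)
    ```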

  12. Adaptively wavelet-based image denoising algorithm with edge preserving

    Institute of Scientific and Technical Information of China (English)

    Yihua Tan; Jinwen Tian; Jian Liu

    2006-01-01

    A new wavelet-based image denoising algorithm, which exploits the edge information hidden in the corrupted image, is presented. First, a Canny-like edge detector identifies the edges in each subband. Second, wavelet coefficients in neighboring scales are multiplied to suppress the noise while magnifying the edge information, and the result is used to exclude fake edges. Isolated edge pixels are also identified as noise. Unlike thresholding methods, we then use a local window filter in the wavelet domain to remove noise, in which the variance estimation is elaborated to utilize the edge information. This method is adaptive to local image details, and can achieve better performance than state-of-the-art methods.

  13. Nonlinear Filter Based Image Denoising Using AMF Approach

    CERN Document Server

    Thivakaran, T K

    2010-01-01

    This paper proposes a new technique based on the nonlinear adaptive median filter (AMF) for image restoration. Image denoising is a common procedure in digital image processing aiming at the removal of noise, which may corrupt an image during its acquisition or transmission, while retaining its quality. This procedure is traditionally performed in the spatial or frequency domain by filtering. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original image. Filtering is a technique for enhancing the image. A linear filter is one in which the value of an output pixel is a linear combination of neighborhood values, which can produce blur in the image. Thus, a variety of nonlinear smoothing techniques have been developed. The median filter is one of the most popular nonlinear filters. When considering a small neighborhood it is highly e...
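
    The classic adaptive median filter that this work builds on can be sketched as follows; the maximum window size and the reflective padding are illustrative choices rather than the paper's exact settings.

    ```python
    import numpy as np

    def adaptive_median_filter(img, s_max=7):
        pad = s_max // 2
        padded = np.pad(img, pad, mode="reflect")
        out = np.asarray(img, dtype=float).copy()
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                for s in range(3, s_max + 1, 2):
                    half = s // 2
                    win = padded[i + pad - half:i + pad + half + 1,
                                 j + pad - half:j + pad + half + 1]
                    zmin, zmed, zmax = win.min(), np.median(win), win.max()
                    if zmin < zmed < zmax:          # stage A: median is not an impulse
                        if not (zmin < img[i, j] < zmax):
                            out[i, j] = zmed        # stage B: replace the impulse pixel
                        break                       # otherwise keep the original pixel
                    # stage A failed: enlarge the window and try again
                else:
                    out[i, j] = zmed                # window reached s_max: output the median
        return out
    ```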

  14. Boltzmann Machines and Denoising Autoencoders for Image Denoising

    OpenAIRE

    Cho, Kyunghyun

    2013-01-01

    Image denoising based on a probabilistic model of local image patches has been employed by various researchers, and recently a deep (denoising) autoencoder has been proposed by Burger et al. [2012] and Xie et al. [2012] as a good model for this. In this paper, we propose that another popular family of models in the field of deep learning, called Boltzmann machines, can perform image denoising as well as, or in certain cases of high level of noise, better than denoising autoencoders. We empiri...
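
    As a concrete reference point, a minimal denoising autoencoder for image patches (in the spirit of the model family discussed, not the exact architectures of the cited works) can be written in a few lines of PyTorch; the patch size and hidden width are illustrative.

    ```python
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        """Fully connected denoising autoencoder for vectorized image patches."""

        def __init__(self, patch_dim=17 * 17, hidden=2048):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden), nn.Tanh())
            self.decoder = nn.Linear(hidden, patch_dim)

        def forward(self, noisy_patches):
            return self.decoder(self.encoder(noisy_patches))

    # Training pairs noisy patches with their clean counterparts, e.g.:
    #   loss = nn.functional.mse_loss(model(noisy_patches), clean_patches)
    ```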

  15. An adaptive image denoising method based on local parameters optimization

    Indian Academy of Sciences (India)

    Hari Om; Mantosh Biswas

    2014-08-01

    In image denoising algorithms, the noise is handled either term-by-term, i.e., on individual pixels, or block-by-block, i.e., on groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the pixel to be thresholded (denoised). The threshold is determined in terms of the noise variance present in the image and its size. The VisuShrink, SureShrink, and NeighShrink methods are important denoising methods that provide good results. The first two, i.e., VisuShrink and SureShrink, follow the term-by-term approach, i.e., they modify individual pixels, and the third, i.e., NeighShrink and its variants ModiNeighShrink, IIDMWD, and IAWDMBMC, follow the block-by-block approach, i.e., they modify the pixels in groups, in order to remove the noise. The VisuShrink, SureShrink, and NeighShrink methods, however, do not give very good visual quality because they remove too many coefficients due to their high threshold values. In this paper, we propose an image denoising method that uses the local parameters of the neighbouring coefficients of the pixel to be denoised in the noisy image. In our method, we propose two new shrinkage factors and a threshold at each decomposition level, which lead to better visual quality. We also establish the relationship between the two shrinkage factors. We compare the performance of our method with that of VisuShrink and NeighShrink, including various variants. Simulation results show that our proposed method gives a higher peak signal-to-noise ratio and better visual quality than the traditional methods: Wiener filter, VisuShrink, SureShrink, NeighBlock, NeighShrink, ModiNeighShrink, LAWML, IIDMWT, and IAWDMBNC.

  16. Image denoising using statistical model based on quaternion wavelet domain

    Institute of Scientific and Technical Information of China (English)

    YIN Ming; LIU Wei; KONG Ranran

    2012-01-01

    Image denoising is a basic problem of image processing. The quaternion wavelet transform is a new kind of multiresolution analysis tool. When an image is represented in the quaternion wavelet domain, its wavelet coefficients exhibit certain correlations both within and across scales. First, according to the interscale correlation of quaternion wavelet coefficients, a non-Gaussian distribution model is used to model the correlations, and the coefficients are divided into important and unimportant coefficients. Then we use the non-Gaussian distribution model to model the important coefficients and their adjacent coefficients, and use the MAP method to estimate the original image wavelet coefficients from the noisy coefficients, so as to achieve denoising. Experimental results show that our algorithm outperforms other classical algorithms in peak signal-to-noise ratio and visual quality.

  17. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    Science.gov (United States)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. The conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising efficiency results are fitted against these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
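
    The "conventional DCT filter" used as a reference denoiser can be sketched as block-wise DCT hard thresholding with patch averaging; the block size, step, and the 2.7*sigma threshold below are common illustrative settings rather than the exact ones used in the paper.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_hard_threshold_denoise(img, sigma, block=8, step=4, beta=2.7):
        img = np.asarray(img, dtype=float)
        h, w = img.shape
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        rows = list(range(0, h - block + 1, step))
        cols = list(range(0, w - block + 1, step))
        if rows[-1] != h - block:
            rows.append(h - block)  # make sure the bottom edge is covered
        if cols[-1] != w - block:
            cols.append(w - block)  # make sure the right edge is covered
        for i in rows:
            for j in cols:
                patch = img[i:i + block, j:j + block]
                spec = dctn(patch, norm="ortho")
                dc = spec[0, 0]
                spec[np.abs(spec) < beta * sigma] = 0.0  # hard thresholding
                spec[0, 0] = dc                          # never threshold the mean
                acc[i:i + block, j:j + block] += idctn(spec, norm="ortho")
                cnt[i:i + block, j:j + block] += 1.0
        return acc / cnt  # average the overlapping filtered patches
    ```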

  18. A New Method of Image Denoising for Underground Coal Mine Based on the Visual Characteristics

    Directory of Open Access Journals (Sweden)

    Gang Hua

    2014-01-01

    Full Text Available Affected by the special underground circumstances of coal mines, most images captured in a mine have low clarity and a large amount of noise mixed into them, which brings many difficulties to further processing of downhole images. Traditional image denoising methods easily lead to blurred images, and the denoising effect is not very satisfactory. Aimed at the characteristics of low illumination and a large amount of noise, and based on the color-detail blindness and simultaneous contrast characteristics of human visual perception, this paper proposes a new method for image denoising based on visual characteristics. The method uses the CIELab uniform color space to dynamically and adaptively decide the filter weights, thereby reducing the damage to image contour edges and other details, so that the denoised image has higher clarity. Experimental results show that this method has a brilliant denoising effect and can significantly improve the subjective and objective picture quality of downhole images.

  19. A hybrid method for image Denoising based on Wavelet Thresholding and RBF network

    Directory of Open Access Journals (Sweden)

    Sandeep Dubey

    2012-06-01

    Full Text Available Digital image denoising is a crucial part of image pre-processing, with applications in satellite image data and in television broadcasting. Image data sets collected by image sensors are generally contaminated by noise. Imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all degrade the data of interest. Furthermore, noise can be introduced by transmission errors and compression. Thus, denoising is often a necessary first step before the image data are analyzed. In this paper we propose a novel methodology for image denoising based on the wavelet transform and a radial basis function neural network, which also uses soft thresholding. The wavelet transform decomposes the image into different layers, differentiated into horizontal, vertical, and diagonal components. To test our hybrid method, we used a noisy image dataset provided by the UCI machine learning website. Our proposed method is compared with a traditional method and the method of our base paper, and gives better comparative results.

  20. Image Denoising Using Sure-Based Adaptive Thresholding in Directionlet Domain

    Directory of Open Access Journals (Sweden)

    Sethunadh R

    2012-12-01

    Full Text Available The standard separable two-dimensional wavelet transform has achieved great success in image denoising applications due to its sparse representation of images. However, it fails to capture efficiently the anisotropic geometric structures, such as edges and contours, in images, as they intersect too many wavelet basis functions and lead to a non-sparse representation. In this paper a novel denoising scheme based on a multidirectional and anisotropic wavelet transform called the directionlet transform is presented. Image denoising in the wavelet domain is extended to the directionlet domain so that image features concentrate on fewer coefficients and more effective thresholding is possible. The image is first segmented and the dominant direction of each segment is identified to make a directional map. Then, according to the directional map, the directionlet transform is taken along the dominant direction of the selected segment. The decomposed images with directional energy are used for scale-dependent, subband-adaptive optimal threshold computation based on the SURE risk. This threshold is then applied to all sub-bands except the LLL subband. The threshold-corrected sub-bands together with the unprocessed first sub-band (LLL) are given as input to the inverse directionlet algorithm to obtain the denoised image. Experimental results show that the proposed method outperforms standard wavelet-based denoising methods in terms of numeric and visual quality.

  1. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    Science.gov (United States)

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise in the images greatly degrades their quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve the performance of singular value decomposition (SVD) filtering in CT images. A filter based on SVD can effectively analyze characteristics of the image in the horizontal (and/or vertical) directions. According to the features of CT images, we can use the discrete cosine transform (DCT) to extract the region of interest and to shield the uninteresting region, so as to extract the structural characteristics of the image. We then applied SVD to the image after DCT and constructed a weighting function for adaptively weighted image reconstruction. The proposed denoising algorithm was applied to CT image denoising, and the experimental results showed that the new method can effectively improve the performance of SVD filtering.
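
    For comparison with the approach described above, a plain truncated-SVD filter (without the DCT-based region weighting) can be sketched as follows; the energy-based rank selection is an illustrative choice.

    ```python
    import numpy as np

    def svd_truncate_denoise(img, energy=0.95):
        # Keep the leading singular values carrying `energy` of the spectral energy;
        # the remaining small singular values are treated as noise and discarded.
        u, s, vt = np.linalg.svd(np.asarray(img, dtype=float), full_matrices=False)
        cum = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = int(np.searchsorted(cum, energy)) + 1
        return (u[:, :k] * s[:k]) @ vt[:k, :]
    ```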

  2. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme has two important contributions. The first is that we learn a patch-based adaptive dictionary by principal component analysis (PCA) after clustering the image into many subsets, which better preserves the local geometric structure. The second is that we code the patches in each subset by the fused lasso with the cluster-learned dictionary and propose an iterative split Bregman algorithm to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.

  3. Improved DCT-based nonlocal means filter for MR images denoising.

    Science.gov (United States)

    Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu

    2012-01-01

    The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights using the gray level information directly, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Due to promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with more accurate estimation of the weights and thus denoises more effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely, the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others.

  4. Multicomponent MR Image Denoising

    Directory of Open Access Journals (Sweden)

    José V. Manjón

    2009-01-01

    Full Text Available Magnetic resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. It is for this reason that denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local principal component analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed.

  5. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image. Such interesting structures often correspond to discontinuities (edges). In this paper, we present a new algorithm for image noise reduction based on the combination of a complex diffusion process and wavelet thresholding. In the existing wavelet thresholding methods, the noise reduction is limited, because the approximate coefficients containing the main information of the image are kept unchanged. Since noise affects both the approximate and detail coefficients, the proposed algorithm for noise reduction applies the complex diffusion process on the approximation band in order to alleviate the deficiency of the existing wavelet thresholding methods. The algorithm has been examined ...

  6. A Novel Hénon Map Based Adaptive PSO for Wavelet Shrinkage Image Denoising

    Directory of Open Access Journals (Sweden)

    Shruti Gandhi

    2013-07-01

    Full Text Available Degradation of images due to noise has led to the formulation of various techniques for image restoration. Wavelet shrinkage image denoising, being one such technique, has been improved over the years by using Particle Swarm Optimization (PSO) and its variants for optimization of the wavelet parameters. However, the use of PSO has been rendered ineffective by premature convergence and failure to maintain good population diversity. This paper proposes a Hénon map based adaptive PSO (HAPSO) for wavelet shrinkage image denoising. While significantly improving the population diversity of the particles, it also increases the convergence rate and thereby the precision of the denoising technique. The proposed PSO uses adaptive cognitive and social components and adaptive inertia weight factors. The Hénon map sequence is applied to the control parameters instead of random variables, which introduces ergodicity and a stochastic property into the PSO. This results in improved global convergence as compared to the traditional PSO and classical thresholding techniques. Simulation results and comparisons with standard approaches show the effectiveness of the proposed algorithm.

  7. Digital Image Watermarking Based On Gradient Direction Quantization and Denoising Using Guided Image Filtering

    Directory of Open Access Journals (Sweden)

    I.Kullayamma

    2016-05-01

    Full Text Available Digital watermarking is the art of hiding information or data in documents, where the embedded information can be extracted to resist copyright violation or to verify the uniqueness of a document, which leads to security. Protecting digital content has become a major issue for content owners and service providers. Watermarking using gradient direction quantization is based on the uniform quantization of the direction of gradient vectors, which is called gradient direction watermarking (GDWM). In GDWM, the watermark bits are embedded by quantizing the angles of significant gradient vectors at multiple wavelet scales. The proposed scheme has the advantages of increased invisibility and robustness to amplitude scaling effects. The DWT coefficients are modified to quantize the gradient direction based on the derived relationship between the changes in the coefficients and the change in the gradient direction. In this paper, we propose a novel explicit image filter called the guided filter. It is derived from a local linear model that computes the filtering output using the content of a guidance image, which can be the input image itself or any other image. The guided filter naturally has a fast and non-approximate linear-time algorithm, regardless of the kernel size and the intensity range. Finally, we show simulation results of the denoising method using guided image filtering compared with bilateral filtering.
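
    The guided filter's local linear model admits a compact implementation; the sketch below follows the standard formulation (box means computed with SciPy), with the radius and eps regularization as illustrative parameters. For denoising, the noisy image itself can be passed as both guide and input.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=4, eps=1e-3):
        guide = np.asarray(guide, dtype=float)
        src = np.asarray(src, dtype=float)
        size = 2 * radius + 1
        mean = lambda x: uniform_filter(x, size=size)
        mean_i, mean_p = mean(guide), mean(src)
        # Local linear model: output = a * guide + b within each window.
        cov_ip = mean(guide * src) - mean_i * mean_p
        var_i = mean(guide * guide) - mean_i * mean_i
        a = cov_ip / (var_i + eps)
        b = mean_p - a * mean_i
        # Average the per-window coefficients before forming the output.
        return mean(a) * guide + mean(b)
    ```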

  8. Survey on Denoising Techniques in Medical Images

    Directory of Open Access Journals (Sweden)

    Ravi Mohan

    2013-07-01

    Full Text Available Denoising of medical images is a challenging problem for researchers: noise not only affects image quality but also introduces major changes into the calculations made in the medical field. Medical images normally suffer from high levels of noise. There are different techniques for producing medical images, such as magnetic resonance imaging (MRI), X-ray, computed tomography, and ultrasound, and during these processes noise is added that decreases image quality and hampers image analysis. Image denoising is an important task in image processing; use of the wavelet transform improves the quality of an image and reduces the noise level. Noise is an inherent property of medical imaging, and it generally tends to reduce image resolution and contrast, thereby reducing the diagnostic value of the imaging modality, so there is emerging interest in using multi-resolution wavelet filters in a variety of medical imaging applications. We review recent wavelet-based denoising techniques for medical ultrasound and magnetic resonance images, and for tomographic imaging techniques such as positron emission tomography and computed tomography, and discuss some of their potential applications in clinical investigations of the brain.

  9. Fractional Partial Differential Equation: Fractional Total Variation and Fractional Steepest Descent Approach-Based Multiscale Denoising Model for Texture Image

    Directory of Open Access Journals (Sweden)

    Yi-Fei Pu

    2013-01-01

    Full Text Available The traditional integer-order partial differential equation-based image denoising approaches often blur edges and complex texture detail; thus, their denoising effects for texture images are not very good. To solve this problem, a fractional partial differential equation-based denoising model for texture images is proposed, which applies a novel mathematical method, fractional calculus, to image processing from the view of system evolution. We know from previous studies that fractional-order calculus has some unique properties compared to integer-order differential calculus, in that it can nonlinearly enhance complex texture detail during digital image processing. The goal of the proposed model is to overcome the problems mentioned above by using the properties of fractional differential calculus. The model extends the traditional integer-order equation to a fractional order, proposes the fractional Green's formula and the fractional Euler-Lagrange formula for two-dimensional image processing, and then derives a fractional partial differential equation based denoising model. The experimental results prove that the abilities of the proposed denoising model to preserve high-frequency edges and complex texture information are clearly superior to those of traditional integer-order algorithms, especially for texture-detail-rich images.

  10. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) approach provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, is used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.

  11. A Novel Directionlet-Based Image Denoising Method Using MMSE Estimator and Laplacian Mixture Distribution

    Directory of Open Access Journals (Sweden)

    Yixiang Lu

    2015-01-01

    Full Text Available A novel method based on the directionlet transform is proposed for image denoising under a Bayesian framework. In order to achieve noise removal, the directionlet coefficients of the uncorrupted image are modeled independently and identically by a two-state Laplacian mixture model with zero mean. The expectation-maximization algorithm is used to estimate the parameters that characterize the assumed prior model. Within the framework of Bayesian theory, the directionlet coefficients of the noise-free image are estimated by a nonlinear shrinkage function based on a weighted average of the minimum mean square error estimator. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the proposed method is very competitive when compared with other methods in terms of both peak signal-to-noise ratio and visual quality.

  12. IMAGE WAVELET DENOISING USING THE ROBUST LOCAL THRESHOLD

    Institute of Scientific and Technical Information of China (English)

    Lin Kezheng; Zhou Hongyu; Li Dianpu

    2002-01-01

    This paper suggests a scheme of image denoising based on two-dimensional discrete wavelet transform. The denoising algorithm is described with some operators. By thresholding the wavelet transform coefficients of noisy images, the original image can be reconstructed correctly. Different threshold selections and thresholding methods are discussed. A new robust local threshold scheme is proposed. Quantifying the performance of image denoising schemes by using the mean square error, the performance of the robust local threshold scheme is demonstrated and is compared with the universal threshold scheme. The experiment shows that image denoising using the robust local threshold performs better than that using the universal threshold.

  13. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  14. Energy-based adaptive orthogonal FRIT and its application in image denoising

    Institute of Scientific and Technical Information of China (English)

    LIU YunXia; PENG YuHua; QU HuaiJing; YiN Yong

    2007-01-01

    Efficient representation of linear singularities is discussed in this paper. We first analyze the relationship between the "wrap around" effect and the distribution of FRAT (Finite Radon Transform) coefficients, and then, based on a study of some properties of the column-wise FRAT reconstruction procedure, we propose an energy-based adaptive orthogonal FRIT scheme (EFRIT). Experiments using nonlinear approximation show its superiority in energy concentration over both the Discrete Wavelet Transform (DWT) and the Finite Ridgelet Transform (FRIT). Furthermore, we model the denoising problem and propose a novel threshold selection method. Experiments carried out on images containing strong linear singularities and texture components with varying levels of additive white Gaussian noise show that our method achieves prominent improvement in terms of both SNR and visual quality as compared with that of DWT and FRIT.

  15. Combining interior and exterior characteristics for remote sensing image denoising

    Science.gov (United States)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using the patch-based statistical characteristics is a flexible method to improve the denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths to restore specific image contents. Combining different statistical characteristics to use their strengths together may have the potential to improve denoising results. This work proposes a method combining statistical characteristics to adaptively select statistical characteristics for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  16. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    Energy Technology Data Exchange (ETDEWEB)

    Soumia, Sid Ahmed, E-mail: samasoumia@hotmail.fr [Science and Technology Faculty, El Bachir El Ibrahimi University, BordjBouArreridj (Algeria); Messali, Zoubeida, E-mail: messalizoubeida@yahoo.fr [Laboratory of Electrical Engineering(LGE), University of M' sila (Algeria); Ouahabi, Abdeldjalil, E-mail: abdeldjalil.ouahabi@univ-tours.fr [Polytechnic School, University of Tours (EPU - PolytechTours), EPU - Energy and Electronics Department (France); Trepout, Sylvain, E-mail: sylvain.trepout@curie.fr, E-mail: cedric.messaoudi@curie.fr, E-mail: sergio.marco@curie.fr; Messaoudi, Cedric, E-mail: sylvain.trepout@curie.fr, E-mail: cedric.messaoudi@curie.fr, E-mail: sergio.marco@curie.fr; Marco, Sergio, E-mail: sylvain.trepout@curie.fr, E-mail: cedric.messaoudi@curie.fr, E-mail: sergio.marco@curie.fr [INSERMU759, University Campus Orsay, 91405 Orsay Cedex (France)

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment very difficult. This noise arises from the collision between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation with a high exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure times, which resulted in a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images to improve the alignment process, which will in turn improve the three-dimensional tomography reconstructions. We have performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different values of SNR) and equipped with gold beads to help in the assessment step. We herein propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to maintain edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we use the appropriate wavelet family at the appropriate level; we have chosen the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filtering, many tests were done in order to determine the proper filter parameters, represented by the size of the filter, the range parameter and the

  17. Overview on image denoising based on multi-scale geometric analysis

    Institute of Scientific and Technical Information of China (English)

    李彦; 汪胜前; 邓承志

    2011-01-01

    Wavelet image denoising has become the most widely used classical method in the image denoising area, and the concomitant emergence of multi-scale denoising methods has made them a hot spot in current image denoising research. This paper gives an overall summary of the current status of image denoising and wavelet denoising, and also a brief description of multi-scale geometric analysis and its development. Furthermore, it presents a detailed analysis and summary of image denoising methods based on multi-scale transforms. Based on this understanding of wavelet transform denoising and multi-scale image denoising, some prospects for multi-scale image denoising are put forward.

  18. Adaptive Image Denoising by Mixture Adaptation.

    Science.gov (United States)

    Luo, Enming; Chan, Stanley H; Nguyen, Truong Q

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms. PMID:27416593

  19. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.

  20. Twofold processing for denoising ultrasound medical images

    OpenAIRE

    P.V.V.Kishore; Kumar, K. V. V.; Kumar, D. Anil; M.V.D.Prasad; Goutham, E. N. D.; Rahul, R.; Krishna, C. B. S. Vamsi; Sandeep, Y.

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold process is a better denoising method for reducing s...

  1. SPATIAL-VARIANT MORPHOLOGICAL FILTERS WITH NONLOCAL-PATCH-DISTANCE-BASED AMOEBA KERNEL FOR IMAGE DENOISING

    Directory of Open Access Journals (Sweden)

    Shuo Yang

    2015-01-01

    Full Text Available Filters based on spatially-variant amoeba morphology can preserve edges better, but leave too much noise. For better denoising, this paper presents a new method to generate structuring elements for spatially-variant amoeba morphology. The amoeba kernel in the proposed strategy is divided into two parts: one is the patch-distance-based amoeba center, and the other is the geodesic-distance-based amoeba boundary, by which the nonlocal patch distance and the local geodesic distance are both taken into consideration. Compared to the traditional amoeba kernel, the new one has a more stable center and its shape is less influenced by noise in the pilot image. More importantly, the nonlocal processing approach can induce a pair of adjoint dilation and erosion operators, and combinations of them can construct adaptive opening, closing, alternating sequential filters, etc. By designing the new amoeba kernel, a family of morphological filters is therefore derived. Finally, this paper presents a series of results on both synthetic and real images, along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing and noisy SAR image restoration.

  2. Astronomical Image Denoising Using Dictionary Learning

    CERN Document Server

    Beckouche, Simon; Fadili, Jalal

    2013-01-01

    Astronomical images suffer from a constant presence of multiple defects that are consequences of the intrinsic properties of the acquisition equipment and of atmospheric conditions. One of the most frequent defects in astronomical imaging is the presence of additive noise, which makes a denoising step mandatory before processing the data. During the last decade, a particular modeling scheme, based on sparse representations, has drawn the attention of an ever-growing community of researchers. Sparse representations offer a promising framework for many image and signal processing tasks, especially denoising and restoration applications. At first, harmonics, wavelets, and similar bases and overcomplete representations were considered as candidate domains in which to seek the sparsest representation. A new generation of algorithms, based on data-driven dictionaries, evolved rapidly and now competes with the off-the-shelf fixed dictionaries. While designing a dictionary beforehand leans on a guess of the most appropriate repre...

  3. Result Analysis of Blur and Noise on Image Denoising based on PDE

    Directory of Open Access Journals (Sweden)

    Meenal Jain, Sumit Sharma, Ravi Mohan Sairam

    2012-12-01

    Full Text Available The effect of noise on images is still a challenging problem for researchers. Image denoising has remained a fundamental problem in the field of image processing. Wavelets give superior performance in image denoising due to properties such as sparsity and multiresolution structure. Much of the previous research performs basic noise reduction through image blurring. Blurring can be done locally, as in the Gaussian smoothing model or in anisotropic filtering; by the calculus of variations; or in the frequency domain, such as with Wiener filters. In this paper we propose an image denoising method using partial differential equations. We propose three different approaches: the first is for blur, the second for noise, and the third for blur and noise together. These approaches are compared by average absolute difference, signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), image fidelity, and mean square error, so the better result can be identified for different scenarios. We also compare our results on the basis of the above five parameters, and the result is better in comparison to the traditional technique.

  4. Nonlocal Markovian models for image denoising

    Science.gov (United States)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. For evaluating this proposal, these models are used as a priori models in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, the results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  5. Image Denoising with Modified Wavelets Feature Restoration

    Directory of Open Access Journals (Sweden)

    Sachin D Ruikar

    2012-03-01

    Full Text Available Image denoising is a principal problem of image restoration, and many scholars have devoted themselves to this area and proposed many methods. In this paper we propose a modified feature restoration algorithm based on thresholding and a neighbor technique which gives better results for all types of noise. Conventional image denoising methods have several drawbacks, such as the introduction of blur and edge degradation. These can be reduced by using the new technique, which is based on the wavelet transform. Shrinkage algorithms such as universal shrink, VisuShrink, and BayesShrink have strengths in Gaussian noise removal. Our proposed method removes all types of noise in the wavelet domain and gives a better peak signal-to-noise ratio than traditional methods.

  6. Novel Spatially Adaptive Image Denoising Algorithm Based on Covariance Estimation in Wavelet Domain

    Institute of Scientific and Technical Information of China (English)

    谢志宏; 沈庭芝; 王海

    2003-01-01

    A new method for image denoising is proposed. By analyzing the image's statistical properties in the wavelet domain, it is shown that natural images have a strong, spatially variable covariance structure in the local space of each subband. An indirect estimation method is suggested to adaptively estimate this spatially variable covariance by estimating the correlation coefficient and the variance of the subband image separately. The estimate can then be used for adaptive filtering of the subband image. Experiments show that this method can improve the image's SNR and has a strong ability to preserve edges.

  7. Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials

    Science.gov (United States)

    Ithapu, Vamsi K.; Singh, Vikas; Okonkwo, Ozioma; Johnson, Sterling C.

    2015-01-01

    There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest. PMID:25485413

  8. Image Denoising via Nonlinear Hybrid Diffusion

    OpenAIRE

    Xiaoping Ji; Dazhi Zhang; Zhichang Guo; Boying Wu

    2013-01-01

    A nonlinear anisotropic hybrid diffusion equation is discussed for image denoising, which is a combination of mean curvature smoothing and Gaussian heat diffusion. First, we propose a new edge detection indicator, that is, the diffusivity function. Based on this diffusivity function, the new diffusion is nonlinear anisotropic and forward-backward. Unlike the Perona-Malik (PM) diffusion, the new forward-backward diffusion is adjustable and under control. Then, the existence, uniqueness, and lo...
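    As a point of reference for the diffusion framework discussed above, here is a minimal sketch of the classical Perona-Malik scheme in Python; the hybrid mean-curvature/heat diffusion and the paper's diffusivity function are not reproduced, and the parameter values and periodic boundary handling are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Classical Perona-Malik diffusion, included only as a baseline sketch."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity g(|grad u|)
    for _ in range(n_iter):
        # Differences to the four neighbours (np.roll wraps the boundary, a simplification)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

smoothed = perona_malik(np.random.rand(64, 64) * 255)
```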

  9. Bilateral Filtering using Modified Fuzzy Clustering for Image Denoising

    Directory of Open Access Journals (Sweden)

    G.Vijaya,

    2011-01-01

    Full Text Available This paper presents a novel bilateral filter using a weighted FCM algorithm based on a Gaussian kernel function for image manipulations such as segmentation and denoising. Our proposed bilateral filter consists of the standard bilateral filter in which the original Euclidean distance is replaced by a kernel-induced distance. We have applied the proposed filter to image denoising with both impulse and Gaussian random noise, achieving better results than bilateral-filtering-based denoising approaches, the Perona-Malik anisotropic diffusion filter, the fuzzy vector median filter and the non-local means filter.
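    A plain bilateral filter sketch is shown below for orientation; it uses the standard Gaussian range kernel rather than the kernel-induced distance and fuzzy-clustering weights proposed in the paper, and all parameter values are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Standard bilateral filter: each pixel is a spatially and radiometrically
    weighted average of its neighbourhood."""
    img = img.astype(float)
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * window) / np.sum(w)
    return out

denoised = bilateral_filter(np.random.rand(32, 32) * 255)
```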

  10. Denoising of Magnetic Resonance and X-Ray Images Using Variance Stabilization and Patch Based Algorithms

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2013-01-01

    Full Text Available Developments in medical imaging systems, which provide the anatomical and physiological details of patients, have made diagnosis simpler day by day. However, every medical imaging modality suffers from some sort of noise. Noise in medical images decreases the contrast of the image, and due to this effect low-contrast lesions may not be detected in the diagnostic phase, so the removal of noise from medical images is a very important task. In this paper we present denoising techniques developed for removing the Poisson noise that arises in X-ray images due to the low photon count, and the Rician noise in MRI (magnetic resonance) images. Poisson and Rician noise are data dependent, so most of the time they do not follow a Gaussian distribution. In our algorithm the Poisson and Rician noise distributions are converted into a Gaussian distribution using a variance stabilization technique, and patch-based algorithms are then used for denoising the images. The performance of the algorithms was evaluated using various image quality metrics such as PSNR (peak signal-to-noise ratio), UQI (universal quality index) and SSIM (structural similarity index). The results show that the Anscombe and Freeman-Tukey transforms combined with the block-matching 3D algorithm give better results.
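    The variance-stabilization step can be illustrated with the Anscombe transform, as in the sketch below; the inverse used here is the simple algebraic one (a more accurate unbiased inverse exists), and the choice of the downstream patch-based Gaussian denoiser is left open.

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: approximately Gaussianises Poisson data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transform."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# After the forward transform, any Gaussian-noise patch-based denoiser
# (e.g. a BM3D-style block-matching method) could be applied, then the inverse taken.
poisson_image = np.random.poisson(30.0, size=(64, 64)).astype(float)
stabilised = anscombe(poisson_image)
restored = inverse_anscombe(stabilised)
```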

  11. Literature Review of Image Denoising Methods

    Institute of Scientific and Technical Information of China (English)

    LIU Qian; YANG Xing-qiang; LI Yun-liang

    2014-01-01

    Image denoising is a fundamental and important task in image processing and computer vision. Many methods have been proposed to reconstruct clean images from their noisy versions, and these methods differ in both methodology and performance. On one hand, denoising methods can be classified into local and nonlocal methods; on the other hand, they can be categorized as spatial-domain and frequency-domain methods. Sparse coding and low-rank modeling are two techniques that have recently become popular for denoising. This paper summarizes existing techniques and provides several promising directions for future study.

  12. Denoising CT Images using wavelet transform

    Directory of Open Access Journals (Sweden)

    Lubna Gabralla

    2015-05-01

    Full Text Available Image denoising is one of the most significant tasks, especially in medical image processing, where the original images are of poor quality due to the noise and artifacts introduced by the acquisition systems. In this paper we propose a new image denoising scheme that modifies the wavelet coefficients using a soft-thresholding method, present a comparative study of different wavelet denoising techniques for CT images, and discuss the obtained results. The denoising process rejects noise by thresholding in the wavelet domain. The performance is evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the Mean Squared Error (MSE). Finally, the Gaussian filter provides better PSNR and lower MSE values, and we conclude that this filter is an efficient one for preprocessing medical images.

  13. Geometric moment based nonlocal-means filter for ultrasound image denoising

    Science.gov (United States)

    Dou, Yangchao; Zhang, Xuming; Ding, Mingyue; Chen, Yimin

    2011-06-01

    Speckle noise is inevitable in ultrasound images, and despeckling is therefore an important processing step. The original nonlocal means (NLM) filter can remove speckle noise and protect texture information effectively when the image corruption is relatively low, but when the noise in the image is strong, NLM produces fictitious texture information, which adversely affects its denoising performance. In this paper, a novel nonlocal means filter is proposed that introduces geometric moments into the NLM filter. Although geometric moments are not orthogonal moments, they are popular for their conciseness, and their restoration ability had not yet been demonstrated. Results on synthetic data and real ultrasound images show that the proposed method achieves better despeckling performance than other state-of-the-art methods.
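    For context, a basic non-local means sketch with raw-intensity patch comparison is given below; the geometric-moment patch descriptors of the paper are not implemented, and the patch/search radii and bandwidth h are assumptions.

```python
import numpy as np

def nlm_denoise(img, patch_r=1, search_r=5, h=10.0):
    """Basic non-local means: each pixel is a weighted average of pixels whose
    surrounding patches look similar to its own patch."""
    img = img.astype(float)
    pad = np.pad(img, patch_r + search_r, mode='reflect')
    out = np.zeros_like(img)
    P = 2 * patch_r + 1
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + patch_r + search_r, j + patch_r + search_r
            ref = pad[ci - patch_r:ci + patch_r + 1, cj - patch_r:cj + patch_r + 1]
            num, den = 0.0, 0.0
            for di in range(-search_r, search_r + 1):
                for dj in range(-search_r, search_r + 1):
                    cand = pad[ci + di - patch_r:ci + di + patch_r + 1,
                               cj + dj - patch_r:cj + dj + patch_r + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2 * P * P))
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out

denoised = nlm_denoise(np.random.rand(32, 32) * 255)   # small image: pure-Python loops are slow
```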

  14. Study of Denoising Method of Images- A Review

    Directory of Open Access Journals (Sweden)

    Ravi Mohan Sairam

    2013-05-01

    Full Text Available This paper undertakes a study of denoising methods. Different noise densities have been removed using filters and wavelet-based methods. The Fourier transform is localized only in the frequency domain, whereas the wavelet transform is localized in both the frequency and spatial domains, but neither method is data adaptive. Independent Component Analysis (ICA) is a higher-order statistical tool for the analysis of multidimensional data with an inherent data-adaptiveness property. This paper presents a review of some significant work in the area of image denoising and identifies which approach is better for image denoising. Some popular approaches are classified into different groups, after which we conclude with the best technique for image denoising.

  15. Denoising of Chinese calligraphy tablet images based on run-length statistics and structure characteristic of character strokes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-song; YU Jin-hui; MAO Guo-hong; YE Xiu-zi

    2006-01-01

    In this paper, a novel approach is proposed for denoising Chinese calligraphy tablet documents. The method includes two phases. First, a partial differential equation (PDE) based total variation model and the Otsu thresholding method are used to preprocess the calligraphy document image. Second, a new method based on run-length statistics and the structural characteristics of Chinese characters is proposed to remove random and ant-like noise. This includes optimal threshold selection from the histogram of the run-length probability density, and an improved Hough transform algorithm for detecting and removing line-shaped noise. Examples are given in the paper to demonstrate the proposed method.
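    The run-length idea can be illustrated by the toy sketch below, which removes short horizontal foreground runs from a binarized tablet image; the paper's probability-density-based threshold selection and Hough-transform line removal are not reproduced, and min_len is an assumed value.

```python
import numpy as np

def remove_short_runs(binary, min_len=3):
    """Remove horizontal foreground runs shorter than min_len pixels,
    a simple stand-in for run-length-statistics noise removal."""
    out = binary.copy()
    for r, row in enumerate(binary):
        start = None
        for c in range(len(row) + 1):
            inside = c < len(row) and row[c]
            if inside and start is None:
                start = c                      # run begins
            elif not inside and start is not None:
                if c - start < min_len:        # run too short: treat as noise
                    out[r, start:c] = 0
                start = None
    return out

tablet = (np.random.rand(32, 32) > 0.7).astype(np.uint8)
cleaned = remove_short_runs(tablet, min_len=3)
```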

  16. A flexible patch based approach for combined denoising and contrast enhancement of digital X-ray images.

    Science.gov (United States)

    Irrera, Paolo; Bloch, Isabelle; Delplanque, Maurice

    2016-02-01

    Denoising and contrast enhancement play key roles in optimizing the trade-off between image quality and X-ray dose. However, these tasks present multiple challenges raised by the noise level, the low visibility of fine anatomical structures, and heterogeneous conditions due to different exposure parameters and patient characteristics. This work proposes a new method to address these challenges. We first introduce a patch-based filter adapted to the properties of the noise corrupting X-ray images. The filtered images are then used as oracles to define non-parametric noise containment maps that, when applied in a multiscale contrast enhancement framework, allow optimizing the trade-off between improved visibility of anatomical structures and noise reduction. A significant number of tests on both phantoms and clinical images has shown that the proposed method is better suited than others for visual inspection for diagnosis, even when compared to an algorithm used to process low-dose images in clinical routine. PMID:26716719

  17. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resulting image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visually interesting level of quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing it with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for the enhancement of US images. The US images are provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  18. A HYBRID APPROACH FOR DENOISING DICOM IMAGES

    Directory of Open Access Journals (Sweden)

    J. UMAMAHESWARI

    2011-11-01

    Full Text Available This paper provides a new model based on the hybridization of wavelet and relaxed median filters for denoising noisy medical images. The present study focuses on proposing a technique to reduce speckle and salt-and-pepper noise from CT (Computed Tomography) scans. Such devices are frequently used by healthcare professionals in the diagnosis of diseases. The main problem during diagnosis is the distortion of the acquired visual image signals, a consequence of the coherent nature of speckle noise and of the salt-and-pepper noise added during transmission. We validate the new model by evaluating standard brain images in terms of Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE) and Elapsed Time (ET). The proposed filter is compared with existing filters, and the experimental results show that the proposed method is efficient.

  19. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    Science.gov (United States)

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation pertaining to the previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. We demonstrate that the DPLG algorithm also improves the structural SIMilarity values of the perceptual visual quality for denoised images using the experimental evaluations on the benchmark images and Berkeley segmentation data sets. Moreover, the DPLG also produces the competitive peak signal-to-noise ratio values from popular image denoising algorithms.

  20. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Science.gov (United States)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with the depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605

  1. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti Kumari

    2015-11-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operation in image processing, i.e., image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, additive white Gaussian noise and blurring are the degradations that occur during transmission and capture. Several algorithms for denoising the image are discussed.

  2. Local sparse representation for astronomical image denoising

    Institute of Scientific and Technical Information of China (English)

    杨阿锋; 鲁敏; 滕书华; 孙即祥

    2013-01-01

    Motivated by local coordinate coding (LCC) theory in nonlinear manifold learning, a new image representation model called local sparse representation (LSR) for astronomical image denoising was proposed. Borrowing ideas from surrogate functions and applying the iterative shrinkage-thresholding algorithm (ISTA), an iterative shrinkage operator for LSR was derived. Meanwhile, a fast approximate LSR method, which first performs a K-nearest-neighbor search and then solves an l1 optimization problem, was presented with a guarantee on denoising performance. In addition, the LSR model and adaptive dictionary learning were incorporated into a unified optimization framework, which explicitly establishes the inner connection between them. Such processing allows us to simultaneously update the sparse coding vectors and the dictionary by an alternating optimization method. The experimental results show that the proposed method is superior to traditional denoising methods and reaches state-of-the-art performance on astronomical images.
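    The iterative shrinkage-thresholding step underlying such a solver can be sketched as plain ISTA for an l1 sparse-coding problem, as below; the K-nearest-neighbour locality constraint and the alternating dictionary update of the LSR model are omitted, and the dictionary and regularization weight are stand-ins.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.1, n_iter=100):
    """Plain ISTA for min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a

D = np.random.randn(64, 256)               # stand-in dictionary (atoms in columns)
y = np.random.randn(64)                    # stand-in noisy patch
code = ista(D, y)
```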

  3. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    Science.gov (United States)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.

  4. Denoising Of Ultrasonographic Images Using DTCWT

    Directory of Open Access Journals (Sweden)

    Anil Dudy

    2012-08-01

    Full Text Available Digital image acquisition and processing play a very important role in current medical diagnosis techniques. Medical images are corrupted by noise during acquisition and transmission. Ultrasound has historically suffered from an inherent imaging artifact known as speckle, which significantly degrades image quality and makes it more difficult for the observer to discriminate fine details of the images in diagnostic examinations. The dual-tree complex wavelet transform is an efficient method for denoising ultrasound images: it not only reduces the speckle noise but also preserves the detailed features of the image. In this paper denoising of ultrasound images has been performed using the dual-tree complex wavelet transform. In the experimental analysis, the performance in terms of PSNR for a set of acquired medical images (brain and mammogram) is found to be better with the DTCWT than with the DWT.

  5. Image Denoising Based on Sparse Sequences and Its Application

    Institute of Scientific and Technical Information of China (English)

    王蓓; 张欣; 刘洪

    2011-01-01

    Based on image sparse decomposition and the fact that image content and noise decompose differently under sparse decomposition, this paper proposes an atom dictionary built on an asymmetric atom model. Through algorithmic optimization, effective denoising of acquired greige-fabric images is achieved, improving the PSNR of the denoised image and giving a better visual effect. After denoising, the background and the defects of the acquired fabric images are separated, so that defects can be delimited more effectively for subsequent feature extraction. Experiments show that, compared with wavelet-based denoising methods, the proposed learning algorithm removes image noise better, retains more image detail, and obtains a higher PSNR.

  6. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    Science.gov (United States)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge-preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, the contrast resolution and, most importantly, the texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images are fused adaptively: one processed with the TV method, one with shearlet denoising, and one containing edge information recovered from the remnant of the TV method and processed with the ST. The enhanced images produced by the proposed method show improved visibility and detectability of details. The different weights are evaluated from the variance maps of the individual denoised images and from the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.

  7. Accelerated graph-based nonlinear denoising filters

    OpenAIRE

    Knyazev, Andrew; Malyshev, Alexander,

    2015-01-01

    Denoising filters, such as bilateral, guided, and total variation filters, applied to images on general graphs may require repeated application if the noise is not small enough. We formulate two acceleration techniques for the resulting iterations: the conjugate gradient method and Nesterov's acceleration. We numerically show the efficiency of the accelerated nonlinear filters for image denoising and demonstrate a 2-12 times speed-up, i.e., the acceleration techniques reduce the number of iterations required...

  8. PERFORMANCE ANALYSIS OF IMAGE DENOISING IN LIFTING WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    G.M.Rajathi

    2012-11-01

    Full Text Available Images are contaminated by noise for several unavoidable reasons: poor image sensors, imperfect instruments, problems with the data acquisition process, transmission errors and interfering natural phenomena are its main sources. Therefore, it is necessary to detect and remove the noise present in images. Preserving the details of an image while removing as much of the random noise as possible is the goal of image denoising approaches. The lifting wavelet transform (LWT) is based on the theory of the lazy wavelet and perfectly reconstructing filter banks; it improves the wavelet and its performance through the lifting process while maintaining the features of the wavelet, in contrast to the classical construction (the DWT), which relies on the Fourier transform. In this paper we compare the image denoising performance of the LWT with that of the DWT. Through simulations with images contaminated by white Gaussian noise, we demonstrate its performance in terms of both PSNR (Peak Signal-to-Noise Ratio) and visual effect.
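    To make the lifting idea concrete, the sketch below implements one level of the Haar wavelet via the split-predict-update lifting steps (1-D, even-length signal assumed); it is illustrative only and not the 2-D LWT used in the paper's experiments.

```python
import numpy as np

def haar_lifting_1d(signal):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    s = signal.astype(float)
    even, odd = s[0::2], s[1::2]     # split into even and odd samples
    detail = odd - even              # predict odd samples from even ones
    approx = even + detail / 2.0     # update so the approximation keeps the local mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    even = approx - detail / 2.0
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.arange(8, dtype=float)
a, d = haar_lifting_1d(x)
assert np.allclose(haar_lifting_inverse(a, d), x)   # perfect reconstruction
```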

  9. Robust Image Denoising using a Virtual Flash Image for Monte Carlo Ray Tracing

    DEFF Research Database (Denmark)

    Moon, Bochang; Jun, Jong Yun; Lee, JongHyeob;

    2013-01-01

    We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate...... parameters. To highlight the benefits of our method, we apply our method to two Monte Carlo ray tracing methods, photon mapping and path tracing, with various input scenes. We demonstrate that using virtual flash images and homogeneous pixels with a standard denoising method outperforms state...... values. While denoising each pixel, we consider only homogeneous pixels—pixels that are statistically equivalent to each other. This makes it possible to define a stochastic error bound of our method, and this bound goes to zero as the number of ray samples goes to infinity, irrespective of denoising...

  10. MEDICAL IMAGE DENOISE METHOD BASED ON CURVELET TRANSFORM: AN APPROACH FOR EDGE PRESERVATION

    OpenAIRE

    T. Janardhan Reddy

    2016-01-01

    In medical images, noise and artifacts are present due to the measurement techniques and instrumentation. Because of the noise present in medical images, physicians are unable to obtain the required information from the images. This paper proposes a noise reduction method for both computed tomography (CT) and magnetic resonance imaging (MRI) that builds on a curvelet transform based approach. The performance is analysed by computing the Peak Signal-to-Noise Ratio (PSNR). The results show the proposed...

  11. Image denoising using new pixon representation based on fuzzy filtering and partial differential equations

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Nikpour, Mohsen

    2012-01-01

    In this paper, we have proposed two extensions to pixon-based image modeling. The first is the use of bicubic interpolation instead of bilinear interpolation, and the second is the use of a fuzzy filtering method, aiming to improve the quality of the pixonal image. Finally, partial differential equations (PDEs) are applied to the pixonal image for noise removal. The proposed algorithm has been examined on a variety of standard images and its performance compared with existing algorithms. Experimental results show that, in comparison with the other existing methods, the proposed algorithm has...

  12. Application of multi-resolution analysis in sonar image denoising

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Sonar images have complex backgrounds, low contrast, and deteriorated edges; these characteristics make it difficult for researchers to process sonar objects. Multi-resolution analysis represents signals efficiently at different scales and is widely used in image processing. Wavelets are successful in handling point discontinuities in one dimension, but not in two dimensions, whereas the finite ridgelet transform (FRIT) deals efficiently with singularities in higher dimensions. This paper presents three improved denoising approaches, which are based on the FRIT and applied to sonar image processing. Experiments and comparisons with traditional methods show that these approaches not only suppress artifacts, but also achieve good edge preservation and SNR in sonar image denoising.

  13. ORTHOGONAL-DIRECTIONAL FORWARD DIFFUSION IMAGE INPAINTING AND DENOISING MODEL

    Institute of Scientific and Technical Information of China (English)

    Wu Jiying; Ruan Qiuqi; An Gaoyun

    2008-01-01

    In this paper, an orthogonal-directional forward diffusion Partial Differential Equation (PDE) image inpainting and denoising model, which processes the image as a variational problem, is proposed. The novel model restores the damaged information and smooths the noise in the image simultaneously. The model is morphologically invariant and processes the image based on its geometric properties. Its regularization term diffuses along and across the isophote, and the known image information is then transported into the target region through two orthogonal directions. The cross-isophote diffusion part is the TV (Total Variation) equation and the along-isophote diffusion part is the inviscid Helmholtz vorticity equation. The equivalence between the Helmholtz equation and the inpainting PDEs is proved. With the fidelity term applied over the whole image domain, the model denoises while preserving edges, so the novel model can inpaint and denoise simultaneously. Both theoretical analysis and experiments have verified the validity of the novel model proposed in this paper.

  14. A Comparative Study of Wavelet Thresholding for Image Denoising

    Directory of Open Access Journals (Sweden)

    Arun Dixit

    2014-11-01

    Full Text Available Image denoising using the wavelet transform has been successful because the wavelet transform generates a large number of small coefficients and a small number of large coefficients. The basic denoising algorithm using the wavelet transform consists of three steps: first the wavelet transform of the noisy image is computed, then thresholding is performed on the detail coefficients in order to remove noise, and finally the inverse wavelet transform of the modified coefficients is taken. This paper reviews state-of-the-art methods of image denoising using wavelet thresholding. An experimental analysis of the wavelet-based methods VisuShrink, SureShrink, BayesShrink, ProbShrink, BlockShrink and NeighShrinkSure is performed. These wavelet-based methods are also compared with spatial-domain methods such as the median filter and the Wiener filter. Results are evaluated on the basis of the Peak Signal-to-Noise Ratio and the visual quality of the images. In the experiments, wavelet-based methods perform better than spatial-domain methods, and within the wavelet domain, recent methods such as ProbShrink, BlockShrink and NeighShrinkSure perform better than the other wavelet-based methods.
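    The threshold-selection rules being compared can be sketched as follows (VisuShrink's universal threshold and a BayesShrink-style subband threshold, computed with PyWavelets); the wavelet, level and test image are assumptions, not the paper's experimental setup.

```python
import numpy as np
import pywt

img = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
coeffs = pywt.wavedec2(img, 'db4', level=2)
hh = coeffs[-1][-1]                                   # finest diagonal detail subband

sigma_n = np.median(np.abs(hh)) / 0.6745              # MAD noise estimate
t_visu = sigma_n * np.sqrt(2 * np.log(img.size))      # VisuShrink (universal) threshold

sigma_y2 = np.mean(hh ** 2)                           # noisy subband variance
sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))  # signal std estimate
t_bayes = sigma_n ** 2 / sigma_x if sigma_x > 0 else np.abs(hh).max()  # BayesShrink

hh_soft = pywt.threshold(hh, t_bayes, mode='soft')    # soft vs hard shrinkage
hh_hard = pywt.threshold(hh, t_bayes, mode='hard')
```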

  15. A scale-based forward-and-backward diffusion process for adaptive image enhancement and denoising

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2011-01-01

    Full Text Available This work presents a scale-based forward-and-backward diffusion (SFABD) scheme. The main idea of this scheme is to perform locally adaptive diffusion using local scale information. To this end, we propose a diffusivity function based on the Minimum Reliable Scale (MRS) of Elder and Zucker (IEEE Trans. Pattern Anal. Mach. Intell. 20(7), 699-716, 1998) to detect the details of local structures. The magnitude of the diffusion coefficient at each pixel is determined by taking into account the local property of the image through the scales. A scale-based variable weight is incorporated into the diffusivity function to balance the forward and backward diffusion. Furthermore, as the numerical scheme, we propose a modification of the Perona-Malik scheme (IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629-639, 1990) that incorporates edge orientations. The article describes the main principles of our method and illustrates image enhancement results on a set of standard images as well as simulated medical images, together with qualitative and quantitative comparisons with a variety of anisotropic diffusion schemes.

  16. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoising autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  17. Comparison of the Fuzzy-based Wavelet Shrinkage Image Denoising Techniques

    Directory of Open Access Journals (Sweden)

    Ali Adeli

    2012-03-01

    Full Text Available In this paper, a comparative study of the different membership functions used in fuzzy-based noise reduction methods is presented. The study focuses on three different membership functions: Gaussian, Sigmaf and Trapezoidal. The fuzzy wavelet shrinkage method is tested with the different membership functions in order to reduce different types of noise such as Gaussian, salt-and-pepper, Poisson and speckle noise. The comparison between the different membership functions is based on PSNR (Peak Signal-to-Noise Ratio). Experimental results on some well-known images, such as Lena, Barbara and Baboon, show that the Gaussian membership function can efficiently remove additive Gaussian and Poisson noise from grey-level images. Furthermore, for speckle and salt-and-pepper noise, the Sigmaf membership function outperforms the Trapezoidal one.

  18. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    OpenAIRE

    Aarti Kumari; Gaurav Pushkarna

    2015-01-01

    This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operation in image processing, i.e., image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, the additive white Gaussian noise and blurredness are th...

  19. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    OpenAIRE

    Aarti; Gaurav Pushkarna

    2014-01-01

    This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. There are three basic categories of operation in image processing, i.e., image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt-and-pepper (impulse) noise, the additive white Gaussian noise and blurrednes...

  20. PERFORMANCE ANALYSIS OF IMAGE DENOISING WITH WAVELET THRESHOLDING METHODS FOR DIFFERENT LEVELS OF DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Anutam

    2014-10-01

    Full Text Available Image denoising is an important part of diverse image processing and computer vision problems. An important property of a good image denoising model is that it should remove noise as far as possible while preserving edges. One of the most powerful and promising approaches in this area is image denoising using the discrete wavelet transform (DWT). In this paper, a comparison of various wavelets at different decomposition levels has been carried out. As the number of levels increases, the Peak Signal-to-Noise Ratio (PSNR) of the image decreases, whereas the Mean Absolute Error (MAE) and the Mean Square Error (MSE) increase. A comparison of filters and various wavelet-based methods has also been carried out to denoise the image. The simulation results reveal that the wavelet-based Bayes shrinkage method outperforms the other methods.

  1. HYPERSPECTRAL IMAGE DENOISING WITH CUBIC TOTAL VARIATION MODEL

    OpenAIRE

    Zhang, H.

    2012-01-01

    Image noise is generated unavoidably in the hyperspectral image acquisition process and has a negative effect on subsequent image analysis. Therefore, it is necessary to perform image denoising for hyperspectral images. This paper proposes a cubic total variation (CTV) model by combining the 2-D total variation model for the spatial domain with the 1-D total variation model for the spectral domain, and then applies the CTV model to hyperspectral image denoising. The augmented Lagrangian...
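    For orientation, a 2-D spatial-domain total variation denoising sketch (gradient descent on a smoothed ROF energy) is given below; the cubic model of the paper additionally penalizes variation along the spectral dimension and is solved with an augmented Lagrangian, neither of which is reproduced here, and the step size and weights are assumptions.

```python
import numpy as np

def tv_denoise(img, lam=0.1, n_iter=100, dt=0.1, eps=1e-6):
    """Gradient descent on the smoothed ROF energy: sum |grad u| + lam/2 * sum (u - f)^2."""
    f = img.astype(float)
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)     # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div - lam * (u - f))         # descend the TV + fidelity energy
    return u

band = np.random.rand(64, 64)
smoothed_band = tv_denoise(band)
```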

  2. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, the noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image into luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of the chrominance image denoising is controlled by the image brightness. The experimental results show that the proposed method clearly outperforms other representative denoising methods in terms of both objective measures and visual evaluation.

  3. Total-variation-based methods for gravitational wave denoising

    CERN Document Server

    Torres, Alejandro; Font, José A; Ibáñez, José M

    2014-01-01

    We describe new methods for the denoising and detection of gravitational waves embedded in additive Gaussian noise. The methods are based on Total Variation denoising algorithms. These algorithms, which do not need any a priori information about the signals, were originally developed and fully tested in the context of image processing. To illustrate the capabilities of our methods we apply them to two different types of numerically simulated gravitational wave signals, namely bursts produced from the core collapse of rotating stars and waveforms from binary black hole mergers. We explore the parameter space of the methods to find the set of values best suited for denoising gravitational wave signals under different conditions such as waveform type and signal-to-noise ratio. Our results show that noise from gravitational wave signals can be successfully removed with our techniques, irrespective of the signal morphology or astrophysical origin. We also combine our methods with spectrograms and show how those c...

  4. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation of received frequency signals from a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, images denoised by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
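    A hedged sketch of the classical second-moment Rician bias correction is shown below for comparison; the regression- and Monte-Carlo-based correction proposed in the paper is not reproduced, and the noise level sigma is an assumed input.

```python
import numpy as np

def rician_bias_correct(magnitude, sigma):
    """Classical second-moment bias correction for Rician-distributed magnitudes:
    since E[M^2] = A^2 + 2*sigma^2, estimate A as sqrt(max(M^2 - 2*sigma^2, 0))."""
    m = magnitude.astype(float)
    return np.sqrt(np.maximum(m ** 2 - 2.0 * sigma ** 2, 0.0))

denoised_magnitude = np.abs(np.random.randn(64, 64)) * 50 + 100   # stand-in data
corrected = rician_bias_correct(denoised_magnitude, sigma=20.0)
```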

  5. FETAL ULTRASOUND IMAGE DENOISING USING CURVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    J. Nithya

    2015-02-01

    Full Text Available The random speckle noise in acquired fetal ultrasound images is caused by the interference of reflected ultrasound wavefronts. The presence of speckle noise degrades the quality of the image and can even hide image details, which in turn affects image segmentation, feature extraction and recognition and, most importantly, disease diagnosis. The standardization of measurements from fetal ultrasound images helps physicians to make a correct diagnosis, and accurate diagnosis is possible only when the image is noise free, so it is very important to filter out the speckle noise. It is proposed that the curvelet transform serves as a better edge-preserving filter than speckle-reducing anisotropic diffusion filters. The curvelet transform is designed to handle images containing curves using only a small number of coefficients. Hence this multiscale representation is applied to enhance the visual quality of the ultrasound images. The experimental results indicate that the proposed curvelet denoising suppresses the noise effectively in both quantitative and visual terms, producing a high PSNR.

  6. A Denoising Filter Design based on No-Reference Image Content Metric

    Directory of Open Access Journals (Sweden)

    B.Padhmavathi

    2011-12-01

    Full Text Available Digital images are subject to a wide variety of distortions during acquisition, processing, compression, storage, transmission and reproduction, any of which may result in a degradation of their visual quality. Hence, there has been an increasing need to develop quality measurement techniques that can predict perceived image/video quality automatically. These methods are useful in various image/video processing applications such as compression, communication, printing, display, analysis, registration, restoration, and enhancement. Subjective quality metrics are considered to give the most reliable results, since it is the end user who judges the quality of the output in many applications, but they are costly, time-consuming and impractical for real-time implementation and system integration. On the other hand, objective metrics such as full-reference, reduced-reference, and no-reference metrics are most popular. This paper proposes an ideal no-reference measure that is useful for the parameter optimization problem and takes both the noise and the blur of the reconstructed image into account. The experimental results have shown that the technique works well with images containing various kinds of noise.

  7. Image denoising based on stationary ridgelet transform

    Institute of Scientific and Technical Information of China (English)

    徐巍; 孔建益; 陈东方

    2015-01-01

    The orthogonal wavelet transform is commonly used in the finite ridgelet transform (FRIT) to handle point singularities in the Radon transform domain. However, because the orthogonal wavelet transform is not redundant, image denoising using the FRIT produces Gibbs artifacts. In order to overcome these Gibbs interference fringes, this paper introduces a new concept, the stationary ridgelet transform (SRT), based on the FRIT, and proposes an SRT-based image denoising algorithm. The key of this algorithm is to process the Radon transform coefficient matrices with the one-dimensional stationary wavelet transform instead of the orthogonal wavelet transform. Experimental results show that the proposed algorithm has better noise reduction performance than the FRIT-based image denoising method: the denoised images retain detailed edge information and good visual quality, and the ringing effect is suppressed.

  8. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, so image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Evidently, our selection of the prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  9. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    International Nuclear Information System (INIS)

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in the degradation of image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied on projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from myocardial surrounding tissue and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Without photon attenuation conditions, denoising by platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large size defects, while for small size defects, as well as with photon attenuation conditions, all methods have demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, however not statistically significant (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the

  10. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    Science.gov (United States)

    Skiadopoulos, S.; Karatrantou, A.; Korfiatis, P.; Costaridou, L.; Vassilakos, P.; Apostolopoulos, D.; Panayiotakis, G.

    2009-10-01

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in the degradation of image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied on projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from myocardial surrounding tissue and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Without photon attenuation conditions, denoising by platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large size defects, while for small size defects, as well as with photon attenuation conditions, all methods have demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, however not statistically significant (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the

  11. DR Image Denoising Based on Laplace-Impact Mixture Model

    Institute of Scientific and Technical Information of China (English)

    丰国栋; 何祥彬; 周荷琴

    2009-01-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses the local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of the high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between the amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods, such as Bayes least squares with a Gaussian scale mixture and the Laplace prior, in removing Gaussian noise from DR images.

  12. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    Science.gov (United States)

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using an advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However due to the use of event related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy to apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogeneous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences. PMID:23060344

  13. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    Science.gov (United States)

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using an advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However due to the use of event related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy to apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogeneous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences.

  14. An Image Denoising Method Based on Similar Image Retrieval and Dictionary Learning

    Institute of Scientific and Technical Information of China (English)

    胡占强; 耿龙

    2016-01-01

    In order to analyze and understand images effectively, it is necessary to denoise them. This paper proposes a denoising method based on similar image retrieval and dictionary learning. Firstly, to improve the accuracy of image retrieval by raising the signal-to-noise ratio, an initial denoising is performed on the noisy image. Secondly, image retrieval based on SIFT features is carried out in an image library using the initially denoised image, and the matched similar images are used as samples for dictionary learning, which improves the correlation between the dictionary and the noisy image. Finally, high-frequency compensation is applied. Satellite images are used in denoising experiments to demonstrate the superiority of the proposed algorithm. Compared with traditional denoising methods, the proposed method not only obtains a better denoising effect but also, to a certain extent, effectively suppresses the loss of high-frequency information caused by the denoising process.

  15. Improved PDE image denoising method based on logarithmic image processing

    Institute of Scientific and Technical Information of China (English)

    郭茂银; 田有先

    2011-01-01

    Concerning the defects of the Logarithmic Image Processing-Total Variation (LIP-TV) denoising model, an improved Partial Differential Equation (PDE) image denoising method based on LIP was proposed. Based on LIP mathematical theory, a new LIP gradient operator was obtained by introducing four directional derivatives into the original operator; because it measures image information more comprehensively and objectively, it controls the diffusion process more effectively. The fidelity coefficient was constructed from a noise visibility function based on the structural characteristics of the human visual system, which further preserves edge details and avoids estimating the noise level artificially. Theoretical analysis and experimental results show that the improved method is superior in both visual effect and objective quality: it removes noise and preserves detailed edge features better than the LIP-TV method.

  16. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    Full Text Available We propose a strictly convex functional in which the regularization term consists of the total variation term and an adaptive, logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of traditional total variation (TV), Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the manually tuned parameters of the proposed algorithm are reduced to essentially one.

  17. Image Denoising And Enhancement Using Multiwavelet With Hard Threshold In Digital Mammographic Images

    Directory of Open Access Journals (Sweden)

    Kother Mohideen

    2011-01-01

    Full Text Available Breast cancer continues to be a significant public health problem in the world. Diagnostic mammography is the most effective technology for early detection of breast cancer. However, in some cases it is difficult for radiologists to detect typical diagnostic signs, such as masses and microcalcifications, on the mammograms. Dense regions in digital mammographic images are usually noisy and have low contrast, which makes visual screening difficult for physicians. This paper describes a new multiwavelet method for noise suppression and enhancement in digital mammographic images. Initially the image is pre-processed to improve its local contrast and the discrimination of subtle details. Noise suppression and edge enhancement are then performed based on the multiwavelet transform. At each resolution, the coefficients associated with noise are modelled by generalized Laplacian random variables. Multiwavelets can satisfy both symmetry and asymmetry, which are very important characteristics in digital image processing. The denoising result depends on the degree of noise: its energy is generally distributed over the low-frequency band, while both noise and details are distributed over the high-frequency band, so hard thresholding is applied in the different frequency sub-bands. The paper assesses the suitability of different wavelets and multiwavelets, and of the neighbourhood, for image denoising in terms of PSNR. Finally, wavelet and multiwavelet techniques are compared to produce the best denoised mammographic image using an efficient multiwavelet algorithm with hard thresholding, judged by the PSNR of the denoising results.

  18. Penalizing local correlations in the residual improves image denoising performance

    OpenAIRE

    Riot, Paul; Almansa, Andrès; Gousseau, Yann; Tupin, Florence

    2016-01-01

    In this work, we address the problem of denoising an image corrupted by an additive white Gaussian noise. This hypothesis on the noise, despite being very common and justified as the result of a variance normalization step, is hardly used by classical denoising methods. Indeed, very few methods directly constrain the whiteness of the residual (the removed noise). We propose a new variational approach defining generic fidelity terms to locally control the residual dis...

  19. Medical image denoising using dual tree complex thresholding wavelet transform and Wiener filter

    Directory of Open Access Journals (Sweden)

    Hilal Naimi

    2015-01-01

    Full Text Available Image denoising is the process of removing the noise from an image naturally corrupted by noise. The wavelet method is one among various methods for recovering infinite-dimensional objects like curves, densities, images, etc. Wavelet techniques are very effective for noise removal because of their ability to capture the energy of a signal in a few energy transform values. Wavelet methods are based on shrinking the wavelet coefficients in the wavelet domain. In this paper we propose a denoising approach based on the dual tree complex wavelet transform and shrinkage with the Wiener filter technique (where either hard or soft thresholding operators of the dual tree complex wavelet transform are used) for the denoising of medical images. The results proved that images denoised using the DTCWT (Dual Tree Complex Wavelet Transform) with the Wiener filter have a better balance between smoothness and accuracy than the DWT and are less redundant than the SWT (Stationary Wavelet Transform). We used the SSIM (Structural Similarity Index Measure) along with the PSNR (Peak Signal to Noise Ratio) and the SSIM map to assess the quality of the denoised images.

  20. Modified Method for Denoising the Ultrasound Images by Wavelet Thresholding

    Directory of Open Access Journals (Sweden)

    Alka Vishwa

    2012-06-01

    Full Text Available Medical practitioners are increasingly using digital images during disease diagnosis. Several state-of-the-art medical devices produce images of different organs, which are used during various stages of analysis; examples include MRI, CT, ultrasound and X-ray equipment. In medical image processing, image denoising has become a very essential exercise throughout diagnosis, as ultrasound images are normally affected by speckle noise. The noise has two negative outcomes: first, it degrades image quality, and second, and more importantly, it obscures information required for accurate diagnosis. A trade-off between the preservation of useful diagnostic information and noise suppression must be found in medical images. In general we rely on the intervention of an expert to control the quality of processed images. In certain cases, for instance in ultrasound images, the noise can suppress information which is valuable to the general practitioner. Consequently medical images can be very inconsistent, and it is crucial to operate case by case. This paper presents a wavelet-based thresholding scheme for noise suppression in ultrasound images and provides an overview of adaptive and anisotropic diffusion techniques for speckle noise removal from different types of images, such as ultrasound.

  1. Medical Image De-Noising Schemes using Wavelet Transform with Fixed form Thresholding

    Directory of Open Access Journals (Sweden)

    Nadir Mustafa

    2015-10-01

    Full Text Available Medical imaging is currently a hot area for bio-medical engineers, researchers and medical doctors, as it is extensively used by health care institutes in diagnosing human health. The imaging equipment is used for image processing and for highlighting important features. These images are affected by random noise during acquisition, analysis and transmission, which results in blurry images with low contrast. An Image De-noising System (IDS) is used as a tool for removing image noise while preserving important data. Image de-noising is one of the most interesting research areas among researchers of technology giants and academic institutions. For Criminal Identification Systems (CIS) and Magnetic Resonance Imaging (MRI), an IDS is particularly beneficial in the field of medical imaging. This paper proposes an algorithm for de-noising medical images using different types of wavelets, such as Haar, Daubechies, Symlets and Bi-orthogonal. Noisy image quality has been evaluated using filter assessment parameters such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and variance. The numerical results show that the proposed algorithm reduces the mean square error and achieves the best peak signal to noise ratio (PSNR). The wavelet-based de-noising algorithm, together with thresholding, is investigated on medical images.
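
To make the comparison concrete, here is a minimal sketch (not the paper's exact scheme) that thresholds several wavelet families with PyWavelets and scores the results with MSE and PSNR; the test image, noise level and universal-style threshold are assumptions.

```python
# Sketch: compare wavelet families for de-noising via hard thresholding,
# scored with MSE and PSNR. Uses PyWavelets, NumPy and scikit-image test data.
import numpy as np
import pywt
from skimage import data, img_as_float

clean = img_as_float(data.camera())                     # stand-in for a medical image
rng = np.random.default_rng(0)
noisy = clean + 0.08 * rng.standard_normal(clean.shape)

def wavelet_denoise(img, wavelet, level=3, k=3.0):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = k * sigma
    out = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode='hard') for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)[:img.shape[0], :img.shape[1]]

for wavelet in ['haar', 'db4', 'sym8', 'bior3.5']:
    den = wavelet_denoise(noisy, wavelet)
    mse = np.mean((den - clean) ** 2)
    psnr = 10 * np.log10(1.0 / mse)                     # images are in [0, 1]
    print(f'{wavelet:8s}  MSE={mse:.5f}  PSNR={psnr:.2f} dB')
```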

  2. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
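
A CPU stand-in for the parameter study described above is sketched below: it sweeps the bilateral filter's window size and range-scaling parameter and scores the result with MSE and mean SSIM against a reference image. The GPU kernels, autotuning and 3D MR data of the study are not reproduced; the test image and parameter grids are assumptions.

```python
# Sketch: bilateral-filter parameter sweep scored by MSE and mean SSIM.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_bilateral
from skimage.metrics import mean_squared_error, structural_similarity

reference = img_as_float(data.camera())          # stand-in for a noise-free reference
rng = np.random.default_rng(0)
noisy = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

for win_size in (3, 5, 7):                       # "stencil size"
    for sigma_color in (0.05, 0.1, 0.2):         # range (intensity) scaling parameter
        den = denoise_bilateral(noisy, win_size=win_size,
                                sigma_color=sigma_color, sigma_spatial=2.0)
        mse = mean_squared_error(reference, den)
        mssim = structural_similarity(reference, den, data_range=1.0)
        print(f'win={win_size} sigma_color={sigma_color:.2f} '
              f'MSE={mse:.5f} MSSIM={mssim:.3f}')
```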

  3. Image Restoration and Denoising By Using Nonlocally Centralized Sparse Representation and Histogram Clipping

    Directory of Open Access Journals (Sweden)

    Dr. T. V. S. Prasad Gupta

    2014-10-01

    Full Text Available Noisy, blurred and distorted images can occur due to the degradation of the observed image. For restoring the image information, the sparse representations given by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, the sparse coding noise is considered, so that the sparse coefficients of the original image can be recovered. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model; for denoising, a histogram clipping method is used, and the histogram-based sparse representation effectively reduces the noise. A TMR filter is also implemented for image quality. Various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed algorithm.

  4. A novel super resolution reconstruction of low reoslution images progressively using dct and zonal filter based denoising

    CERN Document Server

    Liyakathunisa,

    2011-01-01

    Due to factors like processing power limitations and channel capabilities, images are often down sampled and transmitted at low bit rates, resulting in a low resolution compressed image. High resolution images can be reconstructed from several blurred, noisy and down sampled low resolution images using a computational process known as super resolution reconstruction. Super-resolution is the process of combining multiple aliased low-quality images to produce a high resolution, high-quality image. The problem of recovering a high resolution image progressively from a sequence of low resolution compressed images is considered. In this paper we propose a novel DCT based progressive image display algorithm by stressing the encoding and decoding process. At the encoder we consider a set of low resolution images which are corrupted by additive white Gaussian noise and motion blur. The low resolution images are compressed using 8 by 8 block DCT and the noise is filtered using our proposed novel zonal filter. Multifr...

  5. Quaternion Wavelet Analysis and Application in Image Denoising

    Directory of Open Access Journals (Sweden)

    Ming Yin

    2012-01-01

    Full Text Available The quaternion wavelet transform is a new multiscale analysis tool. Firstly, this paper studies the standard orthogonal basis of the scale space and wavelet space of the quaternion wavelet transform in L2(R2), proves and presents the concepts of the quaternion wavelet's scale basis function and wavelet basis function in the scale space L2(R2;H), and studies the structure of the quaternion wavelet transform. Finally, the quaternion wavelet transform is applied to image denoising: a generalized Gaussian distribution is used to model the magnitude distribution of the QWT coefficients, and, under the Bayesian framework, the original coefficients are recovered from the noisy wavelet coefficients so as to achieve denoising. Experimental results show that our method is not only better than many current denoising methods in terms of peak signal to noise ratio (PSNR), but also obtains a better visual effect.

  6. Image restoration using regularized inverse filtering and adaptive threshold wavelet denoising

    Directory of Open Access Journals (Sweden)

    Mr. Firas Ali

    2007-01-01

    Full Text Available Although Wiener filtering is the optimal tradeoff between inverse filtering and noise smoothing, when the blurring filter is singular the Wiener filter actually amplifies the noise. This suggests that a denoising step is needed to remove the amplified noise, and wavelet-based denoising provides a natural technique for this purpose. In this paper a new image restoration scheme is proposed; the scheme contains two separate steps: Fourier-domain inverse filtering and wavelet-domain image denoising. The first stage is Wiener filtering of the input image; the filtered image is then passed to an adaptive-threshold wavelet denoising stage. The threshold estimate is chosen by analyzing statistical parameters of the wavelet subband coefficients, such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands, and soft thresholding is then used to remove the noisy coefficients by fixing the optimum threshold value. Experimental results on a test image show that this method yields significantly superior image quality and a better Peak Signal to Noise Ratio (PSNR). To prove the efficiency of this method in image restoration, we have compared it with various restoration methods such as the Wiener filter alone and the inverse filter.
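
A minimal two-stage sketch in the spirit of this scheme is shown below: regularized (Wiener-type) deconvolution in the Fourier domain, followed by wavelet-domain soft thresholding. The Gaussian PSF, noise level, wavelet choice and threshold rule are illustrative assumptions, not the paper's settings.

```python
# Sketch: Wiener-type deconvolution followed by wavelet soft thresholding.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from skimage import data, img_as_float
from skimage.restoration import wiener

clean = img_as_float(data.camera())
psf = np.zeros((15, 15))
psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.0)            # assumed Gaussian blur kernel
psf /= psf.sum()
rng = np.random.default_rng(0)
degraded = gaussian_filter(clean, sigma=2.0) + 0.01 * rng.standard_normal(clean.shape)

# Stage 1: regularized inverse (Wiener-type) filtering in the Fourier domain.
deblurred = wiener(degraded, psf, balance=0.05)

# Stage 2: wavelet soft thresholding of the amplified noise.
coeffs = pywt.wavedec2(deblurred, 'db4', level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
den_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, 2.0 * sigma, mode='soft') for d in detail)
    for detail in coeffs[1:]
]
restored = pywt.waverec2(den_coeffs, 'db4')[:clean.shape[0], :clean.shape[1]]
```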

  7. Computed tomography perfusion imaging denoising using Gaussian process regression

    Science.gov (United States)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation, so methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
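
The temporal idea can be sketched for a single voxel's time-concentration curve with scikit-learn's Gaussian process regressor; the 4D perfusion data and CNR evaluation of the study are not reproduced, and the kernel choice, bolus curve and noise level are assumptions.

```python
# Sketch: Gaussian process regression applied to one voxel's time-concentration curve.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 61)                               # seconds after injection
gamma = (t / 15.0) ** 3 * np.exp(-t / 5.0)               # gamma-variate-like bolus curve
noisy_curve = gamma + 0.05 * rng.standard_normal(t.shape)

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05 ** 2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t.reshape(-1, 1), noisy_curve)

denoised_curve, std = gpr.predict(t.reshape(-1, 1), return_std=True)
```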

  8. A Novel Super Resolution Reconstruction of Low Reoslution Images Progressively Using DCT and Zonal Filter Based Denoising

    Directory of Open Access Journals (Sweden)

    Liyakathunisa

    2011-02-01

    Full Text Available Due to factors like processing power limitations and channel capabilities, images are often down sampled and transmitted at low bit rates, resulting in a low resolution compressed image. High resolution images can be reconstructed from several blurred, noisy and down sampled low resolution images using a computational process known as super resolution reconstruction. Super-resolution is the process of combining multiple aliased low-quality images to produce a high resolution, high-quality image. The problem of recovering a high resolution image progressively from a sequence of low resolution compressed images is considered. In this paper we propose a novel DCT based progressive image display algorithm by stressing the encoding and decoding process. At the encoder we consider a set of low resolution images which are corrupted by additive white Gaussian noise and motion blur. The low resolution images are compressed using 8 by 8 block DCT and the noise is filtered using our proposed novel zonal filter. Multiframe fusion is performed in order to obtain a single noise-free image. At the decoder the image is reconstructed progressively by transmitting the coarser image first, followed by the detail image. Finally, a super resolution image is reconstructed by applying our proposed novel adaptive interpolation technique. We have performed both objective and subjective analysis of the reconstructed image, and the resultant image has a better super resolution factor and higher ISNR and PSNR. A comparative study with Iterative Back Projection (IBP), Projection onto Convex Sets (POCS), Papoulis-Gerchberg and FFT-based super resolution reconstruction shows that our method outperforms these previous contributions.

  9. A new study on mammographic image denoising using multiresolution techniques

    Science.gov (United States)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for early detection of breast cancer. However, the lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work discusses various multiresolution denoising techniques, including the classical methods based on wavelets and contourlets, as well as emerging multiresolution methods. A new denoising method based on the dual tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining edges and texture details, and obtains better performance than the other methods both in visual effect and in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) values.

  10. Denoising of Medical Images Using Total Variational Method

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2012-05-01

    Full Text Available Feature extraction and object recognition from images acquired by various imaging modalities play a key role in diagnosing various diseases. These operations become difficult if the images are corrupted with noise, so the development of efficient algorithms for noise removal has become an important research area. Developing image denoising algorithms is a difficult task because fine details in a medical image embedding diagnostic information should not be destroyed during noise removal. In this paper the total variation method, which has had success in computational fluid dynamics, is adopted to denoise medical images. We use the split Bregman method from optimisation theory to find the solution to this non-linear convex optimisation problem. The present approach outperforms traditional spatial-domain filtering methods in denoising medical images. The performance metric used to measure the quality of the denoised images is the PSNR (peak signal to noise ratio). The results show that these methods remove the noise effectively while preserving the edge information in the images.
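
A small sketch of total-variation denoising solved by the split Bregman method, using scikit-image's built-in solver and PSNR as the quality metric, is given below; the weight and noise level are illustrative assumptions rather than the paper's settings.

```python
# Sketch: split-Bregman total-variation denoising scored by PSNR.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_bregman
from skimage.metrics import peak_signal_noise_ratio

clean = img_as_float(data.camera())                # stand-in for a medical image
rng = np.random.default_rng(0)
noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)

denoised = denoise_tv_bregman(noisy, weight=8.0)   # larger weight -> more fidelity, less smoothing
print('noisy    PSNR:', peak_signal_noise_ratio(clean, noisy, data_range=1.0))
print('denoised PSNR:', peak_signal_noise_ratio(clean, denoised, data_range=1.0))
```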

  11. Multi-level denoising and enhancement method based on wavelet transform for mine monitoring

    Institute of Scientific and Technical Information of China (English)

    Yanqin Zhao

    2013-01-01

    Because of the low illumination and the large amount of mixed noise in coal mine images, denoising with a single method usually cannot achieve good results, so a multi-level image denoising method based on inter-scale wavelet correlation is presented. Firstly, a directional median filter is used in the spatial domain to effectively reduce the impulse noise, which is the main source of noise in the mine. Secondly, a Wiener filter is used mainly to reduce the Gaussian noise, and finally a multi-wavelet transform is used to minimize the remaining noise of the low-light images in the transform domain. This multi-level noise reduction method combines spatial- and transform-domain denoising to enhance their benefits, effectively reducing the impulse and Gaussian noise in a coal mine while retaining the detailed image characteristics of the underground scene, thereby improving the quality of images acquired in a noisy, low-light environment.

  12. Overview of sparse image denoising

    Institute of Scientific and Technical Information of China (English)

    郭德全; 杨红雨; 刘东权; 何文森

    2012-01-01

    Image denoising through sparse and redundant representation modeling has become well acknowledged as an important approach to image denoising in recent years. This paper gives an overview of sparse-model denoising based on an understanding and analysis of recent domestic and international literature. It first reviews the development of sparse denoising research and clarifies the principle and noise model of sparse denoising. It then summarizes the methods used in the sparse denoising procedure and introduces sparse decomposition and reconstruction in the image denoising process. In addition, it describes other denoising methods, such as wavelet denoising, multi-scale geometric analysis (MGA) denoising and independent component analysis denoising, and compares the traditional notions of sparsity involved in these methods with the current sparse and redundant representation models. Finally, based on this analysis, some open problems and future directions of sparse denoising research are pointed out.

  13. Denoising ECG signal based on ensemble empirical mode decomposition

    Science.gov (United States)

    Zhi-dong, Zhao; Liu, Juan; Wang, Sheng-tao

    2011-10-01

    The electrocardiogram (ECG) has been used extensively for the detection of heart disease. Frequently the signal is corrupted by various kinds of noise such as muscle noise, electromyogram (EMG) interference, instrument noise, etc. In this paper, a new ECG denoising method is proposed based on the recently developed ensemble empirical mode decomposition (EEMD). The noisy ECG signal is decomposed into a series of intrinsic mode functions (IMFs), and the statistically significant information content is built using an empirical energy model of the IMFs. Noisy ECG signals collected from clinical recordings are processed using the method. The results show that, in contrast with traditional methods, the novel denoising method achieves optimal denoising of the ECG signal.
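
The EEMD step can be sketched with the PyEMD package (an assumption; installed as EMD-signal). The synthetic "ECG-like" signal and the choice of how many high-frequency IMFs to drop stand in for the paper's empirical energy model.

```python
# Sketch: EEMD-based denoising by discarding the highest-frequency IMFs.
import numpy as np
from PyEMD import EEMD

rng = np.random.default_rng(0)
fs = 360
t = np.arange(0, 5, 1 / fs)
ecg_like = 1.2 * np.sin(2 * np.pi * 1.0 * t) + 0.4 * np.sin(2 * np.pi * 8.0 * t)
noisy = ecg_like + 0.3 * rng.standard_normal(t.shape)

eemd = EEMD(trials=50, noise_width=0.2)
imfs = eemd.eemd(noisy, t)              # rows are IMFs, ordered high to low frequency

n_drop = 2                              # assumed: the first IMFs carry mostly noise
denoised = imfs[n_drop:].sum(axis=0)    # partial reconstruction without the noisy IMFs
```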

  14. Denoising Algorithm Based on Generalized Fractional Integral Operator with Two Parameters

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2012-01-01

    Full Text Available In this paper, a novel digital image denoising algorithm called the generalized fractional integral filter is introduced, based on the generalized Srivastava-Owa fractional integral operator. The structures of the n×n fractional masks of this algorithm are constructed. The denoising performance is measured in experiments using visual perception and PSNR values. The results demonstrate that, apart from enhancing the quality of the filtered image, the proposed algorithm also preserves the textures and edges present in the image. Experiments also show that the improvements achieved are competitive with the Gaussian smoothing filter.

  15. Image denoising based on a Poisson-like noise model

    Institute of Scientific and Technical Information of China (English)

    赵梦柳; 李宏伟

    2012-01-01

    In computed tomography, the projection data are usually assumed to be corrupted by Poisson noise. However, there is no specific noise model for CT images reconstructed from sinogram data. Nevertheless, the gray values of CT images exhibit some characteristics of a Poisson distribution, so CT images can be regarded as being corrupted by Poisson-like noise. In this paper, we propose a denoising model for Poisson-like noise, whose effectiveness has been validated by numerical experiments on synthesized as well as real CT data. Furthermore, our model can be generalized to deal with mixed-model noise or even noise of unknown type. A fast numerical algorithm is also developed based on a dual formulation and an iterative relaxation technique.

  16. 3D Wavelet Sub-Bands Mixing for Image De-noising and Segmentation of Brain Images

    Directory of Open Access Journals (Sweden)

    Joyjit Patra

    2016-07-01

    Full Text Available A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D block-wise version of the Non-Local (NL) Means filter with wavelet sub-band mixing. The proposed wavelet sub-band mixing is based on a multi-resolution approach for improving the quality of the image de-noising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet sub-band mixing outperforms the classical implementation of the NL-means filter in terms of de-noising quality and computation time. Comparison with well established methods, such as the non-linear diffusion filter and total variation minimization, shows that the proposed NL-means filter produces better de-noising results. Finally, qualitative results on real data are presented. This paper also presents an algorithm for medical 3D image de-noising and segmentation using the redundant discrete wavelet transform. First, we present a two-stage de-noising algorithm using the image fusion concept. The algorithm starts by globally de-noising the brain images (3D volume) using Perona-Malik's algorithm and RDWT-based algorithms, followed by combining the outputs using an entropy-based fusion approach. Next, a region segmentation algorithm is proposed using texture information and k-means clustering. The proposed algorithms are evaluated using brain 3D image/volume data. The results suggest that the proposed algorithms provide improved performance compared to existing algorithms.

  17. A Robust and Fast Non-Local Means Algorithm for Image Denoising

    Institute of Scientific and Technical Information of China (English)

    Yan-Li Liu; Jin Wang; Xi Chen; Yan-Wen Guo; Qun-Sheng Peng

    2008-01-01

    In this paper, we propose a robust and fast image denoising method. The approach integrates both the Non-Local Means algorithm and the Laplacian pyramid. Given an image to be denoised, we first decompose it into a Laplacian pyramid. Exploiting the redundancy property of the Laplacian pyramid, we then perform non-local means on every level image of the pyramid. Essentially, we use the similarity of image features in the Laplacian pyramid as weights to denoise the image. Since the features extracted in the Laplacian pyramid are localized in spatial position and scale, they describe the image much better, and computing the similarity between them is more reasonable and more robust. Also, based on the efficient Summed Square Image (SSI) scheme and the Fast Fourier Transform (FFT), we present an accelerating algorithm to break the bottleneck of the non-local means algorithm, namely the similarity computation of compare windows. After the speedup, our algorithm is fifty times faster than the original non-local means algorithm. Experiments demonstrate the effectiveness of our algorithm.

  18. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Variable noise appears on the images in a video once the playback device is irradiated by γ-rays, which degrades image clarity. In order to eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve the image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the video de-noising scheme based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)

  19. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    Science.gov (United States)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam Computed Tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to obtain a principle for choosing them in realistic denoising. The results of the corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters presents an excellent denoising effect on realistic 3D CBCT images.

  20. [A fast non-local means algorithm for denoising of computed tomography images].

    Science.gov (United States)

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: pre-processing and actual processing. The sample neighborhood database is created via the data structure of locality sensitive hashing in the pre-processing stage. The CT image noise is then removed by the non-local means algorithm, based on the sample neighborhoods accessed quickly through locality sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time, as compared to NLM, and effectively preserved the image edges and details.
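
Not the paper's locality-sensitive-hashing acceleration, but an off-the-shelf fast non-local means (OpenCV) can serve as a baseline on a CT slice rescaled to 8 bits; the synthetic slice and the filter parameters below are assumptions.

```python
# Sketch: OpenCV fast non-local means as a baseline for CT slice denoising.
import cv2
import numpy as np

ct_slice = np.random.default_rng(0).normal(100.0, 20.0, (512, 512))  # stand-in HU values
lo, hi = ct_slice.min(), ct_slice.max()
ct_u8 = np.uint8(255 * (ct_slice - lo) / (hi - lo))       # scale to 8-bit for OpenCV

denoised_u8 = cv2.fastNlMeansDenoising(ct_u8, None,
                                       h=10,               # filtering strength
                                       templateWindowSize=7,
                                       searchWindowSize=21)
denoised = denoised_u8.astype(np.float64) / 255 * (hi - lo) + lo  # back to original scale
```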

  1. Hand Depth Image Denoising and Superresolution via Noise-Aware Dictionaries

    Directory of Open Access Journals (Sweden)

    Huayang Li

    2016-01-01

    Full Text Available This paper proposes a two-stage method for hand depth image denoising and superresolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP) based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data using neighborhood pixels on which both depth difference and RGB similarity restrictions are imposed. The dictionary learning phase uses NAOMP to train dictionaries which separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual reduction step which effectively weakens the noise term within the residual during the residual decomposition in terms of atoms. Experimental results demonstrate that the bilateral phase and the NAOMP-based dictionary learning phase jointly denoise both virtual and real depth images effectively.

  2. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in the techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise co-variances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using the images for training. The results of simulation showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)

  3. Infrared image denoising based on a regularized particle resampling algorithm

    Institute of Scientific and Technical Information of China (English)

    陈淑静; 马天才

    2009-01-01

    To address the particle filter degeneracy problem in infrared image denoising, a regularized particle resampling algorithm is proposed. The algorithm obtains a particle cloud {(x_k^j, n_j)}_{j=1}^m by resampling the particle set, which restores particle diversity and overcomes particle impoverishment. An auxiliary particle v is then added to mark the particles whose weights will be large for the next observation, making the particle weights ω_k^i ∝ p(x_k | y_{k-1}^i) / p(x_k | μ_{k-1}^i) more stable. An infrared image denoising model for a moving object is then given. Simulation experiments show that, by adding auxiliary particles, the regularized particle resampling algorithm achieves a good infrared image denoising effect, with an image clarity above 95%.

  4. Image Pretreatment Tools I: Algorithms for Map Denoising and Background Subtraction Methods.

    Science.gov (United States)

    Cannistraci, Carlo Vittorio; Alessio, Massimo

    2016-01-01

    One of the critical steps in two-dimensional electrophoresis (2-DE) image pre-processing is denoising, which can strongly affect either spot detection or pixel-based methods. The Median Modified Wiener Filter (MMWF), a new nonlinear adaptive spatial filter, proved to be a good denoising approach to use in practice with 2-DE. MMWF is suitable for global denoising and for the simultaneous removal of spikes and Gaussian noise, its best setting being invariant to the type of noise. A second critical step arises from the fact that 2-DE gel images may contain high levels of background, generated by the laboratory experimental procedures, which must be subtracted for accurate measurement of the proteomic optical density signals. Here we discuss an efficient mathematical method for background estimation that is suitable to apply even before 2-DE image spot detection, and is based on 3D mathematical morphology (3DMM) theory. PMID:26611410

  5. EMD-based Adaptive Wavelet Threshold for Pulse Wave Denoising

    Institute of Scientific and Technical Information of China (English)

    XU Li-sheng; SHEN Yan-hua; ZHONG Yue; KANG Yan; Max Q-H Meng

    2015-01-01

    It is inevitable that noise will be introduced during the acquisition of pulse wave signals, which can change the morphology of the original pulse wave and affect hemodynamic analysis and diagnosis based on pulse wave signals. In order to remove this noise, an adaptive de-noising method based on empirical mode decomposition (EMD) and wavelet thresholding is proposed in this paper. Compared with the wavelet threshold method for denoising pulse waves, the proposed approach is more effective, especially at low signal-to-noise ratios.

  6. TRANSLATION-INVARIANT BASED ADAPTIVE THRESHOLD DENOISING FOR IMPACT SIGNAL

    Institute of Scientific and Technical Information of China (English)

    Gai Guanghong; Qu Liangsheng

    2004-01-01

    A translation-invariant based adaptive threshold denoising method for mechanical impact signals is proposed. Compared with traditional wavelet denoising methods, it suppresses pseudo-Gibbs phenomena in the neighborhood of signal discontinuities. To remedy the drawbacks of conventional threshold functions, a new improved threshold function is introduced, which possesses more advantages than others. Moreover, based on the characteristics of the signal, an adaptive threshold selection procedure for impact signals is proposed. It is data-driven and level-dependent, and is therefore more rational than other threshold estimation methods. The proposed method is compared to existing alternatives, and its superiority is demonstrated by simulation and real data examples.
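
The translation-invariant idea can be sketched by cycle spinning (averaging denoised circular shifts) around an ordinary wavelet soft-threshold step; the impact-like signal, wavelet and universal-threshold rule are illustrative assumptions and not the paper's adaptive, level-dependent procedure.

```python
# Sketch: translation-invariant (cycle-spin) wavelet denoising of a 1-D impact signal.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
impact = np.exp(-80 * t) * np.sin(2 * np.pi * 50 * t)     # decaying impact response
noisy = impact + 0.1 * rng.standard_normal(n)

def wavelet_soft(x, wavelet='sym8', level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))              # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

shifts = 16
denoised = np.mean(
    [np.roll(wavelet_soft(np.roll(noisy, s)), -s) for s in range(shifts)],
    axis=0,
)
```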

  7. Vibrator Data Denoising Based on Fractional Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Zheng Jing

    2015-06-01

    Full Text Available In this paper, a novel data denoising method is proposed for seismic exploration with a vibrator, which produces a chirp-like signal. The method is based on the fractional wavelet transform (FRWT), which is similar to the fractional Fourier transform (FRFT). It can represent signals in the fractional domain and has the multi-resolution analysis advantages of the wavelet transform (WT). The fractional wavelet transform can process the reflected chirp signal as a pulse seismic signal and decompose it into a multi-resolution domain for denoising. Compared with other methods, FRWT offers a wavelet transform for signal analysis in the time-fractional-frequency plane, which is suitable for processing vibratory seismic data. It not only achieves better denoising performance, but also improves the quality and continuity of the reflection events.

  8. Denoising of Medical Ultrasound Images Using Spatial Filtering and Multiscale Transforms

    Directory of Open Access Journals (Sweden)

    V N Prudhvi Raj

    2013-01-01

    Full Text Available Medical imaging has become an integral part of health care, where critical diagnoses such as blocks in the veins, plaques in the carotid arteries, minute fractures in the bones, blood flow in the brain, etc. are carried out without opening the patient's body. There are various imaging modalities for different applications to observe the anatomical and physiological conditions of the patient, and these modalities introduce noise and artifacts during medical image acquisition. If the noise and artifacts are not minimised, diagnosis becomes difficult. One widely used non-invasive modality is ultrasound imaging, which involves no radiation but suffers from speckle noise produced by small particles in the tissues whose size is less than the wavelength of the ultrasound. The presence of speckle noise causes low-contrast images, so low-contrast lesions and tumours cannot be detected in the diagnostic phase. There is therefore a strong need to develop despeckling techniques to improve the quality of ultrasound images. In this paper we present denoising techniques for speckle reduction in ultrasound imaging. First we present various spatial filters and their suitability for reducing speckle. Then we develop denoising methods using multiscale transforms such as the Discrete Wavelet Transform (DWT), the Undecimated Discrete Wavelet Transform (UDWT), the dual tree complex wavelet transform (DTCDWT) and the double density dual tree complex wavelet transform (DDDTCDWT). The performance of the filters was evaluated using various pixel-based, correlation-based, edge-based and Human Visual System (HVS) based metrics, and we found that denoising using the double density dual tree complex discrete wavelet transform performs best, with the best edge-preserving features.

  9. APPLICATION OF SUBBAND ADAPTIVE THRESHOLDING TECHNIQUE WITH NEIGHBOURHOOD PIXEL FILTERING FOR DENOISING MRI IMAGES

    Directory of Open Access Journals (Sweden)

    S. KALAVATHY

    2012-02-01

    Full Text Available The de-noising of images naturally corrupted by noise is a classical problem in the field of signal and image processing. Image denoising has become an essential exercise in medical imaging, especially Magnetic Resonance Imaging (MRI). We propose a new method for MRI restoration, since MR magnitude images suffer from a contrast-reducing, signal-dependent bias. The noise is often assumed to be white; however, a widely used acquisition technique that decreases the acquisition time gives rise to correlated noise. A subband adaptive thresholding technique based on wavelet coefficients, together with a Neighbourhood Pixel Filtering Algorithm (NPFA), is presented in this paper for noise suppression in Magnetic Resonance Images (MRI). A statistical model is proposed to estimate the noise variance for each coefficient in each subband using a Maximum Likelihood (ML) or Maximum a Posteriori (MAP) estimator. The model also describes a new method for noise suppression that fuses the wavelet denoising technique with an optimized thresholding function; this is achieved by including a multiplying factor (α) to make the threshold value depend on the decomposition level. By computing the Neighbourhood Pixel Difference (NPD) and adding the NPFA to the subband thresholding, the clarity of the image is improved. The filtered value is generated by minimizing the NPD and the Weighted Mean Square Error (WMSE) using the method of least squares. A reduction in noisy pixels is clearly observed when the optimal weight, namely the NPFA filter solution, replaces the noisy value of the current pixel; the NPFA filter thereby gains the effect of both a high-pass and a low-pass filter. Hence the proposed technique yields significantly superior image quality by preserving the edges and producing a better PSNR value. To confirm its efficiency, it is further compared with the median filter, the Wiener filter, and the subband thresholding technique combined with the NPFA filter.

  10. Statistics of Natural Stochastic Textures and Their Application in Image Denoising.

    Science.gov (United States)

    Zachevsky, Ido; Zeevi, Yehoshua Y Josh

    2016-05-01

    Natural stochastic textures (NSTs), characterized by their fine details, are prone to corruption by artifacts, introduced during the image acquisition process by the combined effect of blur and noise. While many successful algorithms exist for image restoration and enhancement, the restoration of natural textures and textured images based on suitable statistical models has yet to be further improved. We examine the statistical properties of NST using three image databases. We show that the Gaussian distribution is suitable for many NST, while other natural textures can be properly represented by a model that separates the image into two layers; one of these layers contains the structural elements of smooth areas and edges, while the other contains the statistically Gaussian textural details. Based on these statistical properties, an algorithm for the denoising of natural images containing NST is proposed, using patch-based fractional Brownian motion model and regularization by means of anisotropic diffusion. It is illustrated that this algorithm successfully recovers both missing textural details and structural attributes that characterize natural images. The algorithm is compared with classical as well as the state-of-the-art denoising algorithms. PMID:27045423

  11. Motion-Blurred Image Denoising Algorithm Based on Wavelet Threshold Compression

    Institute of Scientific and Technical Information of China (English)

    李敏; 郭磊

    2015-01-01

    Motion-blurred images acquired in strongly interfering environments usually contain a large amount of noise, which makes detailed analysis difficult, so noise reduction filtering is required. Traditional methods use wavelet-analysis-based detail filtering for noise reduction, but their performance is poor on the corner-offset regions of images in motion scenes. A motion-blurred image denoising algorithm based on wavelet threshold compression is therefore proposed. A wavelet analysis model of the blurred motion image is constructed, the wavelet threshold compression method is used for corner detection in the motion-blurred image, and a layer-wise wavelet threshold compression library containing the corners is formed to realize the noise reduction filtering. Simulation results show that the algorithm can effectively denoise and filter images, improving image quality and the peak signal-to-noise ratio.

  12. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    Science.gov (United States)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and the principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as small distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results show also that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.

  13. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted for processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages for the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms in ECG signal denoising. The signal to noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals denoised by the proposed method coincide with those of the original ECG signals.
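
One plausible sigmoid-based compromise between hard and soft thresholding is sketched below with PyWavelets; the exact function of the paper is not reproduced, and the synthetic ECG-like signal, wavelet, level and steepness parameter are assumptions.

```python
# Sketch: a sigmoid-shaped shrinkage function between hard and soft thresholding.
import numpy as np
import pywt

def sigmoid_threshold(c, thr, alpha=3.0):
    # Smooth shrinkage: large |c| pass almost unchanged, small |c| are strongly
    # attenuated (but not exactly zeroed); continuous everywhere, unlike hard thresholding.
    shrink = 1.0 / (1.0 + np.exp(-alpha * (np.abs(c) - thr) / thr))
    return c * shrink

rng = np.random.default_rng(0)
fs = 360
t = np.arange(0, 4, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)
noisy = ecg_like + 0.2 * rng.standard_normal(t.shape)

coeffs = pywt.wavedec(noisy, 'db6', level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(len(noisy)))
coeffs = [coeffs[0]] + [sigmoid_threshold(c, thr) for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db6')[:len(noisy)]
```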

  14. Semi-implicit Image Denoising Algorithm for Different Boundary Conditions

    Directory of Open Access Journals (Sweden)

    Yuying Shi

    2013-04-01

    Full Text Available In this paper, the Crank-Nicolson semi-implicit difference scheme in matrix form is applied to discretize the Rudin-Osher-Fatemi model. We also consider different boundary conditions: Dirichlet, periodic, Neumann, antireflective and mean boundary conditions. By comparing the experimental results of the Crank-Nicolson semi-implicit scheme and the explicit scheme under the proposed boundary conditions, we find that the semi-implicit scheme overcomes the instability and the large number of iterations from which the explicit scheme suffers, and that its restoration results are better. In addition, the antireflective and Neumann boundary conditions better maintain the continuity of the boundary in image denoising.
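
The paper's Crank-Nicolson matrix formulation is not reproduced here, but the role of the boundary conditions can be illustrated with a simple explicit gradient-descent step on the ROF energy, where the padding mode realizes the boundary condition ('edge' approximates Neumann, 'wrap' periodic, 'reflect' is close to antireflective). Step size, weight and iteration count are assumptions.

```python
# Sketch: explicit ROF-type diffusion with the boundary condition set via np.pad.
import numpy as np

def rof_explicit(f, lam=8.0, dt=0.1, n_iter=200, eps=1e-2, pad_mode='edge'):
    u = f.copy()
    for _ in range(n_iter):
        up = np.pad(u, 1, mode=pad_mode)              # boundary condition lives here
        ux = up[1:-1, 2:] - up[1:-1, 1:-1]            # forward differences
        uy = up[2:, 1:-1] - up[1:-1, 1:-1]
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        pxp = np.pad(px, 1, mode=pad_mode)
        pyp = np.pad(py, 1, mode=pad_mode)
        div = (pxp[1:-1, 1:-1] - pxp[1:-1, :-2]) + (pyp[1:-1, 1:-1] - pyp[:-2, 1:-1])
        u = u + dt * (div - lam * (u - f))            # gradient descent on the ROF energy
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = rof_explicit(noisy, pad_mode='edge')       # try 'wrap' or 'reflect' as well
```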

  15. Multi-focus Image Fusion Using De-noising and Sharpness Criterion

    Directory of Open Access Journals (Sweden)

    sukhdip kaur

    2013-01-01

    Full Text Available The concept of multi-focus image fusion is used to combine multiple images with different objects in focus in order to obtain an image with all objects in focus and better information about the scene. The challenge is how to evaluate the information in the input images while preserving image quality. To solve this problem, a new criterion is proposed to yield a better quality image using PCA, de-noising and a bilateral-gradient-based sharpness criterion evaluated from the gradient information of the images. The proposed method is then used to perform a weighted aggregation of the multi-focus images. The experimental results show that the proposed method is better than the other methods in terms of quality metrics such as mutual information, spatial frequency and average difference.

  16. Underwater image denoising based on compressed sensing

    Institute of Scientific and Technical Information of China (English)

    丁伟; 王国宇; 王宝锋

    2013-01-01

    Images captured by underwater cameras are blurred because of the complex underwater environment. During data acquisition, a large amount of data containing no useful information may be collected, and the effect of noise may be more serious. The theory of compressed sensing states that a signal can be reconstructed with high probability from a low sampling rate. In order to study the effect of compressed sensing on underwater image denoising, the OMP, SP and CoSaMP greedy algorithms are used to analyse the reconstruction of underwater images at different sampling rates. The experimental results show that, when an appropriate sampling rate is chosen, the image can be reconstructed from a small amount of data and the underwater noise can be suppressed, with the OMP algorithm giving the best results.
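
A minimal compressed-sensing sketch with orthogonal matching pursuit is given below for a 1-D signal that is sparse in the DCT domain; the sampling rate, sparsity and sizes are illustrative assumptions (the paper applies the same idea to underwater image data).

```python
# Sketch: compressed-sensing recovery with OMP of a DCT-sparse signal.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, sparsity = 256, 8
coef = np.zeros(n)
coef[rng.choice(n, sparsity, replace=False)] = rng.normal(0, 1, sparsity)
signal = idct(coef, norm='ortho')                 # signal sparse in the DCT basis

m = int(0.3 * n)                                  # 30% sampling rate
Phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))     # random measurement matrix
y = Phi @ signal                                  # compressed measurements

Psi = idct(np.eye(n), norm='ortho', axis=0)       # DCT synthesis basis
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(Phi @ Psi, y)                             # solve y ~ (Phi Psi) c with sparse c
recovered = Psi @ omp.coef_
print('relative error:', np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
```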

  17. An algorithm for 252Cf-Source-Driven neutron signal denoising based on Compressive Sensing

    Institute of Scientific and Technical Information of China (English)

    李鹏程; 魏彪; 冯鹏; 何鹏; 米德伶

    2015-01-01

    As photoelectrically detected 252Cf-source-driven neutron signals always contain noise, a denoising algorithm based on compressive sensing is proposed for the noisy neutron signal. In the algorithm, Empirical Mode Decomposition (EMD) is applied to decompose the noisy neutron signal and then automatically identify the noisy Intrinsic Mode Functions (IMFs). Thus, only the basis pursuit denoising (BPDN) algorithm needs to be applied to denoise these IMFs. For this reason, the proposed algorithm is called EMDCSDN (Empirical Mode Decomposition Compressive Sensing Denoising). In addition, five indicators are employed to evaluate the denoising effect. The results show that the EMDCSDN algorithm is more effective than other denoising algorithms, including BPDN alone. This study provides a new approach for signal denoising at the front end.

  18. Similarity-based denoising of point-sampled surface

    Institute of Scientific and Technical Information of China (English)

    Ren-fang WANG; Wen-zhi CHEN; San-yuan ZHANG; Yin ZHANG; Xiu-zi YE

    2008-01-01

    A non-local denoising (NLD) algorithm for point-sampled surfaces (PSSs) is presented based on similarities, including geometry intensity and features of sample points. By using the trilateral filtering operator, the differential signal of each sample point is determined and called "geometry intensity". Based on covariance analysis, a regular grid of geometry intensity of a sample point is constructed, and the geometry-intensity similarity of two points is measured according to their grids. Based on mean shift clustering, the PSSs are clustered in terms of the local geometry-features similarity. The smoothed geometry intensity, i.e., offset distance, of the sample point is estimated according to the two similarities. Using the resulting intensity, the noise component from PSSs is finally removed by adjusting the position of each sample point along its own normal direction. Experimental results demonstrate that the algorithm is robust and can produce a more accurate denoising result while having better feature preservation.

  19. 3D Wavelet Sub-Bands Mixing for Image De-noising and Segmentation of Brain Images

    OpenAIRE

    Joyjit Patra; Himadri Nath Moulick; Shreyosree Mallick; Arun Kanti Manna

    2016-01-01

    A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D block-wise version of the Non-Local (NL) Means filter with wavelet sub-band mixing. The proposed wavelet sub-band mixing is based on a multi-resolution approach for improving the quality of the image de-noising filter. Quantitative validation was carried out on synthetic datasets generated with the Brain W...

  20. Blind Analysis of CT Image Noise Using Residual Denoised Images

    CERN Document Server

    Roychowdhury, Sohini; Alessio, Adam

    2016-01-01

    CT protocol design and quality control would benefit from automated tools to estimate the quality of generated CT images. These tools could be used to identify erroneous CT acquisitions or refine protocols to achieve certain signal-to-noise characteristics. This paper investigates blind estimation methods to determine global signal strength and noise levels in chest CT images. Methods: We propose novel performance metrics corresponding to the accuracy of noise and signal estimation. We implement and evaluate the noise estimation performance of six spatial- and frequency-based methods, derived from conventional image filtering algorithms. Algorithms were tested on patient data sets from whole-body repeat CT acquisitions performed with a higher and a lower dose technique over the same scan region. Results: The proposed performance metrics can evaluate the relative tradeoff of filter parameters and noise estimation performance. The proposed automated methods tend to underestimate CT image noise at low-flux levels...
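
    The residual-based idea, estimating noise as whatever an edge-preserving smoother removes from the image, can be sketched as below. This is a generic illustration rather than the record's six estimators: the median filter, the MAD scaling constant and the wavelet-based estimate_sigma shown for comparison are choices made for the example, and the test image is a standard skimage photograph with synthetic Gaussian noise.

        import numpy as np
        from scipy.ndimage import median_filter
        from skimage import data, img_as_float
        from skimage.restoration import estimate_sigma

        img = img_as_float(data.camera())
        true_sigma = 0.05
        noisy = img + true_sigma * np.random.default_rng(0).normal(size=img.shape)

        # Residual of an edge-preserving smoother, summarized by a robust MAD estimate
        # (biased by whatever signal detail the smoother also removes)
        residual = noisy - median_filter(noisy, size=3)
        sigma_residual = 1.4826 * np.median(np.abs(residual - np.median(residual)))

        # Wavelet-based blind estimate, for comparison
        sigma_wavelet = estimate_sigma(noisy)
        print(true_sigma, sigma_residual, sigma_wavelet)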

  1. Point Set Denoising Using Bootstrap-Based Radial Basis Function

    Science.gov (United States)

    Ramli, Ahmad; Abd. Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study. PMID:27315105
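
    A hedged sketch of thin-plate spline smoothing of a noisy height field with SciPy's legacy Rbf interface is given below; the smoothing parameter is fixed by hand here, whereas the record selects it by bootstrap test-error estimation, and the k-nearest-neighbour step is omitted. The synthetic surface and noise level are example choices.

        import numpy as np
        from scipy.interpolate import Rbf

        rng = np.random.default_rng(2)
        x, y = rng.uniform(-1, 1, size=(2, 400))
        z_clean = np.sin(np.pi * x) * np.cos(np.pi * y)
        z_noisy = z_clean + 0.1 * rng.normal(size=x.size)

        # Thin-plate spline fit with a smoothing (regularization) parameter
        tps = Rbf(x, y, z_noisy, function='thin_plate', smooth=1.0)
        z_denoised = tps(x, y)          # project the noisy samples onto the fitted surface

        print("mean abs error before/after:",
              np.abs(z_noisy - z_clean).mean(), np.abs(z_denoised - z_clean).mean())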

  2. Edge-preserving image denoising via group coordinate descent on the GPU.

    Science.gov (United States)

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.
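
    For readers who want a runnable point of reference, the sketch below applies an edge-preserving total-variation denoiser from scikit-image (Chambolle's algorithm) to a noisy test image. It is not the GPU group coordinate descent solver of the record, only an accessible baseline for the same class of roughness penalty; the weight, noise level and test image are example values.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_tv_chambolle
        from skimage.metrics import peak_signal_noise_ratio

        img = img_as_float(data.camera())
        noisy = img + 0.1 * np.random.default_rng(0).normal(size=img.shape)

        denoised = np.clip(denoise_tv_chambolle(noisy, weight=0.12), 0, 1)

        print("PSNR noisy / TV-denoised:",
              peak_signal_noise_ratio(img, noisy, data_range=1.0),
              peak_signal_noise_ratio(img, denoised, data_range=1.0))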

  3. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard;

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes.
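
    A small sketch of kernel PCA denoising with pre-image estimation, using scikit-learn's KernelPCA: enabling fit_inverse_transform learns a ridge-regularized map back into input space, which acts as the pre-image estimator. The digits data, RBF kernel parameters, component count and noise level are arbitrary choices for the example, not the record's neuroimaging setup.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.decomposition import KernelPCA

        X = load_digits().data / 16.0                       # 8x8 digit images, scaled to [0, 1]
        rng = np.random.default_rng(0)
        X_noisy = X + 0.25 * rng.normal(size=X.shape)

        # Kernel PCA with an RBF kernel; alpha regularizes the learned inverse map
        kpca = KernelPCA(n_components=30, kernel='rbf', gamma=1.0 / 64,
                         fit_inverse_transform=True, alpha=0.1)
        kpca.fit(X_noisy)
        X_denoised = kpca.inverse_transform(kpca.transform(X_noisy))  # pre-image estimates

        print("MSE noisy / denoised:",
              np.mean((X_noisy - X) ** 2), np.mean((X_denoised - X) ** 2))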

  4. Flotation froth image de-noising algorithm based on a lifting-improved directionlet transform

    Institute of Scientific and Technical Information of China (English)

    李建奇; 阳春华; 朱红求; 曹斌芳

    2013-01-01

    Considering the problems of froth images in the mineral flotation process, such as susceptibility to noise, blurred texture detail and low gray-level contrast, a non-linear de-noising method is proposed. First, an improved directionlet transform is constructed that guarantees shift invariance, and a lifting scheme is adopted to reduce its computational cost. The decomposition coefficients are then modelled: a multi-scale Retinex algorithm is applied to the low-frequency sub-band coefficients to improve luminance uniformity and overall contrast, while for each high-pass sub-band a neighbourhood model of the decomposition coefficients based on Gaussian scale mixtures is built and local de-noising is performed with Bayes least-squares (BLS) estimation. Finally, the proposed method is applied to a large number of real froth images. The results show that it highlights the texture detail of the froth images, improves their contrast, and is superior in terms of PSNR, visual effect and real-time performance, laying a foundation for subsequent froth image segmentation and working-condition recognition.

  5. DENOISING METHOD BASED ON SINGULAR SPECTRUM ANALYSIS AND ITS APPLICATIONS IN CALCULATION OF MAXIMAL LIAPUNOV EXPONENT

    Institute of Scientific and Technical Information of China (English)

    LIU Yuan-feng; ZHAO Mei

    2005-01-01

    An algorithm based on the data-adaptive filtering characteristics of singular spectrum analysis (SSA) is proposed to denoise chaotic data. First, the empirical orthogonal functions (EOFs) and principal components (PCs) of the signal are calculated; the signal is then reconstructed from the EOFs and PCs, with the optimal reconstruction order chosen from the singular spectrum, to obtain the denoised signal. Noise in the signal degrades the precision with which maximal Liapunov exponents can be calculated. The proposed denoising algorithm was applied to the calculation of the maximal Liapunov exponents of two chaotic systems, the Henon map and the Logistic map. Numerical results show that this denoising algorithm improves the calculation precision of the maximal Liapunov exponent.
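
    The embed / truncate-the-SVD / diagonally-average recipe behind SSA denoising fits in a few lines of NumPy, as in the sketch below. The window length, retained rank and toy two-tone signal are example choices, and the record's rule for picking the reconstruction order from the singular spectrum is not reproduced.

        import numpy as np

        def ssa_denoise(x, window, rank):
            """Basic SSA: embed into a trajectory matrix, keep the top singular
            components, and reconstruct by diagonal (Hankel) averaging."""
            n = len(x)
            k = n - window + 1
            traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
            u, s, vt = np.linalg.svd(traj, full_matrices=False)
            approx = (u[:, :rank] * s[:rank]) @ vt[:rank]                 # low-rank trajectory matrix
            rec, counts = np.zeros(n), np.zeros(n)
            for j in range(k):                                            # diagonal averaging
                rec[j:j + window] += approx[:, j]
                counts[j:j + window] += 1
            return rec / counts

        t = np.linspace(0, 20, 2000)
        clean = np.sin(t) + 0.5 * np.sin(2.3 * t)
        noisy = clean + 0.3 * np.random.default_rng(3).normal(size=t.size)
        print("mean abs error:", np.abs(ssa_denoise(noisy, window=100, rank=4) - clean).mean())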

  6. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Full Text Available Abstract Background Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally-invasive kidney surgeries, such as Percutaneous Nephrolithotomy (PCNL) and Radio-Frequency Ablation (RFA) of kidney tumors, in which access to a target inside the kidney is gained by a needle puncture through the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods In this paper, we propose a novel framework for kidney segmentation of ultrasound (US) images that combines nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. First, a denoised US image is obtained by NLTV image denoising. Second, DRLSE is applied to segment the kidney and produce a binary image, in which the white and black regions represent the kidney and the background respectively. In the last stage, the shape prior is applied to obtain a shape with a smooth boundary from the kidney shape space, which is used to refine the segmentation result of the second step. An alignment model is used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results We applied our segmentation framework on synthetic and real US images to demonstrate the better segmentation results of our method. The qualitative results show that the segmentations are much closer to the manual segmentations. The sensitivity (SN), specificity (SP) and positive predictive value

  7. Electrocardiogram de-noising based on forward wavelet transform translation invariant application in bionic wavelet domain

    Indian Academy of Sciences (India)

    Mourad Talbi

    2014-08-01

    In this paper, we propose a new technique for Electrocardiogram (ECG) signal de-noising based on thresholding the coefficients obtained from applying the Forward Wavelet Transform Translation Invariant (FWT_TI) to each bionic wavelet coefficient. The de-noised ECG is obtained by applying the inverse BWT (BWT−1) to the de-noised bionic wavelet coefficients. To evaluate this new de-noising technique, we compared it to a thresholding technique in the FWT_TI domain. Preliminary tests of the two de-noising techniques were conducted on a number of ECG signals taken from the MIT-BIH database. The Signal to Noise Ratio (SNR) and Mean Square Error (MSE) results showed that our proposed de-noising technique outperforms the second technique. We also compared the proposed technique to the thresholding technique in the bionic wavelet domain by computing the SNR improvement; this evaluation likewise showed that the proposed technique outperforms de-noising based on bionic wavelet coefficient thresholding.

  8. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed. As a consequence the most widely used estimation schemes lack stability. A common way to stabilize such estimates is by augmenting the cost function by a suitable constraint on the solution values. For de-noising applications we here propose Tikhonov input space distance regularization as a stabilizer for pre-image estimation, or sparse reconstruction by Lasso regularization in cases where the main objective is to improve the visual simplicity. We perform extensive experiments on the USPS digit modeling problem to evaluate the stability of three widely used pre-image estimators. We show that the previous methods lack...

  9. Projection Lens Wave-Front Aberration Measurement Method Based on Adaptive Aerial Image Denoising

    Institute of Scientific and Technical Information of China (English)

    杨济硕; 李思坤; 王向朝; 闫观勇; 徐东波

    2013-01-01

    A wave-front aberration measurement method for lithographic projection lenses based on adaptive aerial image denoising is proposed. Principal component analysis (PCA) and multivariate linear regression analysis are used for model generation. By statistical analysis of measured aerial images, a noise model of the aerial image and a model of the noise standard deviation are obtained. Using the noise standard deviation as the weighting factor, the aerial image is decomposed by a weighted least-squares (WLSQ) principal component analysis, which achieves adaptive, lossless denoising of the aerial image and thus yields more accurate principal component coefficients and Zernike coefficients. Simulations with the lithography simulator PROLITH show that, at the same noise level and for aberration amplitudes within 0.1λ, the accuracy of the proposed AMAI-WLSQ method is improved by more than 30% compared with the aberration measurement technique based on principal component analysis of aerial images (AMAI-PCA). In experiments measuring Z8 adjustments on a lithography test platform, the proposed method is also more accurate.

  10. Study on Image Denoising Method Based on Curvelet Transform and Weighted Mean Filter

    Institute of Scientific and Technical Information of China (English)

    陈木生

    2014-01-01

    The purpose of this paper is to study a method for de-noising images corrupted by additive white Gaussian noise. First, the noisy image is decomposed by the Curvelet transform into several levels, giving different frequency sub-bands. Second, threshold estimation and weighted averaging are combined to remove noisy coefficients from the high-frequency sub-bands, based on generalized Gaussian distribution modelling of the sub-band coefficients. Finally, the multi-scale decomposition is inverted to reconstruct the de-noised image. To assess the performance of the proposed method, the results are compared with existing algorithms such as wavelet-based hard and soft thresholding. Simulation results on several test images indicate that the proposed method outperforms the other methods in peak signal-to-noise ratio and better preserves edge detail visually. The results also suggest that the Curvelet transform achieves better image de-noising performance than the wavelet transform.

  11. A multi-scale non-local means algorithm for image de-noising

    Science.gov (United States)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms that effectively remove noise from images. In practice, it is difficult to remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms attempt to smooth the image intelligently while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.
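
    For reference, the baseline single-scale, spatial-domain NLM step that the record builds on can be run with scikit-image as sketched below; the multi-scale MS-NLM combination itself is not reproduced, and the test image, noise level and patch parameters are example values.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma

        img = img_as_float(data.camera())
        noisy = img + 0.08 * np.random.default_rng(0).normal(size=img.shape)

        sigma = estimate_sigma(noisy)                  # blind noise estimate
        denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                                    patch_size=5, patch_distance=6, fast_mode=True)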

  12. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    Science.gov (United States)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications.
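
    A rough sketch of the three-step pipeline (per-channel Wiener denoising, then pairwise wavelet-domain fusion, AC with DPC first and the result with DFC) is given below. It uses a decimated transform from PyWavelets rather than the record's shift-invariant wavelet transform, SciPy's Wiener filter for step (i), a simple average/max-abs fusion rule, and omits the final enhancement step; the three input channels are synthetic stand-ins derived from a standard test image.

        import numpy as np
        import pywt
        from scipy.signal import wiener
        from skimage import data, img_as_float

        def fuse_pair(a, b, wavelet='db2', level=3):
            """Average the approximations, take the larger-magnitude detail coefficient."""
            ca, cb = pywt.wavedec2(a, wavelet, level=level), pywt.wavedec2(b, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]
            for da, db in zip(ca[1:], cb[1:]):
                fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                                   for x, y in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        img = img_as_float(data.camera())
        rng = np.random.default_rng(0)
        ac = img + 0.05 * rng.normal(size=img.shape)                            # stand-in AC channel
        dpc = np.gradient(img, axis=1) + 0.05 * rng.normal(size=img.shape)      # stand-in DPC channel
        dfc = np.abs(img - wiener(img, 9)) + 0.05 * rng.normal(size=img.shape)  # stand-in DFC channel

        ac_d, dpc_d, dfc_d = (wiener(x, mysize=5) for x in (ac, dpc, dfc))  # step (i): Wiener denoising
        fused = fuse_pair(fuse_pair(ac_d, dpc_d), dfc_d)                    # step (ii): two-stage fusion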

  13. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry

    International Nuclear Information System (INIS)

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot–Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications. (paper)

  14. Integration of the denoising, inpainting and local harmonic Bz algorithm for MREIT imaging of intact animals

    Science.gov (United States)

    Jeon, Kiwan; Kim, Hyung Joong; Lee, Chang-Ock; Seo, Jin Keun; Woo, Eung Je

    2010-12-01

    Conductivity imaging based on the current-injection MRI technique has been developed in magnetic resonance electrical impedance tomography. Current injected through a pair of surface electrodes induces a magnetic flux density distribution inside an imaging object, which results in additional magnetic field inhomogeneity. We can extract phase changes related to the current injection and obtain an image of the induced magnetic flux density. Without rotating the object inside the bore, we can measure only one component Bz of the magnetic flux density B = (Bx, By, Bz). Based on a relation between the internal conductivity distribution and Bz data subject to multiple current injections, one may reconstruct cross-sectional conductivity images. As the image reconstruction algorithm, we have been using the harmonic Bz algorithm in numerous experimental studies. Performing conductivity imaging of intact animal and human subjects, we found technical difficulties that originated from the MR signal void phenomena in the local regions of bones, lungs and gas-filled tubular organs. Measured Bz data inside such a problematic region contain an excessive amount of noise that deteriorates the conductivity image quality. In order to alleviate this technical problem, we applied hybrid methods incorporating ramp-preserving denoising, harmonic inpainting with isotropic diffusion and ROI imaging using the local harmonic Bz algorithm. These methods allow us to produce conductivity images of intact animals with best achievable quality. We suggest guidelines to choose a hybrid method depending on the overall noise level and existence of distinct problematic regions of MR signal void.

  15. Extreme value analysis of frame coefficients and implications for image denoising

    CERN Document Server

    Haltmeier, Markus

    2012-01-01

    Denoising by frame thresholding is one of the most basic and efficient methods for recovering a discrete signal or image from data that are corrupted by additive Gaussian white noise. The basic idea is to select a frame of analyzing elements that separates the data into few large coefficients due to the signal and many small coefficients mainly due to the noise $\epsilon_n$. Removing all data coefficients whose magnitude is below a certain threshold yields an approximation to the original signal. In order that a significant amount of the noise is removed and at the same time relevant information about the original image is kept, a precise understanding of the statistical properties of thresholding is important. For that purpose we compute, for the first time, the asymptotic distribution of $\max_{\omega\in\Omega_n} |\langle \phi_\omega^n, \epsilon_n \rangle|$ ...

  16. Research on extraction of distance characteristic feature based on image diffusion denoising method

    Institute of Scientific and Technical Information of China (English)

    笪良龙; 何青海; 杨建设

    2013-01-01

    The distance change ratio of a target in motion can be described by the distance characteristic feature, which can be extracted from the LOFAR (Low Frequency Analysis and Recording) spectrum striations of the target's radiated noise. At low signal-to-noise ratios, noise blurs the striation features and the extracted distance characteristic feature loses precision. To improve the precision, a new edge-directed enhancement diffusion equation model is proposed to denoise the LOFAR spectrum image and make the striation features more distinct. Because the denoising performance is strongly affected by the number of iterations of the diffusion equation, a method for choosing this number, based on a correlation criterion, is presented. To make the striation features easier to distinguish, a binarization method based on image segmentation is also proposed. The results of numerical simulations and sea-trial data processing show that the precision of the extracted distance characteristic feature is roughly doubled after the LOFAR spectrum image is processed.

  17. A review of wavelet denoising in MRI and ultrasound brain imaging

    NARCIS (Netherlands)

    Pižurica, Aleksandra; Wink, Alle Meije; Vansteenkiste, Ewout; Philips, Wilfried; Roerdink, Jos B.T.M.

    2006-01-01

    There is a growing interest in using multiresolution noise filters in a variety of medical imaging applications. We review recent wavelet denoising techniques for medical ultrasound and for magnetic resonance images and discuss some of their potential applications in the clinical investigations of t

  18. Blind Analysis of CT Image Noise Using Residual Denoised Images

    OpenAIRE

    Roychowdhury, Sohini; Hollraft, Nathan; Alessio, Adam

    2016-01-01

    CT protocol design and quality control would benefit from automated tools to estimate the quality of generated CT images. These tools could be used to identify erroneous CT acquisitions or refine protocols to achieve certain signal to noise characteristics. This paper investigates blind estimation methods to determine global signal strength and noise levels in chest CT images. Methods: We propose novel performance metrics corresponding to the accuracy of noise and signal estimation. We implem...

  19. X-Ray image denoising with directional support value transform

    NARCIS (Netherlands)

    Zheng, S.; Hendriks, E.A.; Lei, B.; Hao, W.

    2009-01-01

    Under the support vector machine framework, support value analysis-based image fusion has been studied, where the salient features of the original images are represented by their support values. The support value transform (SVT)-based image fusion approach has demonstrated some advantages over

  20. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented, which is solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
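
    A minimal patch-based sparse-coding denoiser in the spirit of the record, but using scikit-learn's standard MiniBatchDictionaryLearning rather than the group-sparse, graph-regularized DL-GSGR dictionary, is sketched below. Patch size, dictionary size, noise level and the test image are example choices, and the 3-D temporal regularization is not shown.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
        from skimage import data, img_as_float

        img = img_as_float(data.camera())[::2, ::2]          # 256x256 crop keeps the demo fast
        noisy = img + 0.08 * np.random.default_rng(0).normal(size=img.shape)

        patches = extract_patches_2d(noisy, (6, 6)).reshape(-1, 36)
        means = patches.mean(axis=1, keepdims=True)
        patches -= means                                     # learn on zero-mean patches

        dico = MiniBatchDictionaryLearning(n_components=96, alpha=1.0, batch_size=256)
        dico.fit(patches[::5])                               # subsample patches for speed

        codes = dico.transform(patches)                      # standard sparse coding of every patch
        denoised_patches = (codes @ dico.components_ + means).reshape(-1, 6, 6)
        denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)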

  1. Image denoising algorithm of refuge chamber by combining wavelet transform and bilateral filtering

    Institute of Scientific and Technical Information of China (English)

    Zhang Weipeng

    2013-01-01

    In order to better identify infrared images of a refuge chamber, reduce image noise and retain more image detail, we propose a method combining the two-dimensional discrete wavelet transform and bilateral filtering. First, the wavelet transform is used to decompose the image of the refuge chamber, leaving the low-frequency component unchanged. Then, the three high-frequency components are processed by bilateral filtering, and the image is reconstructed. The results show that combining bilateral filtering and the wavelet transform for image denoising retains more of the detail contained in the image while providing a better visual effect; this is superior to using either bilateral filtering or the wavelet transform alone. It is useful for perfecting the emergency refuge systems of coal mines.
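
    The combination described above (keep the wavelet approximation, bilateral-filter the detail sub-bands, then reconstruct) can be sketched with PyWavelets and OpenCV as below; the wavelet, bilateral filter parameters (diameter, sigmaColor, sigmaSpace), noise level and test image are assumptions made for the example rather than values from the record.

        import numpy as np
        import cv2
        import pywt
        from skimage import data, img_as_float

        img = img_as_float(data.camera())
        noisy = (img + 0.05 * np.random.default_rng(0).normal(size=img.shape)).astype(np.float32)

        # One-level 2D DWT: leave the approximation untouched, filter the details
        cA, (cH, cV, cD) = pywt.dwt2(noisy, 'db2')

        def bilateral(c):
            # positional arguments: diameter=5, sigmaColor=0.1, sigmaSpace=3
            return cv2.bilateralFilter(c.astype(np.float32), 5, 0.1, 3)

        denoised = pywt.idwt2((cA, (bilateral(cH), bilateral(cV), bilateral(cD))), 'db2')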

  2. Improvement of HMT based on uniform discrete curvelet coefficients and application in image denoising

    Institute of Scientific and Technical Information of China (English)

    杨兴明; 陈海燕; 王刚; 王彬彬; 赵银平

    2013-01-01

    Based on a study of the statistical properties of the coefficients of the Uniform Discrete Curvelet Transform (UDCT), and on an analysis of mutual information as a measure of coefficient correlation, the Hidden Markov Tree (HMT) model is chosen to model the coefficients, and the model is trained with the EM algorithm. To address the long training time, an optimized EM algorithm for the HMT of UDCT coefficients is presented; it initializes the variance and state-transition matrix by exploiting the decay of the coefficients and their continuity across scales. Experimental results show that, using similarity and peak signal-to-noise ratio as measures of image de-noising quality, under the same conditions the proposed algorithm achieves better real-time performance and de-noising results than the wavelet HMT, contourlet HMT and UDCT HMT algorithms.

  3. Biomedical image and signal de-noising using dual tree complex wavelet transform

    Science.gov (United States)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    The dual-tree complex wavelet transform (DTCWT) is a form of discrete wavelet transform which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purpose of de-noising is to reduce the noise level and improve the signal-to-noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a large scope of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform (CWT) strategy that we focus on in this paper is Kingsbury's and Selesnick's dual-tree CWT (DTCWT), which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell shape. In the final part of this paper, we present biomedical image and signal de-noising by means of thresholding the magnitude of the wavelet coefficients.
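
    Thresholding the magnitude of the complex coefficients while keeping their phase can be sketched as below, assuming the Python dtcwt package (its Transform2d interface and the pyramid's highpasses tuple of oriented complex sub-bands are taken as given); the threshold, number of levels, noise level and test image are example values.

        import numpy as np
        import dtcwt
        from skimage import data, img_as_float

        img = img_as_float(data.camera())
        noisy = img + 0.08 * np.random.default_rng(0).normal(size=img.shape)

        transform = dtcwt.Transform2d()
        pyramid = transform.forward(noisy, nlevels=4)

        thr = 0.15
        for hp in pyramid.highpasses:          # each level: H x W x 6 oriented complex sub-bands
            mag = np.abs(hp)
            shrink = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
            hp *= shrink                       # soft-threshold the magnitude, keep the phase

        denoised = transform.inverse(pyramid)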

  4. Voxel-Wise Functional Connectomics Using Arterial Spin Labeling Functional Magnetic Resonance Imaging: The Role of Denoising.

    Science.gov (United States)

    Liang, Xiaoyun; Connelly, Alan; Calamante, Fernando

    2015-11-01

    The objective of this study was to investigate voxel-wise functional connectomics using arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI). Since ASL signal has an intrinsically low signal-to-noise ratio (SNR), the role of denoising is evaluated; in particular, a novel denoising method, dual-tree complex wavelet transform (DT-CWT) combined with the nonlocal means (NLM) algorithm is implemented and evaluated. Simulations were conducted to evaluate the performance of the proposed method in denoising images and in detecting functional networks from noisy data (including the accuracy and sensitivity of detection). In addition, denoising was applied to in vivo ASL datasets, followed by network analysis using graph theoretical approaches. Efficiencies cost was used to evaluate the performance of denoising in detecting functional networks from in vivo ASL fMRI data. Simulations showed that denoising is effective in detecting voxel-wise functional networks from low SNR data and/or from data with small total number of time points. The capability of denoised voxel-wise functional connectivity analysis was also demonstrated with in vivo data. We concluded that denoising is important for voxel-wise functional connectivity using ASL fMRI and that the proposed DT-CWT-NLM method should be a useful ASL preprocessing step.

  5. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    Science.gov (United States)

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depend on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet-domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performance of these methods degrades considerably with decreasing signal-to-noise ratio (SNR), and they are therefore not suited to denoising manatee vocalizations, for which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slowly decaying autocorrelation function. In this paper, an efficient denoising scheme is introduced that exploits both the autocorrelation function of manatee vocalizations and the effectiveness of nonlinear wavelet-transform-based denoising algorithms. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478

  6. Chebyshev and Conjugate Gradient Filters for Graph Image Denoising

    OpenAIRE

    Tian, Dong; Mansour, Hassan; Knyazev, Andrew; Vetro, Anthony

    2015-01-01

    In 3D image/video acquisition, different views are often captured with varying noise levels across the views. In this paper, we propose a graph-based image enhancement technique that uses a higher quality view to enhance a degraded view. A depth map is utilized as auxiliary information to match the perspectives of the two views. Our method performs graph-based filtering of the noisy image by directly computing a projection of the image to be filtered onto a lower dimensional Krylov subspace o...

  7. SET OPERATOR-BASED METHOD OF DENOISING MEDICAL VOLUME DATA

    Institute of Scientific and Technical Information of China (English)

    程兵; 郑南宁; 袁泽剑

    2002-01-01

    Objective To investigate impulsive noise suppression of medical volume data. Methods The volume data is represented as level sets and a special set operator is defined and applied to filtering it. The small connected components, which are likely to be produced by impulsive noise, are eliminated after the filtering process. A fast algorithm that uses a heap data structure is also designed. Results Compared with traditional linear filters such as a Gaussian filter, this method preserves the fine structure features of the medical volume data while removing noise, and the fast algorithm developed by us reduces memory consumption and improves computing efficiency. The experimental results given illustrate the efficiency of the method and the fast algorithm. Conclusion The set operator-based method shows outstanding denoising properties in our experiment, especially for impulsive noise. The method has a wide variety of applications in the areas of volume visualization and high dimensional data processing.

  8. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Othman, Ahmed E. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Brockmann, Carolin; Afat, Saif; Pjontek, Rastislav; Nikoubashman, Omid; Brockmann, Marc A.; Wiesmann, Martin [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Yang, Zepa; Kim, Changwon [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Nikolaou, Konstantin [Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Kim, Jong Hyo [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Advanced Institute of Convergence Technology, Center for Medical-IT Convergence Technology Research, Suwon (Korea, Republic of); Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2016-01-15

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  9. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    International Nuclear Information System (INIS)

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  10. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    For de-noising applications we propose input space distance regularization as a stabilizer for pre-image estimation. We perform extensive experiments on the USPS digit modeling problem to evaluate the stability of three widely used pre-image estimators. We show that the previous methods lack stability when the feature mapping is non-linear; however, by applying a simple input space distance regularizer we can reduce variability with very limited sacrifice in terms of de-noising efficiency.

  11. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  12. Two Level DCT and Wavelet Packets Denoising Robust Image Watermarking

    Directory of Open Access Journals (Sweden)

    N.Koteswara Rao

    2014-01-01

    Full Text Available In this paper we present a low-frequency watermarking scheme for gray-level images based on the DCT transform and the spread-spectrum communications technique. The DCT of non-overlapping 8x8 blocks of the host image is computed; then, using the DC coefficient of each block, we construct a low-resolution approximation image. We apply block-based DCT to this approximation image and add a pseudo-random noise sequence to its high frequencies. For detection, we extract the approximation image from the watermarked image, generate the same pseudo-random noise sequence, and compute its correlation with the high frequencies of the watermarked approximation image. In our method, higher robustness is obtained because the watermark is embedded in low frequencies, and higher imperceptibility is gained by scattering the watermark bits across different blocks. We evaluated the robustness of the proposed technique against many common attacks such as JPEG compression, additive Gaussian noise and median filtering. Compared with related works, our method proved to be highly resistant to compression and additive noise, while preserving high PSNR for the watermarked images.

  13. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    OpenAIRE

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; Van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures, in optical coherence tomography (OCT) scans through denoising, has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of commonly...

  14. Image Denoising Using Total Variation Model Guided by Steerable Filter

    OpenAIRE

    Wenxue Zhang; Yongzhen Cao; Rongxin Zhang; Yuanquan Wang

    2014-01-01

    We propose an adaptive total variation (TV) model by introducing the steerable filter into the TV-based diffusion process for image filtering. The local energy measured by the steerable filter can effectively characterize the object edges and ramp regions and guide the TV-based diffusion process so that the new model behaves like the TV model at edges and leads to linear diffusion in flat and ramp regions. This way, the proposed model can provide a better image processing tool which enables n...

  15. BL_Wiener Denoising Method for Removal of Speckle Noise in Ultrasound Image

    Directory of Open Access Journals (Sweden)

    Suhaila Sari

    2015-02-01

    Full Text Available Medical imaging techniques are extremely important tools in medical diagnosis, and ultrasound imaging is one of the most important of them. However, during the ultrasound image acquisition process the image quality can be degraded by speckle noise. Enhancing the quality of images from 2D ultrasound imaging machines is expected to give medical practitioners more reliable images for patient diagnosis. However, developing a denoising technique that removes noise effectively without eliminating the image's edges and details is still an open issue. The objective of this paper is to develop a new method capable of removing speckle noise from ultrasound images effectively. We therefore propose a Bilateral Filter and Adaptive Wiener Filter (BL_Wiener) denoising method for images corrupted by speckle noise. The Bilateral Filter is a non-linear filter effective in removing noise, while the Adaptive Wiener Filter balances the tradeoff between inverse filtering and noise smoothing by removing additive noise while inverting blurring. Our simulation results show that the BL_Wiener method improves PSNR by between 0.89 dB and 3.35 dB for test images at different noise levels in comparison to conventional methods.

  16. Adaptive Threshold Image Denoising Algorithm Based on Principal Component Analysis

    Institute of Scientific and Technical Information of China (English)

    李俊秀; 姜三平

    2014-01-01

    Principal component analysis (PCA) is a multivariate statistical method that selects a small number of important variables through a linear transformation of many variables. Exploiting the local self-similarity of images, a new and effective noise-removal algorithm is proposed. Similar blocks are found with a block-matching algorithm and used as training samples, the main signal features are extracted by PCA, and a linear adaptive threshold, constructed from the minimum mean-square-error criterion of statistical theory, is then applied to each block of the noisy image. Experimental results show that the method can effectively remove white Gaussian noise from the image while preserving edges and other detail information.
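
    The block-matching plus PCA shrinkage idea can be illustrated for a single reference block as below; a full denoiser would slide this over the whole image and aggregate the overlapping estimates. The LMMSE-style gain used here stands in for the record's adaptive threshold, and the block size, search window, number of similar blocks, noise level and test image are all example choices.

        import numpy as np
        from skimage import data, img_as_float

        def denoise_block(noisy, top, left, bsize=8, search=20, n_similar=40, sigma=0.08):
            """Gather similar blocks around a reference block, decorrelate them
            with PCA and shrink the components with an LMMSE-style gain."""
            ref = noisy[top:top + bsize, left:left + bsize]
            cands = []
            for r in range(max(0, top - search), min(noisy.shape[0] - bsize, top + search)):
                for c in range(max(0, left - search), min(noisy.shape[1] - bsize, left + search)):
                    blk = noisy[r:r + bsize, c:c + bsize]
                    cands.append((np.sum((blk - ref) ** 2), blk.ravel()))
            cands.sort(key=lambda t: t[0])                      # best match first (the block itself)
            X = np.array([b for _, b in cands[:n_similar]])
            mean = X.mean(axis=0)
            Xc = X - mean
            evals, evecs = np.linalg.eigh(Xc.T @ Xc / len(X))   # PCA of the similar blocks
            gain = np.maximum(evals - sigma ** 2, 0) / np.maximum(evals, 1e-12)
            Xd = (Xc @ evecs) * gain @ evecs.T + mean
            return Xd[0].reshape(bsize, bsize)                  # denoised reference block

        img = img_as_float(data.camera())
        noisy = img + 0.08 * np.random.default_rng(0).normal(size=img.shape)
        patch = denoise_block(noisy, top=100, left=100)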

  17. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    Science.gov (United States)

    Duan, Yabo; Song, Chengtian

    2016-10-01

    Empirical mode decomposition (EMD) is a recently proposed nonlinear and nonstationary laser signal denoising method. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which IMFs are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and then a robust thresholding process based on Spearman correlation coefficient is used for relevant modes selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.

  18. Denoising of Nuclear Medicine images using Wavelet Transform

    International Nuclear Information System (INIS)

    Diagnosis using images is widely used in Nuclear Medicine. However, in the case of planar images some problems can appear related to low detectability of small lesions, due to noise contamination. This phenomenon is emphasized because of the impossibility of increasing the radiopharmaceutical dose or the exposure time above the established levels. An algorithm to reduce the random Gaussian noise in planar images using the Wavelet Transform is described in this paper. Results are compared among a set of filters designed by this procedure, in order to select those that offer the best images considering the evaluation of the image quality through quantitative metrics (au)

  19. A NEW DE-NOISING METHOD BASED ON 3-BAND WAVELET AND NONPARAMETRIC ADAPTIVE ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Li Li; Peng Yuhua; Yang Mingqiang; Xue Peijun

    2007-01-01

    Wavelet de-noising is well known as an important method of signal de-noising. Recently, most research on wavelet de-noising has focused on how to select the threshold, where Donoho's method is widely applied. Compared with the traditional 2-band wavelet, the 3-band wavelet has advantages in many respects. Based on this theory, an adaptive signal de-noising method in the 3-band wavelet domain, based on nonparametric adaptive estimation, is proposed. The experimental results show that, in the 3-band wavelet domain, the proposed method preserves detail better and improves the signal-to-noise ratio of the reconstructed signal more than Donoho's method.

  20. Image Variational Denoising Using Gradient Fidelity on Curvelet Shrinkage

    Directory of Open Access Journals (Sweden)

    Roysam Badrinath

    2010-01-01

    Full Text Available A new variational image model is presented for image restoration using a combination of the curvelet shrinkage method and the total variation (TV functional. In order to suppress the staircasing effect and curvelet-like artifacts, we use the multiscale curvelet shrinkage to compute an initial estimated image, and then we propose a new gradient fidelity term, which is designed to force the gradients of desired image to be close to the curvelet approximation gradients. Then, we introduce the Euler-Lagrange equation and make an investigation on the mathematical properties. To improve the ability of preserving the details of edges and texture, the spatial-varying parameters are adaptively estimated in the iterative process of the gradient descent flow algorithm. Numerical experiments demonstrate that our proposed method has good performance in alleviating both the staircasing effect and curvelet-like artifacts, while preserving fine details.

  1. Blind Deblurring and Denoising of Images Corrupted by Unidirectional Object Motion Blur and Sensor Noise.

    Science.gov (United States)

    Zhang, Yi; Hirakawa, Keigo

    2016-09-01

    Low-light photography suffers from blur and noise. In this paper, we propose a novel method to recover a dense estimate of the spatially varying blur kernel as well as a denoised and deblurred image from a single noisy, object-motion-blurred image. The proposed method takes advantage of the sparse representation of the double discrete wavelet transform, a generative model of image blur that simplifies the wavelet analysis of a blurred image, and of a Bayesian perspective that models the prior distribution of the latent sharp wavelet coefficients and a likelihood function that makes the noise handling explicit. We demonstrate the effectiveness of the proposed method on moderately noisy and severely blurred images using simulated and real camera data. PMID:27337717

  2. Static Myocardial Perfusion Imaging using denoised dynamic Rb-82 PET/CT scans

    DEFF Research Database (Denmark)

    Petersen, Maiken N.M.; Hoff, Camilla; Harms, Hans;

    2015-01-01

    quantitative accuracy. In this study, we examine static images created by summing late frames of denoised dynamic series. Method: 47 random clinical 82Rb stress and rest scans (27 male, age 68 +/- 12 y, BMI 27.9 +/- 5.5 kg/m2) performed on a GE Discovery 690 PET/CT scanner were included in the study. The administered 82Rb dose was 1110 MBq. Denoising with the HYPR-LR or Hotelling 3D algorithms was performed as post-processing on the dynamic image series, and static series were created by summing frames from 2.5-5 min. The image data were analysed in QPET (Cedars-Sinai). Relative segmental perfusion (normalized ... and Bland-Altman analysis. Results: For HYPR-LR, a good correlation was found for relative segmental perfusion for both stress (y=1.007x+0.313, R2=0.98) and rest (y=1.007x+0.421, R2=0.96) scans, with negative biases of -0.79±1.44 and -0.90±1.63, respectively. Correlations for SSS (R2=0.94), SRS (R2=0.92), SDS

  3. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    Science.gov (United States)

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and in medical imaging in particular, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a hybrid parallel architecture is proposed to address the computation-time problem, and a new method to compute the filter coefficients is also proposed, in which we focus on the implementation and on enhancing the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared-memory accesses. The proposed method was tested on the BrainWeb database for different noise levels. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.

  4. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    Science.gov (United States)

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and in medical imaging in particular, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a hybrid parallel architecture is proposed to address the computation-time problem, and a new method to compute the filter coefficients is also proposed, in which we focus on the implementation and on enhancing the filter parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared-memory accesses. The proposed method was tested on the BrainWeb database for different noise levels. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. PMID:27084318

  5. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    Science.gov (United States)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical atmospheric profiles. However, noise in a lidar signal is unavoidable, which makes extracting further information difficult and complex. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, which is the keystone of the algorithm, divides the lidar signal into three parts that are processed by different de-noising methods according to their characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and on a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.

  6. Locally homogenized and de-noised vector fields for cardiac fiber tracking in DT-MRI images

    Science.gov (United States)

    Akhbardeh, Alireza; Vadakkumpadan, Fijoy; Bayer, Jason; Trayanova, Natalia A.

    2009-02-01

    In this study we develop a methodology to accurately extract and visualize cardiac microstructure from experimental diffusion tensor (DT) data. First, a test model was constructed using an image-based model generation technique on diffusion tensor magnetic resonance imaging (DT-MRI) data. These images were derived from a dataset with a voxel resolution of 122 × 122 × 500 μm³. De-noising and image enhancement were applied to this high-resolution dataset to clearly define anatomical boundaries within the images. The myocardial tissue was segmented from structural images using edge detection, region growing, and level set thresholding. The primary eigenvector of the diffusion tensor for each voxel, which represents the longitudinal direction of the fiber, was calculated to generate a vector field. Then an advanced locally regularizing nonlinear anisotropic filter, termed Perona-Malik (PEM), was used to regularize this vector field and eliminate imaging artifacts inherent to DT-MRI from volume averaging of the tissue with the surrounding medium. Finally, the vector field was streamlined to visualize fibers within the segmented myocardial tissue and compare the results with unfiltered data. With this technique, we were able to recover locally regularized (homogenized) fibers with high accuracy by applying the PEM regularization, particularly on anatomical surfaces where imaging artifacts were most apparent. This approach not only aids in the visualization of noisy complex 3D vector fields obtained from DT-MRI, but also eliminates volume averaging artifacts to provide a realistic cardiac microstructure for use in electrophysiological modeling studies.
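
    Since the approach relies on Perona-Malik (PEM) regularization, a minimal scalar 2D version of that diffusion is sketched below; the exponential conductance function, step size, iteration count, and the periodic boundary handling via np.roll are illustrative assumptions, and the paper applies the idea to a vector field rather than a scalar image.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            """2D Perona-Malik diffusion with conductance g = exp(-(|grad|/kappa)^2)."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # finite differences in the four principal directions (np.roll wraps at the border)
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u,  1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u,  1, axis=1) - u
                # edge-stopping conductance per direction: small near strong gradients
                cn = np.exp(-(dn / kappa) ** 2)
                cs = np.exp(-(ds / kappa) ** 2)
                ce = np.exp(-(de / kappa) ** 2)
                cw = np.exp(-(dw / kappa) ** 2)
                u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
            return u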

  7. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    Science.gov (United States)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data, i.e., improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular for random noise attenuation in seismic sections. The wavelet basis, the thresholding function, and the threshold value are three key factors of such algorithms, with a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper, a general denoising algorithm is developed as a multi-variant (variable) filter which operates in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, the thresholding rule, and the threshold value, while in the curvelet domain it is only a function of the thresholding rule and the threshold value. Also, two methods, Stein's unbiased risk estimate (SURE) and generalized cross validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both the wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimum thresholding rule is selected from general thresholding functions which contain the most well known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show the high performance of the proposed method in both the wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
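
    A minimal sketch of the SURE part of such an optimization is given below, assuming detail coefficients already normalized to unit noise variance and a soft-thresholding rule; the joint search over wavelet type and thresholding function described in the abstract is not reproduced.

        import numpy as np

        def soft(x, t):
            """Soft-thresholding rule."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def sure_soft_threshold(coeffs):
            """Return the soft threshold minimizing Stein's unbiased risk estimate.

            coeffs: detail coefficients normalized to unit noise variance.
            SURE(t) = n - 2 * #{|x_i| <= t} + sum(min(|x_i|, t)^2)
            """
            x = np.abs(np.ravel(coeffs))
            n = x.size
            candidates = np.sort(x)
            risks = [n - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
                     for t in candidates]
            return candidates[int(np.argmin(risks))]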

  8. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  9. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    Science.gov (United States)

    Chitchian, Shahab; Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in an order of magnitude less acquisition time. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained.

  10. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    Science.gov (United States)

    Duan, Jinming; Tench, Christopher; Gottlob, Irene; Proudlock, Frank; Bai, Li

    2015-11-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation.

  11. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    Science.gov (United States)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. A stress test, which increases heart rate in a controlled manner, is used for TWA measurement. However, TWA detection and analysis are often disturbed by muscular interference. Wavelet-based denoising methods were evaluated to find an optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them a significant T-wave alternans magnitude was detected. Applying wavelet-based denoising in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals in which TWA was detected.

  12. De-noising of Raman spectrum signal based on stationary wavelet transform

    Institute of Scientific and Technical Information of China (English)

    Qingwei Gao(高清维); Zhaoqi Sun(孙兆奇); Zhuoliang Cao(曹卓良); Pu Cheng(程蒲)

    2004-01-01

    In this paper, Raman spectrum signal de-noising based on the stationary wavelet transform is discussed. The Haar wavelet is selected to decompose the Raman spectrum signal over several levels of the stationary wavelet transform. The noise level σ_j is estimated from the wavelet details at every level, the wavelet details are shrunk toward 0 by a threshold of σ_j·√(2 ln n), where n is the length of the detail sequence, and the recovered signal is then reconstructed. Experimental results show that this method not only suppresses noise effectively, but also preserves as many target characteristics of the original signal as possible. This de-noising method offers a very attractive alternative for Raman spectrum noise suppression.
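
    A minimal sketch of this scheme using PyWavelets is given below, assuming the level-wise universal threshold σ_j·√(2 ln n) described above and a MAD-based noise estimate per level (the latter is a common choice rather than a detail taken from the paper); the signal length must be divisible by 2**level for the stationary wavelet transform.

        import numpy as np
        import pywt

        def swt_denoise(signal, wavelet='haar', level=4, mode='hard'):
            """Stationary wavelet transform denoising with a per-level universal threshold."""
            n = len(signal)                                   # must be divisible by 2**level
            coeffs = pywt.swt(signal, wavelet, level=level)   # list of (cA_j, cD_j) pairs
            shrunk = []
            for cA, cD in coeffs:
                sigma_j = np.median(np.abs(cD)) / 0.6745      # robust per-level noise estimate
                thr = sigma_j * np.sqrt(2.0 * np.log(n))      # universal threshold sigma_j * sqrt(2 ln n)
                shrunk.append((cA, pywt.threshold(cD, thr, mode=mode)))
            return pywt.iswt(shrunk, wavelet)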

  13. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.

  14. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM.

    Science.gov (United States)

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. It is therefore important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is then used to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL.

  15. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    Science.gov (United States)

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. It is therefore important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is then used to estimate the battery RUL based on the denoised data. Experiments on the battery 5 and battery 18 capacity prognostics cases validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  16. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound collected directly in an industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain coefficients at each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first- and second-order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extrema, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory and Gaussian white noise was added to the signal. The simulation results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer at a coal mining working face demonstrates the practical effect.

  17. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    Science.gov (United States)

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at every time step to ensure the lowest output noise level, although the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising FOG signals.
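
    For orientation, a plain scalar Kalman filter with a random-walk state model is sketched below; the process and measurement noise values are illustrative, and the adaptive AMA/RWE factors that make the filter in this abstract double-factor adaptive are not reproduced.

        import numpy as np

        def kalman_denoise(z, q=1e-6, r=1e-2):
            """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k, measured as z_k = x_k + v_k."""
            x, p = float(z[0]), 1.0            # state estimate and its covariance
            out = np.empty(len(z), dtype=float)
            for k, zk in enumerate(z):
                p = p + q                      # predict step (state model leaves x unchanged)
                kgain = p / (p + r)            # Kalman gain
                x = x + kgain * (zk - x)       # update with the new measurement
                p = (1.0 - kgain) * p
                out[k] = x
            return out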

  18. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    Science.gov (United States)

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. Noisy ECG signals were decomposed with EEMD to obtain a series of intrinsic mode functions (IMFs); selected IMFs were then reconstructed to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, in comparison with de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features were not attenuated. In conclusion, the method can de-noise the ECG while preserving the characteristics of the original signal.

  19. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    Science.gov (United States)

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) signals based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed. Noisy ECG signals were decomposed with EEMD to obtain a series of intrinsic mode functions (IMFs); selected IMFs were then reconstructed to de-noise the ECG. The processed ECG signals were filtered again with a wavelet transform using an improved threshold function. In the experiments, the MIT-BIH ECG database was used to evaluate the performance of the proposed method, in comparison with de-noising based on EEMD alone and on the wavelet transform with the improved threshold function alone, in terms of signal-to-noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features were not attenuated. In conclusion, the method can de-noise the ECG while preserving the characteristics of the original signal. PMID:25219236
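
    A rough sketch of such an EEMD-plus-wavelet pipeline is given below, assuming the third-party PyEMD (EMD-signal) package and PyWavelets; the crude rule of discarding the first IMF as noise-dominated and the standard soft universal threshold are illustrative stand-ins for the IMF selection and improved threshold function of the paper.

        import numpy as np
        import pywt
        from PyEMD import EEMD          # provided by the EMD-signal package (assumed available)

        def eemd_wavelet_denoise(ecg, wavelet='db4', level=4):
            """EEMD decomposition, crude IMF selection, then wavelet soft thresholding."""
            imfs = EEMD().eemd(np.asarray(ecg, dtype=float))
            partial = np.sum(imfs[1:], axis=0)                # drop the first (noisiest) IMF
            coeffs = pywt.wavedec(partial, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate from the finest details
            thr = sigma * np.sqrt(2.0 * np.log(len(partial)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)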

  20. Contourlet transform as an effective method for agricultural product image denoising

    Institute of Scientific and Technical Information of China (English)

    宋怀波; 何东健; 韩韬

    2012-01-01

    Image denoising is one of the most basic and important steps in agricultural image processing. Existing wavelet denoising methods suffer from isotropy, which limits their denoising performance. To address this problem, an image denoising algorithm for agricultural product images based on the Contourlet transform is presented. The algorithm fully utilizes the advantages of the Contourlet transform, such as flexible multi-resolution, anisotropy, and sparse representation. First, the noisy image is decomposed by a pyramidal directional filter bank (PDFB); then a multi-scale shrinkage threshold is applied to remove noise in the high-frequency sub-bands, retaining signal coefficients while suppressing noise coefficients; finally, the denoised image is obtained by the inverse Contourlet transform. To verify the denoising performance of the Contourlet transform, comparative experiments on common agricultural product images were carried out against wavelet denoising, median filtering, mean filtering, Gaussian filtering, and Wiener filtering. The experimental results show that the Contourlet-based method achieves a higher signal-to-noise ratio and better visual quality, demonstrating that applying the Contourlet transform to agricultural product image denoising is effective and feasible.

  1. Research on Mechanical Fault Diagnosis Scheme Based on Improved Wavelet Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Wentao He

    2016-01-01

    Full Text Available Wavelet analysis is a powerful tool for signal processing and mechanical equipment fault diagnosis due to the advantages of multiresolution analysis and excellent local characteristics in the time-frequency domain. Wavelet total variation (WATV) was recently developed from traditional wavelet analysis; it combines the advantages of wavelet-domain sparsity and total variation (TV) regularization. In order to guarantee the sparsity and the convexity of the total objective function, a nonconvex penalty function is chosen as the wavelet penalty function in WATV. The actual noise reduction effect of the WATV method largely depends on the estimation of the noise variance. In this paper, an improved wavelet total variation (IWATV) denoising method is introduced. Local variance analysis of the wavelet coefficients obtained from the wavelet decomposition of noisy signals is employed to estimate the noise variance, providing a scientific evaluation index. Analysis of a numerical simulation signal and real-world failure data demonstrates that the IWATV method has clear advantages over traditional wavelet threshold denoising and total variation denoising in mechanical fault diagnosis.

  2. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation that allows us to extract the underlying structural and compositional information contained in the Raman microspectra. PMID:23701523
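
    The overall pipeline can be pictured with the short sketch below, in which a simple Gaussian spatial smoothing stands in for the edge-preserving denoising (EPD) step and scikit-learn's k-means clusters the spectra; the cube shape (ny, nx, bands), the cluster count, and the smoothing width are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.cluster import KMeans

        def segment_crm(cube, n_clusters=5, sigma=1.0):
            """Spatially smooth a hyperspectral cube (ny, nx, bands), then k-means cluster the spectra."""
            ny, nx, nb = cube.shape
            smoothed = gaussian_filter(cube, sigma=(sigma, sigma, 0))   # smooth only in x and y
            spectra = smoothed.reshape(ny * nx, nb)
            labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(spectra)
            return labels.reshape(ny, nx)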

  3. Optimization of Wavelet-Based De-noising in MRI

    Directory of Open Access Journals (Sweden)

    K. Bartusek

    2011-04-01

    Full Text Available In the paper, a method for MR image enhancement using wavelet analysis is described. The analysis concentrates on the influence of the threshold level and the choice of mother wavelet on the resulting MR image. The influence is expressed by measuring and comparing three MR image parameters: signal-to-noise ratio, image contrast, and linear slope edge approximation. Unlike most standard methods, which work exclusively with the MR image magnitude, in our case both the MR image magnitude and the MR image phase were used in the enhancement process. Some recommendations are given in the conclusion, such as how to combine mother wavelets with threshold levels for various types of MR images.

  4. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation. (paper)

  5. Performance comparison of wavelet denoising based fast DOA estimation of MIMO OFDM system over Rayleigh Fading Channel

    Directory of Open Access Journals (Sweden)

    A.V.Meenakshi

    2012-09-01

    Full Text Available This paper presents a tool for the analysis and simulation of direction-of-arrival (DOA) estimation for MIMO OFDM signals over a Rayleigh fading channel. The performance of the proposed technique is tested with a wavelet-denoising-based cyclic MUSIC algorithm. Simulation results demonstrate that the proposed system not only has a good ability to suppress interference, but also significantly improves the DOA estimation of the system. In this paper, the DOA of the received MIMO OFDM signal is estimated, and the performance is analyzed in MATLAB simulations using Monte Carlo iterations. The paper provides a fairly complete picture of the performance and statistical efficiency with a QPSK signal model for a coherent system at low SNR (18 dB) and in an interference environment.
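
    For reference, a generic narrowband MUSIC pseudo-spectrum for a uniform linear array with half-wavelength spacing is sketched below; the wavelet denoising stage and the cyclic/OFDM specifics of the paper are not reproduced.

        import numpy as np

        def music_spectrum(X, n_sources, angles_deg):
            """MUSIC pseudo-spectrum for a uniform linear array with half-wavelength spacing.

            X: (n_antennas, n_snapshots) complex array of received snapshots.
            """
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
            eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
            En = eigvec[:, :m - n_sources]               # noise subspace
            spectrum = []
            for theta in np.deg2rad(angles_deg):
                a = np.exp(1j * np.pi * np.arange(m) * np.sin(theta))   # steering vector
                spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.asarray(spectrum)

    Peaks of the returned spectrum over the scanned angles indicate the estimated directions of arrival.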

  6. Method and application of wavelet shrinkage denoising based on genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A genetic algorithm (GA) is introduced into noise reduction based on wavelet transform threshold shrinkage (WTS) and translation-invariant threshold shrinkage (TIS), so that the parameters used in WTS and TIS, such as the wavelet function, the number of decomposition levels, hard or soft thresholding, and the threshold value, can be selected automatically. The paper ends by comparing the two noise reduction methods on the basis of their denoising performance, computation time, etc. The effectiveness of the methods introduced in this paper is validated by the results of analyses of simulated and real signals.

  7. Simultaneous de-noising in phase contrast tomography

    Science.gov (United States)

    Koehler, Thomas; Roessl, Ewald

    2012-07-01

    In this work, we investigate methods for de-noising of tomographic differential phase contrast and absorption contrast images. We exploit the fact that in grating-based differential phase contrast imaging (DPCI), first, several images are acquired simultaneously in exactly the same geometry, and second, these different images can show very different contrast-to-noise ratios. These features of grating-based DPCI are used to generalize the conventional bilateral filter. Experiments using simulations show a superior de-noising performance of the generalized algorithm compared with the conventional one.
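
    A naive single-guide joint (cross) bilateral filter is sketched below to illustrate the idea being generalized: range weights are computed from a second, less noisy image acquired in the same geometry. The window radius and the two sigmas are illustrative assumptions.

        import numpy as np

        def joint_bilateral(target, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
            """Naive joint bilateral filter: smooth `target` with range weights taken from `guide`."""
            H, W = target.shape
            tp = np.pad(target, radius, mode='reflect')
            gp = np.pad(guide, radius, mode='reflect')
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))   # spatial kernel
            out = np.zeros_like(target, dtype=float)
            for i in range(H):
                for j in range(W):
                    tw = tp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    gw = gp[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rng = np.exp(-((gw - guide[i, j]) ** 2) / (2 * sigma_r ** 2))   # range kernel from the guide
                    w = spatial * rng
                    out[i, j] = np.sum(w * tw) / np.sum(w)
            return out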

  8. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results.

    Science.gov (United States)

    Manhart, Michael T; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K; Hornegger, Joachim; Doerfler, Arnd

    2014-08-21

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.

  9. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results

    Science.gov (United States)

    Manhart, Michael T.; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K.; Hornegger, Joachim; Doerfler, Arnd

    2014-08-01

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.

  10. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  11. A Novel De-noising Model Based on Independent Component Analysis and Beamlet Transform

    Directory of Open Access Journals (Sweden)

    Guangming Zhang

    2012-06-01

    Full Text Available Vehicle video key frame processing plays a significant role as an important part of intelligent transportation systems. Traditional vehicle video key frame extraction often suffers from considerable noise, so it cannot meet the requirements of recognition and tracking. In this paper, a novel method that combines independent component analysis with the beamlet transform is proposed. First, a random matrix is produced to separate the key frame into a separated image for estimation. Then the beamlet transform is applied to optimize the coefficients. Finally, the coefficients are selected for image reconstruction by the inverse beamlet transform. By comparison, this approach removes more noise and preserves more detail, and its efficiency is better than that of other traditional de-noising approaches.

  12. A Denoising Based Autoassociative Model for Robust Sensor Monitoring in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Ahmad Shaheryar

    2016-01-01

    Full Text Available Sensor health monitoring is essential for the reliable functioning of safety-critical chemical and nuclear power plants. Autoassociative neural network (AANN) based empirical sensor models have been widely reported for sensor calibration monitoring. However, such ill-posed data-driven models may result in poor generalization and robustness. To address these issues, several regularization heuristics such as training with jitter, weight decay, and cross-validation are suggested in the literature. Apart from these regularization heuristics, traditional error-gradient-based supervised learning algorithms for multilayered AANN models are highly susceptible to being trapped in local optima. In order to address the poor regularization and robust learning issues, we propose a denoised autoassociative sensor model (DAASM) based on a deep learning framework. The proposed DAASM model comprises multiple hidden layers which are pretrained greedily in an unsupervised fashion under a denoising autoencoder architecture. In order to improve robustness, the dropout heuristic and domain-specific data corruption processes are exercised during the unsupervised pretraining phase. The proposed sensor model is trained and tested on sensor data from a PWR-type nuclear power plant. Accuracy, autosensitivity, spillover, and sequential probability ratio test (SPRT) based fault detectability metrics are used for performance assessment and comparison with the extensively reported five-layer AANN model by Kramer.
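
    As a compact illustration of the denoising-autoencoder building block used in such pretraining, the PyTorch sketch below trains a single hidden layer to reconstruct clean sensor vectors from corrupted inputs; the layer sizes, the Gaussian corruption level, and the training loop are illustrative assumptions rather than the DAASM architecture.

        import torch
        import torch.nn as nn

        class DenoisingAE(nn.Module):
            """Single hidden-layer denoising autoencoder for n_sensors-dimensional inputs."""
            def __init__(self, n_sensors, n_hidden=32):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_sensors, n_hidden), nn.ReLU())
                self.decoder = nn.Linear(n_hidden, n_sensors)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def train_dae(model, data, noise_std=0.1, epochs=50, lr=1e-3):
            """Train on corrupted inputs, reconstruct clean targets (data: tensor of shape [N, n_sensors])."""
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            loss_fn = nn.MSELoss()
            for _ in range(epochs):
                noisy = data + noise_std * torch.randn_like(data)   # corrupt the inputs
                loss = loss_fn(model(noisy), data)                  # reconstruct the clean data
                opt.zero_grad()
                loss.backward()
                opt.step()
            return model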

  13. Simultaneous Dehazing and Denoising of Single Hazing Image

    Institute of Scientific and Technical Information of China (English)

    方帅; 王峰; 占吉清; 曹洋; 袁宏武; 饶瑞中

    2012-01-01

    In practice, images contain various kinds of noise, which greatly affects dehazing results. To address this, a single-image dehazing algorithm based on the joint bilateral filter is proposed that achieves simultaneous dehazing and denoising. First, an initial rough transmission map is estimated using the dark channel prior. Then, a joint bilateral filter is applied to refine the rough transmission map under the guidance of the original image, which effectively reduces halo artifacts in the dehazed image. Next, another bilateral filter is applied to obtain the dehazed image, realizing image denoising at the same time. Finally, a color factor is introduced into the bilateral filtering process to deal with the color distortion introduced by the restoration. Comparative experiments on various types of images verify that the proposed algorithm achieves single-image dehazing and denoising simultaneously at low computational cost; in addition, the color factor brings richer chromatic detail to the dehazed results.
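
    A minimal sketch of the dark-channel-prior transmission estimate that such methods start from is given below; the window size and omega follow common defaults from the dark channel prior literature, and the joint bilateral refinement, the second bilateral filtering pass, and the color factor are not reproduced.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(img, size=15):
            """Dark channel of an RGB image in [0, 1]: per-pixel minimum over channels, then a local minimum filter."""
            return minimum_filter(img.min(axis=2), size=size)

        def estimate_transmission(img, A, omega=0.95, size=15):
            """Rough transmission map t(x) = 1 - omega * dark_channel(I / A), with A the (3,) atmospheric light."""
            normalized = img / np.maximum(A.reshape(1, 1, 3), 1e-6)
            return 1.0 - omega * dark_channel(normalized, size=size)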

  14. A new modified differential evolution algorithm scheme-based linear frequency modulation radar signal de-noising

    Science.gov (United States)

    Dawood Al-Dabbagh, Mohanad; Dawoud Al-Dabbagh, Rawaa; Raja Abdullah, R. S. A.; Hashim, F.

    2015-06-01

    The main intention of this study was to investigate the development of a new optimization technique based on the differential evolution (DE) algorithm for the purpose of linear frequency modulation radar signal de-noising. As the standard DE algorithm is a fixed-length optimizer, it is not suitable for solving signal de-noising problems that call for variability. A modified crossover scheme called rand-length crossover was designed to fit the proposed variable-length DE, and the new DE algorithm is referred to as the random variable-length crossover differential evolution (rvlx-DE) algorithm. The measurement results demonstrate a highly efficient capability for target detection, in terms of frequency response and peak forming, isolated from noise distortion. The modified method showed significant improvements in performance over traditional de-noising techniques.

  15. An Imbalanced Data Classification Algorithm of De-noising Auto-Encoder Neural Network Based on SMOTE

    Directory of Open Access Journals (Sweden)

    Zhang Chenggang

    2016-01-01

    Full Text Available The imbalanced data classification problem has long been one of the key issues in the field of machine learning. The synthetic minority over-sampling technique (SMOTE) is a classical approach to balancing datasets, but it may give rise to problems such as noise. A stacked de-noising auto-encoder neural network (SDAE) can effectively reduce data redundancy and noise through unsupervised layer-wise greedy learning. Aiming at the shortcomings of the SMOTE algorithm when synthesizing new minority-class samples, this paper proposes a stacked de-noising auto-encoder neural network algorithm based on SMOTE, SMOTE-SDAE, to deal with imbalanced data classification. The proposed algorithm is not only able to synthesize new minority-class samples, but can also de-noise and classify the sampled data. Experimental results show that, compared with traditional algorithms, SMOTE-SDAE significantly improves the minority-class classification accuracy on imbalanced datasets.
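
    A minimal SMOTE oversampling sketch using scikit-learn's nearest-neighbour search is given below; the stacked de-noising auto-encoder part of SMOTE-SDAE is not reproduced, and the neighbour count k and the random seed are illustrative assumptions.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def smote(minority, n_new, k=5, rng=None):
            """Generate n_new synthetic minority samples by interpolating toward k nearest neighbours."""
            if rng is None:
                rng = np.random.default_rng(0)
            nn = NearestNeighbors(n_neighbors=k + 1).fit(minority)
            _, idx = nn.kneighbors(minority)              # idx[:, 0] is the sample itself
            synthetic = []
            for _ in range(n_new):
                i = rng.integers(len(minority))
                j = idx[i, rng.integers(1, k + 1)]        # pick a random neighbour (skip self)
                gap = rng.random()
                synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
            return np.asarray(synthetic)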

  16. Binary Codes for Tagging X-Ray Images via Deep De-Noising Autoencoders

    OpenAIRE

    Sze-To, Antonio; Tizhoosh, Hamid R; Wong, Andrew K. C.

    2016-01-01

    A Content-Based Image Retrieval (CBIR) system which identifies similar medical images based on a query image can assist clinicians for more accurate diagnosis. The recent CBIR research trend favors the construction and use of binary codes to represent images. Deep architectures could learn the non-linear relationship among image pixels adaptively, allowing the automatic learning of high-level features from raw pixels. However, most of them require class labels, which are expensive to obtain, ...

  17. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    Science.gov (United States)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare performance of different wavelets such as Daubechies, Bi-Orthogonal, and reverse Bi-orthogonal of different degrees and orders for pre-processing of images.

  18. Method for signal decomposition and denoising based on nonuniform cosine-modulated filter banks

    Institute of Scientific and Technical Information of China (English)

    Xuemei Xie; Li Li; Guangming Shi; Bin Peng

    2008-01-01

    In this paper, a novel method for signal decomposition and denoising is proposed based on a nonuniform filter bank (NUFB), which is derived from a uniform filter bank. With this method, the signal is first decomposed into M subbands using a uniform filter bank. Then, according to their energy distribution, the corresponding consecutive filters are merged to compose the nonuniform filters. With the resulting NUFB, the signal can be readily matched and flexibly decomposed according to its power spectrum distribution. As another advantage, this method can be used to detect and remove narrow-band noise from a corrupted signal. To verify the proposed method, a simulation of extracting the main information of an audio signal and removing its glitches is given.

  19. Implemented Wavelet Packet Tree based Denoising Algorithm in Bus Signals of a Wearable Sensorarray

    Science.gov (United States)

    Schimmack, M.; Nguyen, S.; Mercorelli, P.

    2015-11-01

    This paper introduces a thermosensing embedded system with a sensor bus that uses wavelets for noise location and denoising. Following the filter bank principle, the measured signal is separated into two bands, low and high frequency. The proposed algorithm identifies the defined noise in these two bands. With the wavelet packet transform as a form of the discrete wavelet transform, it is able to decompose and reconstruct bus input signals of a sensor network. Using a seminorm, the noise in a sequence can be detected and located, so that the wavelet basis can be rearranged. This in particular allows the elimination of any incoherent parts that make up the unavoidable measurement noise of the bus signals. The proposed method was built on wavelet algorithms from the WaveLab 850 library of Stanford University (USA). This work gives an insight into the workings of wavelet transformation.

  20. Threshold Optimized Wavelet for Remotely Sensed Image Denoising

    Institute of Scientific and Technical Information of China (English)

    刘晓莉; 任丽秋; 李伟; 王小国; 胡忠威

    2016-01-01

    To address the problems of limited denoising ability, residual noise, and noise misjudgment when traditional wavelet thresholding is used to remove noise from remote sensing images, an optimized wavelet threshold function algorithm for remote sensing images is proposed. The algorithm uses wavelet-based edge detection to determine the wavelet coefficients corresponding to edge features of the remote sensing image. Then, according to the noise variance, an optimized threshold function is set for denoising; that is, the previous unified threshold is modified so that it varies with the decomposition scale. The algorithm preserves the advantages of the traditional soft and hard thresholds while remedying their defects, generating a new threshold function that handles the wavelet coefficients more flexibly. After the optimized wavelet threshold denoising, a smoothed remote sensing image is obtained, and the wavelet edge-detection result is then embedded back into the smoothed image. The experimental results show that, compared with traditional wavelet threshold denoising, the algorithm solves the denoising problems of the traditional threshold functions, keeps the details of the remote sensing image while removing noise, and improves the signal-to-noise ratio.

  1. Denoising autoencoder with modulated lateral connections learns invariant representations of natural images

    OpenAIRE

    Rasmus, Antti; Raiko, Tapani; Valpola, Harri

    2014-01-01

    Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations. In regular autoencoders, detailed information needs to be carried through the highest layers but lateral connections from encoder to decoder relieve this pressure. It is shown that abstract invariant features can be translated to detailed reconstructions when invariant features are allowed to modulate the strength of the lateral conn...

  2. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  3. Noise Removal From Microarray Images Using Maximum a Posteriori Based Bivariate Estimator

    Directory of Open Access Journals (Sweden)

    A.Sharmila Agnal

    2013-01-01

    Full Text Available A microarray image contains information about thousands of genes in an organism, and these images are affected by several types of noise, which degrade the circular edges of spots and thus the image quality. Hence, noise removal is the first step of cDNA microarray image analysis for obtaining gene expression levels and identifying infected cells. The dual-tree complex wavelet transform (DT-CWT) is preferred for denoising microarray images due to its properties such as improved directional selectivity and near shift-invariance. In this paper, bivariate estimators, namely linear minimum mean squared error (LMMSE) and maximum a posteriori (MAP), derived by applying the DT-CWT, are used for denoising microarray images. Experimental results show that the MAP-based denoising method outperforms existing denoising techniques for microarray images.
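
    For orientation, the classical bivariate MAP shrinkage rule of Şendur and Selesnick, on which such DT-CWT bivariate estimators build, is sketched below; applying it subband by subband to the parent-child coefficient pairs of the microarray image is omitted, and the noise and signal variance estimates are assumed to be available.

        import numpy as np

        def bivariate_shrink(w_child, w_parent, sigma_n, sigma):
            """Bivariate MAP shrinkage of a coefficient given its parent (Sendur-Selesnick rule).

            sigma_n: noise standard deviation; sigma: marginal signal std of the subband.
            """
            mag = np.sqrt(w_child ** 2 + w_parent ** 2)
            factor = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / np.maximum(mag, 1e-12)
            return factor * w_child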

  4. Decoding Stacked Denoising Autoencoders

    OpenAIRE

    Sonoda, Sho; Murata, Noboru

    2016-01-01

    Data representation in a stacked denoising autoencoder is investigated. Decoding is a simple technique for translating a stacked denoising autoencoder into a composition of denoising autoencoders in the ground space. In the infinitesimal limit, a composition of denoising autoencoders is reduced to a continuous denoising autoencoder, which is rich in analytic properties and geometric interpretation. For example, the continuous denoising autoencoder solves the backward heat equation and transpo...

  5. An Integrated Denoising Method for Sensor Mixed Noises Based on Wavelet Packet Transform and Energy-Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Chao Tan

    2014-01-01

    Full Text Available In order to solve the problem of industrial sensor signal denoising, an integrated denoising method for mixed sensor noises based on the wavelet packet transform and energy-correlation analysis is proposed. The architecture of the proposed method is designed, and the key technologies, such as the wavelet packet transform, energy-correlation analysis, and the processing of wavelet packet coefficients based on energy-correlation analysis, are presented. Finally, a simulation example for a specific signal and an application to a shearer cutting current signal, which mainly contain white Gaussian noise and impact noise, are carried out. The simulation and application results show that the proposed method is effective and outperforms others.

  6. Image Denoising And Enhancement Using Multiwavelet With Hard Threshold In Digital Mammographic Images

    OpenAIRE

    Kother Mohideen; Arumuga Perumal; Nallaperumal Krishnan; Mohamed Sathik

    2011-01-01

    Breast cancer continues to be a significant public health problem in the world. Diagnostic mammography is the most effective technology for early detection of breast cancer. However, in some cases it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications, on the mammograms. Dense regions in digital mammographic images are usually noisy and have low contrast, and their visual screening is difficult for physicians. This pap...

  7. A spatio-temporal filtering approach to denoising of single-trial ERP in rapid image triage.

    Science.gov (United States)

    Yu, Ke; Shen, Kaiquan; Shao, Shiyun; Ng, Wu Chun; Kwok, Kenneth; Li, Xiaoping

    2012-03-15

    Conventional search for images containing points of interest (POI) in large-volume imagery is costly and sometimes even infeasible. The rapid image triage (RIT) system which is a human cognition guided computer vision technique is potentially a promising solution to the problem. In the RIT procedure, images are sequentially presented to a subject at a high speed. At the instant of observing a POI image, unique POI event-related potentials (ERP) characterized by P300 will be elicited and measured on the scalp. With accurate single-trial detection of such unique ERP, RIT can differentiate POI images from non-POI images. However, like other brain-computer interface systems relying on single-trial detection, RIT suffers from the low signal-to-noise ratio (SNR) of the single-trial ERP. This paper presents a spatio-temporal filtering approach tailored for the denoising of single-trial ERP for RIT. The proposed approach is essentially a non-uniformly delayed spatial Gaussian filter that attempts to suppress the non-event related background electroencephalogram (EEG) and other noises without significantly attenuating the useful ERP signals. The efficacy of the proposed approach is illustrated by both simulation tests and real RIT experiments. In particular, the real RIT experiments on 20 subjects show a statistically significant and meaningful average decrease of 9.8% in RIT classification error rate, compared to that without the proposed approach.

  8. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    Science.gov (United States)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.

  9. Rolling element bearing instantaneous rotational frequency estimation based on EMD soft-thresholding denoising and instantaneous fault characteristic frequency

    Institute of Scientific and Technical Information of China (English)

    赵德尊; 李建勇; 程卫东; 王天杨; 温伟刚

    2016-01-01

    The accurate estimation of the rolling element bearing instantaneous rotational frequency (IRF) is the key capability of order tracking methods based on time-frequency analysis. The rolling element bearing IRF can be accurately estimated from the instantaneous fault characteristic frequency (IFCF). However, in an environment with a low signal-to-noise ratio (SNR), e.g., an incipient fault or operation at low speed, the signal contains strong background noise that seriously affects the effectiveness of the aforementioned method. An algorithm for signal preprocessing based on empirical mode decomposition (EMD) and wavelet shrinkage is proposed in this work. Compared with EMD denoising by the cross-correlation coefficient and kurtosis (CCK) criterion, EMD soft-thresholding (ST) denoising can ensure the integrity of the signal, improve the SNR, and highlight fault features. The effectiveness of the algorithm for rolling element bearing IRF estimation by EMD ST denoising and the IFCF was validated with both simulated and experimental bearing vibration signals at a low SNR.

  10. Adaptive de-noising method based on wavelet and adaptive learning algorithm in on-line PD monitoring

    Institute of Scientific and Technical Information of China (English)

    王立欣; 诸定秋; 蔡惟铮

    2002-01-01

    It is an important step in the on-line monitoring of partial discharge (PD) to extract PD pulses from various background noises. An adaptive de-noising method is introduced for adaptive noise reduction during the detection of PD pulses. This method is based on the Wavelet Transform (WT), and in the wavelet domain the noise components at the different decomposition levels are reduced by independent thresholds. Instead of the standard hard thresholding function, a new type of hard thresholding function with a continuous derivative is employed by this method. For the selection of thresholds, an unsupervised learning algorithm based on the gradient of the mean square error (MSE) is presented to search for the optimal threshold for noise reduction, and the optimal threshold is selected when the minimum MSE is obtained. When simulated signals and on-site experimental data are processed by this method, it is shown that background noises such as narrowband noises can be reduced efficiently. Furthermore, it is shown that, in comparison with the conventional wavelet de-noising method, the adaptive de-noising method performs better in preserving the pulses and is more adaptive when suppressing the background noises of PD signals.
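
    A minimal sketch of the general idea, per-level wavelet thresholding with a hard-thresholding-like function that has a continuous derivative, is given below; the specific smooth shrinkage function and the MAD-based threshold are illustrative choices, not the exact functions or the learning rule of the paper.

    import numpy as np
    import pywt

    def smooth_hard(c, thr, beta=4.0):
        """Hard-thresholding-like shrinkage with a continuous derivative:
        coefficients well above thr are left almost unchanged, while
        coefficients below thr are attenuated smoothly instead of cut off."""
        return c * (1.0 - np.exp(-(np.abs(c) / (thr + 1e-12)) ** beta))

    def wavelet_denoise(signal, wavelet="db4", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        out = [coeffs[0]]                      # keep the approximation untouched
        for d in coeffs[1:]:                   # independent threshold per level
            sigma = np.median(np.abs(d)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
            out.append(smooth_hard(d, thr))
        return pywt.waverec(out, wavelet)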

  11. De-noising and retrieving algorithm of Mie lidar data based on the particle filter and the Fernald method.

    Science.gov (United States)

    Li, Chen; Pan, Zengxin; Mao, Feiyue; Gong, Wei; Chen, Shihua; Min, Qilong

    2015-10-01

    The signal-to-noise ratio (SNR) of an atmospheric lidar decreases rapidly as the range increases, so that maintaining high accuracy when retrieving lidar data at the far end is difficult. To avoid this problem, many de-noising algorithms have been developed; in particular, an effective de-noising algorithm has been proposed to simultaneously retrieve lidar data and obtain a de-noised signal by combining the ensemble Kalman filter (EnKF) and the Fernald method. This algorithm enhances the retrieval accuracy and the effective measurement range of a lidar based on the Fernald method, but sometimes leads to a shift (bias) in the near range as a result of the over-smoothing caused by the EnKF. This study proposes a new scheme that avoids this phenomenon by using a particle filter (PF) instead of the EnKF in the de-noising algorithm. Synthetic experiments show that the PF performs better than the EnKF and Fernald methods. The root mean square errors of the PF are 52.55% and 38.14% of those of the Fernald and EnKF methods, and the PF increases the SNR by 44.36% and 11.57% of that of the Fernald and EnKF methods, respectively. For experiments with real signals, the relative bias of the EnKF is 5.72%, which is reduced to 2.15% by the PF in the near range. Furthermore, the PF also significantly suppresses the random noise in the far range. An extensive application of the PF method can be useful in determining the local and global properties of aerosols. PMID:26480164

  12. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    Science.gov (United States)

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise in rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method has better performance, such as a higher SNR and faster convergence, than the normal LMD method.

  13. Image Denoising and Segmentation Based on Mutual Information Criterion%基于互信息准则的图像平滑和分割

    Institute of Scientific and Technical Information of China (English)

    温铁祥; 潘正洋; 辜嘉

    2014-01-01

    Scale space plays an important role in many computer vision tasks. Automatic scale selection is the foundation of multi-scale image analysis, but its performance is still very subjective and empirical. To automatically select the appropriate scale for a particular application, a scale selection model based on information theory is proposed in this paper. The proposed model uses mutual information as a similarity criterion for selecting the optimal scale in multi-scale analysis, with applications to image denoising and segmentation. Firstly, a multi-scale image smoothing and denoising method based on morphological operators is studied; this technique does not require prior knowledge of the noise variance and can effectively eliminate changes in illumination. Secondly, a clustering-based unsupervised image segmentation algorithm is developed by recursively pruning the Huffman coding tree; from an information-theoretic point of view, the proposed clustering algorithm preserves the maximum amount of information for a specific number of clusters. Finally, to verify the feasibility of the proposed algorithms, their theoretical properties are analyzed mathematically and their performance is tested through a series of experiments, which demonstrate that the model yields the optimal scale for the developed image denoising and segmentation algorithms.
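
    The scale-selection idea can be pictured with a short sketch: compute the mutual information between the input image and morphologically smoothed versions of it at increasing scales, and inspect how the score decays. The histogram-based MI estimator and the opening-closing smoother below are generic stand-ins for the paper's exact choices.

    import numpy as np
    from scipy import ndimage

    def mutual_information(a, b, bins=64):
        """Histogram-based mutual information between two equally sized images."""
        h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = h / h.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    def scale_scores(image, scales=(1, 2, 3, 4, 5)):
        """MI between the input and its morphological opening-closing at each scale."""
        scores = {}
        for s in scales:
            size = 2 * s + 1
            smoothed = ndimage.grey_closing(ndimage.grey_opening(image, size=size), size=size)
            scores[s] = mutual_information(image, smoothed)
        return scores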

  14. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on the feature analysis of the vibration signals measured for the condition diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise, and the useful fault features are hidden in the heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method for adaptively selecting an appropriate threshold for the multiwavelet using the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitiveness of each SP for condition diagnosis. The MPSO algorithm, with adaptive inertia weight adjustment and particle mutation, is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm and can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  15. 一种新颖的Contourlet域中子辐射图像降噪方法%A Novel Method of Neutron Radiography Image Denoising Using Contourlet Transform

    Institute of Scientific and Technical Information of China (English)

    金炜; 魏彪; 潘英俊; 冯鹏; 唐彬

    2006-01-01

    A new image transform, namely the contourlet transform, which can capture the intrinsic geometric structure of an image, is introduced, and a novel image denoising scheme based on the contourlet transform is presented. By calculating a variance homogeneity measure (VHM), a locally adaptive window is determined to optimally estimate the shrinkage factor, and the contourlet coefficients are then shrunk using this factor. The scheme combines threshold denoising with subband-correlation-based denoising and makes full use of the correlation of contourlet coefficients along edges or contours within the same directional subband, achieving a balance between noise removal and signal preservation. In numerical comparisons with various methods, the presented scheme outperforms the traditional contourlet denoising method based on hard thresholding and the Wiener filter in terms of PSNR. Experiments also show that the scheme not only removes noise effectively but is also well suited to processing neutron radiography images.

  16. Compression and denoising in magnetic resonance imaging via SVD on the Fourier domain using computer algebra

    Science.gov (United States)

    Díaz, Felipe

    2015-09-01

    Magnetic resonance (MR) data reconstruction can be a computationally challenging task. The signal-to-noise ratio might also present complications, especially with high-resolution images. In this sense, data compression can be useful not only for reducing complexity and memory requirements, but also for reducing noise, even allowing spurious components to be eliminated. This article proposes the use of a system based on a low-order singular value decomposition for reconstruction and noise reduction in an MR imaging system. The proposed method is evaluated using in vivo MRI data. Rebuilt images using less than 20% of the original data and with similar quality in terms of visual inspection are presented, together with a quantitative evaluation of the method.
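
    A compact numpy sketch of the underlying idea, truncating the singular value spectrum of an image and reconstructing from the leading components, is shown below; the paper applies the decomposition via computer algebra in the Fourier domain, whereas the sketch keeps the plain image-domain version for brevity.

    import numpy as np

    def svd_truncate(image, rank):
        """Reconstruct an image from its leading `rank` singular components.

        Keeping only the largest singular values compresses the data and
        suppresses the noise energy spread across the small singular values.
        """
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        s[rank:] = 0.0
        return u @ np.diag(s) @ vt

    # Example: keep roughly 20% of the components of a 256 x 256 slice.
    # recon = svd_truncate(noisy_slice, rank=int(0.2 * 256))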

  17. Data-adaptive image-denoising for detecting and quantifying nanoparticle entry in mucosal tissues through intravital 2-photon microscopy

    Directory of Open Access Journals (Sweden)

    Torsten Bölke

    2014-11-01

    Full Text Available Intravital 2-photon microscopy of mucosal membranes across which nanoparticles enter the organism typically generates noisy images. Because the noise results from the random statistics of only very few photons detected per pixel, it cannot be avoided by technical means. Fluorescent nanoparticles contained in the tissue may be represented by a few bright pixels which closely resemble the noise structure. We here present a data-adaptive method for digital denoising of datasets obtained by 2-photon microscopy. The algorithm exploits both local and non-local redundancy of the underlying ground-truth signal to reduce noise. Our approach automatically adapts the strength of noise suppression to the data by using a Bayesian network. The results show that the specific adaptation to both signal and noise characteristics improves the preservation of fine structures such as nanoparticles, while fewer artefacts are produced compared with reference algorithms. Our method is applicable to other imaging modalities as well, provided the specific noise characteristics are known and taken into account.
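
    The paper's Bayesian, data-adaptive scheme is specific to this application, but the non-local redundancy idea it builds on can be tried with the generic non-local means denoiser in scikit-image, as in the sketch below; the parameter values are illustrative assumptions rather than the authors' settings.

    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def nlm_denoise(image):
        """Non-local means with a noise level estimated from the data itself."""
        sigma = float(np.mean(estimate_sigma(image)))
        return denoise_nl_means(
            image,
            h=0.8 * sigma,       # filtering strength tied to the noise estimate
            sigma=sigma,         # subtracted from patch distances internally
            patch_size=5,
            patch_distance=6,
            fast_mode=True,
        )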

  18. Angle Based Orthogonal FRIT and Its Application in Image Denoising%一种基于角度的正交FRIT变换及其在图像去噪中的应用

    Institute of Scientific and Technical Information of China (English)

    刘云霞; 彭玉华; 孟庆芳; 尹勇

    2007-01-01

    The Finite Ridgelet Transform (FRIT) proposed by M. N. Do and M. Vetterli is widely used for its efficient representation of linear singularities, but its use in image compression and denoising is severely affected by the "wrap-around" phenomenon. Based on an analysis of the relationship between the wrap-around phenomenon and the coefficients in the FRAT domain, this paper proposes an angle-based orthogonal FRIT scheme (Angle-based FRIT, AFRIT). The scheme has better energy compaction and effectively reduces the wrap-around phenomenon. Furthermore, the orthogonal FRIT denoising problem is modeled and an improved threshold based on the AFRIT is proposed. Denoising experiments on various images at different noise levels show that the AFRIT with the improved threshold is clearly superior to the commonly used FRIT denoising methods.

  19. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Full Text Available Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of the iterative algorithms involved. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. The algorithm is based on the variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm, namely finite convergence for some variables and q-linear convergence for the rest. Experiments show that the model has a strong texture-preserving property in restoring color images. Both the theoretical computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison with the widely used fixed-point algorithm.
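
    The paper's nonlocal vectorial TV solver is specialized, but a standard (local) vectorial total variation denoiser is readily available in scikit-image and serves as a baseline for the same multichannel Gaussian-noise setting; the call below assumes a recent scikit-image version that supports the channel_axis keyword.

    from skimage import data, img_as_float, util
    from skimage.restoration import denoise_tv_chambolle

    # Color test image with additive Gaussian noise.
    clean = img_as_float(data.astronaut())
    noisy = util.random_noise(clean, mode="gaussian", var=0.01)

    # Vectorial (channel-coupled) total variation denoising.
    denoised = denoise_tv_chambolle(noisy, weight=0.1, channel_axis=-1)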

  20. Real-time Dynamic MRI Reconstruction using Stacked Denoising Autoencoder

    OpenAIRE

    Majumdar, Angshul

    2015-01-01

    In this work we address the problem of real-time dynamic MRI reconstruction. There are a handful of studies on this topic; these techniques are either based on compressed sensing or employ Kalman Filtering. These techniques cannot achieve the reconstruction speed necessary for real-time reconstruction. In this work, we propose a new approach to MRI reconstruction. We learn a non-linear mapping from the unstructured aliased images to the corresponding clean images using a stacked denoising aut...

  1. Intensity-constrained total variation regularization for image denoising and deblurring

    OpenAIRE

    Swenson, Daniel

    2011-01-01

    Many problems in digital image processing are problems in digital image restoration. Our goal in this technical report is to examine what benefits are gained when we impose an intensity constraint in the context of total variation regularization, a common image restoration tool that works by encouraging smoothness of the restored image.

  2. Classical low-pass filter and real-time wavelet-based denoising technique implemented on a DSP: a comparison study

    Science.gov (United States)

    Dolabdjian, Ch.; Fadili, J.; Huertas Leyva, E.

    2002-11-01

    We have implemented a real-time numerical denoising algorithm, using the Discrete Wavelet Transform (DWT), on a TMS320C3x Digital Signal Processor (DSP). We also compared, from theoretical and practical viewpoints, this post-processing approach with a more classical low-pass filter. This comparison was carried out using an ECG-type (electrocardiogram) signal. The denoising approach is an elegant and extremely fast alternative to the class of classical linear filters. It is particularly adapted to non-stationary signals such as those encountered in biological applications, and it substantially improves the detection of such signals over Fourier-based techniques. This processing step is a vital element in our acquisition chain using high-sensitivity magnetic sensors, and it should enhance the detection of cardiac-type magnetic signals or of magnetic particles in movement.

  3. Denoising and covariance estimation of single particle cryo-EM images.

    Science.gov (United States)

    Bhamre, Tejal; Zhang, Teng; Singer, Amit

    2016-07-01

    The problem of image restoration in cryo-EM entails correcting for the effects of the Contrast Transfer Function (CTF) and noise. Popular methods for image restoration include 'phase flipping', which corrects only for the Fourier phases but not amplitudes, and Wiener filtering, which requires the spectral signal to noise ratio. We propose a new image restoration method which we call 'Covariance Wiener Filtering' (CWF). In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem. Our estimation procedure for the covariance matrix is new and successfully corrects for the CTF. We demonstrate the efficacy of CWF by applying it to restore both simulated and experimental cryo-EM images. Results with experimental datasets demonstrate that CWF provides a good way to evaluate the particle images and to see what the dataset contains even without 2D classification and averaging.
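
    For contrast with the covariance-based approach above, a generic local Wiener filter (not the spectral, CTF-aware Wiener filtering discussed in the abstract) can be run directly from SciPy, as in the short sketch below; the random array stands in for a noisy projection image and the window size is an arbitrary choice.

    import numpy as np
    from scipy.signal import wiener

    # Classical local Wiener filtering applied to a stand-in noisy image.
    rng = np.random.default_rng(0)
    noisy = 1.0 + rng.normal(0.0, 0.5, size=(128, 128))
    filtered = wiener(noisy, mysize=5)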

  4. Segmentation of Confocal Raman Microspectroscopic Imaging Data Using Edge-Preserving Denoising and Clustering

    OpenAIRE

    Alexandrov, Theodore; Lasch, Peter

    2013-01-01

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. U...

  5. Denoising of Medical Ultrasound Images Using Spatial Filtering and Multiscale Transforms

    OpenAIRE

    V N Prudhvi Raj; T Venkateswarlu

    2013-01-01

    Medical imaging has become an integral part of health care, where critical diagnoses such as blocks in the veins, plaques in the carotid arteries, minute fractures in the bones, blood flow in the brain, etc., are carried out without opening the patient's body. There are various imaging modalities for different applications to observe the anatomical and physiological conditions of the patient. These modalities will introduce noise and artifacts during medical image acquisition. If the noise and a...

  6. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  7. Integration of speckle de-noising and image segmentation using Synthetic Aperture Radar image for flood extent extraction

    Indian Academy of Sciences (India)

    J Senthilnath; H Vikram Shenoy; Ritwik Rajendra; S N Omkar; V Mani; P G Diwakar

    2013-06-01

    Flood is one of the detrimental hydro-meteorological threats to mankind, which calls for very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. However, SAR images suffer from speckle noise, so the SAR image is processed in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of the Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is reliable. The selected Gamma MAP filtered image is segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and it is concluded that MSS is efficient for flood mapping.
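
    For reference, the classic Lee filter mentioned above (one of the three speckle filters compared) can be written in a few lines with SciPy; the global noise-variance estimate used here is a common simplification and an assumption, not the paper's exact configuration.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def lee_filter(img, size=7):
        """Classic Lee filter for speckle reduction in SAR intensity images."""
        img = img.astype(float)
        mean = uniform_filter(img, size)
        sq_mean = uniform_filter(img * img, size)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        noise_var = np.mean(var)                  # simple global speckle estimate
        weight = var / (var + noise_var + 1e-12)  # adaptive gain per pixel
        return mean + weight * (img - mean)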

  8. Log-Gabor Feature-Based Nonlocal Means Denoising Algorithm and Its Acceleration Scheme%基于Log-Gabor特征的非局部均值去噪算法及其加速方案研究

    Institute of Scientific and Technical Information of China (English)

    张嵩; 景华炯

    2015-01-01

    The nonlocal means (NLM) algorithm is a spatial-domain image denoising method that exploits long-range similarities between pixels of natural images. Notably, the similarity between true pixel values in the original NLM is estimated from patch information of the noise-corrupted input image. In this paper, the pixel similarities in NLM are first estimated from Log-Gabor features to achieve good denoising results. Moreover, a mixed similarity combining the Log-Gabor features with intensity information is exploited to obtain better adaptivity to local image characteristics and further improve the denoising quality. In addition, a random-projection-based NLM speed-up method is studied based on the Johnson-Lindenstrauss lemma. Extensive tests are carried out, including a running-time comparison before and after dimensionality reduction and an analysis of how the type of random projection matrix and the degree of dimensionality reduction affect the final denoising performance. The experimental results confirm the effectiveness of the proposed acceleration scheme.
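
    The Johnson-Lindenstrauss acceleration amounts to comparing patches after a random projection to a much lower dimension, as in the numpy sketch below; the projection dimension and the Gaussian scaling are generic choices, not the paper's tuned settings.

    import numpy as np

    def random_project_patches(patches, k, seed=0):
        """Johnson-Lindenstrauss style random projection of patch feature vectors.

        patches : (n_patches, d) array of patch features (e.g. Log-Gabor responses).
        k       : reduced dimension; pairwise distances are approximately preserved.
        """
        d = patches.shape[1]
        rng = np.random.default_rng(seed)
        proj = rng.standard_normal((d, k)) / np.sqrt(k)
        return patches @ proj

    # Distances computed in the k-dimensional space can then replace the full
    # patch distances inside the non-local means weights, which is the speed-up.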

  9. Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology

    Directory of Open Access Journals (Sweden)

    Guanghua Zhang

    2016-09-01

    Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the Gabor-function-based fingerprint image enhancement algorithm. The Contourlet transform has good texture sensitivity and can be used to reinforce the segmentation of the fingerprint image. The method proposed in this paper obtains the final fingerprint segmentation image by applying a modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge lines through modulus maxima detection, and finally connecting broken ridge lines using a directional value filter. It can obtain richer directional information than methods based on the wavelet transform and the Gabor function and can make the positioning of detailed features more accurate, although its ridges should be more coherent. Experiments have shown that this algorithm is clearly superior in fingerprint feature detection.

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and of the major noises, we propose a de-noising algorithm utilizing a wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using windows whose step lengths follow even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted results, so the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that, in GREATEM signals, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
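
    The window-fitting step can be pictured with the small SciPy sketch below, which fits a single-exponential decay to one data window and returns the fitted values that would replace the polluted samples; the decay model and initial guess are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, b):
        """Single-exponential decay model for one data window."""
        return a * np.exp(-b * t)

    def fit_window(t, y):
        """Fit the decay model to a window and return the fitted replacement."""
        p0 = (max(abs(y[0]), 1e-6), 1.0)       # crude initial guess
        params, _ = curve_fit(decay, t, y, p0=p0, maxfev=5000)
        return decay(t, *params)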

  11. An Ultrahigh Frequency Partial Discharge Signal De-Noising Method Based on a Generalized S-Transform and Module Time-Frequency Matrix.

    Science.gov (United States)

    Liu, Yushun; Zhou, Wenjun; Li, Pengfei; Yang, Shuai; Tian, Yan

    2016-01-01

    Due to electromagnetic interference in power substations, the partial discharge (PD) signals detected by ultrahigh frequency (UHF) antenna sensors often contain various background noises, which may hamper high voltage apparatus fault diagnosis and localization. This paper proposes a novel de-noising method based on the generalized S-transform and module time-frequency matrix to suppress noise in UHF PD signals. The sub-matrix maximum module value method is employed to calculate the frequencies and amplitudes of periodic narrowband noise, and suppress noise through the reverse phase cancellation technique. In addition, a singular value decomposition de-noising method is employed to suppress Gaussian white noise in UHF PD signals. Effective singular values are selected by employing the fuzzy c-means clustering method to recover the PD signals. De-noising results of simulated and field detected UHF PD signals prove the feasibility of the proposed method. Compared with four conventional de-noising methods, the results show that the proposed method can suppress background noise in the UHF PD signal effectively, with higher signal-to-noise ratio and less waveform distortion. PMID:27338409

  12. Denoising of T Wave Using Wavelet Transform

    Directory of Open Access Journals (Sweden)

    K. Srinivas

    2014-03-01

    Full Text Available A wavelet-transform-based denoising of the T wave is proposed in this work. The T wave is denoised with the fixed-form threshold, rigorous SURE and heuristic SURE threshold methods, using Daubechies wavelets at different levels. The simulation results are obtained from data collected from a MATLAB-based ECG simulator and from data recorded with an 8-channel physiograph. It is observed that, for hard thresholding, the standard deviation is greatly decreased with scaled noise.

  13. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    Energy Technology Data Exchange (ETDEWEB)

    Sheng, Ke, E-mail: ksheng@mednet.ucla.edu; Qi, Sharon X. [Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States); Gou, Shuiping [Department of Radiation Oncology, University of California, Los Angeles, California 90095 and Xidian University, Xi’An 710071 (China); Wu, Jiaolong [Xidian University, Xi’An 710071 (China)

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies but its usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following imaging denoising, a saliency map was created to further enhance visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion

  14. A New Matlab De-noising Algorithm for Signal Extraction

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fu-ming; WU Song-lin

    2007-01-01

    The goal of a de-noising algorithm is to reconstruct a signal from its noise-corrupted observations. Perfect reconstruction is seldom possible, and performance is measured under a given fidelity criterion. In a recent work, the authors presented a new Matlab algorithm for de-noising. A key step of the algorithm is selecting an optimal basis from a library of wavelet bases for ideal de-noising; the algorithm was implemented using Matlab's Wavelet Toolbox. The experimental results show that the new algorithm is efficient in signal de-noising.

  15. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach in the wavelet domain for image denoising. In recent research, the wavelet transform has been used as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation image to estimate a better denoised image, since a naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favorably with more typical methods of denoising and compression in the wavelet domain. 100 images of the LIVE dataset were tested; comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packet was 1.48% better than POAC.

  16. ECG De-noising Based on Hilbert-Huang Transform%基于Hilbert-Huang变换的ECG消噪

    Institute of Scientific and Technical Information of China (English)

    杨向林; 严洪; 许志; 任兆瑞; 宋晋忠; 姚宇华; 李延军

    2011-01-01

    A method for ECG de-noising based on the Hilbert-Huang Transform is proposed. The ECG is analyzed through the Hilbert spectra of the IMFs produced by empirical mode decomposition, and the three main noise types are removed separately according to their characteristics. Power-line interference and high-frequency noise are mainly mixed into the low-order IMFs of the ECG, while baseline wander is mainly contained in the high-order IMFs. A morphological filtering method based on an adaptive threshold is used on the low-order IMFs for de-noising, while the baseline wander is estimated by a smoothing filter applied to the high-order IMFs. The results of simulation experiments and practical application demonstrate that the proposed method performs significantly better than wavelet de-noising, not only in suppressing the three main noise types effectively, but also in preserving the primary waveform characteristics of the ECG.

  17. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising

    International Nuclear Information System (INIS)

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernable trend in 137Cs and 90Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside the restricted access zones. The conventional non-parametric methods viz., Mann–Kendall and Spearman rho, along with linear regression when applied for detecting the linear trend in the time series data do not yield results conclusive for trend detection with a confidence of 95% for most of the samples. The stationary wavelet based hard thresholding data pruning method with Haar as the analyzing wavelet was applied to remove the noise present in the same data. Results indicate that confidence interval of the established trend has significantly improved after pre-processing to more than 98% compared to the conventional non-parametric methods when applied to direct measurements. -- Highlights: ► Environmental trend analysis with wavelet pre-processing was carried out. ► Removal of local fluctuations to obtain the trend in a time series with various mother wavelets. ► Theoretical validation of the methodology with model outputs. ► Efficient detection of trend for 137Cs, 90Sr in bore-hole water samples improves the associated confidence interval to more than 98%. ► Wavelet based pre-processing reduces the indecisive nature of the detected trend
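
    For reference, the conventional Mann-Kendall test that the wavelet pre-processing is compared against can be written compactly as below; this is the standard normal-approximation form without tie correction, which is a simplifying assumption.

    import numpy as np
    from scipy import stats

    def mann_kendall(x):
        """Mann-Kendall trend test (normal approximation, ties ignored).

        Returns the S statistic, the standardized Z value and the two-sided p-value.
        """
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = 0.0
        for i in range(n - 1):
            s += np.sign(x[i + 1:] - x[i]).sum()   # concordant minus discordant pairs
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / np.sqrt(var_s)
        elif s < 0:
            z = (s + 1) / np.sqrt(var_s)
        else:
            z = 0.0
        p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
        return s, z, p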

  18. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-01-01

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy of single noise is 93.2%. Based on the improved AR(3) model, the denoising method of SHAKF is more effective than traditional methods, and its effect is better than 30%. The random drift error of FOG is reduced effectively, and the precision of the FOG is improved. PMID:27420062
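
    A stripped-down scalar sketch of a Sage-Husa style adaptive Kalman filter is shown below; it uses a simple random-walk state model and adapts only the measurement-noise variance, whereas the paper drives the filter with an improved AR(3) model, so the code illustrates the adaptation idea rather than the paper's filter.

    import numpy as np

    def sage_husa_kf(z, q=1e-6, r0=1e-2, b=0.96):
        """Scalar Kalman filter with a Sage-Husa style recursive estimate of the
        measurement-noise variance R (process noise Q is held fixed here).

        z : 1-D array of noisy measurements (e.g. FOG output).
        b : forgetting factor of the adaptive noise estimator, 0 < b < 1.
        """
        x, p, r = float(z[0]), 1.0, r0
        out = np.empty(len(z))
        for k, zk in enumerate(z):
            x_pred, p_pred = x, p + q                 # prediction (random-walk model)
            e = zk - x_pred                           # innovation
            d = (1.0 - b) / (1.0 - b ** (k + 1))      # fading weight
            r = (1.0 - d) * r + d * max(e * e - p_pred, 1e-12)  # adapt R
            kg = p_pred / (p_pred + r)                # Kalman gain
            x = x_pred + kg * e
            p = (1.0 - kg) * p_pred
            out[k] = x
        return out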

  1. Scheduled denoising autoencoders

    OpenAIRE

    Geras, Krzysztof J.; Sutton, Charles

    2014-01-01

    We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of noise that lower...

  3. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre...... by incorporating a loss term, leading to an iterative algorithm for finding orthonormal components biased by the class labels, and (2) a fixed-point iteration for solving the pre-image problem based on a manifold warped RKHS. We prove viability of the proposed methods on both synthetic data and images from...
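
    The standard, unsupervised form of kernel PCA denoising with pre-image reconstruction (the starting point that the semi-supervised extensions above build on) is available in scikit-learn, as in the sketch below; the digits data, kernel width and component count are arbitrary illustration choices.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.decomposition import KernelPCA

    digits = load_digits().data / 16.0
    rng = np.random.default_rng(0)
    noisy = digits + rng.normal(0.0, 0.3, digits.shape)

    # RBF kernel PCA with pre-image (inverse transform) learning enabled; mapping
    # the projection back to input space acts as the denoiser.
    kpca = KernelPCA(n_components=30, kernel="rbf", gamma=0.02,
                     fit_inverse_transform=True, alpha=1e-3)
    kpca.fit(noisy)
    denoised = kpca.inverse_transform(kpca.transform(noisy))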

  4. Non-local means de-noising approach based on dictionary learning%基于字典学习的非局部均值去噪算法

    Institute of Scientific and Technical Information of China (English)

    崔学英; 张权; 桂志国

    2013-01-01

    For the measurement of similarity in non-local means, a method based on dictionary learning was presented. First, block-matching-based local pixel grouping was used to eliminate the interference from dissimilar image blocks. Then, the corrupted similar blocks were denoised by dictionary learning. As a further development of the classical sparse representation model, the similar patches were coded jointly and an efficient, compact dictionary was learned by principal component analysis, so that the correlation among similar patches could be well preserved. The similarity between pixels was then measured by the Euclidean distance between the denoised image blocks, which better reflects the similarity of the blocks. The experimental results show that the modified algorithm has superior denoising performance to the original one in terms of both Peak Signal-to-Noise Ratio (PSNR) and subjective visual quality. For images with strong structural similarity and rich detail information, structures and details are well preserved, and the robustness of the presented method is also superior to that of the original.

  5. Lidar signal de-noising by singular value decomposition

    Science.gov (United States)

    Wang, Huanxue; Liu, Jianguo; Zhang, Tianshu

    2014-11-01

    Signal de-noising remains an important problem in lidar signal processing. This paper presents a de-noising method based on singular value decomposition. Experimental results on lidar simulated signal and real signal show that the proposed algorithm not only improves the signal-to-noise ratio effectively, but also preserves more detail information.

  6. Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelets with Gaussian Filter

    Directory of Open Access Journals (Sweden)

    Naga Sravanthi Kota, G.Umamaheswara Reddy

    2011-10-01

    Full Text Available Denoising images using the Curvelet transform has been widely applied in many fields for its ability to obtain high-quality images. The Curvelet transform is superior to wavelets in representing image edges, such as the geometric characteristics of curves, and has already obtained good results in image denoising. However, artifacts that appear in the resulting images of the Curvelet approach prevent its application in some fields such as medical imaging. This paper puts forward a fusion-based method using both the Wavelet and Curvelet transforms, because certain regions of the image show ringing and radial stripes after the Curvelet transform. The experimental results indicate that the fusion method has a broad future for eliminating image noise. The results of the algorithm applied to ultrasonic medical images indicate that the algorithm can also be used efficiently in the medical imaging field.

  7. A connection between score matching and denoising autoencoders.

    Science.gov (United States)

    Vincent, Pascal

    2011-07-01

    Denoising autoencoders have been previously shown to be competitive alternatives to restricted Boltzmann machines for unsupervised pretraining of each layer of a deep architecture. We show that a simple denoising autoencoder training criterion is equivalent to matching the score (with respect to the data) of a specific energy-based model to that of a nonparametric Parzen density estimator of the data. This yields several useful insights. It defines a proper probabilistic model for the denoising autoencoder technique, which makes it in principle possible to sample from it or to rank examples by their energy. It suggests a different way to apply score matching that is related to learning to denoise and does not require computing second derivatives. It justifies the use of tied weights between the encoder and decoder and suggests ways to extend the success of denoising autoencoders to a larger family of energy-based models. PMID:21492012
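
    The equivalence can be stated compactly; the following LaTeX sketch is a hedged paraphrase of the Gaussian-corruption form of the result, with notation chosen here for illustration (q_sigma is the corruption kernel, psi the model score and r the autoencoder reconstruction), not a verbatim quote of the paper.

    % Denoising score matching objective under Gaussian corruption
    % q_\sigma(\tilde{x}\mid x) = \mathcal{N}(\tilde{x}; x, \sigma^2 I):
    J_{\mathrm{DSM}}(\theta) =
      \mathbb{E}_{q_\sigma(\tilde{x}, x)}\!\left[
        \tfrac{1}{2}\,\bigl\lVert \psi(\tilde{x};\theta)
          - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}\mid x) \bigr\rVert^2
      \right],
    \qquad
    \nabla_{\tilde{x}} \log q_\sigma(\tilde{x}\mid x) = \frac{x - \tilde{x}}{\sigma^2}.
    % With the score parameterized through a reconstruction function,
    % \psi(\tilde{x};\theta) = \bigl(r(\tilde{x};\theta) - \tilde{x}\bigr)/\sigma^2,
    % this objective coincides (up to constants) with the squared-error
    % denoising autoencoder criterion on (x, \tilde{x}) pairs.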

  8. Total Variation based Multivariate Shearlet Shrinkage for Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Shengqian Wang

    2013-01-01

    Full Text Available Shearlet as a new multidirectional and multiscale transform is optimally efficient in representing images containing edges. In this paper, a total variation based multivariate shearlet adaptive shrinkage is proposed for discontinuity-preserving image denoising. The multivariate adaptive threshold is employed to reduce the noise. Projected total variation diffusion is used to suppress the pseudo-Gibbs and shearlet-like artifacts. Numerical experiments from piecewise-smooth to textured images demonstrate that the proposed method can effectively suppress noise and nonsmooth artifacts caused by the shearlet transform. Furthermore, it outperforms several existing techniques in terms of the structural similarity (SSIM) index, peak signal-to-noise ratio (PSNR) and visual quality.

  9. New denoising method based on dual-tree complex wavelet transform and nonlinear time series%基于双树复小波与非线性时间序列的降噪方法

    Institute of Scientific and Technical Information of China (English)

    胥永刚; 赵国亮; 马朝永; 张建宇

    2015-01-01

    A new denoising method based on the dual-tree complex wavelet transform and nonlinear time series analysis is proposed, considering weaknesses such as the phase distortion of the conventional wavelet soft-threshold denoising method, in which the real and imaginary parts of the coefficients are processed separately. The new method instead processes the magnitude of the complex coefficients, taking into account the fact that the magnitude does not oscillate between positive and negative values, which makes it more suitable for threshold denoising, and the fact that the coefficients of a fault signal are periodic. The nonlinear time series method can be used to strengthen the periodicity of the coefficients caused by the fault signal while suppressing the noise. In the proposed method, the fault signal is decomposed by the dual-tree complex wavelet transform to obtain the coefficients of the different layers, the nonlinear time series method is used to strengthen the periodicity of the coefficients, and soft-threshold denoising is then carried out to remove the DC component. Finally, the fault characteristic signal is obtained by coefficient reconstruction. The simulation and experimental results show the effectiveness of the method, which provides a new and efficient denoising approach.

  10. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    Science.gov (United States)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and the blur function using the Newton optimization method, and then the estimation procedure is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and the blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C; the files which hold the source for these functions are called MEX-files, and the MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. Then these slow sections are translated to C++ and linked to MATLAB. In fact, the high load of information in the images and the data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops; these eight loops use 60% of the total execution time of the entire program, and so the runtime should be

  11. Image Denoising by Second-order Total Generalized Variation Combined with Wavelet Transform Modulus%二阶TGV结合小波变换模的图像去噪算法

    Institute of Scientific and Technical Information of China (English)

    张文静; 吴传生; 刘欣

    2015-01-01

    The application of the Total Variation (TV) model to image denoising is studied. This model has the disadvantage of causing a staircasing phenomenon. To address this defect, a second-order Total Generalized Variation (TGV) regularization term is proposed in place of the TV regularization term in the denoising model. The advantage of wavelet transform modulus maxima in edge detection is also exploited: an edge detection function whose parameter is the wavelet transform modulus is introduced into the new denoising model to guide the diffusion. The proposed model can effectively denoise while protecting the edges and details of the image, and it also alleviates the staircase effect.

  12. An Imbalanced Data Classification Algorithm of De-noising Auto-Encoder Neural Network Based on SMOTE

    OpenAIRE

    Zhang Chenggang; Song Jiazhi; Pei Zhili; Jiang Jingqing

    2016-01-01

    The imbalanced data classification problem has always been one of the hot issues in the field of machine learning. The synthetic minority over-sampling technique (SMOTE) is a classical approach to balancing datasets, but it may give rise to problems such as noise. A Stacked De-noising Auto-Encoder neural network (SDAE) can effectively reduce data redundancy and noise through unsupervised layer-wise greedy learning. Aiming at the shortcomings of the SMOTE algorithm when synthesizing new minority class samples...

  13. License plate character recognition based on stacked denoising autoencoder%基于栈式降噪自编码神经网络的车牌字符识别

    Institute of Scientific and Technical Information of China (English)

    贾文其; 李明; 朱美强; 王军

    2016-01-01

    To address the problem that license plate characters in complex natural scenes are degraded by noise and other factors, a license plate character recognition method based on a stacked denoising autoencoder is proposed. Relevant features are automatically extracted based on the reconstruction principle of the denoising autoencoder, and unsupervised greedy layer-wise pre-training followed by supervised fine-tuning is used to train the deep autoencoder network, so that it is robust to low-quality license plate characters captured in complex environments. Compared with shallow machine learning algorithms, the traditional stacked autoencoder and the convolutional neural network, the stacked denoising autoencoder achieves better character recognition performance. Experimental results on a test set of license plate images collected by electronic police cameras at real crossings verify the effectiveness of the method.
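
    A minimal single-layer denoising autoencoder in PyTorch is sketched below to illustrate the corrupt-then-reconstruct training principle; the input size (20 x 20 character crops flattened to 400), the hidden width and the Gaussian corruption are assumptions for illustration, and the paper stacks several such layers before supervised fine-tuning.

    import torch
    from torch import nn

    class DenoisingAE(nn.Module):
        """One denoising autoencoder layer: encode a corrupted input, decode,
        and learn by reconstructing the clean input."""
        def __init__(self, n_in=400, n_hidden=128):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def train_dae(clean, noise_std=0.2, epochs=20, lr=1e-3):
        """clean: tensor of flattened character images scaled to [0, 1]."""
        model = DenoisingAE(n_in=clean.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            noisy = clean + noise_std * torch.randn_like(clean)  # corrupt the input
            opt.zero_grad()
            loss = loss_fn(model(noisy), clean)                  # reconstruct the clean target
            loss.backward()
            opt.step()
        return model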

  14. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    Science.gov (United States)

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noise, which degrades many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, this noise is signal-dependent and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm shows the following improvements compared with Wirestam's method: (1) we introduce the TI property into the wavelet-domain Wiener-like filtering to suppress artifacts caused by translations of the signal; (2) we integrate a Stein's Unbiased Risk Estimator (SURE) thresholding step with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filtering is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, a diffusion-weighted (DW) phantom and in vivo data. We compare our algorithm with other popular denoising methods. The results demonstrate that our algorithm outperforms the others in terms of both efficiency and robustness. PMID:27238055

  15. Adaptive Total Variation Minimization-Based Image Enhancement from Flash and No-Flash Pairs

    OpenAIRE

    Sang Min Yoon; Yeon Ju Lee; Gang-Joon Yoon; Jungho Yoon

    2014-01-01

    We present a novel approach for enhancing the quality of an image captured from a pair of flash and no-flash images. The main idea for image enhancement is to generate a new image by combining the ambient light of the no-flash image and the details of the flash image. In this approach, we propose a method based on Adaptive Total Variation Minimization (ATVM) so that it has an efficient image denoising effect by preserving strong gradients of the flash image. Some numerical results are present...

  16. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...

  17. Mammograms Enhancement and Denoising Using Generalized Gaussian Mixture Model in Nonsubsampled Contourlet Transform

    Directory of Open Access Journals (Sweden)

    Xinsheng Zhang

    2009-12-01

    Full Text Available In this paper, a novel algorithm for mammographic image enhancement and denoising based on Multiscale Geometric Analysis (MGA) is proposed. Firstly, mammograms are decomposed into different scales and directional subbands using the Nonsubsampled Contourlet Transform (NSCT). After modeling the coefficients of each directional subband using a Generalized Gaussian Mixture Model (GGMM) according to their statistical properties, they are categorized into strong edges, weak edges and noise by a Bayesian classifier. To enhance suspicious lesions and suppress the noise, a nonlinear mapping function is designed to adjust the coefficients adaptively so as to obtain a good enhancement result with significant features. Finally, the resulting mammographic images are obtained by reconstructing from the modified coefficients using the NSCT. Experimental results illustrate that the proposed approach is practicable and robust, and that it outperforms spatial filters and other wavelet-based methods in terms of mass and microcalcification denoising and enhancement.

  18. Tuning of a Wavelet Filter for Miniature Accelerometers Denoising based Joint Symbolic Dynamics (JSD Method

    Directory of Open Access Journals (Sweden)

    Ioana Raluca EDU

    2015-06-01

    Full Text Available The paper presents a wavelet filtering mechanism for noise suppression in acceleration sensors, with direct application to strap-down inertial navigation systems. The presented procedure is related to the current trend in the inertial navigation field of using miniaturized inertial measurement units that include MEMS or NEMS sensors. Besides the already used wavelet filtering methods based on different thresholding mechanisms, the work proposed here refers to the use of an alternative tuning mechanism for the wavelet filters, based on the Joint Symbolic Dynamics (JSD) method. The main idea of the proposed method is to process and analyze the signals received from the sensors in the inertial measurement unit of the navigator by using the wavelet transform until optimal levels of decomposition are established and the useful signals are obtained.

  19. SPEECH ENHANCEMENT BASED ON SELF ADAPTIVE LAGRANGE MULTIPLIER WITH WEIGHTED PERCEPTUAL WIENER DE-NOISING TECHNIQUE

    OpenAIRE

    S. Arjuna Rao*, K. Murali Krishna

    2016-01-01

    Most voice-based communication systems face problems such as lack of perceptual clarity, musical or residual noise, speech distortion and noise distortion. The main objective of speech enhancement is to improve speech quality and intelligibility. Using a Wiener filter with a Lagrange multiplier makes a tradeoff between speech distortion and residual noise when the value of the Lagrange multiplier is greater than or equal to zero, otherwise it causes speech distortion and residu...

  20. Lidar signal de-noising based on wavelet trimmed thresholding technique

    Institute of Scientific and Technical Information of China (English)

    Haitao Fang(方海涛); Deshuang Huang(黄德双)

    2004-01-01

    Lidar is an efficient tool for remote monitoring, but its effective range is often limited by the signal-to-noise ratio (SNR). From power spectral estimation, we find that digital filters are not suitable for processing lidar signals buried in noise. In this paper, we present a new method of lidar signal acquisition based on the wavelet trimmed thresholding technique to increase the effective range of lidar measurements. The performance of our method is investigated by detecting real signals in noise. The experimental results show that our approach is superior to traditional methods such as the Butterworth filter.
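    A hedged sketch of 1-D wavelet threshold denoising of a lidar-like return using PyWavelets; since the exact "trimmed" thresholding rule is not given here, standard soft thresholding with the universal threshold is used as a stand-in, and the wavelet, level and toy signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise_1d(signal, wavelet="sym8", level=5):
    """Soft-threshold wavelet denoising of a noisy 1-D return."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example with a synthetic decaying lidar-like return buried in noise.
t = np.linspace(0, 1, 2048)
clean = np.exp(-4 * t) * (1 + 0.3 * np.sin(40 * t))
noisy = clean + 0.1 * np.random.default_rng(0).normal(size=t.size)
denoised = wavelet_denoise_1d(noisy)
```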

  1. Focal artifact removal from ongoing EEG--a hybrid approach based on spatially-constrained ICA and wavelet de-noising.

    Science.gov (United States)

    Akhtar, Muhammad Tahir; James, Christopher J

    2009-01-01

    Detecting artifacts produced in electroencephalographic (EEG) data by muscle activity, eye blinks and electrical noise, etc., is an important problem in EEG signal processing research. These artifacts must be corrected before further analysis because they render subsequent analysis very error-prone. One solution is to reject the data segment if an artifact is present during the observation interval; however, the rejected data segment could contain important information masked by the artifact. It has already been demonstrated that independent component analysis (ICA) can be an effective and applicable method for EEG de-noising. The goal of this paper is to propose a framework, based on ICA and wavelet denoising (WD), to improve the pre-processing of EEG signals. In particular we employ the concept of spatially-constrained ICA (SCICA) to extract artifact-only independent components (ICs) from the given EEG data, use WD to remove any brain activity from the extracted artifacts, and finally project back the artifacts to be subtracted from the EEG signals to get clean EEG data. The main advantage of the proposed approach is faster computation, as not all ICs are identified in the usual manner due to the square mixing assumption. Simulation results demonstrate the effectiveness of the proposed approach in removing focal artifacts that can be well separated by SCICA.
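    The SCICA-plus-WD pipeline above can be sketched generically with scikit-learn's FastICA in place of spatially-constrained ICA; which components are artifacts is assumed to be known in advance, and the wavelet step below is only a simplified reading of the paper's WD stage.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wavelet_isolate_artifact(ic, wavelet="db4", level=5):
    """Soft-threshold an artifact IC: large coefficients (the artifact itself) are kept,
    low-amplitude residual brain activity is suppressed -- a simplified WD step."""
    coeffs = pywt.wavedec(ic, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(ic)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ic)]

def clean_eeg(eeg, artifact_idx, n_components=None):
    """eeg: (n_samples, n_channels). Returns EEG with the artifact ICs subtracted."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)                 # (n_samples, n_components)
    artifacts = np.zeros_like(sources)
    for k in artifact_idx:                           # assumed pre-identified artifact ICs
        artifacts[:, k] = wavelet_isolate_artifact(sources[:, k])
    # Project the artifact-only sources back to channel space and subtract.
    artifact_eeg = artifacts @ ica.mixing_.T
    return eeg - artifact_eeg
```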

  2. Denoising and Back Ground Clutter of Video Sequence using Adaptive Gaussian Mixture Model Based Segmentation for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Shanmugapriya. K

    2014-01-01

    Full Text Available The human action recognition system first gathers images by simply querying the name of the action on a web image search engine such as Google or Yahoo. Based on the assumption that the set of retrieved images contains relevant images of the queried action, a dataset of action images is constructed in an incremental manner. This yields a large image set, which includes images of actions taken from multiple viewpoints in a range of environments, performed by people who have varying body proportions and different clothing. The images mostly present the “key poses”, since these images try to convey the action with a single pose. To support this, existing systems first use an incremental image retrieval procedure to collect and clean up the training set needed to build the human pose classifiers. Several challenges come at the expense of this broad and representative data. First, the retrieved images are very noisy, since the Web is very diverse. Second, detecting and estimating the pose of humans in still images is more difficult than in videos, partly due to background clutter and the lack of a foreground mask. In videos, foreground segmentation can exploit motion cues to great benefit; in still images, the only cue at hand is appearance information, so the model must address the various challenges associated with different forms of appearance. Therefore, for robust separation, this work proposes a segmentation algorithm based on Gaussian Mixture Models that is adaptive to illumination, shadow and white balance. The algorithm processes video with or without noise and sets up adaptive background models based on the scene characteristics; it is an effective technique for background modeling that classifies the pixels of a video frame as either background or foreground based on a probability distribution.
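    One practical way to realize an adaptive Gaussian-mixture background model of the kind described above is OpenCV's stock MOG2 background subtractor; this is a stand-in for the authors' algorithm, and the file name and parameter values are illustrative assumptions.

```python
import cv2

# Per-pixel Gaussian mixture background model with shadow handling (MOG2).
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                detectShadows=True)

cap = cv2.VideoCapture("action_clip.mp4")            # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.GaussianBlur(frame, (5, 5), 0)        # light denoising before modelling
    mask = subtractor.apply(frame)                    # 255 = foreground, 127 = shadow
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]     # drop shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    # 'foreground' now holds only the moving person, for later pose analysis.
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
cap.release()
```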

  3. Real-time Virtual Environment Signal Extraction and Denoising Using Programmable Graphics Hardware

    Institute of Scientific and Technical Information of China (English)

    Yang Su; Zhi-Jie Xu; Xiang-Qian Jiang

    2009-01-01

    The sense of being within a three-dimensional (3D) space and interacting with virtual 3D objects in a computer-generated virtual environment (VE) often requires essential image, vision and sensor signal processing techniques such as differentiating and denoising. This paper describes novel implementations of the Gaussian filtering for characteristic signal extraction and wavelet-based image denoising algorithms that run on the graphics processing unit (GPU). While significant acceleration over standard CPU implementations is obtained through exploiting data parallelism provided by the modern programmable graphics hardware, the CPU can be freed up to run other computations more efficiently such as artificial intelligence (AI) and physics. The proposed GPU-based Gaussian filtering can extract surface information from a real object and provide its material features for rendering and illumination. The wavelet-based signal denoising for large size digital images realized in this project provided better realism for VE visualization without sacrificing real-time and interactive performances of an application.

  4. Using Normal Inverse Gaussian Model for Image Denoising in NSCT Domain%基于正态逆高斯模型的非下采样Contourlet变换图像去噪

    Institute of Scientific and Technical Information of China (English)

    贾建; 陈莉

    2011-01-01

    A novel non-subsampled Contourlet transform (NSCT) denoising scheme based on the normal inverse Gaussian (NIG) prior and Bayesian estimation is proposed. The normal inverse Gaussian model is used to describe the distribution of the image coefficients in each subband of the non-subsampled Contourlet transform domain, and the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. The scheme achieves enhanced estimation results for images corrupted by additive Gaussian noise over a wide range of noise variances. Simulation results indicate that the proposed method can remove Gaussian white noise effectively, improve the peak signal-to-noise ratio of the image, and preserve edge information with good visual quality.

  5. Nonlocal two dimensional denoising of frequency specific chirp evoked ABR single trials.

    Science.gov (United States)

    Schubert, J Kristof; Teuber, Tanja; Steidl, Gabriele; Strauss, Daniel J; Corona-Strauss, Farah I

    2012-01-01

    Recently, we have shown that denoising evoked potential (EP) images is possible using two dimensional diffusion filtering methods. This restoration allows for an integration of regularities over multiple stimulations into the denoising process. In the present work we propose the nonlocal means (NLM) method for EP image denoising. The EP images were constructed using auditory brainstem responses (ABR) collected in young healthy subjects using frequency specific and broadband chirp stimulations. It is concluded that the NLM method is more efficient than conventional approaches in EP image denoising, especially in the case of ABRs, where the relevant information can be easily masked by the ongoing EEG activity, i.e., the signals suffer from a rather low signal-to-noise ratio (SNR). The proposed approach is for the a posteriori denoising of single trials after the experiment and not for real time applications. PMID:23366439
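    A brief sketch of nonlocal means applied to an EP-style image (one single trial per row), using scikit-image; the patch sizes and smoothing strength are assumed values, and the toy array stands in for real ABR single-trial data.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# ep_image: 2-D array with one ABR single trial per row (trials x time samples),
# so regularities across repeated stimulations can be exploited by the 2-D filter.
rng = np.random.default_rng(0)
ep_image = rng.normal(size=(200, 512))          # stand-in for real single-trial data

sigma = float(np.mean(estimate_sigma(ep_image)))
denoised = denoise_nl_means(ep_image,
                            h=1.15 * sigma,      # filtering strength (assumed)
                            sigma=sigma,
                            patch_size=5,
                            patch_distance=6,
                            fast_mode=True)
```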

  6. Denoising time-domain induced polarisation data using wavelet techniques

    Science.gov (United States)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  7. A method for improving wavelet threshold denoising in laser-induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Bo, E-mail: zhangbo@sia.cn [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); University of the Chinese Academy of Sciences, Beijing 100039 (China); Sun, Lanxiang, E-mail: sunlanxiang@sia.cn [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China); Yu, Haibin; Xin, Yong; Cong, Zhibo [Lab. of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016 (China)

    2015-05-01

    The wavelet threshold denoising method is an effective noise suppression approach for noisy laser-induced breakdown spectroscopy signal. In this paper, firstly, the noise sources of LIBS system are summarized. Secondly, wavelet multi-resolution analysis and wavelet threshold denoising method are introduced briefly. As one of the major factors influencing the denoising results in the process of wavelet threshold denoising, the optimal decomposition level selection is studied. Based on the entropy analysis of noisy LIBS signal and noise, a method of choosing optimal decomposition level is presented. Thirdly, the performance of the proposed method is verified by analyzing some synthetic signals. Not only the denoising results of the synthetic signals are analyzed, but also the ultimate denoising capacity of the wavelet threshold denoising method with the optimal decomposition level is explored. Finally, the experimental data analysis implies that the fluctuation of the noisy LIBS signals can be decreased and the weak LIBS signals can be recovered. The optimal decomposition level is able to improve the performance of the denoising results obtained by wavelet threshold denoising with non-optimal wavelet functions. The signal to noise ratios of the elements are improved and the limit of detection values are reduced by more than 50% by using the proposed method. - Highlights: • The noise sources of LIBS system are summarized. • The optimal decomposition level selection method in wavelet threshold denoising is obtained by entropy analysis.
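    The exact entropy criterion used by the authors is not reproduced here; as one plausible reading of the idea, the sketch below computes the Shannon entropy of the normalized wavelet band energies at each candidate level and stops increasing the level once the entropy gain levels off. The wavelet, tolerance and maximum level are assumptions.

```python
import numpy as np
import pywt

def band_entropy(coeffs):
    """Shannon entropy of the normalized energy distribution over wavelet bands."""
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def choose_level(signal, wavelet="db6", max_level=8, tol=0.05):
    """Increase the decomposition level until the band-entropy gain falls below tol."""
    prev = None
    for level in range(1, max_level + 1):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        h = band_entropy(coeffs)
        if prev is not None and (h - prev) < tol:
            return level - 1
        prev = h
    return max_level

# Once the level is chosen, standard wavelet threshold denoising is applied at that level.
```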

  8. A Wavelet-Based Approach for Ultrasound Image Restoration

    Directory of Open Access Journals (Sweden)

    Mohammed Tarek GadAllah

    2014-08-01

    Full Text Available Ultrasound images are generally affected by speckle noise, which is mainly due to the coherent nature of the scattering phenomenon. Speckle filtering is usually accompanied by a loss of diagnostic features. In this paper a new approach is introduced to remove speckle while keeping the fine features of the tissue under diagnosis by enhancing the image's edges, via Curvelet denoising and wavelet-based image fusion. Performance evaluation is done with four quantitative measures: the peak signal to noise ratio (PSNR), the root mean square error (RMSE), a universal image quality index (Q), and Pratt's figure of merit (FOM) as a quantitative measure of edge preservation, plus a Canny edge map extracted as a qualitative measure of edge preservation. The measurements confirm the qualitative and quantitative success of the proposed approach in denoising images while preserving edges as far as possible. A gray-scale phantom is designed to test the proposed enhancement method. The phantom results confirm the applicability of the approach not only to this research work but also to gray-scale diagnostic scan images, including ultrasound B-scans.

  9. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Full Text Available Due to simple calculation and good denoising effect, wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve the denoising effect of the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method can achieve a good denoising effect under various signal types, noise intensities, and thresholding functions.

  10. D3PO - Denoising, Deconvolving, and Decomposing Photon Observations

    CERN Document Server

    Selig, Marco

    2013-01-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. The primary goal is the simultaneous reconstruction of the diffuse and point-like photon flux from a given photon count image. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution does not depend on the underlying position space, the implementation of the D3PO algorithm uses the NIFTY package to ens...

  11. Method of Infrared Image Enhancement Based on Stationary Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    QI Fei; LI Yan-jun; ZHANG Ke

    2008-01-01

    Infrared images suffer from poor contrast and fuzzy edges. To address this problem, a contrast enhancement method for infrared images based on the stationary wavelet transform is given. After applying the stationary wavelet transform to an infrared image, denoising is performed with the proposed double-threshold shrinkage method in the detail coefficient matrices, which have high noise intensity. For the approximation coefficient matrix, which has low noise intensity, enhancement is performed with the proposed histogram-based method. The enhanced image is obtained by wavelet coefficient reconstruction. Furthermore, an evaluation criterion of enhancement performance is introduced. The results show that this algorithm enhances the target and restrains additive Gaussian white noise effectively; at the same time, its computational cost is small and its operation speed is fast.

  12. Fast DOA estimation using wavelet denoising on MIMO fading channel

    CERN Document Server

    Meenakshi, A V; Kayalvizhi, R; Asha, S

    2011-01-01

    This paper presents a tool for the analysis and simulation of direction-of-arrival (DOA) estimation in wireless mobile communication systems over the fading channel. It reviews two DOA estimation algorithms. The standard Multiple Signal Classification (MUSIC) can be obtained from the subspace-based methods. An improved MUSIC procedure, called Cyclic MUSIC, can automatically classify the signals as desired or undesired based on the known spectral correlation property and estimate only the desired signal's DOA. In this paper, a DOA estimation algorithm using de-noising pre-processing based on time-frequency analysis is proposed, and its performance is analyzed. The focus is on improving DOA estimation at lower SNR and in interference environments. This paper provides a fairly complete picture of the performance and statistical efficiency of each of the above two methods with QPSK signals.

  13. Fast DOA estimation using wavelet denoising on MIMO fading channel

    Directory of Open Access Journals (Sweden)

    A.V. Meenakshi

    2011-12-01

    Full Text Available This paper presents a tool for the analysis, and simulation of direction-of-arrival (DOA) estimation in wireless mobile communication systems over the fading channel. It reviews two methods of Direction of arrival (DOA) estimation algorithm. The standard Multiple Signal Classification (MUSIC) can be obtained from the subspace based methods. In improved MUSIC procedure called Cyclic MUSIC, it can automatically classify the signals as desired and undesired based on the known spectral correlation property and estimate only the desired signal’s DOA. In this paper, the DOA estimation algorithm using the de-noising pre-processing based on time-frequency conversion analysis was proposed, and the performances were analyzed. This is focused on the improvement of DOA estimation at a lower SNR and interference environment. This paper provides a fairly complete image of the performance and statistical efficiency of each of above two methods with QPSK signal.
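    A self-contained sketch of the standard (non-cyclic) MUSIC spectrum for a uniform linear array, with QPSK-like sources; the array geometry, SNR and peak-picking are illustrative assumptions, and the wavelet de-noising pre-processing discussed above is omitted.

```python
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Standard MUSIC pseudo-spectrum for a uniform linear array.
    X: (n_antennas, n_snapshots) complex baseband snapshots."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                    # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]                    # noise subspace
    k = np.arange(m)
    p = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * k * np.sin(theta))    # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Two QPSK sources at -20 and 30 degrees, 8-element array, 200 snapshots.
rng = np.random.default_rng(0)
m, n = 8, 200
doas = np.deg2rad([-20, 30])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
phases = rng.integers(0, 4, size=(2, n)) * np.pi / 2 + np.pi / 4
S = np.exp(1j * phases)
X = A @ S + 0.3 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n)))

angles, p = music_spectrum(X, n_sources=2)
peaks, _ = find_peaks(p)
top = peaks[np.argsort(p[peaks])[-2:]]
print(sorted(angles[top]))                             # approximately [-20, 30]
```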

  14. Multi-focus image fusion algorithm based on shearlets

    Institute of Scientific and Technical Information of China (English)

    Qiguang Miao; Cheng Shi; Pengfei Xu; Mei Yang; Yaobo Shi

    2011-01-01

    Shearlets not only possess all the properties that other transforms have, but also are equipped with a rich mathematical structure similar to wavelets, which are associated with a multi-resolution analysis. Recently, shearlets have been used in image denoising, sparse image representation, and edge detection. However, their application in image fusion is still under study. In this letter, we study the feasibility of image fusion using shearlets. Fusion rules that select the larger high-frequency coefficients based on regional energy, regional variance, and absolute value are proposed, because the shearlet transform can capture detailed information at any scale and in any direction. The fusion accuracy is further improved by a region consistency check. Several experiments show that fusion results based on the shearlet transform achieve better fusion quality than the other methods considered.

  15. Signal Denoising Methods Based on Wavelet Analysis%基于小波分析对信号噪声的处理及应用

    Institute of Scientific and Technical Information of China (English)

    刘时华; 张亚

    2015-01-01

    The main functions, characteristics and current development of wavelet denoising are briefly introduced, and the denoising effect of wavelets on ordinary signals is analyzed. Combined with a concrete application of wavelet denoising, the denoising effects of two different wavelet denoising methods on the acceleration signal of a penetrating projectile body are compared, the characteristics of the different wavelet denoising methods are discussed, and the choice of wavelet denoising method for different settings is analyzed.

  16. Airborne Gravity Data Denoising Based on Empirical Mode Decomposition: A Case Study for SGA-WZ Greenland Test Data

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2015-10-01

    Full Text Available Surveying the Earth's gravity field is an important domain of geodesy, with deep connections to the Earth sciences and geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle of airborne gravimetry is extracting the gravity disturbance from measurements with an extremely low signal-to-noise ratio. In general, the power of the noise concentrates in the higher frequencies of the measured data, and a low-pass filter can be used to eliminate it. However, the noise can be distributed over a broad frequency range, and a low-pass filter cannot remove the part that lies within its passband. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measured data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland. Compared with solutions using a finite impulse response (FIR) filter, the new results improve the root mean square (RMS) of internal consistency and external accuracy by 40% and 10%, respectively.
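    A hedged sketch of EMD-based denoising assuming the third-party PyEMD package (distributed as "EMD-signal"); discarding only the finest IMF(s) is one simple reading of the idea and not the exact processing applied to the SGA-WZ data, and the toy signal is illustrative.

```python
import numpy as np
from PyEMD import EMD   # third-party package "EMD-signal" (assumed available)

def emd_denoise(signal, n_noise_imfs=1):
    """Decompose into IMFs and rebuild the signal without the finest-scale IMFs,
    which concentrate most of the high-frequency measurement noise."""
    imfs = EMD()(signal)                      # rows: IMFs from fine to coarse scales
    return imfs[n_noise_imfs:].sum(axis=0)

# Synthetic example: a slow gravity-like trend plus broadband noise.
t = np.linspace(0, 1, 4000)
clean = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=t.size)
denoised = emd_denoise(noisy, n_noise_imfs=2)
```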

  17. Denoising by Higher Order Statistics

    OpenAIRE

    Teuber, Tanja; Remmele, Steffen; Hesser, Jürgen; Steidl, Gabriele

    2011-01-01

    A standard approach for deducing a variational denoising method is the maximum a posteriori strategy. Here, the denoising result is chosen in such a way that it maximizes the conditional density function of the reconstruction given its observed noisy version. Unfortunately, this approach does not imply that the empirical distribution of the reconstructed noise components follows the statistics of the assumed noise model. In this paper, we propose to overcome this drawback by applying an addit...

  18. Use of Empirical Mode Decomposition based Denoised NDVI in Extended Three-Temperature Model to estimate Evapotranspiration in Northeast Indian Ecosystems

    Science.gov (United States)

    Padhee, S. K.

    2015-12-01

    Evapotranspiration (ET) is an essential component of energy balance and water budgeting methods, and its precise assessment is crucial for the estimation of various hydrological parameters. Traditional point estimation methods for ET offer quantitative analysis but lack spatial distribution. The use of Remote Sensing (RS) data with good spatial, spectral and temporal resolution and broad spatial coverage can give the estimates some advantages, but approaches that require a data-rich environment demand time and resources. Estimating spatially distributed soil evaporation (Es) and transpiration from canopy (Ec) from RS data, and then combining them to provide the total ET, could be a simpler approach for accurate estimates of the ET flux at the macro scale. The 'Extended Three Temperature Model' (Extended 3T Model) is an established model based on this approach and is capable of computing ET and partitioning it into Es and Ec within the same algorithm. A case study was conducted using the Extended 3T Model and MODIS products for the Brahmaputra river basin within Northeast India for the years 2000-2010. The extended 3T model requires the land surface temperature (Ts), which was separated into the surface temperature of dry soil (Tsm) and the surface temperature of vegetation (Tcm) as decided by a derivative of the vegetation index (NDVI) called fractional vegetation cover (f). However, the NDVI time series, which is nonlinear and nonstationary, can be decomposed by Empirical Mode Decomposition (EMD) into components called intrinsic mode functions (IMFs) based on inherent temporal scales. The highest-frequency component, which was found to represent noise, was subtracted from the original NDVI series to obtain the denoised product from which f was derived. The separated land surface temperatures (Tsm and Tcm) were used to calculate Es and Ec, followed by estimation of the total ET. The spatiotemporal

  19. Variational denoising method for electronic speckle pattern interferometry

    Institute of Scientific and Technical Information of China (English)

    Fang Zhang; Wenyao Liu; Chen Tang; Jinjiang Wang; Li Ren

    2008-01-01

    Traditional speckle fringe patterns obtained by electronic speckle pattern interferometry (ESPI) are inherently noisy and of limited visibility, so denoising is the key problem in ESPI. We present a variational denoising method for ESPI. This method transforms image denoising into minimizing an appropriate penalized energy function and solving a partial differential equation. We test the proposed method on computer-simulated and experimental speckle correlation fringes, respectively. The results show that this technique is capable of significantly improving the quality of fringe patterns. It works well as a pre-processing step for ESPI fringe patterns.
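    The paper's specific penalized energy is not given here; as a stand-in from the same variational family, the sketch below applies total-variation (Chambolle) denoising from scikit-image to a synthetic fringe-like pattern, with an assumed regularization weight.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Synthetic noisy correlation-fringe-like pattern.
y, x = np.mgrid[0:256, 0:256]
fringes = 0.5 + 0.5 * np.cos(0.15 * x + 0.002 * (y - 128) ** 2)
noisy = fringes + 0.25 * np.random.default_rng(0).normal(size=fringes.shape)

# Larger weight -> stronger smoothing (more total-variation regularization).
denoised = denoise_tv_chambolle(noisy, weight=0.15)
```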

  20. Algorithm of Spatio-temporal Combination Video Real-time Denoising Based on Non_local Means%基于Non_local means的时空联合视频降噪算法

    Institute of Scientific and Technical Information of China (English)

    叶庆伟; 谢永昌; 狄红卫

    2012-01-01

    An efficient real-time video denoising algorithm is proposed for video surveillance systems. By applying motion detection to multi-frame images based on non-local means, the algorithm adaptively distinguishes the still regions and motion regions of the video image. A temporal weighted-average filter is applied to the still regions and a spatial ANL filter to the motion regions. Experimental results show that, by fully utilizing the spatio-temporal information of the video sequence, the proposed algorithm can significantly improve the signal-to-noise ratio and the subjective image quality without introducing motion ghosting.
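    A simplified sketch of the still/motion split described above: a plain frame-difference mask (not the paper's non-local-means motion detector) selects where temporal averaging is safe, and scikit-image's nonlocal means stands in for the spatial ANL filter; the thresholds and window lengths are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def denoise_frame(frames, t, motion_thresh=0.05, n_avg=4, h=0.08):
    """frames: sequence of grayscale frames in [0, 1]; denoise frames[t]."""
    cur = frames[t]
    prev = frames[t - 1] if t > 0 else cur
    still = np.abs(cur - prev) < motion_thresh             # crude motion mask

    # Temporal weighted average over the last n_avg frames for still pixels.
    lo = max(0, t - n_avg + 1)
    stack = np.stack(frames[lo : t + 1], axis=0)
    weights = np.linspace(0.5, 1.0, stack.shape[0])[:, None, None]
    temporal = (weights * stack).sum(axis=0) / weights.sum()

    # Spatial nonlocal-means filtering for moving pixels.
    spatial = denoise_nl_means(cur, h=h, patch_size=5, patch_distance=6,
                               fast_mode=True)
    return np.where(still, temporal, spatial)
```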

  1. Wavelet decomposition and reconstruction denoising based on the mallat algorithm%基于Mallat算法的小波分解重构的心电信号处理

    Institute of Scientific and Technical Information of China (English)

    钟丽辉; 魏贯军

    2012-01-01

    In order to extract weak, low-SNR ECG signals, wavelet decomposition and reconstruction denoising based on the Mallat algorithm is applied to ECG denoising. Firstly, the wavelet basis function is determined; secondly, the number of decomposition levels is determined; then the signal is reconstructed from the frequency bands in which the useful signal dominates. Finally, Matlab simulations on ECG records from the MIT-BIH standard database show that wavelet decomposition and reconstruction can effectively filter out baseline drift, high-frequency EMG interference and other disturbances, and is more convenient to apply than traditional filters.

  2. Wavelet-domain de-noising of optical coherent tomography data for biomedical applications

    International Nuclear Information System (INIS)

    Optical coherent tomography (OCT) is a rapidly developing method of fundamental and applied research. Detection and processing of OCT images is a very important problem of applied physics and optical signal processing. In the present paper we demonstrate the ability of effective wavelet-domain de-noising of OCT images. We realize an algorithm for wavelet-domain de-noising of OCT data and apply it to the study of test samples and to in vivo nail tomography. High de-noising efficiency with no significant loss of information about the internal sample structure is observed

  3. Multi-Scale Patch-Based Image Restoration.

    Science.gov (United States)

    Papyan, Vardan; Elad, Michael

    2016-01-01

    Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by a plain averaging. This concept has been demonstrated to be highly effective, leading often times to the state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results, rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived for addressing this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on different scale patches extracted from the target image. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme comes to alleviate another shortcoming existing in patch-based restoration algorithms--the fact that a local (patch-based) prior is serving as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage to the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively. PMID:26571527

  4. A unified variational approach to denoising and bias correction in MR.

    Science.gov (United States)

    Fan, Ayres; Wells, William M; Fisher, John W; Cetin, Müjdat; Haker, Steven; Mulkern, Robert; Tempany, Clare; Willsky, Alan S

    2003-07-01

    We propose a novel bias correction method for magnetic resonance (MR) imaging that uses complementary body coil and surface coil images. The former are spatially homogeneous but have low signal intensity; the latter provide excellent signal response but have large bias fields. We present a variational framework where we optimize an energy functional to estimate the bias field and the underlying image using both observed images. The energy functional contains smoothness-enforcing regularization for both the image and the bias field. We present extensions of our basic framework to a variety of imaging protocols. We solve the optimization problem using a computationally efficient numerical algorithm based on coordinate descent, preconditioned conjugate gradient, half-quadratic regularization, and multigrid techniques. We show qualitative and quantitative results demonstrating the effectiveness of the proposed method in producing debiased and denoised MR images. PMID:15344454

  5. 含噪信号的特征基表示与信号去噪重构%Characteristic Base Representation of Noisy Signal and Reconstruction of Denoised Signal

    Institute of Scientific and Technical Information of China (English)

    孙亮; 陈梅莲

    2000-01-01

    A denoising reconstruction algorithm using the signal magnitude spectrum is proposed, and simulation studies have been performed. The algorithm exploits the magnitude-frequency characteristics of the noisy signal and realizes denoising through characteristic bases. The simulation results show that reconstructing the noisy signal with the proposed algorithm can largely cancel wide-band noise. The algorithm can be used for off-line denoising of signals heavily corrupted by noise.

  6. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    Science.gov (United States)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid and non-destructive. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signal is ultrasonic rather than optical. However, during acquisition the glucose photoacoustic signals are inevitably polluted by factors such as the pulsed laser, electronic noise and environmental noise. These disturbances degrade the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved, and a novel wavelet threshold function is proposed. The novel threshold function uses two threshold values and two different factors, is continuous and has high-order derivatives, and can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising schemes, the improved wavelet shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root mean square error (RMSE), and it also has a better overall denoising effect. Therefore, the improved method has potential value for denoising glucose photoacoustic signals.

  7. The Application of Compressive Sensing on Spectra De-noising

    Directory of Open Access Journals (Sweden)

    Mingxia Xiao

    2013-10-01

    Full Text Available By analyzing the limitations of wavelet threshold de-noising, this paper applies a wavelet filter based on compressed sensing to reduce the noise of spectral signals, and compares the two methods experimentally. The experimental results show that the wavelet filter based on compressed sensing can effectively reduce the noise of spectral signals, and its de-noising effect is better than that of the ordinary wavelet filter. The method provides a new approach for reducing the noise of spectral signals.

  8. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS. This idea is based on the fact that the texture images carry emotion-related information. The feature extraction is derived from time-frequency representation of spectrogram images. First, we transform the spectrogram as a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII derived from the spectrogram image can be extracted by using Laws’ masks to characterize emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases including the Berlin Emotional Speech Database (EMO-DB and eNTERFACE corpus and one self-recorded database (KHUSC-EmoDB, to evaluate the performance cross-corpora. The results of the proposed ESS system are presented using support vector machine (SVM as a classifier. Experimental results show that the proposed TII-based feature extraction inspired by visual perception can provide significant classification for ESS systems. The two-dimensional (2-D TII feature can provide the discrimination between different emotions in visual expressions except for the conveyance pitch and formant tracks. In addition, the de-noising in 2-D images can be more easily completed than de-noising in 1-D speech.
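    A short sketch of texture-energy extraction with the classical 5-tap Laws vectors, applied to a spectrogram rendered as a grayscale image; the exact mask set, averaging window and the cubic contrast curve used by the authors are not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

# Classical 1-D Laws vectors: Level, Edge, Spot, Ripple.
L5 = np.array([1, 4, 6, 4, 1], float)
E5 = np.array([-1, -2, 0, 2, 1], float)
S5 = np.array([-1, 0, 2, 0, -1], float)
R5 = np.array([1, -4, 6, -4, 1], float)
vectors = {"L5": L5, "E5": E5, "S5": S5, "R5": R5}

def laws_energy(image, win=15):
    """Return a dict of texture-energy maps from 2-D Laws masks (outer products)."""
    image = image - image.mean()                       # remove illumination bias
    box = np.ones((win, win)) / (win * win)
    features = {}
    for na, va in vectors.items():
        for nb, vb in vectors.items():
            mask = np.outer(va, vb)                    # 5x5 Laws mask
            resp = convolve2d(image, mask, mode="same", boundary="symm")
            features[na + nb] = convolve2d(np.abs(resp), box,
                                           mode="same", boundary="symm")
    return features

# 'image' would be the (contrast-enhanced) spectrogram rendered as a grayscale image.
```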

  9. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    OpenAIRE

    Chen Xing; Li Ma; Xiaoquan Yang

    2016-01-01

    Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and super...

  10. About Classification Methods Based on Tensor Modelling for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Salah Bourennane

    2010-03-01

    Full Text Available Denoising and Dimensionality Reduction (DR) are key issues in improving classifier efficiency for hyperspectral images (HSI). The recently developed multi-way Wiener filtering is used, and Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Projection Pursuit (PP) approaches to DR have been investigated. These matrix algebra methods are applied to vectorized images, so the spatial arrangement is lost. To jointly take advantage of the spatial and spectral information, HSI has recently been represented as a tensor. Offering multiple ways to decompose data orthogonally, we introduce filtering and DR methods based on multilinear algebra tools. The DR is performed along the spectral way using PCA or PP, jointly with an orthogonal projection onto a lower-dimensional subspace of the spatial ways. We show the classification improvement of the introduced methods relative to existing methods. The experiments are exemplified using real-world HYDICE data. Keywords: multi-way filtering, dimensionality reduction, matrix and multilinear algebra tools, tensor processing.

  11. Performance Evaluation of Image Fusion for Impulse Noise Reduction in Digital Images Using an Image Quality Assessment

    OpenAIRE

    Premkumar, M.; J Harikiran; B SaiChandana; P RajeshKumar

    2011-01-01

    Image fusion is the process of combining two or more images into a single image while retaining the important features of each image. Multiple image fusion is an important technique used in military, remote sensing and medical applications. In this paper, Image Fusion based on local area variance is used to combine the de-noised images from two different filtering algorithms, Vector Median Filter (VMF) and Spatial Median Filter (SMF). The performance of the Image Fusion is evaluated by using ...

  12. FMI SIGNAL DE-NOISING BASED ON WAVELET TRANSFORM AND MEDIAN FILTERING%基于小波变换和中值滤波的FMI信号去噪方法

    Institute of Scientific and Technical Information of China (English)

    张小涛; 张烈辉; 冯国庆; 魏伟

    2009-01-01

    The algorithm of two-dimensional discrete wavelet decomposition and reconstruction is introduced. Based on image decomposition and reconstruction, the application of the traditional median filtering method combined with the two-dimensional discrete wavelet transform to noise elimination is analyzed, and a corresponding application example is given. In the example, the new method is applied to an FMI image from a cavernous reservoir well, and the resulting image is compared with the results of using either de-noising method alone. The results show that preprocessing FMI images with the two-dimensional discrete wavelet transform combined with the traditional median filter effectively eliminates random noise and enhances the automatic identification of fractures and caverns, and that the combination performs much better than the single use of either of these two methods.

  13. 散乱点云噪声分析与降噪方法研究%Denoising Methods Research on Scattered Point Cloud Based on Noise Analysis

    Institute of Scientific and Technical Information of China (English)

    王振; 孙志刚

    2015-01-01

    As the main part of scattered point cloud preprocessing, outlier identification and surface smoothing are important prerequisites for 3D modeling and visualization. To solve this problem, effective denoising methods are proposed based on the characteristics of random measurement error and the noise distribution of the point cloud: outliers are identified by mapping the point cloud into a feature space through feature extraction, and surface smoothing estimates the true locations of the sampling points based on the noise distribution. The experimental results indicate that the proposed methods can identify outliers accurately and smooth the model surface effectively.
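    One common way to realize the "map into a feature space and separate outliers" idea is a k-nearest-neighbour distance statistic; the sketch below, using SciPy's KD-tree, is an assumption for illustration rather than the authors' exact feature extraction, and the synthetic planar cloud is a toy example.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """points: (N, 3) array. Flag points whose mean k-NN distance is unusually large."""
    tree = cKDTree(points)
    # k+1 because the nearest neighbour of each point is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)               # per-point feature
    mu, sigma = mean_knn.mean(), mean_knn.std()
    keep = mean_knn < mu + std_ratio * sigma
    return points[keep], ~keep

# Example: a noisy planar patch with a few injected outliers.
rng = np.random.default_rng(0)
plane = rng.uniform(0, 1, size=(2000, 3)) * [1, 1, 0.01]
outliers = rng.uniform(0, 1, size=(20, 3)) + [0, 0, 0.5]
cloud = np.vstack([plane, outliers])
clean, is_outlier = remove_outliers(cloud)
```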

  14. Contractive De-noising Auto-encoder

    OpenAIRE

    Chen, Fu-qiang; Wu, Yan; Zhao, Guo-dong; Zhang, Jun-Ming; Zhu, Ming; Bai, Jing

    2013-01-01

    Auto-encoder is a special kind of neural network based on reconstruction. The de-noising auto-encoder (DAE) is an improved auto-encoder which is robust to the input by corrupting the original data first and then reconstructing the original input by minimizing the reconstruction error function. The contractive auto-encoder (CAE) is another kind of improved auto-encoder that learns robust features by introducing the Frobenius norm of the Jacobian matrix of the learned feature with respect to the origin...

  15. Research of image enhancement of dental cast based on wavelet transformation

    Science.gov (United States)

    Zhao, Jing; Li, Zhongke; Liu, Xingmiao

    2010-10-01

    This paper describes a 3D laser scanner for dental casts that realizes non-contact depth measurement. The scanner and the control PC make up a 3D scanning system that accomplishes real-time digitization of the dental cast. Owing to the complex shape of the dental cast and the random nature of the scanned points, the detected feature curves are generally not smooth or not accurate enough for subsequent applications. The purpose of this paper is to present an algorithm for enhancing the useful points and eliminating the noise. An image enhancement algorithm based on the wavelet transform and fuzzy set theory is therefore presented. Firstly, the multi-scale wavelet transform is adopted to decompose the input image, which extracts the multi-scale characteristics of the image. Secondly, wavelet thresholding is used for image de-noising, and then traditional fuzzy set theory is improved and applied to enhance the low-frequency wavelet coefficients and the high-frequency wavelet coefficients in the different directions of each scale. Finally, the inverse wavelet transform is applied to synthesize the image. A group of experimental results demonstrates that the proposed algorithm is effective for dental cast image de-noising and enhancement; the edges of the enhanced image are distinct, which benefits subsequent image processing.

  16. Imaging Liver Lesions Using Grating-Based Phase-Contrast Computed Tomography with Bi-Lateral Filter Post-Processing

    OpenAIRE

    Herzen, Julia; Marian S Willner; Fingerle, Alexander A.; Peter B Noël; Köhler, Thomas; Drecoll, Enken; Rummeny, Ernst J.; Pfeiffer, Franz

    2014-01-01

    X-ray phase-contrast imaging shows improved soft-tissue contrast compared to standard absorption-based X-ray imaging. Especially the grating-based method seems to be one promising candidate for clinical implementation due to its extendibility to standard laboratory X-ray sources. Therefore the purpose of our study was to evaluate the potential of grating-based phase-contrast computed tomography in combination with a novel bi-lateral denoising method for imaging of focal liver lesions in an ex...

  17. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    Science.gov (United States)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  18. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Science.gov (United States)

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  19. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...... in an efficient implementation of our segmentation method. We experimentally validated our approach on a number of natural as well as composed images....

  20. Vibration sensor data denoising using a time-frequency manifold for machinery fault diagnosis.

    Science.gov (United States)

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2013-12-27

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods.

  1. LPI Radar Signals De-noising Based on Frequency Domain SVD%基于频域奇异值分解的LPI雷达信号降噪

    Institute of Scientific and Technical Information of China (English)

    赵凯凯; 张柏林; 刘璘; 罗旭蒙

    2015-01-01

    With the extensive application of low probability of intercept (LPI) technology on the battlefield, the LPI signals received by reconnaissance receivers are mostly buried in noise. In order to detect threats accurately, an LPI radar signal de-noising method based on frequency-domain singular value decomposition (SVD) is proposed, building on the study of time-domain SVD de-noising. Simulations show that the proposed method can improve the signal-to-noise ratio (SNR) of the widely used linear frequency modulated continuous wave (LFMCW) and binary phase shift keying (BPSK) signals from 0 dB to more than 6 dB. Frequency-domain SVD de-noising greatly increases the probability that a reconnaissance receiver detects an LPI radar.
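    A hedged sketch of one way to realize frequency-domain SVD de-noising: frame the signal by its sweep period, take the FFT of each frame, and rank-truncate the SVD of the resulting spectral matrix. The frame length, rank and the assumption of repeated identical sweeps are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def freq_svd_denoise(signal, frame_len, rank=1):
    """Split the signal into equal frames, take the FFT of each frame,
    rank-truncate the SVD of the resulting spectral matrix, and invert."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.fft.fft(frames, axis=1)                # one spectrum per frame
    U, s, Vh = np.linalg.svd(spectra, full_matrices=False)
    spectra_lr = (U[:, :rank] * s[:rank]) @ Vh[:rank]   # keep dominant components
    denoised = np.real(np.fft.ifft(spectra_lr, axis=1))
    return denoised.reshape(-1)

# Example: several identical LFMCW-like sweeps at roughly 0 dB SNR.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
sweep = np.cos(2 * np.pi * (20 * t + 120 * t ** 2))
clean = np.tile(sweep, 8)
noisy = clean + 1.0 * rng.normal(size=clean.size)
denoised = freq_svd_denoise(noisy, frame_len=1024, rank=1)
```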

  2. Signal enhancement of a novel multi-address coding lidar backscatters based on a combined technique of demodulation and wavelet de-noising

    Science.gov (United States)

    Xu, Fan; Wang, Yuanqing

    2015-11-01

    Multi-address coding (MAC) lidar is a novel lidar system recently developed by our laboratory. By applying a new combined technique of multi-address encoding, multiplexing and decoding, range resolution is effectively improved. In data processing, a signal enhancement method involving laser signal demodulation and wavelet de-noising in the downlink is proposed to improve the signal to noise ratio (SNR) of raw signal and the capability of remote application. In this paper, the working mechanism of MAC lidar is introduced and the implementation of encoding and decoding is also illustrated. We focus on the signal enhancement method and provide the mathematical model and analysis of an algorithm on the basis of the combined method of demodulation and wavelet de-noising. The experimental results and analysis demonstrate that the signal enhancement approach improves the SNR of raw data. Overall, compared with conventional lidar system, MAC lidar achieves a higher resolution and better de-noising performance in long-range detection.

  3. Visual Improvement for Hepatic Abscess Sonogram by Segmentation after Curvelet Denoising

    Directory of Open Access Journals (Sweden)

    Mohammed Tarek GadAllah

    2013-06-01

    Full Text Available An automated method is presented for improving the visualization of hepatic abscess sonograms by denoising and effectively reducing ultrasound speckle. To support better diagnostic decisions, an improved hepatic abscess sonogram is reconstructed by segmenting the ultrasound scan image after denoising in the curvelet transform domain; better sonogram visualization is required for better human interpretation, and speckle filtering of medical ultrasound images is needed for enhanced diagnosis. Double-thresholding segmentation was applied to an ultrasound scan image of a liver with an amebic abscess after it had been denoised in the curvelet transform domain. The result is an enhanced visualization of the hepatic abscess sonogram that supports physicians' decisions. Moreover, the method reduces the memory required to store the image, which in turn decreases processing time.

  4. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    Science.gov (United States)

    Kotnik, Bojan; Kačič, Zdravko

    2007-12-01

    This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise robust speech signal parameterization algorithm. The proposed procedure is based on time-frequency speech signal representation using wavelet packet decomposition. A new modified soft thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. The adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to separately analyze voiced and unvoiced segments of the speech signal. The main feature vector consists of a combination of log-root compressed wavelet packet parameters, and autoregressive parameters. The final output feature vector is produced using a two-staged feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.
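
    As a hedged sketch of wavelet packet thresholding in general (the paper's adaptive time-frequency threshold and the full parameterization pipeline are not reproduced), the following Python code decomposes a noisy 1-D signal with PyWavelets and applies a fixed universal soft threshold to every terminal node; the test signal, wavelet and depth are illustrative assumptions.

```python
# Minimal sketch: wavelet packet decomposition plus soft thresholding of a noisy
# 1-D signal using PyWavelets. A fixed universal threshold is used here; the paper
# derives an adaptive time-frequency threshold, which is not reproduced.
import numpy as np
import pywt

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 880 * t)
noisy = speech_like + 0.5 * np.random.randn(t.size)

wp = pywt.WaveletPacket(data=noisy, wavelet='db8', mode='symmetric', maxlevel=4)
sigma = np.median(np.abs(wp['dddd'].data)) / 0.6745      # noise estimate from one deep detail node
thr = sigma * np.sqrt(2 * np.log(noisy.size))             # universal threshold

for node in wp.get_level(4, order='natural'):
    node.data = pywt.threshold(node.data, thr, mode='soft')

denoised = wp.reconstruct(update=False)[:noisy.size]
print("residual std:", np.std(denoised - speech_like))
```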

  5. WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis

    Directory of Open Access Journals (Sweden)

    Li Song

    2010-04-01

    Full Text Available Abstract. Background: Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification. Results: We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program in the Trans-Proteomic Pipeline (TPP), a commonly used open source proteomics analysis pipeline. Conclusions: We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than the ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.

  6. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    Science.gov (United States)

    Marais, Willem; Hen Hu, Yu; Holz, Robert; Eloranta, Edwin

    2016-06-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space and ground based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has significant limitations as it results in significant reduction in the spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; the accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.

  7. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    Directory of Open Access Journals (Sweden)

    Marais Willem

    2016-01-01

    Full Text Available Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space and ground based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has significant limitations as it results in significant reduction in the spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; the accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.

  8. Real-Time Wavelet-Based Coordinated Control of Hybrid Energy Storage Systems for Denoising and Flattening Wind Power Output

    Directory of Open Access Journals (Sweden)

    Tran Thai Trung

    2014-10-01

    Full Text Available Since the penetration level of wind energy is continuously increasing, the negative impact caused by the fluctuation of wind power output needs to be carefully managed. This paper proposes a novel real-time coordinated control algorithm based on a wavelet transform to mitigate both short-term and long-term fluctuations by using a hybrid energy storage system (HESS). The short-term fluctuation is eliminated by using an electric double-layer capacitor (EDLC), while the wind-HESS system output is kept constant during each 10-min period by a Ni-MH battery (NB). State-of-charge (SOC) control strategies for both the EDLC and the NB are proposed to maintain the SOC level of the storage within safe operating limits. A ramp rate limitation (RRL) requirement is also considered in the proposed algorithm. The effectiveness of the proposed algorithm has been tested by real-time simulation. The simulation model of the wind-HESS system is developed in the real-time digital simulator (RTDS/RSCAD) environment. The proposed algorithm is also implemented as a user-defined model of RSCAD. The simulation results demonstrate that the HESS with the proposed control algorithm can indeed assist in dealing with the variation of wind power generation. Moreover, the proposed method shows better performance in smoothing out the fluctuation and managing the SOC of the battery and the EDLC than the simple moving average (SMA) based method.
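
    As a hedged illustration of the frequency-separation idea only (the paper's coordinated control, SOC management and ramp-rate handling are not modeled), the following Python sketch splits a synthetic wind power series into a slow component and a fast component with a discrete wavelet transform; the sampling rate, wavelet and decomposition level are illustrative assumptions.

```python
# Minimal sketch: split a fluctuating wind power series into a slow component and
# a fast component with a discrete wavelet transform. The slow part could serve as
# a battery reference and the fast part as an EDLC reference in a HESS-style setup.
import numpy as np
import pywt

t = np.arange(0, 3600)                      # one hour of data, one sample per second (assumed)
wind = 2.0 + 0.5 * np.sin(2 * np.pi * t / 1800) + 0.2 * np.random.randn(t.size)

level = 6
coeffs = pywt.wavedec(wind, 'db4', level=level)

# Slow trend: keep only the approximation coefficients, zero out all details.
slow = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], 'db4')[:t.size]
fast = wind - slow                          # fast fluctuation, to be absorbed by the EDLC

print("std of raw / slow / fast:", wind.std(), slow.std(), fast.std())
```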

  9. Wavelet filter based de-noising of weak neutron flux signal for dynamic control rod reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Park, Moon Ghu; Bae Sung Man; Lee, Chang Sup [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    2002-10-01

    The measurement and validation of control rod bank (group) worths are typically required by start-up physics test programs for pressurized water reactors (PWRs). Recently, the DCRM (Dynamic Control rod Reactivity Measurement) technique was developed by KEPRI and will be implemented in the near future. The method is based on fast and complete bank insertion within a short period of time, which makes the range of the reactivity variation very large, from below the background gamma level to the vicinity of the nuclear heating point. The weak flux signal below the background gamma level is heavily contaminated by noise, which causes large reactivity fluctuations. This paper describes an efficient noise filtering method using wavelet filters. The performance of the developed method is demonstrated with measurement data from YGN-3 cycle 7.

  10. Electrocardiogram Denoised Signal by Discrete Wavelet Transform and Continuous Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Akram Aouinet

    2014-01-01

    Full Text Available One of the commonest problems in electrocardiogram (ECG) signal processing is denoising. In this paper a denoising technique based on the discrete wavelet transform (DWT) has been developed. To evaluate the proposed technique, we compare it with the continuous wavelet transform (CWT). Performance is evaluated with parameters such as the mean square error (MSE) and the signal-to-noise ratio (SNR); the computations show that the proposed technique outperforms the CWT.
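
    As a hedged sketch of DWT-based ECG denoising in general (not the specific technique of the record), the following Python code applies universal-threshold soft shrinkage with PyWavelets and reports MSE and SNR; the synthetic ECG-like signal, wavelet and decomposition level are illustrative assumptions.

```python
# Minimal sketch: DWT-based denoising of a noisy ECG-like signal with PyWavelets,
# using the universal threshold and soft shrinkage; evaluation by MSE and SNR.
# The synthetic "ECG" below is only a stand-in for a real recording.
import numpy as np
import pywt

fs = 360
t = np.arange(0, 2.0, 1 / fs)
ecg = np.exp(-((t % 0.8) - 0.4) ** 2 / 0.001)       # crude periodic spike train
noisy = ecg + 0.2 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise level from finest details
thr = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:noisy.size]

mse = np.mean((denoised - ecg) ** 2)
snr = 10 * np.log10(np.sum(ecg ** 2) / np.sum((denoised - ecg) ** 2))
print(f"MSE = {mse:.4e}, SNR = {snr:.1f} dB")
```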

  11. Nonlinear denoising of transient signals with application to event related potentials

    CERN Document Server

    Effern, A; Schreiber, T; Grunwald, T; David, P; Elger, C E

    2000-01-01

    We present a new wavelet-based method for the denoising of event-related potentials (ERPs), employing techniques recently developed for the paradigm of deterministic chaotic systems. The denoising scheme has been constructed to be appropriate for short and transient time sequences using circular state space embedding. Its effectiveness was successfully tested on simulated signals as well as on ERPs recorded from within a human brain. The method enables the study of individual ERPs against strong ongoing brain electrical activity.

  12. Independent Component Analysis and Decision Trees for ECG Holter Recording De-Noising

    OpenAIRE

    Jakub Kuzilek; Vaclav Kremen; Filip Soucek; Lenka Lhotska

    2014-01-01

    We have developed a method for ECG signal de-noising using independent component analysis (ICA). The approach combines JADE source separation with a binary decision tree for identification and subsequent removal of ECG noise. To test its efficiency, it was compared to standard filtering with a wavelet-based de-noising method. Freely available data from the PhysioNet medical data storage were evaluated. The evaluation criterion was the root mean square error (RMSE) between origin...

  13. Denoising of Ultrasonic Testing Signal Based on Wavelet Footprint and Matching Pursuit

    Institute of Scientific and Technical Information of China (English)

    何明格; 殷鹰; 林丽君; 赵秀粉; 殷国富

    2011-01-01

    To meet the practical requirements of online nondestructive testing of parts at a large state-owned enterprise, and to improve the accuracy and precision of ultrasonic testing, wavelet footprint theory and the matching pursuit algorithm are introduced to de-noise ultrasonic testing signals. First, a wavelet footprint dictionary is constructed on the basis of the wavelet transform of the signal. Then, threshold de-noising is performed in the footprint domain to remove noise from the signal. Finally, the matching pursuit algorithm produces, after a finite number of iterations, a sparse representation of the original signal as a combination of a certain number of wavelet footprints from the dictionary. Wavelet footprints can characterize the singular structures of a signal without loss, and de-noising in the footprint domain overcomes the drawback of traditional wavelet de-noising, which simply shrinks coefficients without considering the dependency of wavelet coefficients across scales. Simulation experiments comparing the method with traditional hard- and soft-threshold wavelet de-noising show that it achieves better de-noising performance, and the processing results on real ultrasonic testing signals also demonstrate its superiority.

  14. Dictionary Based Image Segmentation

    OpenAIRE

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentationof natural images, which may contain both textured or non-texturedregions. Our texture representation is based on a dictionary of imagepatches. To divide an image into separated regions with similar texturewe use an implicit level sets representation of the curve, which makesour method topologically adaptive. In addition, we suggest a multi-labelversion of the method. Finally, we improve upon a similar texture representation,by formulating...

  15. Denoising and Frequency Analysis of Noninvasive Magnetoencephalography Sensor Signals for Functional Brain Mapping

    CERN Document Server

    Ukil, A

    2015-01-01

    Magnetoencephalography (MEG) is an important noninvasive, nonhazardous technology for functional brain mapping, measuring the magnetic fields due to the intracellular neuronal current flow in the brain. However, most often, the inherent level of noise in the MEG sensor data collection process is large enough to obscure the signal(s) of interest. In this paper, a denoising technique based on the wavelet transform and the multiresolution signal decomposition technique along with thresholding is presented, substantiated by application results. Thereafter, different frequency analyses are performed on the denoised MEG signals to identify the major frequencies of the brain oscillations present in the denoised signals. Time-frequency plots (spectrograms) of the denoised signals are also provided.

  16. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing a cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field defined by a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the computational ghost imaging algorithm based on the discrete Fourier transform measurement matrix is derived theoretically and compared with the compressive computational ghost imaging algorithm based on a random measurement matrix; the reconstruction process and the reconstruction error are analyzed. On this basis, simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is close to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, whereas the PSNR of the images reconstructed by the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising during reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm can improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
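
    As a hedged, much-simplified illustration of reconstruction with a deterministic DFT-derived measurement matrix and a pseudo-inverse (not the paper's FGI/PGI/CGI comparison), the following Python sketch measures a small synthetic object with real illumination patterns taken from the DFT matrix rows and reconstructs it; the object, image size and pattern choice are invented for the example.

```python
# Minimal sketch: "ghost imaging"-style reconstruction with a deterministic DFT
# measurement matrix and a pseudo-inverse, for a small synthetic object.
import numpy as np

n = 16                                        # object is n x n pixels
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                         # simple bar target
x = obj.ravel()

N = n * n
F = np.fft.fft(np.eye(N))                     # N x N DFT matrix
A = np.vstack([F.real, F.imag])               # real illumination patterns from the DFT rows

y = A @ x                                     # bucket-detector measurements
x_hat = np.linalg.pinv(A) @ y                 # pseudo-inverse reconstruction
recon = x_hat.reshape(n, n)

print("reconstruction MSE:", np.mean((recon - obj) ** 2))
```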

  17. Improvising MSN and PSNR for Finger-Print Image noised by GAUSSIAN and SALT & PEPPER

    Directory of Open Access Journals (Sweden)

    Ashish kumar Dass

    2012-08-01

    Full Text Available Image de-noising is a vital concern in image processing. Among the available methods, wavelet thresholding is one of the important approaches to image de-noising. In this paper we propose an adaptive method of image de-noising in the wavelet sub-band domain, assuming the images are contaminated with noise, based on a threshold estimated for each sub-band of each decomposition level. The paper entails the development of a new MATLAB function based on our algorithm, and the experimental evaluation reveals that our method removes noise more effectively than the built-in function provided by MATLAB. One of its applications is fingerprint de-noising, given the importance of fingerprints in daily life and especially for computer security. Fingerprints play a vital role in user authentication because they are unique and cannot be duplicated; unfortunately, fingerprint images may get corrupted and polluted with noise during acquisition, transmission or retrieval from storage media. Many image processing algorithms, such as pattern recognition, need a clean fingerprint image to work effectively, which in turn requires effective ways of de-noising such images. We apply the proposed algorithm and compare it with traditional algorithms for different noises.
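
    As a hedged sketch of per-sub-band adaptive thresholding in general (a BayesShrink-style rule is used here as a stand-in; it is not the authors' MATLAB function), the following Python code estimates one threshold per wavelet sub-band of a noisy grayscale image; the stand-in image, wavelet and levels are illustrative assumptions.

```python
# Minimal sketch: per-sub-band adaptive wavelet thresholding of a noisy grayscale
# image (BayesShrink-style rule), i.e. one threshold estimated per sub-band.
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 48:80] = 1.0                       # stand-in for a fingerprint image
noisy = img + 0.1 * rng.standard_normal(img.shape)

coeffs = pywt.wavedec2(noisy, 'db4', level=3)
cA, detail_levels = coeffs[0], coeffs[1:]

# Noise estimate from the finest diagonal sub-band.
sigma_n = np.median(np.abs(detail_levels[-1][2])) / 0.6745

new_details = []
for (cH, cV, cD) in detail_levels:
    shrunk = []
    for band in (cH, cV, cD):
        sigma_x = np.sqrt(max(band.var() - sigma_n ** 2, 1e-12))
        thr = sigma_n ** 2 / sigma_x           # BayesShrink threshold for this sub-band
        shrunk.append(pywt.threshold(band, thr, mode='soft'))
    new_details.append(tuple(shrunk))

denoised = pywt.waverec2([cA] + new_details, 'db4')[:img.shape[0], :img.shape[1]]
print("MSE before/after:", np.mean((noisy - img) ** 2), np.mean((denoised - img) ** 2))
```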

  18. An Ultrasound Image Despeckling Approach Based on Principle Component Analysis

    Directory of Open Access Journals (Sweden)

    Jawad F. Al-Asad

    2014-07-01

    Full Text Available An approach based on principle component analysis (PCA) to filter out multiplicative noise from ultrasound images is presented in this paper. An image with speckle noise is segmented into small dyadic lengths, depending on the original size of the image, and the global covariance matrix is found. A projection matrix is then formed by selecting the maximum eigenvectors of the global covariance matrix. This projection matrix is used to filter speckle noise by projecting each segment into the signal subspace. The approach is based on the assumption that the signal and noise are independent and that the signal subspace is spanned by a subset of few principal eigenvectors. When applied on simulated and real ultrasound images, the proposed approach has outperformed some popular nonlinear denoising techniques such as 2D wavelets, 2D total variation filtering, and 2D anisotropic diffusion filtering in terms of edge preservation and maximum cleaning of speckle noise. It also showed lower sensitivity to outliers resulting from the log transformation of the multiplicative noise.
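
    As a hedged sketch of the PCA-despeckling idea only (the patch size, number of retained components and the synthetic image are illustrative, not the paper's configuration), the following Python code log-transforms a speckled image, projects its patches onto the leading eigenvectors of the global patch covariance, and reconstructs.

```python
# Minimal sketch of PCA-based despeckling: log-transform the speckled image,
# project patches onto the leading eigenvectors of the global patch covariance
# (the signal subspace), then exponentiate back.
import numpy as np

rng = np.random.default_rng(1)
clean = np.ones((64, 64))
clean[16:48, 16:48] = 4.0                       # bright square "tissue" region
speckled = clean * rng.exponential(1.0, clean.shape)   # multiplicative speckle

logimg = np.log(speckled + 1e-6)                # multiplicative -> additive noise
p, k = 8, 4                                     # patch size and retained components

# Collect non-overlapping p x p patches as rows of a data matrix.
patches = (logimg.reshape(64 // p, p, 64 // p, p)
                 .transpose(0, 2, 1, 3).reshape(-1, p * p))
mean = patches.mean(axis=0)
cov = np.cov(patches - mean, rowvar=False)      # global covariance matrix
vals, vecs = np.linalg.eigh(cov)
P = vecs[:, -k:]                                # leading eigenvectors (signal subspace)

filtered = (patches - mean) @ P @ P.T + mean    # project patches onto signal subspace
out = np.exp(filtered.reshape(64 // p, 64 // p, p, p)
                      .transpose(0, 2, 1, 3).reshape(64, 64))
print("std inside square before/after:",
      speckled[20:44, 20:44].std(), out[20:44, 20:44].std())
```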

  19. ECG Signal Denoising By Wavelet Transform Thresholding

    Directory of Open Access Journals (Sweden)

    Mikhled Alfaouri

    2008-01-01

    Full Text Available In recent years, the ECG signal has played an important role in the primary diagnosis, prognosis and survival analysis of heart diseases. In this paper a new approach to determining the threshold value for ECG signal denoising is proposed using wavelet transform coefficients. Electrocardiography has had a profound influence on the practice of medicine. The electrocardiogram signal contains an important amount of information that can be exploited in different manners, and it allows for the analysis of anatomic and physiologic aspects of the whole cardiac muscle. Different ECG signals are used to verify the proposed method using MATLAB software. The method presented in this paper is compared with Donoho's method for signal denoising, and better results are obtained for ECG signals by the proposed algorithm.

  20. A Self-adaptive Learning Rate Principle for Stacked Denoising Autoencoders

    Institute of Scientific and Technical Information of China (English)

    HAO Qian-qian; DING Jin-kou; WANG Jian-fei

    2015-01-01

    Existing research on image classification has mainly relied on hand-crafted features for pre-training on the original images, which costs a lot of time in parameter tuning. Deep learning algorithms, by contrast, aim to let the computer automatically choose the most suitable features during training: the essence of deep learning is to train on massive data and obtain an accurate classification or prediction, without manual feature engineering, by constructing a model with multiple hidden layers. However, current deep learning models can get trapped in local minima when a constant learning rate is used to optimize the non-convex objective cost function during training. This paper proposes a self-adaptive learning rate algorithm based on Stacked Denoising Autoencoders (SDA) to address this problem, and compares different layer designs to test the performance. The MNIST database of handwritten digits is used to verify the effectiveness of the model.
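
    As a hedged sketch of a single denoising autoencoder layer only (the stacking and the paper's self-adaptive learning rate schedule are not reproduced), the following PyTorch code trains a small network to reconstruct clean inputs from corrupted ones; layer sizes, noise level and the random stand-in data are illustrative assumptions.

```python
# Minimal sketch of one denoising autoencoder layer in PyTorch: corrupt the input
# with Gaussian noise and train the network to reconstruct the clean input.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(data, noise_std=0.3, epochs=5, lr=1e-3):
    model = DenoisingAutoencoder(n_in=data.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):
        corrupted = data + noise_std * torch.randn_like(data)   # corrupt the input
        recon = model(corrupted)
        loss = loss_fn(recon, data)                              # reconstruct the clean input
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
    return model

if __name__ == "__main__":
    fake_mnist = torch.rand(512, 784)     # stand-in for flattened MNIST digits
    train_dae(fake_mnist)
```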

  1. Adaptive Non-Linear Bayesian Filter for ECG Denoising

    Directory of Open Access Journals (Sweden)

    Mitesh Kumar Sao

    2014-06-01

    Full Text Available The cycles of an electrocardiogram (ECG) signal contain three components: the P-wave, the QRS complex and the T-wave. Noise is present in the cardiograph because biological sources (muscle contraction, baseline drift, motion noise) and environmental sources (power line interference, electrode contact noise, instrumentation noise) normally pollute the ECG signal detected at the electrode. VisuShrink thresholding and Bayesian thresholding are two wavelet-based filtering techniques for denoising the ECG signal corrupted by power line interference (PLI). The thresholding techniques are applied to the ECG intervals and the results are compared with the wavelet soft and hard thresholding methods. The outputs are evaluated by calculating the root mean square (RMS) error, the signal-to-noise ratio (SNR), the correlation coefficient (CC) and the power spectral density (PSD) using MATLAB software. The clean ECG signal shows that the Bayesian thresholding technique is the more powerful denoising algorithm.

  2. Design of a threshold denoising system for broadband time-varying signals based on LabVIEW

    Institute of Scientific and Technical Information of China (English)

    张俊涛; 曹梦娜; 张涛

    2014-01-01

    A fast and efficient virtual de-noising system for broadband time-varying signals containing Gaussian noise is designed using the powerful signal-processing function library of LabVIEW. The power spectrum of the broadband time-varying signal containing Gaussian noise is analyzed, and wavelet threshold de-noising is applied according to its spectral features. The optimal parameters of the thresholding method are found by analyzing and comparing the results of different thresholding criteria and of hard and soft thresholding. The results show that LabVIEW-based threshold de-noising achieves a good effect on broadband time-varying signals containing Gaussian noise and exhibits excellent characteristics in both the time and frequency domains.

  3. Fast algorithm of ECG denoising and QRS wave identification based on wavelet lifting

    Institute of Scientific and Technical Information of China (English)

    姚成; 司玉娟; 郎六琪; 朴德慧; 徐海峰; 李贺佳

    2012-01-01

    This paper proposes a fast algorithm for ECG denoising and QRS wave identification based on wavelet lifting. On the basis of wavelet lifting, a weighted threshold shrinkage method is introduced so that useful ECG information is not lost and the denoising effect is improved. Using the intermediate result of denoising and reconstruction together with a simple finite-difference method, the algorithm applies the lifting wavelet transform with the first derivative of the smoothing function, which avoids a second lifting wavelet transform and thus greatly reduces computational complexity while maintaining identification precision. Experimental results demonstrate that the algorithm achieves a relatively high SNR and low MSE, and the accuracy of QRS wave identification exceeds 99.5%. Moreover, the algorithm is suitable for implementation on an FPGA hardware platform, which makes it convenient to integrate into ECG monitoring equipment.

  4. Anti-electromagnetic Interference Method Based on Wavelet De-noising and Morphology Compensation Algorithm

    Institute of Scientific and Technical Information of China (English)

    杨云涛; 石志勇; 关贞珍; 许杨

    2011-01-01

    Electromagnetic noise produced by the electrical equipment on a navigation carrier is one of the main factors degrading geomagnetic measurement precision. The characteristics of this electromagnetic interference are studied and analyzed in depth; the interference is classified into high-frequency alternating magnetic disturbance and equivalent-current magnetic disturbance, and different noise-reduction methods are applied according to their characteristics. Wavelet threshold de-noising is first used to eliminate the high-frequency alternating magnetic noise, and a compensation algorithm is then proposed and modeled, based on the energy ridge envelope of the alternating magnetic disturbance, to eliminate the equivalent-current magnetic disturbance. An experiment shows that the method not only reduces and smooths the electromagnetic noise effectively but also preserves the abrupt details of the true signal, improving the precision of geomagnetic measurement.

  5. Denoising of mechanical vibration signals based on quantum superposition inspired parametric estimation

    Institute of Scientific and Technical Information of China (English)

    陈彦龙; 张培林; 王怀光

    2014-01-01

    A novel denoising method for mechanical vibration signals is proposed based on quantum-superposition-inspired parametric estimation. Considering the relation between the real and imaginary coefficients of the dual-tree complex wavelet transform, a new two-dimensional probability density function model with an adaptive parameter is built. By investigating the inter-scale dependency between coefficients and their parents, the quantum-superposition-inspired probabilities of signal and noise occurrence are presented. Combined with Bayesian estimation theory, an adaptive shrinkage function is deduced based on quantum-superposition-inspired parametric estimation. Finally, simulated signals and rolling bearing fault vibration signals are analyzed. The results show that the proposed method reduces noise effectively and achieves much better performance than traditional soft and hard threshold denoising algorithms.

  7. Application of Wavelet De-noising in Vibration Torque Measurement

    Directory of Open Access Journals (Sweden)

    Hao Zhao

    2012-09-01

    Full Text Available Since vibration torque is key to rotating system condition inspection and fault analysis, a vibration torque test is implemented for a slotless-rotor three-phase asynchronous motor under no-load conditions. Because the vibration torque signal is buried in heavy noise, a de-noising scheme based on the wavelet transform is constructed: the measured signal is decomposed with the Sym8 wavelet, processed by half-soft thresholding, and finally reconstructed. Simulation results indicate that the method removes most of the high-frequency noise, recovers the true signal, improves the fitting and generalization capability of the data, and yields a de-noising effect far better than the traditional fast Fourier transform.

  8. Identity Based Color Image Cryptography

    OpenAIRE

    Gopi Krishnan S; Loganathan D

    2011-01-01

    An identity-based cryptography scheme based on visual cryptography was proposed for protecting color images. A color image to be protected and authentic entities such as an account number, password and signature image are given as input. The binary key image is obtained by distributing the digital signature of the obtained authentic entities. A secret color image which needs to be communicated is decomposed into three grayscale tones of the Y-Cb-Cr color components. Then these grayscale images are half-toned...

  9. An underwater ship fault detection method based on Sonar image processing

    Science.gov (United States)

    Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu

    2016-02-01

    To study underwater ship fault detection while sailing on the ocean, especially in muddy seas with poor visibility, a fault detection method assisted by sonar image processing is proposed. First, the sonar image is denoised using a pulse coupled neural network (PCNN) algorithm; second, edge features are extracted from the denoised image by a morphological wavelet transform; finally, regions of interest are obtained with a relevant tracking method, i.e., fault area mapping. The simulation results presented here prove the feasibility and effectiveness of sonar image processing in an underwater fault detection system.

  10. Stock Price Forecasting Based on Adaptive Threshold Wavelet Entropy Denoising

    Institute of Scientific and Technical Information of China (English)

    张晶; 张建文; 李宁

    2011-01-01

    To address the over-smoothing caused by the uniform threshold used in traditional soft-threshold wavelet de-noising, and drawing on the properties of entropy, this paper proposes an improved wavelet de-noising algorithm that adaptively adjusts the de-noising threshold at each decomposition level, using the Hurst exponent and the box dimension as decision criteria to suppress over-smoothing. The algorithm is applied to de-noise stock price time series, and a BP neural network is then used to forecast, in segments, the de-noised closing price of Shenfazhan A over the past 20 years. Simulations show that, compared with the traditional method, the proposed method markedly reduces the error and yields more satisfactory forecasts.

  11. Fourth-order partial differential equation noise removal on welding images

    International Nuclear Information System (INIS)

    Partial differential equation (PDE) has become one of the important topics in mathematics and is widely used in various fields. It can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model on different level of Gaussian noise in order to get the maximum PSNR value. The convergence criteria chosen to determine the number of iterations required is measured based on the highest PSNR value. Results obtained show that the fourth-order PDE model produced promising results as an image denoising tool compared with median filter

  12. Fourth-order partial differential equation noise removal on welding images

    Energy Technology Data Exchange (ETDEWEB)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan [Center of Mathematics Studies, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor. Malaysia (Malaysia); Manurung, Yupiter HP [Advanced Manufacturing Technology Center, Faculty of Mechanical Engineering, Universiti TEknologi MARA, 40450 Shah Alam, Selangor. Malaysia (Malaysia)

    2015-10-22

    Partial differential equation (PDE) has become one of the important topics in mathematics and is widely used in various fields. It can be used for image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model on different level of Gaussian noise in order to get the maximum PSNR value. The convergence criteria chosen to determine the number of iterations required is measured based on the highest PSNR value. Results obtained show that the fourth-order PDE model produced promising results as an image denoising tool compared with median filter.
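
    As a hedged sketch of fourth-order PDE denoising in general (a simplified explicit You-Kaveh-type iteration; the paper's exact discretization, parameters and convergence criterion are not reproduced), the following Python code evolves a noisy test image and reports the PSNR; the step size, conductance parameter and iteration count are illustrative assumptions.

```python
# Minimal sketch of fourth-order PDE denoising solved by an explicit
# finite-difference iteration: u_t = -Laplacian( c(|Laplacian u|) * Laplacian u ).
import numpy as np

def laplacian(u):
    # 5-point Laplacian with replicated (Neumann-like) borders.
    up = np.pad(u, 1, mode='edge')
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)

def fourth_order_pde_denoise(img, k=1.0, dt=0.02, iterations=200):
    u = img.astype(float).copy()
    for _ in range(iterations):
        lap = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(lap) / k) ** 2)   # edge-stopping function
        u -= dt * laplacian(c * lap)               # explicit time step
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((96, 96)); clean[24:72, 24:72] = 1.0
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    den = fourth_order_pde_denoise(noisy)
    psnr = 10 * np.log10(1.0 / np.mean((den - clean) ** 2))
    print(f"PSNR after denoising: {psnr:.1f} dB")
```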

  13. 基于三次B样条函数的SEM图像处理%SEM Image Processing Based on Third- order B- spline Function

    Institute of Scientific and Technical Information of China (English)

    张健

    2011-01-01

    SEM images, because of their specific role in practical testing, require de-noising that also highlights edges and allows accurate edge extraction and localization. This paper therefore adopts an edge-preserving partial differential equation method for de-noising and the widely used multi-scale wavelet analysis for edge detection, both built on the third-order B-spline function as the core operator, to process SEM images used for line-width measurement. The algorithm achieves good de-noising while preserving edge features and yields clear edge detection results for SEM images.

  14. Performance Comparison of Total Variation based Image Regularization Algorithms

    Directory of Open Access Journals (Sweden)

    Kamalaveni Vanjigounder

    2016-07-01

    Full Text Available The mathematical approach of the calculus of variations is commonly used to find an unknown function that minimizes or maximizes a functional. Problems of retrieving the original image from a degraded one are called inverse problems, and the most basic example of an inverse problem is image denoising. Variational methods are formulated as optimization problems and provide a good solution to image denoising. Three such variational methods, the Tikhonov model, the ROF model and the Total Variation-L1 model, are studied and implemented for image denoising. The performance of these variational algorithms is analyzed for different values of the regularization parameter: a small regularization parameter gives better noise removal, whereas a large regularization parameter preserves sharp edges well. The Euler-Lagrange equation corresponding to the energy functional used in the variational methods is solved using the gradient descent method, and the resulting partial differential equation is solved using Euler's forward finite difference method. The quality metrics are computed and the results are compared in this paper.
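
    As a hedged illustration of how the regularization weight changes total variation denoising (using scikit-image's Chambolle implementation as a stand-in for the models compared in the record; note that in this implementation a larger weight means stronger smoothing, which may be the reverse of the paper's parameterization), the following Python code reports the PSNR at several weights on a synthetic image.

```python
# Minimal sketch: ROF-style total variation denoising at different regularization
# weights, using scikit-image's Chambolle implementation.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import peak_signal_noise_ratio

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean + 0.15 * rng.standard_normal(clean.shape)

for weight in (0.05, 0.1, 0.3):
    den = denoise_tv_chambolle(noisy, weight=weight)
    psnr = peak_signal_noise_ratio(clean, den, data_range=1.0)
    print(f"weight={weight}: PSNR={psnr:.2f} dB")
```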

  15. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter banks based feature extraction, and in the classifier, in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together the three strands of research (wavelets, iris image analysis, and the classifier) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids reading many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc.  that will...

  16. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method with its applications to noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  17. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    Full Text Available In this paper, a discrete wavelet transform (DWT) based de-noising method with its applications to noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.

  18. Optical image encryption based on diffractive imaging.

    Science.gov (United States)

    Chen, Wen; Chen, Xudong; Sheppard, Colin J R

    2010-11-15

    In this Letter, we propose a method for optical image encryption based on diffractive imaging. An optical multiple random phase mask encoding system is applied, and one of the phase-only masks is selected and laterally translated along a preset direction during the encryption process. For image decryption, a phase retrieval algorithm is proposed to extract a high-quality plaintext. The feasibility and effectiveness of the proposed method are demonstrated by numerical results. The proposed method can provide a new strategy instead of conventional interference methods, and it may open up a new research perspective for optical image encryption.

  19. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    Directory of Open Access Journals (Sweden)

    Chu Zhang

    2016-02-01

    Full Text Available Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of the anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. The wavelet transform (WT) was used to reduce noise in the spectral data. The successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build calibration models on the full spectra and on the optimal wavelengths. As a result, the BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and a root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for the biomass energy producing process by hyperspectral imaging.
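
    As a hedged sketch of the general workflow only (wavelet de-noising of spectra followed by a calibration model; the SPA/RF/VIP wavelength selection and the BPNN model of the record are not included, and the data below are random stand-ins), the following Python code uses PyWavelets and scikit-learn's PLS regression.

```python
# Minimal sketch: wavelet de-noising of spectra followed by a PLS calibration
# model for a reference value such as pH.
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200                  # pretend 200 spectral bands
spectra = rng.standard_normal((n_samples, n_bands)).cumsum(axis=1)   # smooth-ish fake spectra
ph = 5.0 + 0.01 * spectra[:, 80] + 0.1 * rng.standard_normal(n_samples)  # fake reference pH

def wt_denoise(row, wavelet='db4', level=3):
    coeffs = pywt.wavedec(row, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(row.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:row.size]

X = np.apply_along_axis(wt_denoise, 1, spectra)
Xtr, Xte, ytr, yte = train_test_split(X, ph, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()
rmsep = np.sqrt(np.mean((pred - yte) ** 2))
rp = np.corrcoef(pred, yte)[0, 1]
print(f"rp = {rp:.3f}, RMSEP = {rmsep:.4f}")
```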

  20. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains.

    Science.gov (United States)

    Lahmiri, Salim

    2014-09-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387
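
    As a hedged sketch of the EMD-DWT idea (decompose into IMFs, wavelet-threshold each IMF, recombine), the following Python code assumes the PyEMD package (pip name EMD-signal) for the empirical mode decomposition; the VMD variant and the non-local means reference method of the record are not reproduced, and the synthetic ECG-like signal is a stand-in.

```python
# Minimal sketch of hybrid EMD-DWT denoising: EMD into IMFs, soft wavelet
# thresholding of each IMF, then summation to recombine.
import numpy as np
import pywt
from PyEMD import EMD   # assumes the EMD-signal / PyEMD package is installed

fs = 360
t = np.arange(0, 2.0, 1 / fs)
ecg = np.exp(-((t % 0.8) - 0.4) ** 2 / 0.001)
noisy = ecg + 0.2 * np.random.randn(t.size)

def dwt_soft(x, wavelet='db4', level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:x.size]

imfs = EMD()(noisy)                              # IMFs, highest frequency first
denoised = sum(dwt_soft(imf) for imf in imfs)    # threshold each IMF, then recombine

snr = 10 * np.log10(np.sum(ecg ** 2) / np.sum((denoised - ecg) ** 2))
print(f"output SNR: {snr:.1f} dB")
```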

  1. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    Science.gov (United States)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is a currently popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect the following processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can help us obtain nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm as an open-source Matlab package.

  2. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    International Nuclear Information System (INIS)

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision

  3. Application of wavelet analysis in laser Doppler vibration signal denoising

    Science.gov (United States)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    Many experiments show that, because of external disturbances, excessive roughness of the measured surface and other factors, the vibration signal detected by the laser Doppler technique contains complex information with a low SNR, so that the Doppler frequency shift cannot be measured and the Doppler phase cannot be demodulated. This paper first analyzes the laser Doppler signal model and its features in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the wavelet modulus maxima denoising method, the spatial correlation denoising method and the wavelet threshold denoising method. We experiment with vibration signals and implement the three methods in MATLAB simulations. The processing results show that the wavelet modulus maxima method has an advantage at low laser Doppler vibration SNR for signals mixed with white noise and containing many singularities; the spatial correlation method is more suitable for denoising laser Doppler vibration signals whose noise level is not very high, and has better edge reconstruction capability; and the wavelet threshold method has wide adaptability, computational efficiency and a good denoising effect. Specifically, in the wavelet threshold method, we estimate the original noise variance by the spatial correlation method, use an adaptive threshold, and make certain amendments in practice. Tests show that, compared with conventional thresholding, this method is more effective at extracting the features of the laser Doppler vibration signal.

  4. The research and application of double mean weighting denoising algorithm

    Science.gov (United States)

    Fang, Hao; Xiong, Feng

    2015-12-01

    In image processing and pattern recognition applications, the precision of image preprocessing has a great influence on subsequent image processing and analysis. This paper describes a novel local double-mean weighted algorithm (hereinafter referred to as the D-M algorithm) for image denoising. First, the differences between the current pixel and the pixels in its neighborhood are computed and their absolute values are taken; then the absolute values are sorted and the means of these pixels are taken in a half-to-half way; finally, a weighting coefficient is applied to the mean. A large number of experiments show that the algorithm not only introduces a certain robustness but also yields a significant improvement.

  5. An Adaptive De-Noising Method for Vehicle's Acceleration Signal Based on PDE

    Institute of Scientific and Technical Information of China (English)

    徐叶雷; 黄青华; 方勇

    2009-01-01

    An adaptive de-noising method based on a partial differential equation is proposed for MEMS accelerometer signals. The method effectively suppresses the disturbances caused by the sensor itself and by vibration in the vehicle environment, yields accurate acceleration signals, and is easy to implement with good real-time performance. A model of the vehicle acceleration signal is built and real acceleration noise is superimposed to form simulation signals; the method is then compared with the best-performing wavelet configuration for such signals (db6 wavelet basis, heursure adaptive threshold, four decomposition levels). Under the acceleration amplitudes of normal driving, the method achieves de-noising performance close to that of the wavelet approach while greatly reducing computation time. Finally, applications to real vehicle acceleration signals and to inclination measurement show again that the method preserves detailed information while filtering noise, making it well suited to practical engineering with demanding real-time and accuracy requirements.

  6. Identity Based Color Image Cryptography

    Directory of Open Access Journals (Sweden)

    Gopi Krishnan S

    2011-05-01

    Full Text Available An identity-based cryptography scheme based on visual cryptography was proposed for protecting color images. A color image to be protected and authentic entities such as an account number, password and signature image are given as input. The binary key image is obtained by distributing the digital signature of the obtained authentic entities. The secret color image which needs to be communicated is decomposed into three grayscale tones of the Y-Cb-Cr color components. These grayscale images are then half-toned to binary images, and finally the obtained binary images are encrypted using the binary key image to obtain binary cipher images. To encrypt, an Exclusive-OR operation is applied to the binary key image and each of the three half-tones of the secret color image separately, and these binary images are combined to obtain the cipher. In decryption, the shares are decrypted by applying the Exclusive-OR operation to the cipher and the key; the recovered binary images are then inverse half-toned and combined to recover the secret color image. This scheme is more efficient for communicating natural images across different channels.

  7. Post-processing noise removal algorithm for magnetic resonance imaging based on edge detection and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Placidi, Giuseppe; Alecci, Marcello; Sotgiu, Antonello [INFM, c/o Centro di Risonanza Magnetica and Dipartimento di Scienze e Tecnologie Biomediche, Universita dell' Aquila, Via Vetoio 10, 67010 Coppito, L' Aquila (Italy)

    2003-07-07

    A post-processing noise suppression technique for biomedical MRI images is presented. The described procedure recovers both sharp edges and smooth surfaces from a given noisy MRI image; it does not blur the edges and does not introduce spikes or other artefacts. The fine details of the image are also preserved. The proposed algorithm first extracts the edges from the original image and then performs noise reduction by using a wavelet de-noise method. After the application of the wavelet method, the edges are restored to the filtered image. The result is the original image with less noise, fine detail and sharp edges. Edge extraction is performed by using an algorithm based on Sobel operators. The wavelet de-noise method is based on the calculation of the correlation factor between wavelet coefficients belonging to different scales. The algorithm was tested on several MRI images and, as an example of its application, we report the results obtained from a spin echo (multi echo) MRI image of a human wrist collected with a low field experimental scanner (the signal-to-noise ratio, SNR, of the experimental image was 12). Other filtering operations have been performed after the addition of white noise on both channels of the experimental image, before the magnitude calculation. The results at SNR = 7, SNR = 5 and SNR = 3 are also reported. For SNR values between 5 and 12, the improvement in SNR was substantial and the fine details were preserved, the edges were not blurred and no spikes or other artefacts were evident, demonstrating the good performances of our method. At very low SNR (SNR = 3) our result is worse than that obtained by a simpler filtering procedure.
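
    As a hedged sketch of the edge-preserving idea only (extract strong edges, wavelet-denoise, then restore the original edge pixels; the correlation-based wavelet rule of the record is replaced here by simple soft thresholding), the following Python code uses Sobel operators and PyWavelets; the edge threshold, wavelet settings and test image are illustrative assumptions.

```python
# Minimal sketch: Sobel edge extraction, wavelet de-noising of the whole image,
# then restoration of the original values on the detected edges.
import numpy as np
import pywt
from scipy import ndimage

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# 1) Edge map from the Sobel gradient magnitude.
gx = ndimage.sobel(noisy, axis=0)
gy = ndimage.sobel(noisy, axis=1)
edges = np.hypot(gx, gy) > 1.0                # keep only strong edges

# 2) Wavelet de-noising of the whole image.
coeffs = pywt.wavedec2(noisy, 'db2', level=2)
sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [tuple(pywt.threshold(b, thr, mode='soft') for b in lvl)
                        for lvl in coeffs[1:]]
denoised = pywt.waverec2(coeffs, 'db2')[:128, :128]

# 3) Restore the original values on the detected edges.
result = np.where(edges, noisy, denoised)
print("MSE noisy vs result:", np.mean((noisy - clean) ** 2), np.mean((result - clean) ** 2))
```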

  8. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joachim; Hansen, Per Christian; Jensen, Søren Holdt;

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  9. MSPA BASED ON PROCESS INFORMATION DENOISED WITH WAVELET TRANSFORM AND ITS APPLICATION TO CHEMICAL PROCESS MONITORING

    Institute of Scientific and Technical Information of China (English)

    陈国金; 梁军; 钱积新

    2003-01-01

    In industrial processes, measured data are often contaminated by noise, which degrades the performance of data-driven techniques. The wavelet transform is a useful tool for de-noising process information, but the conventional practice is to apply the wavelet transform directly to the measured variables, which becomes less effective and more cumbersome when there are many process variables with collinear relationships. In this paper, a novel multivariate statistical projection analysis (MSPA) based on data de-noised with the wavelet transform and blind signal analysis is presented, which can detect faults more quickly and improve the monitoring performance of the process. Simulation results for a double-effect evaporator verify the higher effectiveness and better performance of the new MSPA compared with classical multivariate statistical process control (MSPC).

  10. Denoising approach in reversing cam outline based on wavelet threshold analysis

    Institute of Scientific and Technical Information of China (English)

    姚正江; 杨玉萍

    2012-01-01

    Using the MATLAB wavelet toolbox, a one-dimensional signal can be denoised effectively by wavelet thresholding once an appropriate wavelet function and threshold value are selected. The cam profile data measured during cam reverse engineering can be treated as two one-dimensional signals and denoised in this way, yielding a new cam profile after denoising. Comparing the tolerance zone of the cam profile and the follower acceleration curves before and after denoising shows that the wavelet transform removes noise from the cam profile very effectively and essentially restores the original profile.
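
    As a minimal stand-in for the MATLAB wavelet-toolbox workflow described above, the Python sketch below (using PyWavelets) soft-thresholds the detail coefficients of a one-dimensional profile with the universal threshold; each cam-profile coordinate sequence would be passed through it separately. The wavelet, level and threshold rule are common defaults, not necessarily the authors' exact choices.

```python
import numpy as np
import pywt

def wavelet_threshold_1d(signal, wavelet="db6", level=4):
    """1D wavelet threshold denoising (universal threshold, soft shrinkage)."""
    signal = np.asarray(signal, dtype=float)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise scale estimated from the finest detail band (median rule)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```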

  11. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    Science.gov (United States)

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.

  12. A method for blink artifact removal based on wavelet singularity detection and thresholding denoising

    Institute of Scientific and Technical Information of China (English)

    牟锴钰; 韦明; 杨辉; 彭振

    2015-01-01

    Objective: Blink artifacts are common in EEG and affect it severely. This paper proposes a blink artifact removal method based on wavelet singularity detection and threshold denoising that removes blink artifacts from single-channel EEG automatically, without an EOG reference signal. Methods: The singularity-detection property of the wavelet transform is first used to locate the peaks of the blink artifacts, and wavelet threshold denoising is then applied only within the blink artifact zones. Results: Experiments show that the method detects blink artifacts effectively and avoids the distortion that ordinary denoising methods introduce outside the blink regions. Conclusions: The threshold and thresholding function used here outperform the typical threshold and the standard soft and hard thresholding functions, and effectively remove blink artifacts from EEG.

  13. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    Science.gov (United States)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually limited by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting to improve image quality and obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluate the gray/white matter CBF ratio. The resulting semi-quantitative mean gray-to-white matter CBF ratio is 2.10 +/- 0.34; the ratio evaluated from perfusion MRI is comparable to that from PET, with less than 1% difference on average. Furthermore, the method features excellent noise reduction, good boundary preservation, and short hemodynamic measurement time.

  14. Multi-scale image fusion for x-ray grating-based mammography

    Science.gov (United States)

    Jiang, Xiaolei; Zhang, Li; Wang, Zhentian; Stampanoni, Marco

    2012-10-01

    X-ray phase contrast imaging (PCI) provides high sensitivity to weakly absorbing low-Z objects in medical and biological fields, especially in mammography. The grating-based differential phase contrast (DPC) method is the most promising PCI method for clinical applications because it works well with a conventional X-ray tube and retrieves attenuation, DPC and dark-field information of the sample in a single scan. The three kinds of information have different detail and contrast, reflecting different physical characteristics of the interaction of X-rays with matter, so image fusion can bring out the most desirable characteristics of each image. In this paper, we propose a multi-scale image fusion for X-ray grating-based DPC mammography. A non-local means filter is first applied for denoising, since the DPC and dark-field images in particular suffer from strong noise. A Laplacian pyramid is then used for multi-scale image fusion: the principal component analysis (PCA) method is applied to the high-frequency part and the spatial frequency method to the low-frequency part. Finally, the fused image is obtained by the inverse Laplacian pyramid transform. The algorithm is validated by experiments performed on the mammoDPC instrumentation at the Paul Scherrer Institut in Villigen, Switzerland. The results show that the algorithm combines the advantages of the three kinds of information in a single fused image, which is very helpful for breast cancer diagnosis.

  15. A multiscale products technique for denoising of DNA capillary electrophoresis signals

    Science.gov (United States)

    Gao, Qingwei; Lu, Yixiang; Sun, Dong; Zhang, Dexiang

    2013-06-01

    Since noise degrades the accuracy and precision of DNA capillary electrophoresis (CE) analysis, signal denoising is thus important to facilitate the postprocessing of CE data. In this paper, a new denoising algorithm based on dyadic wavelet transform using multiscale products is applied for the removal of the noise in the DNA CE signal. The adjacent scale wavelet coefficients are first multiplied to amplify the significant features of the CE signal while diluting noise. Then, noise is suppressed by applying a multiscale threshold to the multiscale products instead of directly to the wavelet coefficients. Finally, the noise-free CE signal is recovered from the thresholded coefficients by using inverse dyadic wavelet transform. We compare the performance of the proposed algorithm with other denoising methods applied to the synthetic CE and real CE signals. Experimental results show that the new scheme achieves better removal of noise while preserving the shape of peaks corresponding to the analytes in the sample.
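
    A sketch of the multiscale-products idea under simplifying assumptions: an undecimated (stationary) wavelet transform keeps the scales aligned, adjacent-scale detail coefficients are multiplied, and coefficients whose product falls below a threshold are discarded. The median-based threshold here is an illustrative choice, not the scale-dependent rule of the paper.

```python
import numpy as np
import pywt

def multiscale_product_denoise(signal, wavelet="db2", level=3, k=3.0):
    """Denoising via products of adjacent-scale detail coefficients (sketch)."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    pad = (-n) % (2 ** level)                 # swt needs length divisible by 2**level
    x = np.pad(signal, (0, pad), mode="edge")
    coeffs = pywt.swt(x, wavelet, level=level)   # [(cA_L, cD_L), ..., (cA_1, cD_1)]
    details = [cD for _, cD in coeffs]

    new_details = []
    for j, cD in enumerate(details):
        # product with the neighbouring scale amplifies true peaks, dilutes noise
        neighbour = details[j - 1] if j > 0 else details[j + 1]
        prod = cD * neighbour
        thr = k * np.median(np.abs(prod))     # illustrative threshold rule
        new_details.append(cD * (np.abs(prod) > thr))

    new_coeffs = [(cA, d) for (cA, _), d in zip(coeffs, new_details)]
    return pywt.iswt(new_coeffs, wavelet)[:n]
```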

  16. Lifting transform via Savitsky-Golay filter predictor and application of denoising

    Institute of Scientific and Technical Information of China (English)

    ZHOU Guang-zhu; YANG Feng-jie; WANG Cui-zhen

    2006-01-01

    The Savitsky-Golay filter is a smoothing filter based on polynomial regression; it exploits the fitting capability of regression to improve smoothing results. However, the Savitsky-Golay filter uses a fixed-size window and therefore shares the shortcoming of the windowed Fourier transform. Wavelet multiresolution analysis can deal with this problem. In this paper, taking advantage of the Savitsky-Golay filter's fitting ability and the multiscale analysis ability of the wavelet transform, we develop a new lifting transform that uses a Savitsky-Golay smoothing filter as the lifting predictor, and compare it with ordinary Savitsky-Golay smoothing. Applied to denoising a noisy heavy sine signal, the new lifting transform clearly denoises better than ordinary Savitsky-Golay smoothing, while singular points are well retained in the denoised signal. Singularity analysis, multiscale interpolation, estimation, chemical data smoothing and other signal processing uses of this new lifting transform are promising directions.
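
    The following sketch shows one possible lifting step with a Savitsky-Golay predictor, in the spirit of the transform described above: the signal is split into even and odd samples, the odd samples are predicted from a Savitsky-Golay smoothing of the even ones, and the prediction residual becomes the detail band. The window length, polynomial order and the simple update step are assumptions for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_lifting_forward(x, window=7, order=3):
    """One lifting level: split -> Savitsky-Golay predict -> update."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    n = min(len(even), len(odd))
    even, odd = even[:n], odd[:n]
    pred = savgol_filter(even, window_length=window, polyorder=order)
    detail = odd - pred                 # residual of the smooth prediction
    approx = even + 0.5 * detail        # mild update keeps the running mean
    return approx, detail

def sg_lifting_inverse(approx, detail, window=7, order=3):
    even = approx - 0.5 * detail
    odd = detail + savgol_filter(even, window_length=window, polyorder=order)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out
```

    Denoising would then shrink the detail band before calling the inverse transform, which reconstructs the signal exactly when the details are left untouched.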

  17. Sparse representation for color image restoration.

    Science.gov (United States)

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in prior work. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  18. Variational approach for restoring blurred images with cauchy noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Zeng, Tieyong

    2015-01-01

    The restoration of images degraded by blurring and noise is one of the most important tasks in image processing. In this paper, based on the total variation (TV) we propose a new variational method for recovering images degraded by Cauchy noise and blurring. In order to obtain a strictly convex...... and denoising images corrupted by Cauchy noise. Comparison with other existing and well-known methods is provided as well....

  19. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as ”the pre-image problem”. Since the feature space mapping...... sparse pre-image reconstruction by Lasso regularization. We find that sparse estimation provides better brain state decoding accuracy and a more reproducible pre-image. These two important metrics are combined in an evaluation framework which allow us to optimize both the degree of sparsity and the non-linearity...... of the kernel embedding. The latter result provides evidence of signal manifold non-linearity in the specific fMRI case study....

  20. Efficient Image Retrieval Using Region Based Image Retrieval

    OpenAIRE

    Niket Amoda; Ramesh K Kulkarni

    2013-01-01

    Early image retrieval techniques were based on textual annotation of images. Manual annotation of images is a burdensome and expensive task for a huge image database; it is often introspective, context-sensitive and crude. Content based image retrieval is implemented using the optical constituents of an image such as shape, colour, spatial layout, and texture to exhibit and index the image. The Region Based Image Retrieval (RBIR) system ...

  1. Imaging liver lesions using grating-based phase-contrast computed tomography with bi-lateral filter post-processing.

    Directory of Open Access Journals (Sweden)

    Julia Herzen

    Full Text Available X-ray phase-contrast imaging shows improved soft-tissue contrast compared to standard absorption-based X-ray imaging. Especially the grating-based method seems to be one promising candidate for clinical implementation due to its extendibility to standard laboratory X-ray sources. Therefore the purpose of our study was to evaluate the potential of grating-based phase-contrast computed tomography in combination with a novel bi-lateral denoising method for imaging of focal liver lesions in an ex vivo feasibility study. Our study shows that grating-based phase-contrast CT (PCCT significantly increases the soft-tissue contrast in the ex vivo liver specimens. Combining the information of both signals--absorption and phase-contrast--the bi-lateral filtering leads to an improvement of lesion detectability and higher contrast-to-noise ratios. The normal and the pathological tissue can be clearly delineated and even internal structures of the pathological tissue can be visualized, being invisible in the absorption-based CT alone. Histopathology confirmed the presence of the corresponding findings in the analyzed tissue. The results give strong evidence for a sufficiently high contrast for different liver lesions using non-contrast-enhanced PCCT. Thus, ex vivo imaging of liver lesions is possible with a polychromatic X-ray source and at a spatial resolution of ∼100 µm. The post-processing with the novel bi-lateral denoising method improves the image quality by combining the information from the absorption and the phase-contrast images.
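
    The bi-lateral post-processing filter itself is not specified in this record; as a generic illustration, the sketch below implements a brute-force joint bilateral filter in which the range weight may be computed on a guide image (e.g. the absorption signal) while another image (e.g. the phase signal) is smoothed, so that edges present in the guide survive the smoothing. All parameter values are placeholders.

```python
import numpy as np

def joint_bilateral(img, guide=None, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force (joint) bilateral filter for a 2D array (sketch)."""
    img = img.astype(float)
    guide = img if guide is None else guide.astype(float)
    pad_i = np.pad(img, radius, mode="reflect")
    pad_g = np.pad(guide, radius, mode="reflect")
    acc = np.zeros_like(img)
    wsum = np.zeros_like(img)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sl = (slice(radius + dy, radius + dy + h),
                  slice(radius + dx, radius + dx + w))
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            # range weight from the guide image preserves its edges
            w_range = np.exp(-((pad_g[sl] - guide) ** 2) / (2 * sigma_r ** 2))
            weight = w_spatial * w_range
            acc += weight * pad_i[sl]
            wsum += weight
    return acc / wsum
```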

  2. IMAGE DENOISING BASED ON FINITE RIDGELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    唐永茂; 施鹏飞

    2006-01-01

    The ridgelet transform is a new multiscale geometric analysis (MGA) method that can describe an image effectively at multiple scales and in multiple directions. M. N. Do proposed an invertible, orthogonal implementation of the ridgelet transform with excellent reconstruction properties, the finite ridgelet transform (FRIT). This paper applies the finite ridgelet transform to the denoising of images with prominent line-like edges; experimental results show that it achieves better results than wavelet denoising.

  3. Diagnostic accuracy of late iodine enhancement on cardiac computed tomography with a denoise filter for the evaluation of myocardial infarction.

    Science.gov (United States)

    Matsuda, Takuya; Kido, Teruhito; Itoh, Toshihide; Saeki, Hideyuki; Shigemi, Susumu; Watanabe, Kouki; Kido, Tomoyuki; Aono, Shoji; Yamamoto, Masaya; Matsuda, Takeshi; Mochizuki, Teruhito

    2015-12-01

    We evaluated the image quality and diagnostic performance of late iodine enhancement (LIE) in dual-source computed tomography (DSCT) with low kilo-voltage peak (kVp) images and a denoise filter for the detection of acute myocardial infarction (AMI), in comparison with late gadolinium enhancement (LGE) magnetic resonance imaging (MRI). The Hospital Ethics Committee approved the study protocol. Before discharge, 19 patients who received percutaneous coronary intervention after AMI underwent DSCT and 1.5 T MRI. Immediately after coronary computed tomography (CT) angiography, contrast medium was administered at a slow injection rate. LIE-CT scans were acquired via dual-energy CT and reconstructed as 100-kVp, 140-kVp, and mixed images. An iterative three-dimensional edge-preserving smoothing filter was applied to the 100-kVp images to obtain denoised 100-kVp images. The mixed, 140-kVp, 100-kVp, and denoised 100-kVp images were assessed using the contrast-to-noise ratio (CNR), and their diagnostic performance with respect to MRI and the infarcted volumes were evaluated. Three hundred four segments of 19 patients were evaluated. Fifty-three segments showed LGE in MRI. The median CNR of the mixed, 140-, 100-kVp and denoised 100-kVp images was 3.49, 1.21, 3.57, and 6.08, respectively. The median CNR was significantly higher in the denoised 100-kVp images than in the other three images. PMID:26202159

  4. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  5. Targets Separation and Imaging Method in Sparse Scene Based on Cluster Result of Range Profile Peaks

    Directory of Open Access Journals (Sweden)

    YANG Qiu

    2015-08-01

    Full Text Available This paper focuses on the synthetic aperture radar (SAR) imaging of space-sparse targets such as ships on the sea, and proposes a method for target separation and imaging of sparse scenes based on clustering the peaks of range profiles. Firstly, a wavelet de-noising algorithm is used to preprocess the original echo, and the range profiles at different viewing positions are then obtained by range compression and range migration correction. Peaks of the range profiles are detected by a fast peak detection algorithm based on a second-order difference operator. Targets with sparse energy intervals can be imaged through azimuth compression after clustering the peaks in the range dimension; targets that are not coupled in range energy interval and azimuth synthetic aperture time can be imaged through azimuth compression after clustering the peaks in both the range and azimuth dimensions. Lastly, the effectiveness of the proposed method is validated by simulations. Experimental results demonstrate that space-sparse targets such as ships can be imaged separately and completely with little computation in azimuth compression, and the resulting images are more useful for target recognition.

  6. Imaging based refractometers

    Energy Technology Data Exchange (ETDEWEB)

    Baba, Justin S.

    2015-11-24

    Refractometers for simultaneously measuring the refractive index of a sample over a range of wavelengths of light include dispersive and focusing optical systems. An optical beam including the range of wavelengths is spectrally spread along a first axis and focused along a second axis so as to be incident to an interface between the sample and a prism at a range of angles of incidence including a critical angle for at least one wavelength. In some cases, the prism can have a triangle, parallelogram, trapezoid, or other shape. In some cases, the optical beam can be reflected off of multiple interfaces between the prism and the sample. An imaging detector is situated to receive the spectrally spread and focused light from the interface and form an image corresponding to angle of incidence as a function of wavelength. One or more critical angles are identified and corresponding refractive indices are determined.

  7. A DISTRIBUTED COMPRESSED SENSING APPROACH FOR SPEECH SIGNAL DENOISING

    Institute of Scientific and Technical Information of China (English)

    Ji Yunyun; Yang Zhen

    2011-01-01

    Compressed sensing, a new area of signal processing rising in recent years, seeks to minimize the number of samples that is necessary to be taken from a signal for precise reconstruction. The precondition of compressed sensing theory is the sparsity of signals. In this paper, two methods to estimate the sparsity level of the signal are formulated. And then an approach to estimate the sparsity level directly from the noisy signal is presented. Moreover, a scheme based on distributed compressed sensing for speech signal denoising is described in this work which exploits multiple measurements of the noisy speech signal to construct the block-sparse data and then reconstruct the original speech signal using block-sparse model-based Compressive Sampling Matching Pursuit (CoSaMP) algorithm. Several simulation results demonstrate the accuracy of the estimated sparsity level and that this denoising system for noisy speech signals can achieve favorable performance especially when speech signals suffer severe noise.

  8. SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAIN

    Directory of Open Access Journals (Sweden)

    Sapan Naik

    2013-08-01

    Full Text Available Single image super resolution, which generates a high-resolution image from a given low-resolution image, has recently become a very important research area. Single image super resolution algorithms are mainly based on the wavelet domain or the spatial domain: in the wavelet domain, filters are exploited to model the regularity of natural images, while in the spatial domain, image edges are kept sharp during upsampling. Here a single image super resolution algorithm is presented that works in both the spatial and wavelet domains and takes advantage of both. The algorithm is iterative and uses back projection to minimize the reconstruction error. A wavelet based denoising method is also introduced to remove noise.
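
    A minimal sketch of the iterative back-projection loop mentioned above, assuming a Gaussian blur model and cubic-spline resampling; the wavelet-domain denoising step of the paper is omitted and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def iterative_back_projection(lr, scale=2, n_iter=20, step=1.0, blur=1.0):
    """Upsample, simulate the LR image, and push the error back (sketch)."""
    lr = lr.astype(float)
    hr = zoom(lr, scale, order=3)                         # initial guess
    for _ in range(n_iter):
        # simulate how the current estimate would look at low resolution
        simulated_lr = zoom(gaussian_filter(hr, blur), 1.0 / scale, order=3)
        err = lr - simulated_lr[: lr.shape[0], : lr.shape[1]]
        # back-project the reconstruction error into the estimate
        hr += step * zoom(err, scale, order=3)[: hr.shape[0], : hr.shape[1]]
    return hr
```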

  9. Photography-based image generator

    Science.gov (United States)

    Dalton, Nicholas M.; Deering, Charles S.

    1989-09-01

    A two-channel Photography Based Image Generator system was developed to drive the Helmet Mounted Laser Projector at the Naval Training System Center at Orlando, Florida. This projector is a two-channel system that displays a wide field-of-view color image with a high-resolution inset to efficiently match the pilot's visual capability. The image generator is a derivative of the LTV-developed visual system installed in the A-7E Weapon System Trainer at NAS Cecil Field. The Photography Based Image Generator is based on patented LTV technology for high resolution, multi-channel, real world visual simulation. Special provisions were developed for driving the NTSC-developed and patented Helmet Mounted Laser Projector. These include a special 1023-line raster format, an electronic image blending technique, spherical lens mapping for dome projection, a special computer interface for head/eye tracking and flight parameters, special software, and a number of data bases. Good gaze angle tracking is critical to the use of the NTSC projector in a flight simulation environment. The Photography Based Image Generator provides superior dynamic response by performing a relatively simple perspective transformation on stored, high-detail photography instead of generating this detail by "brute force" computer image generation methods. With this approach, high detail can be displayed and updated at the television field rate (60 Hz).

  10. Signal of Three-axis Magnetic Flux Leakage Denoising based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    严园园; 张海燕; 方晓艳

    2013-01-01

    This paper briefly introduces non-destructive testing of pipelines, studies the characteristics and principle of the three-axis magnetic flux leakage sensor, and discusses the feasibility and advantages of using it for in-service pipeline inspection. The main focus is on how to denoise the three-axis magnetic flux leakage signal. The fundamentals of the wavelet transform and of threshold-function denoising are introduced, and MATLAB is used to simulate denoising with various threshold functions and to denoise the three-axis magnetic flux leakage signal. Satisfactory denoising results are obtained, which demonstrates that wavelet-transform denoising of three-axis magnetic flux leakage signals is feasible.

  11. Content based Image Retrieval from Forensic Image Databases

    OpenAIRE

    Swati A. Gulhane; Dr. Ajay. A. Gurjar

    2015-01-01

    Due to the proliferation of video and image data in digital form, content based image retrieval has become a prominent research topic. In forensic science, digital data such as criminal images, fingerprints and scene images are widely used. Organizing such large image collections and quickly retrieving an image of interest therefore becomes a major issue, and there is a great need for an efficient technique for finding images. In order to find an image, im...

  12. Detector Based Radio Tomographic Imaging

    OpenAIRE

    Yiğitler, Hüseyin; Jäntti, Riku; Kaltiokallio, Ossi; Patwari, Neal

    2016-01-01

    Received signal strength based radio tomographic imaging is a popular device-free indoor localization method which reconstructs the spatial loss field of the environment using measurements from a dense wireless network. Existing methods solve an associated inverse problem using algebraic or compressed sensing reconstruction algorithms. We propose an alternative imaging method that reconstructs spatial field of occupancy using a back-projection based reconstruction algorithm. The introduced sy...

  13. Biogeography based Satellite Image Classification

    CERN Document Server

    Panchal, V K; Kaur, Navdeep; Kundra, Harish

    2009-01-01

    Biogeography is the study of the geographical distribution of biological organisms. The mindset of the engineer is that we can learn from nature. Biogeography Based Optimization is a burgeoning nature inspired technique to find the optimal solution of the problem. Satellite image classification is an important task because it is the only way we can know about the land cover map of inaccessible areas. Though satellite images have been classified in the past by using various techniques, the researchers are always finding alternative strategies for satellite image classification so that they may be prepared to select the most appropriate technique for the feature extraction task in hand. This paper is focused on classification of the satellite image of a particular land cover using the theory of Biogeography based Optimization. The original BBO algorithm does not have the inbuilt property of clustering which is required during image classification. Hence modifications have been proposed to the original algorithm and...

  14. Image Data Bases on Campus.

    Science.gov (United States)

    Kaplan, Reid; Mathieson, Gordon

    1989-01-01

    A description of how image database technology was used to develop two prototypes for academic and administrative applications at Yale University, one using a video data base integration and the other using document-scanning data base technology, is presented. Technical underpinnings for the creation of data bases are described. (Author/MLW)

  15. Weak periodical signal detection based on wavelet threshold de-noising and chaos theory

    Institute of Scientific and Technical Information of China (English)

    邓宏贵; 曹文晖; 杨兵初; 梅卫平; 敖邦乾

    2012-01-01

    Exploiting the multi-resolution property of the wavelet transform and the strong noise immunity and sensitivity to weak periodic signals of chaotic systems, a new weak-signal detection method combining wavelet threshold de-noising with a chaotic system is proposed by improving the wavelet threshold de-noising procedure and the chaotic Duffing oscillator equation. The method uses the smoothing effect of the wavelet transform to perform finite discrete processing of the noisy signal and determines the de-noising depth from the wavelet decomposition scale; the reconstructed signal is then fed into the chaotic system as a perturbation of the periodic driving force. An array of chaotic oscillators is used to detect the weak signal in the noise background, with the Melnikov method serving as the chaos criterion. The method overcomes the blindness in scale selection and the unreasonable threshold choices of earlier wavelet decompositions, as well as the ambiguity in distinguishing the chaotic critical state from the periodic state, and it can detect signals of multiple frequencies. Simulations show that the method is intuitive and efficient, with high detection accuracy: the lowest detectable signal-to-noise ratio reaches -100 dB and the frequency error is about 0.04%, improving the detection of weak signals buried in strong noise.
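
    For illustration only, the sketch below integrates a Holmes-type Duffing oscillator driven by a reference periodic force plus the (denoised) measurement, which is the detection mechanism the abstract describes. The damping, drive frequency and drive amplitude are placeholders that would need tuning near the chaotic threshold, and the Melnikov criterion itself is not implemented.

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing_response(drive_amp, weak_signal, t, k=0.5, omega=1.0):
    """Integrate x'' + k x' - x + x**3 = drive_amp*cos(omega*t) + s(t),
    where s(t) is the denoised measurement added as a perturbation of the
    periodic driving force (Holmes-Duffing detector, sketch)."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(weak_signal, dtype=float)

    def rhs(ti, y):
        x, v = y
        forcing = drive_amp * np.cos(omega * ti) + np.interp(ti, t, s)
        return [v, -k * v + x - x**3 + forcing]

    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0],
                    t_eval=t, max_step=t[1] - t[0])
    # inspecting x(t), x'(t) in the phase plane reveals the chaotic-to-periodic
    # transition that signals the presence of the weak periodic component
    return sol.y[0]
```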

  16. A generalized accelerated proximal gradient approach for total-variation-based image restoration.

    Science.gov (United States)

    Zuo, Wangmeng; Lin, Zhouchen

    2011-10-01

    This paper proposes a generalized accelerated proximal gradient (GAPG) approach for solving total variation (TV)-based image restoration problems. The GAPG algorithm generalizes the original APG algorithm by replacing the Lipschitz constant with an appropriate positive-definite matrix, resulting in faster convergence. For TV-based image restoration problems, we further introduce two auxiliary variables that approximate the partial derivatives. Constraints on the variables can easily be imposed without modifying the algorithm much, and the TV regularization can be either isotropic or anisotropic. As compared with the recently developed APG-based methods for TV-based image restoration, i.e., the monotone version of the two-step iterative shrinkage/thresholding algorithm (MTwIST) and the monotone version of the fast IST algorithm (MFISTA), our GAPG is much simpler as it does not require solving an image denoising subproblem. Moreover, the convergence rate of O(k^(-2)) is maintained by our GAPG, where k is the number of iterations; the cost of each iteration in GAPG is also lower. As a result, in our experiments, our GAPG approach can be much faster than MTwIST and MFISTA. The experiments also verify that our GAPG converges faster than the original APG and MTwIST when they solve identical problems.

  17. Blind Source Separation Based on Wavelet Semi-soft Threshold Denoising

    Institute of Scientific and Technical Information of China (English)

    孟宗; 马钊; 刘东; 李晶

    2016-01-01

    To extract fault feature information from noisy mechanical fault signals, a blind source separation method based on wavelet semi-soft threshold denoising is studied. The fault signals are first denoised with a wavelet semi-soft threshold, and joint approximate diagonalization is then used to perform the blind source separation. Because pre-denoising under noisy conditions is often insufficient to remove all the noise, an additional denoising step is applied after separation to improve the separation performance. Experiments verify the effectiveness and feasibility of the proposed method.
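
    The exact semi-soft threshold used by the authors is not given in this record; a common "firm" form of semi-soft shrinkage, which is zero below a lower threshold, the identity above an upper threshold, and linear in between, is sketched below for reference.

```python
import numpy as np

def semi_soft_threshold(w, t1, t2):
    """Semi-soft (firm) shrinkage of wavelet coefficients, t1 < t2."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    mid = (np.abs(w) > t1) & (np.abs(w) <= t2)
    big = np.abs(w) > t2
    # linear transition between the soft and hard behaviours
    out[mid] = np.sign(w[mid]) * t2 * (np.abs(w[mid]) - t1) / (t2 - t1)
    out[big] = w[big]
    return out
```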

  18. Dictionary learning method for joint sparse representation-based image fusion

    Science.gov (United States)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. The SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. The JSR indicates that different signals from the various sensors of the same scene form an ensemble. These signals have a common sparse component and each individual signal owns an innovation sparse component. The JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), for JSR, we propose a novel dictionary learning method (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure one time with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm which is often used in previous JSR-based fusion algorithms. To capture the image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries. MODJSR is extended to MODGJSR in this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Some experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.

  19. CONTENT BASED BATIK IMAGE RETRIEVAL

    Directory of Open Access Journals (Sweden)

    A. Haris Rangkuti

    2014-01-01

    Full Text Available Content Based Batik Image Retrieval (CBBIR) is a research area that focuses on image processing based on the characteristic motifs of batik. A batik image has a motif that is unique compared with other images; its uniqueness lies in its texture and shape characteristics, which are distinct from those of other images. Studying batik images starts with a preprocessing stage in which all colour images are converted to grayscale. Feature extraction then captures the characteristic motif of each kind of batik using edge detection. After the visually apparent motif characteristics are obtained, four texture feature functions are computed: mean, energy, entropy and standard deviation. Further characteristic functions can be added as needed. The results of these computations are made more specific using the Daubechies type 2 wavelet transform and invariant moments, producing an index value for every type of batik. Because the same motif can appear at different sizes, each motif type is divided into three sizes: small, medium and large. The batik image similarity performance of this method is about 90-92%.
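
    A small sketch of the four texture features named above (mean, energy, entropy and standard deviation), computed from the grayscale histogram of an edge-detected motif image; the 8-bit intensity range and bin count are assumptions.

```python
import numpy as np

def texture_features(gray):
    """Mean, energy, entropy and standard deviation of a grayscale image."""
    g = gray.astype(float).ravel()
    # normalised 256-bin histogram of an assumed 8-bit image
    hist, _ = np.histogram(g, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    return {
        "mean": float(g.mean()),
        "energy": float((hist ** 2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),
        "std": float(g.std()),
    }
```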

  20. A Denoising Autoencoder that Guides Stochastic Search

    OpenAIRE

    Churchill, Alexander W.; Sigtia, Siddharth; Fernando, Chrisantha

    2014-01-01

    An algorithm is described that adaptively learns a non-linear mutation distribution. It works by training a denoising autoencoder (DA) online at each generation of a genetic algorithm to reconstruct a slowly decaying memory of the best genotypes so far. A compressed hidden layer forces the autoencoder to learn hidden features in the training set that can be used to accelerate search on novel problems with similar structure. Its output neurons define a probability distribution that we sample f...

  1. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  2. Metadata for Content-Based Image Retrieval

    OpenAIRE

    Adrian Sterca; Daniela Miron

    2010-01-01

    This paper presents an image retrieval technique that combines content based image retrieval with pre-computed metadata-based image retrieval. The resulting system will have the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  3. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    Science.gov (United States)

    Peña, M.

    2016-10-01

    Achieving acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering such as fluid filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, are raising the need to employ good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt & pepper noise, although impulse noise needs to be previously removed. Cleaned echograms present homogeneous echotraces with outlined edges.
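
    The published AWF additionally estimates the noise level from the envelope of the Sv minima; the sketch below shows only the local-statistics Wiener core (wiener2-style), with the noise power taken as the mean local variance when it is not supplied. Window size and variable names are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(sv, size=5, noise_var=None):
    """Local-statistics Wiener filter for a 2D echogram of Sv values (sketch)."""
    sv = sv.astype(float)
    local_mean = uniform_filter(sv, size)
    local_sq = uniform_filter(sv * sv, size)
    local_var = np.maximum(local_sq - local_mean ** 2, 0.0)
    if noise_var is None:
        # crude noise-power estimate; the paper derives it from the Sv minima
        noise_var = local_var.mean()
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (sv - local_mean)
```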

  4. REVIEW OF PHASE BASED IMAGE MATCHING

    OpenAIRE

    Jaydeep Kale*

    2016-01-01

    This paper reviews phase based image matching methods. A major approach to image matching is to extract feature vectors corresponding to the given images and perform matching based on some distance metric. One of the difficult problems with such feature based image matching is that the matching performance depends on many parameters of the feature extraction process. This paper therefore reviews phase based image matching methods, in which the 2D DFTs of the given images are used to determine resemblance ...

  5. Wavelength conversion based spectral imaging

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin

    resolution for this spectral region. Today, an increasing number of applications exists outside the spectral region covered by Si-based devices, e.g. within cleantech, medical or food imaging. We present a technology based on wavelength conversion which will extend the spectral coverage of state of the art...

  6. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.

  7. Fovea based image quality assessment

    Science.gov (United States)

    Guo, Anan; Zhao, Debin; Liu, Shaohui; Cao, Guangyao

    2010-07-01

    Humans are the ultimate receivers of the visual information contained in an image, so a reasonable method of image quality assessment (IQA) should follow the properties of the human visual system (HVS). In recent years, IQA methods based on HVS models have slowly been replacing classical schemes such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Structural similarity (SSIM), regarded as one of the most popular HVS-based methods for full-reference IQA, clearly improves on traditional metrics, but it does not perform very well when image structure is severely destroyed or masked by noise. In this paper, a new, efficient fovea based structural similarity image quality assessment (FSSIM) is proposed. It adaptively emphasizes distortions at the positions of visual attention and changes the relative importance of the three components of SSIM. FSSIM predicts the quality of an image in three steps. First, it computes the luminance, contrast and structure comparison terms; second, it computes a saliency map by extracting fovea information from the reference image using features of the HVS; third, it pools the three terms according to the processed saliency map. Finally, the widely used LIVE IQA database is employed to evaluate the performance of FSSIM. Experimental results indicate that the consistency and relevance between FSSIM and the mean opinion score (MOS) are clearly better than those of SSIM and PSNR.
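
    For reference, the three SSIM comparison terms that FSSIM re-weights are sketched below with a uniform local window; standard SSIM uses a Gaussian window, and the fovea/saliency pooling of FSSIM is not shown. The constants follow the usual SSIM defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_terms(x, y, win=8, K1=0.01, K2=0.03, L=255.0):
    """Local luminance, contrast and structure comparison maps of SSIM."""
    x, y = x.astype(float), y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx ** 2
    vy = uniform_filter(y * y, win) - my ** 2
    cov = uniform_filter(x * y, win) - mx * my
    sx, sy = np.sqrt(np.maximum(vx, 0)), np.sqrt(np.maximum(vy, 0))
    lum = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    con = (2 * sx * sy + C2) / (vx + vy + C2)
    struct = (cov + C3) / (sx * sy + C3)
    return lum, con, struct   # the SSIM map is lum * con * struct
```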

  8. Noise Reduction of Welding Defect Image Based on NSCT and Anisotropic Diffusion

    Institute of Scientific and Technical Information of China (English)

    吴一全; 万红; 叶志龙; 刚铁

    2014-01-01

    In order to reduce noise effectively in the welding defect image and preserve the minutiae information, a noise reduction method of welding defect image based on nonsubsampled contourlet transform (NSCT) and anisotropic diffusion is proposed. Firstly, an X-ray welding defect image is decomposed by NSCT. Then total variation (TV) model and Catte_PM model are used for the obtained low-pass component and band-pass components, respectively. Finally, the denoised image is synthesized by inverse NSCT. Experimental results show that, compared with the hybrid method of wavelet threshold shrinkage with TV diffusion, the method combining NSCT with P_Laplace diffusion, and the method combining contourlet with TV model and adaptive contrast diffusion, the proposed method has a great improvement in the aspects of subjective visual effect, peak signal-to-noise ratio (PSNR) and mean-square error (MSE). Noise is suppressed more effectively and the minutiae information is preserved better in the image.

  9. Detecting aircrafts from satellite images using saliency and conical pyramid based template representation

    Indian Academy of Sciences (India)

    SAMIK BANERJEE; NITIN GUPTA; SUKHENDU DAS; PINAKI ROY CHOWDHURY; L K SINHA

    2016-10-01

    Automatic target localization in satellite images still remains a challenging problem in the field of computer vision. The issues involved in locating targets in satellite images are viewpoint, spectral (intensity) and scale variations. Diversity in background texture and target clutter also adds to the complexity of localizing aircraft in satellite images. The failure of modern feature extraction and object detection methods highlights the complexity of the problem. In the proposed work, pre-processing techniques, viz. denoising and contrast enhancement, are first used to improve the quality of the images. Then, the concept of unsupervised saliency is used to detect the potential regions of interest, which reduces the search space. Parts from the salient regions are further processed using clustering and morphological processing to get the probable regions of isolated aircraft targets. Finally, a novel conical pyramid based framework for template representation of the target samples is proposed for matching. Experimental results shown on a few satellite images exhibit the superior performance of the proposed methods.

  10. Phase-based binarization of ancient document images: model and applications.

    Science.gov (United States)

    Nafchi, Hossein Ziaei; Moghaddam, Reza Farrahi; Cheriet, Mohamed

    2014-07-01

    In this paper, a phase-based binarization model for ancient document images is proposed, as well as a postprocessing method that can improve any binarization method and a ground truth generation tool. Three feature maps derived from the phase information of an input document image constitute the core of this binarization model. These features are the maximum moment of phase congruency covariance, a locally weighted mean phase angle, and a phase preserved denoised image. The proposed model consists of three standard steps: 1) preprocessing; 2) main binarization; and 3) postprocessing. In the preprocessing and main binarization steps, the features used are mainly phase derived, while in the postprocessing step, specialized adaptive Gaussian and median filters are considered. One of the outputs of the binarization step, which shows high recall performance, is used in a proposed postprocessing method to improve the performance of other binarization methodologies. Finally, we develop a ground truth generation tool, called PhaseGT, to simplify and speed up the ground truth generation process for ancient document images. The comprehensive experimental results on the DIBCO'09, H-DIBCO'10, DIBCO'11, H-DIBCO'12, DIBCO'13, PHIBD'12, and BICKLEY DIARY data sets show the robustness of the proposed binarization method on various types of degradation and document images. PMID:24816587

  11. A PSEUDO RELEVANCE BASED IMAGE RETRIEVAL MODEL

    OpenAIRE

    Kamini Thakur; Preetika Saxena

    2015-01-01

    Image retrieval is a basic requirement nowadays. Content based image retrieval (CBIR) is a popular image retrieval approach in which the target image is retrieved based on useful features of the given image. CBIR is an active and fast-growing research area in both image processing and data mining. In marine ecosystems, the captured images have lower resolution, transformation invariance and translation capabilities. Therefore, accurate image extraction according to the u...

  12. The research of multi-frame target recognition based on laser active imaging

    Science.gov (United States)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

    Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and poor visibility. It can also be used to detect faint targets at long range or small targets in deep space, with the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, because of the long distance, the limited laser energy and atmospheric backscatter, it is impossible to illuminate the whole scene at the same time, which means that the target in every single frame is unevenly or only partly illuminated, making recognition more difficult. At the same time, the speckle noise that is common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. Firstly, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining a homomorphic filter with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched to form a complete, uniformly illuminated scene image. After that, a new target recognition method based on contour moments is applied: the Canny operator is used to obtain contours, and for each contour the seven invariant Hu moments are calculated to generate the feature vector. Finally, the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
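
    As an illustration of the feature-extraction step, the sketch below computes the seven Hu invariant moments of a binary contour/region image directly from normalised central moments; in practice each detected contour would be rasterised (or its moments accumulated) before calling this function.

```python
import numpy as np

def hu_moments(binary):
    """Seven Hu invariant moments of a binary (0/1) 2D image."""
    y, x = np.nonzero(binary)
    m00 = float(len(x))
    xc, yc = x.mean(), y.mean()

    def mu(p, q):                      # central moments
        return ((x - xc) ** p * (y - yc) ** q).sum()

    def eta(p, q):                     # normalised central moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```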

  13. Performance Evaluation of Image Fusion for Impulse Noise Reduction in Digital Images Using an Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    M PremKumar

    2011-07-01

    Full Text Available Image fusion is the process of combining two or more images into a single image while retaining the important features of each image. Multiple image fusion is an important technique used in military, remote sensing and medical applications. In this paper, image fusion based on local area variance is used to combine the de-noised images from two different filtering algorithms, the Vector Median Filter (VMF) and the Spatial Median Filter (SMF). The performance of the image fusion is evaluated using a new non-reference image quality assessment, the Gradient based Image Quality Index (GIQI), to estimate how well the important information in the source images is represented by the fused image. Experimental results show that GIQI is better for non-reference image fusion performance assessment than the universal image quality index (UIQI).
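
    A sketch of one plausible local-area-variance fusion rule, assuming the two denoised images are already co-registered: at each pixel the source with the larger local variance (i.e. more local detail) is selected. The window size and the hard selection rule are assumptions; a weighted combination is an obvious variant.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_by_local_variance(img_a, img_b, size=5):
    """Pixel-wise fusion of two co-registered images by local variance."""
    a, b = img_a.astype(float), img_b.astype(float)

    def local_var(x):
        return uniform_filter(x * x, size) - uniform_filter(x, size) ** 2

    # keep the pixel from whichever source carries more local detail
    choose_a = local_var(a) >= local_var(b)
    return np.where(choose_a, a, b)
```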

  14. Region-Based Image-Fusion Framework for Compressive Imaging

    Directory of Open Access Journals (Sweden)

    Yang Chen

    2014-01-01

    Full Text Available A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the image fusion. Firstly, the compressed sensing theory and normalized cut theory are introduced. Then a region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.

  15. Object-Based Image Compression

    Science.gov (United States)

    Schmalz, Mark S.

    2003-01-01

    Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral

  16. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so the invoice font image can be treated as a binary image. To embed watermarks into the invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves the watermarking capacity.

  17. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so the invoice font image can be treated as a binary image. To embed watermarks into the invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves the watermarking capacity.

  18. A Novel and Efficient Lifting Scheme based Super Resolution Reconstruction for Early Detection of Cancer in Low Resolution Mammogram Images

    Directory of Open Access Journals (Sweden)

    Liyakathunisa

    2011-05-01

    Full Text Available Mammography is the most effective method for early detection of breast diseases. However, the typical diagnostic signs, such as masses and microcalcifications, are difficult to detect because mammograms are low-contrast and noisy images. We concentrate on a special case of super resolution reconstruction for early detection of cancer from low resolution mammogram images. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. This paper describes a novel approach for enhancing the resolution of mammographic images. We propose an efficient lifting wavelet based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the digitized low resolution mammographic images are decomposed into many levels to obtain different frequency bands. We use Daubechies (D4) lifting schemes to decompose low resolution mammogram images into multilevel scale and wavelet coefficients. Then our proposed novel soft thresholding technique is used to remove the noisy coefficients by fixing an optimum threshold value. In order to obtain an image of higher resolution, adaptive interpolation is applied. Our proposed lifting wavelet transform based restoration and adaptive interpolation preserves the edges as well as smoothens the image without introducing artifacts. The proposed algorithm avoids the application of iterative methods, reduces the complexity of calculation and applies to large-dimension low-resolution images. Experimental results show that the proposed approach succeeds in obtaining a high-resolution mammogram image with a high PSNR, ISNR ratio and a good visual quality.

  19. Multiscale Image Based Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Strzodka, Robert

    2006-01-01

    We present MIBFV, a method to produce real-time, multiscale animations of flow datasets. MIBFV extends the attractive features of the Image-Based Flow Visualization (IBFV) method, i.e. dense flow domain coverage with flow-aligned noise, real-time animation, implementation simplicity, and few (or no)

  20. A Miniature-Based Image Retrieval System

    OpenAIRE

    Islam, Md Saiful; Ali, Md. Haider

    2010-01-01

    Due to the rapid development of World Wide Web (WWW) and imaging technology, more and more images are available in the Internet and stored in databases. Searching the related images by the querying image is becoming tedious and difficult. Most of the images on the web are compressed by methods based on discrete cosine transform (DCT) including Joint Photographic Experts Group(JPEG) and H.261. This paper presents an efficient content-based image indexing technique for searching similar images ...

  1. Atomic norm denoising with applications to line spectral estimation

    CERN Document Server

    Bhaskar, Badri Narayan; Recht, Benjamin

    2012-01-01

    The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectral estimation that provides theoretical guarantees for the mean-squared-error performance in the presence of noise and without advance knowledge of the model order. We propose an abstract theory of denoising with atomic norms and specialize this theory to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared error. We show that the associated convex optimization problem, called "Atomic norm Soft Thresholding" (AST), can be solved in polynomial time via semidefinite programming. For very large scale problems we provide an alternative, efficient algorithm, called "Discretiz...

  2. Denoising Message Passing for X-ray Computed Tomography Reconstruction

    CERN Document Server

    Perelli, Alessandro; Can, Ali; Davies, Mike E

    2016-01-01

    X-ray Computed Tomography (CT) reconstruction from sparse number of views is becoming a powerful way to reduce either the radiation dose or the acquisition time in CT systems but still requires a huge computational time. This paper introduces an approximate Bayesian inference framework for CT reconstruction based on a family of denoising approximate message passing (DCT-AMP) algorithms able to improve both the convergence speed and the reconstruction quality. Approximate Message Passing for Compressed Sensing has been extensively analysed for random linear measurements but there are still not clear solutions on how AMP should be modified and how it performs with real world problems. In particular to overcome the convergence issues of DCT-AMP with structured measurement matrices, we propose a disjoint preconditioned version of the algorithm tailored for both the geometric system model and the noise model. In addition the Bayesian DCT-AMP formulation allows to measure how the current estimate is close to the pr...

  3. Compressed Sensing for Denoising in Adaptive System Identification

    CERN Document Server

    Hosseini, Seyed Hossein

    2012-01-01

    We propose a new technique for adaptive identification of sparse systems based on the compressed sensing (CS) theory. We manipulate the transmitted pilot (input signal) and the received signal such that the weights of adaptive filter approach the compressed version of the sparse system instead of the original system. To this end, we use random filter structure at the transmitter to form the measurement matrix according to the CS framework. The original sparse system can be reconstructed by the conventional recovery algorithms. As a result, the denoising property of CS can be deployed in the proposed method at the recovery stage. The experiments indicate significant performance improvement of proposed method compared to the conventional LMS method which directly identifies the sparse system. Furthermore, at low levels of sparsity, our method outperforms a specialized identification algorithm that promotes sparsity.

  4. Image Retrieval Based on Fractal Dictionary Parameters

    OpenAIRE

    Yuanyuan Sun; Rudan Xu; Lina Chen; Xiaopeng Hu

    2013-01-01

    Content-based image retrieval is a branch of computer vision. It is important for efficient management of a visual database. In most cases, image retrieval is based on image compression. In this paper, we use a fractal dictionary to encode images. Based on this technique, we propose a set of statistical indices for efficient image retrieval. Experimental results on a database of 416 texture images indicate that the proposed method provides a competitive retrieval rate, compared to the existi...

  5. 基于小波消噪的植物电信号频谱特征分析%The Analysis on Spectrum Characteristic of Plant Electrical Signal Based on Wavelet De-Noising

    Institute of Scientific and Technical Information of China (English)

    张晓辉; 余宁梅; 习岗; 孟晓丽

    2011-01-01

    The paper studies the basic characteristics and the changing laws of aloe electrical signals under different temperatures based on the wavelet soft threshold de-noising method and the Fast Fourier Transform. The spectral edge frequency (SEF), spectral gravity frequency (SGF) and power spectral entropy (PSE) of plant electrical signals are used to study the changes of the power spectrum of aloe electrical signals under different temperatures. The results show that the aloe electrical signal has a magnitude on the order of millivolts and a frequency below 5 Hz. The SEF and SGF in aloe leaves move toward higher frequencies as the temperature increases, and the PSE of the electrical signal increases dramatically. The study reveals that the SEF, SGF and PSE have a consistent trend of change and that there is a significant correlation between PSE and SGF during the process of raising temperature. It is considered that the changes of the PSE and SGF in aloe leaves can be used as sensitive indices of external environment change in leaf cells, and thus be used to implement scientific regulation of the physiological and biochemical processes of plant growth and development.
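
    The three spectral indices named in this abstract can be computed from a periodogram as in the hedged sketch below; the exact definitions used by the authors (for instance the percentile defining the spectral edge frequency) are not stated here, so the 95% edge and the simple FFT periodogram are assumptions.

    ```python
    # Sketch of the spectral indices: SEF, SGF (spectral centroid) and PSE.
    import numpy as np

    def spectral_indices(signal, fs, edge=0.95):
        spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        p = spectrum / (np.sum(spectrum) + 1e-12)          # normalised power distribution
        sef = freqs[np.searchsorted(np.cumsum(p), edge)]   # spectral edge frequency (95% assumed)
        sgf = np.sum(freqs * p)                            # spectral gravity (centroid) frequency
        pse = -np.sum(p * np.log(p + 1e-12))               # power spectral entropy
        return sef, sgf, pse
    ```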

  6. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  7. A Review of Decision Based Impulse Noise Removing Algorithms

    Directory of Open Access Journals (Sweden)

    Snehal Ambulkar

    2014-04-01

    Full Text Available Noise is an unwanted factor in digital images and videos, hiding details and destroying image information. Hence denoising is of great importance to restore the details and to improve quality measures. This paper takes a look at different types of noise found in digital images, denoising domains, and the classification of denoising filters. Some denoising filters like the Median filter (MF), Adaptive median filter (AMF) and Simple adaptive median filter (SAMF) are described and compared briefly. A new approach is proposed for video denoising using a combination of median filters with multiple views.
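
    A minimal illustration of the kind of filtering being reviewed: the sketch below applies SciPy's median filter and replaces only pixels that look like salt-and-pepper impulses. It is a generic stand-in, not the specific MF/AMF/SAMF algorithms compared in the paper.

    ```python
    # Generic decision-based impulse removal sketch (not the reviewed algorithms).
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_impulse_noise(image, window=3):
        med = median_filter(image, size=window)
        # Treat saturated pixels as salt-and-pepper impulses (assumes 8-bit data)
        impulses = (image == 0) | (image == 255)
        cleaned = image.copy()
        cleaned[impulses] = med[impulses]
        return cleaned
    ```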

  8. A 1D wavelet filtering for ultrasound images despeckling

    Science.gov (United States)

    Dahdouh, Sonia; Dubois, Mathieu; Frenoux, Emmanuelle; Osorio, Angel

    2010-03-01

    Ultrasound image appearance is characterized by speckle, shadows, signal dropout and low contrast, which make such images really difficult to process and lead to a very poor signal to noise ratio. Therefore, for most imaging applications, a denoising step is necessary before medical imaging algorithms can be applied successfully to such images. However, due to speckle statistics, denoising and enhancing edges on these images without inducing additional blurring is a real challenging problem on which usual filters often fail. To deal with such problems, a large number of papers work on B-mode images, considering that the noise is purely multiplicative. Making such an assertion could be misleading, because of internal pre-processing such as log compression which is done in the ultrasound device. To address those questions, we designed a novel filtering method based on the 1D radiofrequency signal. Indeed, since B-mode images are initially composed of 1D signals and since the log compression made by ultrasound devices modifies noise statistics, we decided to filter directly the 1D radiofrequency signal envelope before log compression and image reconstitution, in order to conserve as much information as possible. A bi-orthogonal wavelet transform is applied to the log transform of each signal and an adaptive 1D split and merge like algorithm is used to denoise wavelet coefficients. Experiments were carried out on synthetic data sets simulated with the Field II simulator and results show that our filter outperforms classical speckle filtering methods like Lee, non-local means or SRAD filters.
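
    The core idea, filtering each 1D envelope line in the log domain with a biorthogonal wavelet before image formation, can be sketched as follows. The 'bior4.4' wavelet and the universal soft threshold are assumptions standing in for the paper's adaptive split-and-merge coefficient processing.

    ```python
    # Hedged sketch of per-line RF envelope denoising in the log domain.
    import numpy as np
    import pywt

    def denoise_rf_line(envelope, wavelet='bior4.4', level=4):
        log_env = np.log(np.asarray(envelope, dtype=float) + 1e-6)
        coeffs = pywt.wavedec(log_env, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate (assumption)
        thr = sigma * np.sqrt(2 * np.log(len(log_env)))
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        return np.exp(pywt.waverec(coeffs, wavelet))[:len(envelope)]
    ```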

  9. Restoration of images with rotated shapes

    Science.gov (United States)

    Setzer, S.; Steidl, G.; Teuber, T.

    2008-07-01

    Methods for image restoration which respect edges and other important features are of fundamental importance in digital image processing. In this paper, we present a novel technique for the restoration of images containing rotated (linearly transformed) rectangular shapes which avoids the round-off effects at vertices produced by known edge-preserving denoising techniques. Following an idea of Berkels et al. our approach is also based on two steps: the determination of the angles related to the rotated shapes and a subsequent restoration step which incorporates the knowledge of the angles. However, in contrast to Berkels et al., we find the smoothed rotation angles of the shapes by minimizing a simple quadratic functional without constraints which involves only first order derivatives so that we finally have to solve only a linear system of equations. Moreover, we propose to perform the restoration step either by quadratic programming or by solving an anisotropic diffusion equation. We focus on a discrete approach which approximates derivatives by finite differences. Particular attention is paid to the choice of the difference filters. We prove some relations concerning the preservation of rectangular shapes for our discrete setting. Finally, we present numerical examples for the denoising of artificial images with rotated rectangles and parallelograms and for the denoising of a real-world image.

  10. Content based Image Retrieval from Forensic Image Databases

    Directory of Open Access Journals (Sweden)

    Swati A. Gulhane

    2015-03-01

    Full Text Available Due to the proliferation of video and image data in digital form, content based image retrieval has become a prominent research topic. In forensic sciences, digital data have been widely used, such as criminal images, fingerprints, scene images and so on. Therefore, the management of such large image data becomes a big issue, such as how to find an image of interest quickly. There is a great need for developing an efficient technique for finding the images. In order to find an image, the image has to be represented with certain features. Color, texture and shape are three important visual features of an image. Searching for images using color, texture and shape features has attracted much attention. There are many content based image retrieval techniques in the literature. This paper gives an overview of different existing methods used for content based image retrieval and also suggests an efficient image retrieval method for a digital image database of criminal photos, using dynamic dominant color, texture and shape features of an image, which will give an effective retrieval result.

  11. Performance of Various Order Statistics Filters in Impulse and Mixed Noise Removal for RS Images

    Directory of Open Access Journals (Sweden)

    Mrs V.Radhika

    2010-12-01

    Full Text Available Remote sensing images (ranging from satellite to seismic) are affected by a number of noise types such as interference, impulse and speckle noise. Image denoising is one of the traditional problems in digital image processing, and it plays a vital role as a pre-processing step in a number of image and video applications. Image denoising still remains a challenging research area because noise removal introduces artifacts and causes blurring of the images. This study is done with the intention of designing the best algorithm for impulsive noise reduction in an industrial environment. A review of typical impulsive noise reduction systems which are based on order statistics is done and particularized for the described situation. Finally, computational aspects are analyzed in terms of PSNR values and some solutions are proposed.
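
    For reference, the PSNR figure of merit used to compare the filters can be computed as in the following snippet (standard definition, shown here for 8-bit images).

    ```python
    # Standard peak signal-to-noise ratio between a reference and a filtered image.
    import numpy as np

    def psnr(reference, filtered, peak=255.0):
        mse = np.mean((reference.astype(float) - filtered.astype(float)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```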

  12. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called as image features, to search images from large scale image databases according to users' requests in the form of a query image. However, most of current CBIR systems require a distance computation of image character feature vectors to perform query, and the distance computations can be time consuming when the number of image character features grows large, and thus this limits the usability of the systems. In this presentation, we propose a novel framework which uses a high dimensional database to index the image character features to improve the accuracy and retrieval speed of a CBIR in integrated RIS/PACS.

  13. Denoising solar radiation data using coiflet wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia); Hasan, Mohammad Khatim, E-mail: khatim@ftsm.ukm.my [Jabatan Komputeran Industri, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Sulaiman, Jumat, E-mail: jumat@ums.edu.my [Program Matematik dengan Ekonomi, Universiti Malaysia Sabah, Beg Berkunci 2073, 88999 Kota Kinabalu, Sabah (Malaysia); Ismail, Mohd Tahir [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM Minden, Penang (Malaysia)

    2014-10-24

    Signal denoising and smoothing plays an important role in processing a given signal obtained either from experiment or from data collection through observations. Collected data are usually a mixture of true data and some error or noise. This noise might come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing purposes, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate over time, there exist unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results clearly show that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.

  14. 脉冲漏磁信号的EMD小波阈值去噪研究%Research on EMD Wavelet Threshold Denoising of Pulse Magnetic Flux Leakage Signal

    Institute of Scientific and Technical Information of China (English)

    张韬; 左宪章; 田贵云; 张云; 费骏骉

    2012-01-01

    In order to extract defect information accurately from the Pulse Magnetic Flux Leakage (PMFL) signal, a new wavelet threshold denoising method is presented based on Empirical Mode Decomposition (EMD). The Smoothly Clipped Absolute Deviation (SCAD) rule is introduced to optimize the conventional wavelet soft and hard threshold functions. The new denoising method is applied to denoise the Pulse Magnetic Flux Leakage (PMFL) signal. Experimental results show that the new method not only removes the noise in the signal, but also that its denoising performance for pulse noise is better than that of the conventional wavelet threshold denoising method.
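
    A hedged sketch of the EMD-plus-wavelet-threshold combination is given below: each intrinsic mode function is wavelet-thresholded and the results are summed. The PyEMD package and a plain universal soft threshold are assumptions; the paper's SCAD-optimised threshold function is not reproduced.

    ```python
    # Sketch only: EMD decomposition, per-IMF wavelet soft thresholding, recombination.
    import numpy as np
    import pywt
    from PyEMD import EMD  # assumption: the PyEMD (EMD-signal) package is available

    def emd_wavelet_denoise(signal, wavelet='db4', level=3):
        signal = np.asarray(signal, dtype=float)
        modes = EMD().emd(signal)
        cleaned = modes[-1].copy()          # keep the final (residue-like) component untouched
        for imf in modes[:-1]:
            coeffs = pywt.wavedec(imf, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(imf)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            cleaned += pywt.waverec(coeffs, wavelet)[:len(signal)]
        return cleaned
    ```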

  15. A multiple de-noising method based on energy-zero-product%一种基于能零积的综合去噪方法

    Institute of Scientific and Technical Information of China (English)

    冯纪强; 徐晨; 张维强

    2007-01-01

    Full Text Available An improved speech de-noising method that modifies the spectral subtraction technique based on the energy-zero-product is presented. Firstly, the power spectrum of the noisy speech and its threshold can be adjusted via the short-time energy-zero-product, so that accurate speech begin and end points as well as an accurate noise power spectrum estimate are obtained. Secondly, during de-noising with the modified spectral subtraction technique, different weights are assigned to the noise power spectrum based on the different characteristics of the noisy signal in the speech and non-speech segments, achieving better spectral subtraction. Finally, residual noise reduction is applied to every three consecutive frames to further remove residual noise. Simulation experiment results illustrate that this method is more effective than the conventional spectral subtraction methods using a fixed noise power spectrum estimate.

  16. Independent component analysis and decision trees for ECG holter recording de-noising.

    Directory of Open Access Journals (Sweden)

    Jakub Kuzilek

    Full Text Available We have developed a method focusing on ECG signal de-noising using Independent Component Analysis (ICA). This approach combines JADE source separation and a binary decision tree for identification and subsequent ECG noise removal. In order to test the efficiency of this method, a comparison to standard filtering with a wavelet-based de-noising method was performed. Freely available data from the Physionet medical data storage were evaluated. The evaluation criterion was the root mean square error (RMSE) between the original ECG and the filtered data contaminated with artificial noise. The proposed algorithm achieved comparable results for standard noises (power line interference, baseline wander, EMG), but significantly better results were achieved when an uncommon noise (electrode cable movement artefact) was compared.

  17. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full reference Image quality assessment (IQA). An efficient image signature based describer is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance difference on visual quality is highlighted at larger saliency values, which usually correspond to foreground objects. Experimental results on the LIVE database release 2 show that by integrating the effects of image signature based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics but with lower complexity.
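
    The weighting idea can be illustrated with a few lines of code: squared luminance differences are weighted by a saliency map before averaging. The saliency model itself (the image-signature describer) is treated as a given input here, so this is only a sketch of the metric's structure, not the published ISMSE definition.

    ```python
    # Sketch: saliency-weighted squared-error score between reference and distorted images.
    import numpy as np

    def saliency_weighted_mse(reference, distorted, saliency):
        w = saliency / (np.sum(saliency) + 1e-12)                    # normalise the saliency map
        diff = (reference.astype(float) - distorted.astype(float)) ** 2
        return float(np.sum(w * diff))                               # salient regions dominate the score
    ```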

  18. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    Energy Technology Data Exchange (ETDEWEB)

    Du, Hongchu, E-mail: h.du@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Jülich Research Centre, Jülich, 52425 (Germany); Central Facility for Electron Microscopy (GFE), RWTH Aachen University, Aachen 52074 (Germany); Peter Grünberg Institute, Jülich Research Centre, Jülich 52425 (Germany)

    2015-04-15

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM) either for a higher visual quality or for a more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviation at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on widely used techniques of low-pass filter and Wiener filter, which can efficiently reduce noise without noticeable artifacts even in HR(S)TEM micrographs with contrast of variation of background and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM.

  19. Content Base Image Retrieval Using Phong Shading

    OpenAIRE

    Uday Pratap Singh; Sanjeev Jain; Gulfishan Firdose Ahmed

    2010-01-01

    The digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet the user's demand, so there is a need to develop an efficient system for content based image retrieval. Content based image retrieval means retrieval of images from a database on the basis of visual features of the image, such as color, texture etc. In our proposed method, features are extracted after applying Phong shading to the input image. Phong shading, flattering ou...

  20. Research on Split Augmented Largrangian Shrinkage Algorithm in Magnetic Resonance Imaging Based on Compressed Sensing

    Institute of Scientific and Technical Information of China (English)

    ZHENG Qing-bin; DONG En-qing; YANG Pei; LIU Wei; JIA Da-yu; SUN Hua-kui

    2014-01-01

    This paper aims to meet the requirements of reducing the scanning time of magnetic resonance imaging (MRI), accelerating MRI and reconstructing a high quality image from less acquisition data as much as possible. MRI method based on compressed sensing (CS) with multiple regularizations (two regularizations including total variation (TV) norm and L1 norm or three regularizations consisting of total variation, L1 norm and wavelet tree structure) is proposed in this paper, which is implemented by applying split augmented lagrangian shrinkage algorithm (SALSA). To solve magnetic resonance image reconstruction problems with linear combinations of total variation and L1 norm, we utilized composite split denoising (CSD) to split the original complex problem into TV norm and L1 norm regularization subproblems which were simple and easy to be solved respectively in this paper. The reconstructed image was obtained from the weighted average of solutions from two subproblems in an iterative framework. Because each of the splitted subproblems can be regarded as MRI model based on CS with single regularization, and for solving the kind of model, split augmented lagrange algorithm has advantage over existing fast algorithm such as fast iterative shrinkage thresholding(FIST) and two step iterative shrinkage thresholding (TwIST) in convergence speed. Therefore, we proposed to adopt SALSA to solve the subproblems. Moreover, in order to solve magnetic resonance image reconstruction problems with linear combinations of total variation, L1 norm and wavelet tree structure, we can split the original problem into three subproblems in the same manner, which can be processed by existing iteration scheme. A great deal of experimental results show that the proposed methods can effectively reconstruct the original image. Compared with existing algorithms such as TVCMRI, RecPF, CSA, FCSA and WaTMRI, the proposed methods have greatly improved the quality of the reconstructed images and have

  1. A New Denoising Technique for Capillary Electrophoresis Signals

    Institute of Scientific and Technical Information of China (English)

    王瑛; 莫金垣

    2002-01-01

    Capillary electrophoresis (CE) is a powerful analytical tool in chemistry; thus, it is valuable to solve the denoising of CE signals. A new denoising method called MWDA, which employs the Mexican Hat wavelet, is presented. It is an efficient chemometrics technique and has been applied successfully in processing CE signals. Useful information can be extracted even from signals with S/N = 1. After denoising, the peak positions are unchanged and the relative errors of peak height are less than 3%.

  2. Denoising technique based on cascaded filtering of particle filter and ANFIS%粒子滤波和ANFIS级联滤波的去噪技术

    Institute of Scientific and Technical Information of China (English)

    刘宇; 曾燎燎; 路永乐; 黎蕾蕾; 潘英俊

    2012-01-01

    To achieve state estimation for the nonlinear, non-Gaussian systems encountered in practical applications, an ANFIS-particle filter model is established by combining the advantages of particle filtering for nonlinear estimation with the nonlinear approximation capability of the adaptive neuro-fuzzy inference system (ANFIS). The model first uses ANFIS to eliminate the influence of colored noise in the measurement signal, and then applies the particle filter to obtain the optimal state estimate, thereby further improving estimation accuracy. Simulation results show that, compared with the particle filter alone, the cascaded ANFIS-particle filter reduces the mean error by 65% and the variance by 74.4%. The ANFIS-particle filter provides significant noise cancellation for strongly nonlinear systems and greatly improves state estimation accuracy, which verifies the effectiveness of the cascaded filtering model.

  3. NEW METHOD OF EXTRACTING WEAK FAILURE INFORMATION IN GEARBOX BY COMPLEX WAVELET DENOISING

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhixin; XU Jinwu; YANG Debin

    2008-01-01

    The extraction of weak failure information is always a difficulty and a focus of fault detection. Aiming at the specific statistical properties of the complex wavelet coefficients of gearbox vibration signals, a new signal-denoising method which uses a local adaptive algorithm based on the dual-tree complex wavelet transform (DT-CWT) is introduced to extract weak failure information in gears, especially impulse components. By taking into account the non-Gaussian probability distribution and the statistical dependencies among the wavelet coefficients of some signals, and by taking advantage of the near shift-invariance of the DT-CWT, a higher signal-to-noise ratio (SNR) than with common wavelet denoising methods can be obtained. Experiments on extracting periodic impulses in gearbox vibration signals indicate that the method can extract incipient fault features and hidden information from heavy noise, and it has an excellent effect on identifying weak feature signals in gearbox vibration signals.

  4. Graph Cuts based Image Segmentation using Fuzzy Rule Based System

    OpenAIRE

    Khokher, M. R.; A. Ghafoor; A.M. Siddiqui

    2012-01-01

    This work deals with the segmentation of gray scale, color and texture images using graph cuts. From input image, a graph is constructed using intensity, color and texture profiles of the image simultaneously. Based on the nature of image, a fuzzy rule based system is designed to find the weight that should be given to a specific image feature during graph development. The graph obtained from the fuzzy rule based weighted average of different image features is further used in normalized graph...

  5. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    Science.gov (United States)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to decompose the chaotic signals and construct multidimensional input vectors, firstly, on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which means that a self-adapting denoising is carried out for the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs compose the new denoised chaotic signal. Experiments on the Lorenz chaotic signal with different added Gaussian noises and on the monthly observed chaotic sequence of sunspots were carried out. The results proved that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).

  6. A Tracking Filter in Spherical Coordinates Enhanced by De-noising of Converted Doppler Measurements

    Institute of Scientific and Technical Information of China (English)

    ZHOU Gongjian; YU Changjun; CUI Naigang; QUAN Taifan

    2012-01-01

    The tracking problem in spherical coordinates with range rate (Doppler) measurements, which would have errors correlated to the range measurement errors, is investigated in this paper. The converted Doppler measurements, constructed by the product of the Doppler measurements and range measurements, are used to replace the original Doppler measurements. A de-noising method based on an unbiased Kalman filter (KF) is proposed to reduce the converted Doppler measurement errors before updating the target states for the constant velocity (CV) model. The states from the de-noising filter are then combined with the Cartesian states from the converted measurement Kalman filter (CMKF) to produce final state estimates. The nonlinearity of the de-noising filter states is handled by expanding them around the Cartesian states from the CMKF in a Taylor series up to the second order term. In the meantime, the correlation between the two filters caused by the common range measurements is handled by a minimum mean squared error (MMSE) estimation-based method. These result in a new tracking filter, CMDN-EKF2. Monte Carlo simulations demonstrate that the proposed tracking filter can provide efficient and robust performance with a modest computational cost.

  7. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    OpenAIRE

    Lei Guo; Weina Zhang; Wei Quan

    2012-01-01

    The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large ang...

  8. Multi dose computed tomography image fusion based on hybrid sparse methodology.

    Science.gov (United States)

    Venkataraman, Anuyogam; Alirezaie, Javad; Babyn, Paul; Ahmadian, Alireza

    2014-01-01

    With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining higher quality image with lower exposure to radiation has become a highly challenging task in image processing. In this paper, a novel sparse fusion algorithm is proposed to address the problem of lower Signal to Noise Ratio (SNR) in low dose CT images. Initial fused image is obtained by combining low dose and medium dose images in sparse domain, utilizing the Dual Tree Complex Wavelet Transform (DTCWT) dictionary which is trained by high dose image. And then, the strongly focused image is obtained by determining the pixels of source images which have high similarity with the pixels of the initial fused image. Final denoised image is obtained by fusing strongly focused image and decomposed sparse vectors of source images, thereby preserving the edges and other critical information needed for diagnosis. This paper demonstrates the effectiveness of the proposed algorithm both quantitatively and qualitatively. PMID:25570844

  9. Performance of Various Order Statistics Filters in Impulse and Mixed Noise Removal for RS Images

    Directory of Open Access Journals (Sweden)

    V.Radhika

    2011-02-01

    Full Text Available Remote sensing images (ranging from satellite to seismic) are affected by a number of noise types such as interference, impulse and speckle noise. Image denoising is one of the traditional problems in digital image processing, and it plays a vital role as a pre-processing step in a number of image and video applications. Image denoising still remains a challenging research area because noise removal introduces artifacts and causes blurring of the images. This study is done with the intention of designing the best algorithm for impulsive noise reduction in an industrial environment. A review of typical impulsive noise reduction systems which are based on order statistics is done and particularized for the described situation. Finally, computational aspects are analyzed in terms of PSNR values and some solutions are proposed.

  10. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the image matching process is mostly based on image statistical information. One aspect neglected by all these methods is that there is much fuzzy information contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory is capable of describing well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good performance results. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicability.

  11. CONTENT BASED IMAGE RETRIEVAL : A REVIEW

    OpenAIRE

    Shereena V.B; Julie M.David

    2014-01-01

    In a content-based image retrieval system (CBIR), the main issue is to extract the image features that effectively represent the image contents in a database. Such an extraction requires a detailed evaluation of retrieval performance of image features. This paper presents a review of fundamental aspects of content based image retrieval including feature extraction of color and texture features. Commonly used color features including color moments, color histogram and color corr...

  12. Denoising of ECG -- A discrete time approach using DWT.

    OpenAIRE

    Munzaleen Rashid Bhat; Virk Rana

    2015-01-01

    This paper is about denoising of ECG signals using the discrete wavelet transform (DWT). ECG signals are taken and noise at different frequencies is generated and superimposed on the original ECG signal. The high frequency noise is at 4000 Hz and the power line interference is at 50 Hz. Decomposition of the noisy signal is achieved through wavelet packets. The wavelet packets are reconstructed and appropriate wavelet packets are combined to obtain a ...
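
    A minimal wavelet-packet denoising pass of the kind described might look like the sketch below; the 'db6' wavelet, the decomposition depth and the universal soft threshold are assumptions for illustration, not the paper's exact packet selection.

    ```python
    # Sketch: decompose the ECG into wavelet packets, soft-threshold the detail leaves, recombine.
    import numpy as np
    import pywt

    def ecg_wp_denoise(ecg, wavelet='db6', level=4):
        ecg = np.asarray(ecg, dtype=float)
        wp = pywt.WaveletPacket(data=ecg, wavelet=wavelet, maxlevel=level)
        leaves = wp.get_level(level, order='natural')
        sigma = np.median(np.abs(leaves[-1].data)) / 0.6745   # noise estimate from the finest leaf
        thr = sigma * np.sqrt(2 * np.log(len(ecg)))
        for node in leaves:
            if node.path != 'a' * level:                      # leave the coarse approximation untouched
                node.data = pywt.threshold(node.data, thr, mode='soft')
        return wp.reconstruct(update=True)[:len(ecg)]
    ```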

  13. Feasibility of RFID signal denoising using neural network

    OpenAIRE

    Vojtěch, Lukáš

    2010-01-01

    Radio Frequency Identification signal denoising can be a promising method for future intelligent Radio Frequency Identification readers with high reading distance capability. This paper deals with Group Method of Data Handling neural network denoising filter experiments. The probability learning capability of the Group Method of Data Handling filters is an effective instrument in more exacting applications in comparison with classical Finite Impulse Respo...

  14. Denoising without access to clean data using a partitioned autoencoder

    OpenAIRE

    Stowell, Dan; Turner, Richard E

    2015-01-01

    Training a denoising autoencoder neural network requires access to truly clean data, a requirement which is often impractical. To remedy this, we introduce a method to train an autoencoder using only noisy data, having examples with and without the signal class of interest. The autoencoder learns a partitioned representation of signal and noise, learning to reconstruct each separately. We illustrate the method by denoising birdsong audio (available abundantly in uncontrolled noisy datasets) u...

  15. Rapid Feature Learning with Stacked Linear Denoisers

    CERN Document Server

    Xu, Zhixiang Eddie; Sha, Fei

    2011-01-01

    We investigate unsupervised pre-training of deep architectures as feature generators for "shallow" classifiers. Stacked Denoising Autoencoders (SdA), when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy - however, at the price of a substantial increase in computational cost. In this paper we create a simple algorithm which mimics the layer by layer training of SdAs. However, in contrast to SdAs, our algorithm requires no training through gradient descent as the parameters can be computed in closed-form. It can be implemented in less than 20 lines of MATLAB(TM) and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably improves the results of SVM classification significantly on all our data sets - often outperforming SdAs and even deep neural networks in three out of four deep learning benchmarks.
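
    The closed-form flavour of the approach can be illustrated with a one-layer linear denoiser trained by ridge regression on explicitly corrupted copies of the data, as below. This is not the paper's marginalised formulation, only a sketch of training a denoising layer without gradient descent; stacking would repeat the procedure on the layer's outputs.

    ```python
    # Sketch: closed-form linear denoising layer (ridge regression on corrupted inputs).
    import numpy as np

    def train_linear_denoiser(X, corruption=0.3, reg=1e-3, seed=0):
        """X: (n_samples, n_features) clean data. Returns the weight matrix W."""
        rng = np.random.default_rng(seed)
        X_noisy = X * (rng.random(X.shape) > corruption)        # masking corruption (assumption)
        # Closed-form ridge solution of  min ||X - X_noisy W||^2 + reg ||W||^2
        A = X_noisy.T @ X_noisy + reg * np.eye(X.shape[1])
        B = X_noisy.T @ X
        return np.linalg.solve(A, B)

    def denoise(X_noisy, W):
        return np.tanh(X_noisy @ W)    # squashing nonlinearity between stacked layers (assumption)
    ```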

  16. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

    Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates the matching of images invariant to transformations such as (i) translation; (ii) rotation; (iii) scaling. The key feature of the system is that it graphically shows the percentage of the uploaded image left unmatched with respect to the images already existing in the database, whereas the integrity of the system lies in the unique matching techniques used for an optimum result. This increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, then the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.

  17. Image transformation based on learning dictionaries across image spaces.

    Science.gov (United States)

    Jia, Kui; Wang, Xiaogang; Tang, Xiaoou

    2013-02-01

    In this paper, we propose a framework of transforming images from a source image space to a target image space, based on learning coupled dictionaries from a training set of paired images. The framework can be used for applications such as image super-resolution and estimation of image intrinsic components (shading and albedo). It is based on a local parametric regression approach, using sparse feature representations over learned coupled dictionaries across the source and target image spaces. After coupled dictionary learning, sparse coefficient vectors of training image patch pairs are partitioned into easily retrievable local clusters. For any test image patch, we can fast index into its closest local cluster and perform a local parametric regression between the learned sparse feature spaces. The obtained sparse representation (together with the learned target space dictionary) provides multiple constraints for each pixel of the target image to be estimated. The final target image is reconstructed based on these constraints. The contributions of our proposed framework are three-fold. 1) We propose a concept of coupled dictionary learning based on coupled sparse coding which requires the sparse coefficient vectors of a pair of corresponding source and target image patches to have the same support, i.e., the same indices of nonzero elements. 2) We devise a space partitioning scheme to divide the high-dimensional but sparse feature space into local clusters. The partitioning facilitates extremely fast retrieval of closest local clusters for query patches. 3) Benefiting from sparse feature-based image transformation, our method is more robust to corrupted input data, and can be considered as a simultaneous image restoration and transformation process. Experiments on intrinsic image estimation and super-resolution demonstrate the effectiveness and efficiency of our proposed method. PMID:22529324

  18. MR IMAGE RECONSTRUCTION BASED ON COMPREHENSIVE SPARSE PRIOR

    Institute of Scientific and Technical Information of China (English)

    Ding Xinghao; Chen Xianbo; Huang Yue; Mi Zengyuan

    2012-01-01

    In this paper, a novel Magnetic Resonance (MR) reconstruction framework which combines image-wise and patch-wise sparse priors is proposed. To address this, a truncated beta-Bernoulli process is first employed to enforce sparsity on overlapping image patches, emphasizing local structures. Due to its properties, the beta-Bernoulli process can adaptively infer the sparsity (number of nonzero coefficients) of each patch, an appropriate dictionary, and the noise variance simultaneously, which are prerequisites for iterative image reconstruction. Secondly, a Generalized Gaussian Distribution (GGD) prior is introduced to engage image-wise sparsity for the wavelet coefficients, which can then be estimated by a threshold denoising algorithm. Finally, the MR image is reconstructed by patch-wise estimation, image-wise estimation and the under-sampled k-space data with least square data fitting. Experimental results have demonstrated that the proposed approach exhibits excellent reconstruction performance. Moreover, if the image is full of similar low-dimensional structures, the proposed algorithm improves the Peak Signal to Noise Ratio (PSNR) dramatically by 7-9 dB in comparison with other state-of-the-art compressive sampling methods.

  19. A Miniature-Based Image Retrieval System

    CERN Document Server

    Islam, Md Saiful

    2010-01-01

    Due to the rapid development of the World Wide Web (WWW) and imaging technology, more and more images are available on the Internet and stored in databases. Searching for related images by a query image is becoming tedious and difficult. Most of the images on the web are compressed by methods based on the discrete cosine transform (DCT), including Joint Photographic Experts Group (JPEG) and H.261. This paper presents an efficient content-based image indexing technique for searching similar images using discrete cosine transform features. Experimental results demonstrate its superiority over the existing techniques.

  20. Study of Denoising in TEOAE Signals Using an Appropriate Mother Wavelet Function

    Directory of Open Access Journals (Sweden)

    Habib Alizadeh Dizaji

    2007-06-01

    Full Text Available Background and Aim: Matching a mother wavelet to a class of signals can be of interest in signal analysis and denoising based on wavelet multiresolution analysis and decomposition. As transient evoked otoacoustic emissions (TEOAEs) are contaminated with noise, the aim of this work was to provide a quantitative approach to the problem of matching a mother wavelet to TEOAE signals by using tuning curves and to use it for analysis and denoising of TEOAE signals. An approximated mother wavelet for TEOAE signals was calculated using an algorithm for designing a wavelet to match a specified signal. Materials and Methods: In this paper a tuning curve is used as a template for designing a mother wavelet that has maximum matching to the tuning curve. The mother wavelet matching was performed on the tuning curve's spectrum magnitude and phase independently of one another. The scaling function was calculated from the matched mother wavelet and, by using these functions, lowpass and highpass filters were designed for a filter bank and otoacoustic emission signal analysis and synthesis. After signal analysis, denoising was performed by time windowing the signal time-frequency components. Results: Analysis indicated more signal reconstruction improvement in comparison with the coiflet mother wavelet, and by using the proposed denoising algorithm it is possible to enhance the signal to noise ratio up to dB. Conclusion: The wavelet generated from this algorithm was remarkably similar to the biorthogonal wavelets. Therefore, by matching a biorthogonal wavelet to the tuning curve and using wavelet packet analysis, a high resolution time-frequency analysis for the otoacoustic emission signals is possible.

  1. Denoising of arterial and venous Doppler signals using discrete wavelet transform: effect on clinical parameters.

    Science.gov (United States)

    Tokmakçi, Mahmut; Erdoğan, Nuri

    2009-05-01

    In this paper, the effects of a wavelet transform based denoising strategy on clinical Doppler parameters are analyzed. The study scheme included: (a) Acquisition of arterial and venous Doppler signals by sampling the audio output of an ultrasound scanner from 20 healthy volunteers, (b) Noise reduction via decomposition of the signals through discrete wavelet transform, (c) Spectral analysis of noisy and noise-free signals with short time Fourier transform, (d) Curve fitting to spectrograms, (e) Calculation of clinical Doppler parameters, (f) Statistical comparison of parameters obtained from noisy and noise-free signals. The decomposition level was selected as the highest level at which the maximum power spectral density and its corresponding frequency were preserved. In all subjects, noise-free spectrograms had smoother trace with less ripples. In both arterial and venous spectrograms, denoising resulted in a significant decrease in the maximum (systolic) and mean frequency, with no statistical difference in the minimum (diastolic) frequency. In arterial signals, this leads to a significant decrease in the calculated parameters such as Systolic/Diastolic Velocity Ratio, Resistivity Index, Pulsatility Index and Acceleration Time. Acceleration Index did not change significantly. Despite a successful denoising, the effects of wavelet decomposition on high frequency components in the Doppler signal should be challenged by comparison with reference data, or, through clinical investigations. PMID:19470316
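
    The clinical indices mentioned above are simple ratios of the systolic, diastolic and mean envelope frequencies; the standard definitions are shown below for clarity.

    ```python
    # Standard Doppler waveform indices computed from spectrogram envelope frequencies.
    def doppler_indices(f_systolic, f_diastolic, f_mean):
        sd_ratio = f_systolic / f_diastolic                    # Systolic/Diastolic velocity ratio
        resistivity = (f_systolic - f_diastolic) / f_systolic  # Resistivity Index (RI)
        pulsatility = (f_systolic - f_diastolic) / f_mean      # Pulsatility Index (PI)
        return sd_ratio, resistivity, pulsatility
    ```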

  2. SURVEY ON CONTENT BASED IMAGE RETRIEVAL

    OpenAIRE

    S.R.Surya; G. Sasikala

    2011-01-01

    The digital image data is rapidly expanding in quantity and heterogeneity. Traditional information retrieval techniques do not meet the user's demand, so there is a need to develop an efficient system for content based image retrieval. Content based image retrieval is becoming a source of exact and fast retrieval. In this paper the techniques of content based image retrieval are discussed, analysed and compared. The compared features include color correlogram, texture, shape, edge den...

  3. PERFORMANCE EVALUATION OF CONTENT BASED IMAGE RETRIEVAL FOR MEDICAL IMAGES

    Directory of Open Access Journals (Sweden)

    SASI KUMAR. M

    2013-04-01

    Full Text Available Content-based image retrieval (CBIR) technology benefits not only large image collection management, but also helps clinical care, biomedical research, and education. Digital images are found in X-Rays, MRI and CT, which are used for diagnosing and planning treatment schedules. Thus, visual information management is challenging as the quantity of available data is huge. Currently, the utilization of available medical databases is limited by image retrieval issues. Archived digital medical image retrieval is always challenging, and it is being researched more as images are of great importance in patient diagnosis, therapy, medical reference, and medical training. In this paper, an image matching scheme using the Discrete Sine Transform for relevant feature extraction is presented. The efficiency of different algorithms for classifying the features to retrieve medical images is investigated.

  4. Imaging based, patient specific dosimetry

    International Nuclear Information System (INIS)

    that this can be performed is either by sequential planar scintillation camera measurements or by SPECT methods. Scintillation cameras generally have a low spatial resolution and sensitivity (cps/MBq) due to the collimator. The resolution is in order of 1-2 cm depending on the source location and radionuclide characteristics. Image noise is also a major problem since only a small activity is given for pre-planning which can degrade the image quality. Dosimetry using 2D planar imaging and the conjugate-view activity quantitation method have been used for many years. The quantification of the activity includes several approximations. In a planar acquisition the source depth in the patient is not resolved which makes the correction for photon attenuation and unwanted contribution from scattered photons to the image less accurate and consistent. Furthermore, contributions from activity uptakes that overlap the volume of interest in the image is a major problem. For calculation of the absorbed dose, the organ mass also needs to be determined, which can be made using patient CT images, or, using less accurate estimations from standardized phantom geometries. The energy deposition and transport is done based on pre-calculated dose factors from standardized phantom geometries. Despite these problems, the conjugate-view method has been the major choice for many dosimetrical studies. SPECT provide a possibility for 3D activity measurements. In this method, correction for non-homogeneous photon attenuation, scatter and loss of spatial resolution due to the collimator are today quite accurate when incorporated in iterative reconstruction methods. SPECT also allows for an accurate 3D absorbed dose calculation in that the patient's geometry can be taken into consideration if a co-registered CT study of the patient is available. Modern hybrid SPECT/CT cameras make such calculations relatively straight-forward. A major advantage using SPECT imaging is also that the absorbed dose

  5. Quantum Image Encryption Algorithm Based on Quantum Image XOR Operations

    Science.gov (United States)

    Gong, Li-Hua; He, Xiang-Tao; Cheng, Shan; Hua, Tian-Xiang; Zhou, Nan-Run

    2016-07-01

    A novel encryption algorithm for quantum images based on quantum image XOR operations is designed. The quantum image XOR operations are designed by using the hyper-chaotic sequences generated with the Chen's hyper-chaotic system to control the control-NOT operation, which is used to encode gray-level information. The initial conditions of the Chen's hyper-chaotic system are the keys, which guarantee the security of the proposed quantum image encryption algorithm. Numerical simulations and theoretical analyses demonstrate that the proposed quantum image encryption algorithm has larger key space, higher key sensitivity, stronger resistance of statistical analysis and lower computational complexity than its classical counterparts.
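
    For intuition only, a classical analogue of the XOR step is sketched below: a chaotic keystream is XOR-ed with the grey levels and the chaotic initial condition acts as the key. A logistic map replaces Chen's hyper-chaotic system here, and the quantum circuit formulation is not reproduced, so every detail is an assumption for illustration.

    ```python
    # Classical sketch of chaos-driven XOR encryption of grey levels (not the quantum scheme).
    import numpy as np

    def chaotic_keystream(length, x0=0.3141, r=3.99):
        x, out = x0, np.empty(length, dtype=np.uint8)
        for i in range(length):
            x = r * x * (1.0 - x)                  # logistic map stands in for Chen's system
            out[i] = int(x * 256) & 0xFF
        return out

    def xor_encrypt(image, x0=0.3141):
        flat = image.astype(np.uint8).ravel()
        key = chaotic_keystream(flat.size, x0)     # the initial condition x0 is the key
        return (flat ^ key).reshape(image.shape)   # applying it again decrypts
    ```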

  6. Wavelet Image Encryption Algorithm Based on AES

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Traditional encryption techniques have some limits for multimedia information, especially image and video, which are considered only to be common data. In this paper, we propose a wavelet-based image encryption algorithm based on the Advanced Encryption Standard, which encrypts only those low frequency coefficients of image wavelet decomposition. The experimental results are satisfactory.

  7. 一种有效的双树复小波-Wiener滤波去噪算法%An efficient dual-tree complex wavelet-wiener denoising algorithm

    Institute of Scientific and Technical Information of China (English)

    魏培; 全子一; 门爱东

    2009-01-01

    In order to eliminate Gaussian white noise in images, the dual-tree complex wavelet transform is combined with adaptive Wiener filtering to form a dual-tree complex wavelet-Wiener denoising algorithm. Simulation results show that this new algorithm yields better results, both visually and in terms of PSNR, than thresholding and Wiener filtering based on the DWT.
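
    A hedged sketch of combining a wavelet transform with adaptive Wiener filtering is given below: an empirical Wiener-style shrinkage is applied to each detail subband of an ordinary DWT, which stands in for the dual-tree complex wavelet transform used in the paper.

    ```python
    # Sketch: empirical Wiener shrinkage of DWT detail subbands (DWT used as a stand-in for DT-CWT).
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def wavelet_wiener_denoise(image, wavelet='db4', level=3, win=5):
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        sigma2 = (np.median(np.abs(coeffs[-1][-1])) / 0.6745) ** 2   # noise power estimate
        out = [coeffs[0]]
        for details in coeffs[1:]:
            shrunk = []
            for band in details:
                local_var = uniform_filter(band ** 2, size=win)      # local signal-plus-noise power
                gain = np.maximum(local_var - sigma2, 0) / (local_var + 1e-12)
                shrunk.append(gain * band)                           # Wiener-style per-coefficient gain
            out.append(tuple(shrunk))
        return pywt.waverec2(out, wavelet)
    ```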

  8. Content Based Image Indexing and Retrieval

    OpenAIRE

    Bhute, Avinash N; B B Meshram

    2014-01-01

    In this paper, we present an efficient content-based image retrieval system that employs the color, texture and shape information of images to facilitate the retrieval process. For efficient feature extraction, the color, texture and shape features are extracted automatically using edge detection, which is widely used in signal processing and image compression. To speed up retrieval, we implement the antipole-tree algorithm for indexing the images.

  9. Global Descriptor Attributes Based Content Based Image Retrieval of Query Images

    OpenAIRE

    Jaykrishna Joshi; Dattatray Bade

    2015-01-01

    The need for efficient content-based image retrieval systems has increased hugely. Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval (CBIR) is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. In the proposed system we investigate a method for describing the contents of images which characterizes images by global des...

  10. SPOT Controlled Image Base 10 meter

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — SPOT Controlled Image Base 10 meter (CIB-10) is a collection of orthorectified panchromatic (grayscale) images. The data were acquired between 1986 and 1993 by the...

  11. Optical image hiding based on computational ghost imaging

    Science.gov (United States)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the current era of big data. They provide copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that has strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose a secret key. A least-significant-bit algorithm is adopted to embed the watermark, and both a second-order correlation algorithm and a compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image could not be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image cropping degradations.
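    The least-significant-bit embedding step mentioned above can be sketched as follows; this is a generic LSB routine, not the authors' full ghost-imaging pipeline, and the watermark is assumed here to be a binary image of the same size as the host.

    ```python
    import numpy as np

    def lsb_embed(host, watermark_bits):
        """Replace the least significant bit of each host pixel with one watermark bit."""
        return (host & 0xFE) | (watermark_bits & 1)

    def lsb_extract(stego):
        """Recover the embedded bit plane."""
        return stego & 1

    host = (np.random.rand(64, 64) * 255).astype(np.uint8)
    wm = (np.random.rand(64, 64) > 0.5).astype(np.uint8)     # toy binary watermark
    stego = lsb_embed(host, wm)
    assert np.array_equal(lsb_extract(stego), wm)
    ```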

  12. Gearbox fault diagnosis of rolling mills using multiwavelet sliding window neighboring coefficient denoising and optimal blind deconvolution

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Fault diagnosis of rolling mills, especially of the main drive gearbox, is of great importance for high-quality products and long-term safe operation. However, the useful fault information is usually submerged in heavy background noise under severe conditions. Therefore, a novel method based on multiwavelet sliding-window neighboring-coefficient denoising and optimal blind deconvolution is proposed for gearbox fault diagnosis in rolling mills. Emerging multiwavelets can possess several important signal-processing properties simultaneously. Owing to their multiple scaling and wavelet basis functions, they are well suited to matching a variety of features. Because of the periodicity of gearbox signals, a sliding window is recommended for local threshold denoising, so as to avoid the "overkill" of conventional universal thresholding techniques. Meanwhile, neighboring-coefficient denoising, which takes the correlation of the coefficients into account, is introduced to effectively process the noisy signal in every sliding window. Thus, multiwavelet sliding-window neighboring-coefficient denoising not only performs excellent fault-feature extraction but also accords with the nature of gearbox fault features. On the other hand, optimal blind deconvolution is carried out to highlight the denoised features for easy identification by operators. The filter length is vital for effective and meaningful results, so filter-length selection based on kurtosis is discussed in order to gain the full benefit of this technique. The new method is applied to two gearbox fault-diagnosis cases from hot strip finishing mills and compared with multiwavelet and scalar wavelet methods with and without optimal blind deconvolution. The results show that it enhances fault detection for the main drive gearboxes.
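    A minimal sketch of neighboring-coefficient thresholding within sliding windows for a 1-D vibration signal, assuming an ordinary scalar wavelet from pywt rather than the multiwavelets used in the record (illustrative only):

    ```python
    import numpy as np
    import pywt

    def neigh_shrink(coeffs, lam, neigh=3):
        """NeighShrink-style rule: shrink each coefficient by 1 - lam^2 / S^2,
        where S^2 sums the squared coefficients in a small neighbourhood."""
        pad = neigh // 2
        padded = np.pad(coeffs, pad, mode="edge")
        s2 = np.array([np.sum(padded[i:i + neigh] ** 2) for i in range(coeffs.size)])
        factor = np.clip(1.0 - lam ** 2 / np.maximum(s2, 1e-12), 0.0, None)
        return coeffs * factor

    def sliding_window_denoise(signal, wavelet="db4", level=4, win=256):
        """Apply local neighbouring-coefficient shrinkage window by window."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        out = [coeffs[0]]
        for d in coeffs[1:]:
            d = d.copy()
            for start in range(0, d.size, win):
                seg = d[start:start + win]
                lam = np.sqrt(2 * np.log(seg.size)) * np.median(np.abs(seg)) / 0.6745
                d[start:start + win] = neigh_shrink(seg, lam)
            out.append(d)
        return pywt.waverec(out, wavelet)

    x = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 4096)) + 0.5 * np.random.randn(4096)
    denoised = sliding_window_denoise(x)
    ```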

  13. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    P Manipoonchelvi; K Muneeswaran

    2014-04-01

    Multimedia information retrieval systems continue to be an active research area in a world of huge and voluminous data. The paramount challenge is to translate or convert a visual query from a human and find similar images or videos in large digital collections. In this paper, a technique for region-based image retrieval, a branch of content-based image retrieval, is proposed. The proposed model does not need prior knowledge or full semantic understanding of the image content. It identifies significant regions in an image based on a feature-based attention model which mimics a viewer's attention. The Curvelet Transform, in combination with colour descriptors, is used to represent each significant region in an image. Experimental results are analysed and compared with a state-of-the-art region-based image retrieval technique.

  14. Research on De-noising of Nuclear Magnetic Logging Signals Based on Ensemble Extremum Field Mean Mode Decomposition

    Institute of Scientific and Technical Information of China (English)

    胡日苏; 杨文涛; 刘冠玉; 蒋锐

    2016-01-01

    Considering that the spin-echo signals in nuclear magnetic resonance (NMR) logging are mixed with a large amount of noise, and that empirical mode decomposition (EMD) suffers from end effects and mode mixing, a signal de-noising method based on ensemble extremum field mean mode decomposition (EMMD) is proposed. The method embeds extremum field mean mode decomposition into the ensemble empirical mode decomposition (EEMD) algorithm, which avoids the aforementioned end effects and mode mixing of EMD while retaining its advantages for signal de-noising. Analysis of real logging data indicates that the method filters out the noise in the echo signals and improves the SNR, and that both the T2 spectrum and the porosity inverted from the echo signals agree with results measured on core samples under laboratory conditions.
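    A rough illustration of ensemble-EMD-style de-noising (reconstructing a signal from selected intrinsic mode functions), assuming the third-party PyEMD (EMD-signal) package is installed; this is a generic EEMD sketch, not the EMMD variant proposed in the record.

    ```python
    import numpy as np
    from PyEMD import EEMD   # assumption: the PyEMD / EMD-signal package is available

    t = np.linspace(0, 1, 1024)
    echo = np.exp(-t / 0.3) * np.cos(2 * np.pi * 8 * t)        # toy decaying echo train
    noisy = echo + 0.3 * np.random.randn(t.size)

    eemd = EEMD(trials=50)                                      # ensemble of noise-assisted EMD runs
    imfs = eemd.eemd(noisy, t)                                  # IMFs ordered from high to low frequency

    # Crude de-noising: discard the first (highest-frequency, noise-dominated) IMFs
    # and rebuild the signal from the remaining modes.
    denoised = imfs[2:].sum(axis=0)
    ```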

  15. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, the electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray imaging and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on diagnosis and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of an important biomedical signal, the EEG, is carried out by applying the Fourier transform and the wavelet transform. The presence of rhythms, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the SWT.
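    A small sketch of stationary-wavelet-transform (SWT) denoising of a 1-D signal such as an EEG trace, using pywt with soft thresholding; the universal threshold used below is a common default, which the record does not specify.

    ```python
    import numpy as np
    import pywt

    def swt_denoise(signal, wavelet="db4", level=4):
        """Denoise by soft-thresholding the detail coefficients of a stationary wavelet transform."""
        coeffs = pywt.swt(signal, wavelet, level=level)          # list of (cA, cD) pairs
        sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745         # noise estimate from finest details
        thr = sigma * np.sqrt(2 * np.log(signal.size))            # universal threshold
        shrunk = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
        return pywt.iswt(shrunk, wavelet)

    t = np.arange(0, 4, 1 / 256)                                  # 4 s of a toy "EEG" at 256 Hz
    eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.random.randn(t.size)
    clean = swt_denoise(eeg)                                      # length 1024 is divisible by 2**level
    ```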

  16. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video-teleconference applications is presented. Semantic-based coding based on human image features is realized, where human features are adopted as parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.

  17. Image processing technique based on image understanding architecture

    Science.gov (United States)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the solution of the image understanding problem. This article presents a generic computational framework necessary for the solution of the image understanding problem -- the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time and are able to perform the graph and diagrammatic operations that form the basis of intelligence. They can create derivative structures that play the role of context, or a 'measurement device', giving the ability to analyze and to run top-down algorithms. Symbols naturally emerge there, and symbolic operations work in combination with new simplified methods of computational intelligence. That makes images and scenes self-describing, and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation could be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  18. Multiple-image encryption based on computational ghost imaging

    Science.gov (United States)

    Wu, Jingjing; Xie, Zhenwei; Liu, Zhengjun; Liu, Wei; Zhang, Yan; Liu, Shutian

    2016-01-01

    We propose an optical multiple-image encryption scheme based on computational ghost imaging with position multiplexing. In the encryption process, each plain image is encrypted into an intensity vector by using computational ghost imaging with a different diffraction distance. The final ciphertext is generated by superposing all the intensity vectors. Unlike common multiple-image cryptosystems, the ciphertext in the proposed scheme is simply an intensity vector instead of a complex amplitude. Simulation results are presented to demonstrate the validity and security of the proposed multiple-image encryption method. The multiplexing capacity of the proposed method is also investigated. An optical experiment is presented to verify the validity of the proposed scheme in practical applications.

  19. Texture Image Classification Based on Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    DENG Wei-bing; LI Hai-fei; SHI Ya-li; YANG Xiao-hui

    2014-01-01

    A texture image can be partitioned into disjoint regions of uniform texture by recognizing the class of every pixel. This paper proposes a texture image classification algorithm based on Gabor wavelets. In this algorithm, the characteristics of an image are obtained from every pixel and its neighborhood, and information can be transferred between neighborhoods of different sizes. Experiments on the standard Brodatz texture image dataset show that the proposed algorithm achieves good classification rates.
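    A brief sketch of per-pixel Gabor-wavelet feature extraction with scikit-image; the filter frequencies and orientations below are illustrative choices, not the ones used in the record.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from skimage.data import camera

    image = camera().astype(np.float64) / 255.0

    features = []
    for frequency in (0.1, 0.2, 0.4):                 # illustrative scales
        for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
            real, imag = gabor(image, frequency=frequency, theta=theta)
            features.append(np.sqrt(real ** 2 + imag ** 2))  # magnitude response per pixel

    # Stack into a (rows, cols, n_filters) feature cube; each pixel's vector can feed a classifier.
    feature_cube = np.stack(features, axis=-1)
    ```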

  20. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology have used these methods, especially medicine and instrumentation. New stereovision techniques that produce a 3-dimensional image or movie are very interesting, but have few applications in control systems. A stereo image contains pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The results show that the robot moves automatically based on what the stereovision system captures.
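    The disparity information mentioned above can be computed, for example, with OpenCV's block-matching stereo correspondence. This generic sketch is not the robot controller from the record; the synthetic shifted pair below stands in for real rectified captures from the robot's two cameras.

    ```python
    import cv2
    import numpy as np

    # Synthetic rectified pair: the right image is the left shifted horizontally.
    left = (np.random.rand(240, 320) * 255).astype(np.uint8)
    right = np.roll(left, -8, axis=1)

    # Block-matching stereo: numDisparities must be a multiple of 16, blockSize must be odd.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # output is fixed-point, scaled by 16

    # Nearer objects have larger disparity; a simple controller could steer away from
    # regions whose mean disparity exceeds a threshold.
    near = disparity > 20.0
    ```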

  1. Comparative Study of Triangulation based and Feature based Image Morphing

    Directory of Open Access Journals (Sweden)

    Ms. Bhumika G. Bhatt

    2012-01-01

    Full Text Available Image morphing is one of the most powerful digital image processing techniques, used to enhance many multimedia projects, presentations, education and computer-based training. It is also used in the medical imaging field to recover features not visible in images by establishing correspondence of features among successive pairs of scanned images. This paper discusses what morphing is and the implementation of triangulation-based morphing and feature-based image morphing. It analyzes both morphing techniques in terms of different attributes such as computational complexity, visual quality of the morph obtained, and the complexity involved in selecting features.

  2. Survey paper on Sketch Based and Content Based Image Retrieval

    OpenAIRE

    Gaidhani, Prachi A.; S. B. Bagal

    2015-01-01

    This survey paper presents an overview of the development of Sketch Based Image Retrieval (SBIR) and Content Based Image Retrieval (CBIR) over the past few years. There has been enormous growth in the volume of images, as well as widespread applications in many fields. The main attributes used to represent and index images are color, shape, texture and spatial layout. These image features are extracted to check similarity among images. Generation of a suitable query is the main problem of content based ...

  3. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new method for detail enhancement of infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression. Meanwhile, the adaptive weights derived from the layer decomposition are used as a strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
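    The base/detail decomposition step can be sketched as below; an edge-preserving bilateral filter stands in for the propagated image filter used in the record, and the gain value is an illustrative constant rather than the adaptive weights the authors derive.

    ```python
    import cv2
    import numpy as np

    def enhance_ir(raw14):
        """Split a 14-bit IR frame into base and detail layers, compress the base,
        boost the detail, and recombine into an 8-bit display image."""
        img = raw14.astype(np.float32)
        base = cv2.bilateralFilter(img, d=9, sigmaColor=200.0, sigmaSpace=9.0)  # stand-in smoother
        detail = img - base
        # Compress the base layer's dynamic range to 8 bits (simple linear mapping here).
        base8 = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX)
        out = base8 + 3.0 * detail                                              # illustrative detail gain
        return np.clip(out, 0, 255).astype(np.uint8)

    raw = (np.random.rand(240, 320) * 16383).astype(np.uint16)                  # toy 14-bit frame
    display = enhance_ir(raw)
    ```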

  4. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicing is a crucial step in the three-dimensional reconstruction of composite materials, aligning the serial images. A novel method is adopted to mosaic two SiC/Al microscopic images at a magnification of 1000. The two images are denoised with a Gaussian model, and feature points are then extracted using the Harris corner detector. The feature points are filtered through a Canny edge detector. A 40x40 feature template is chosen by sowing a seed in an overlapping area of the reference image, and the homologous region in the floating image is found automatically by correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into a common coordinate system and merged into the final mosaic image.

  5. ECG signals denoising using wavelet transform and independent component analysis

    Science.gov (United States)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First, two-channel exercise ECG signals are acquired. We decompose these two channels into eight levels and sum the useful wavelet coefficients separately, obtaining two ECG channels free of baseline drift and other interference components. However, they still contain electrode-movement noise, power-line interference and other interference. Second, these two processed channels, together with one manually constructed channel, are further processed with independent component analysis to obtain the separated ECG signal; the residual noise is removed effectively. Finally, a comparative experiment is made between the same two exercise ECG channels processed directly with independent component analysis and with the method proposed in this paper, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root-mean-square error decreases by 2.522, proving that the proposed method has high reliability.
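    A compact sketch of the wavelet-plus-ICA idea using pywt and scikit-learn's FastICA; the channel construction and coefficient selection are simplified relative to the record.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    def wavelet_detrend(sig, wavelet="db6", level=8):
        """Remove baseline drift by zeroing the coarsest approximation before reconstruction."""
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        coeffs[0] = np.zeros_like(coeffs[0])                # drop the slow baseline component
        return pywt.waverec(coeffs, wavelet)[:sig.size]

    rng = np.random.default_rng(0)
    n = 4096
    ecg1 = np.sin(2 * np.pi * 1.2 * np.arange(n) / 256) + 0.3 * rng.standard_normal(n)  # toy channels
    ecg2 = np.roll(ecg1, 5) + 0.3 * rng.standard_normal(n)

    ch1, ch2 = wavelet_detrend(ecg1), wavelet_detrend(ecg2)
    ref = rng.standard_normal(n)                            # manually constructed reference channel

    # ICA on the three channels; one of the recovered sources approximates the clean ECG.
    sources = FastICA(n_components=3, random_state=0).fit_transform(np.column_stack([ch1, ch2, ref]))
    ```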

  6. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach.

    Directory of Open Access Journals (Sweden)

    Pasquale Borrelli

    Full Text Available In susceptibility-weighted imaging (SWI, the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR. The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.
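    The multicomponent idea (filtering the real and imaginary parts of the complex MR data jointly rather than one at a time) can be sketched with scikit-image's non-local means. This is a generic illustration, not the MIR-SWI pipeline, and it assumes a recent scikit-image that accepts the channel_axis argument.

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def mnlm_complex(complex_image):
        """Joint non-local-means filtering of the real and imaginary components."""
        stacked = np.stack([complex_image.real, complex_image.imag], axis=-1)
        sigma = np.mean(estimate_sigma(stacked, channel_axis=-1))
        filtered = denoise_nl_means(stacked, h=1.15 * sigma, sigma=sigma,
                                    patch_size=5, patch_distance=6,
                                    channel_axis=-1, fast_mode=True)
        return filtered[..., 0] + 1j * filtered[..., 1]

    rng = np.random.default_rng(1)
    phase = 0.3 * rng.standard_normal((128, 128))
    noisy = np.exp(1j * phase) + 0.1 * (rng.standard_normal((128, 128)) +
                                        1j * rng.standard_normal((128, 128)))
    denoised = mnlm_complex(noisy)
    ```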

  7. Non-local MRI denoising using random sampling.

    Science.gov (United States)

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while giving competitive denoising results. Moreover, a structure tensor that encapsulates high-order information is introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrate that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM removes noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338

  8. Depth-based selective image reconstruction using spatiotemporal image analysis

    Science.gov (United States)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that removes the need for physical tour inspection is often considered desirable. However, the image sequence provided by a mobile inspection robot is hard to interpret because the objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique, which removes the unwanted objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis that enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real image sequence obtained from the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.

  9. MAP-based infrared image expansion

    Institute of Scientific and Technical Information of China (English)

    Nan Zhang; Weiqi Jin; Binghua Su; Yangyang Liu; Hua Chen

    2005-01-01

    Image expansion plays a very important role in image analysis. Common methods of image expansion, such as the zero-order hold method, may introduce a visual mosaic effect into the expanded image, while linear and cubic-spline interpolation may blur image data in peripheral regions. Since infrared images are characterized by low contrast and low signal-to-noise ratio (SNR), expanded images derived from common methods are not satisfactory. Analysis of the process of going from low-resolution to high-resolution images shows that image expansion is an ill-posed inverse problem. An image interpolation algorithm based on MAP estimation under a Bayesian framework is proposed in this paper, which can effectively preserve the discontinuities of the original image. Experimental results demonstrate that images expanded by this method are visually and quantitatively (using the criteria of mean squared error (MSE) and mean absolute error (MAE)) superior to images expanded by common linear interpolation methods. Even for infrared images, this method gives good results. An analysis of how to choose the regularization parameter α in this algorithm is also given.

  10. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low-level features such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining the color, shape and texture features of an image. The color feature is extracted using color histograms of image blocks, the shape feature using the Canny edge detection algorithm, and the texture feature using block-wise HSB extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that the fusion of multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different CBIR approaches in which the image features color, shape and texture are used.
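    A small sketch of block-wise feature extraction of the kind described above, combining per-block color histograms with a Canny edge map via OpenCV; the block size and histogram bin counts are illustrative.

    ```python
    import cv2
    import numpy as np

    def block_features(image_bgr, block=64, bins=8):
        """Concatenate per-block colour histograms and per-block edge densities."""
        edges = cv2.Canny(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
        feats = []
        h, w = image_bgr.shape[:2]
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                patch = image_bgr[y:y + block, x:x + block]
                hist = cv2.calcHist([patch], [0, 1, 2], None, [bins] * 3,
                                    [0, 256] * 3).flatten()
                feats.append(np.append(hist / hist.sum(),                             # colour histogram
                                       edges[y:y + block, x:x + block].mean() / 255.0))  # edge density
        return np.concatenate(feats)

    img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
    query_vector = block_features(img)   # compare with database vectors using e.g. Euclidean distance
    ```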

  11. Image edge detection based on beamlet transform

    Institute of Scientific and Technical Information of China (English)

    Li Jing; Huang Peikang; Wang Xiaohu; Pan Xudong

    2009-01-01

    Combining the beamlet transform with steerable filters, a new edge detection method based on line gradients is proposed. Compared with operators based on local point properties, the edge-detection results of this method achieve a higher SNR and better position accuracy, and are quite helpful for image registration, object identification, etc. Edge-detection experiments on optical and SAR images that demonstrate a significant improvement over classical edge operators are also presented. Moreover, a template-matching result based on the edge information of an optical reference image and a SAR image also proves the validity of this method.

  12. New Time Dependent Pretreat Models Based on Total Variational Image Restoration

    Institute of Scientific and Technical Information of China (English)

    Jing Xu; Qian-shun Chang

    2011-01-01

    In this paper, we propose new pretreatment models for total variation (TV) minimization problems in image deblurring and denoising. In particular, the blur operator is treated as useful information in the restoration. The new models are formally equivalent to pretreating the initial value with the image blur operator. We obtain a new Rudin-Osher-Fatemi (ROF) model, a new level-set motion model and a new anisotropic diffusion model, respectively. Numerical experiments demonstrate that, under the same stopping rule, the proposed methods significantly accelerate convergence, save computation time and produce the same restoration quality.
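    For context, classical ROF-style TV denoising (the baseline these pretreatment models build on) is available in scikit-image; the weight below is an illustrative choice, and this sketch does not implement the pretreatment step itself.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(2)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0                          # piecewise-constant test image
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)

    # Chambolle's projection algorithm solves the ROF total-variation denoising problem;
    # a larger weight gives stronger smoothing at the expense of data fidelity.
    restored = denoise_tv_chambolle(noisy, weight=0.15)
    ```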

  13. Statistical x-ray computed tomography imaging from photon-starved measurements

    Science.gov (United States)

    Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-03-01

    Dose reduction in clinical X-ray computed tomography (CT) causes a low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they reach the limits of practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise causes measured photon counts to take on negative values, posing a problem for the log() operation in data preprocessing. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on the measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.
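    As a purely illustrative sketch (not either of the correction methods proposed in the record), the code below shows why raw counts need treatment before the logarithm, substituting a simple local-mean value for non-positive measurements.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def safe_log_sinogram(counts, air_counts, win=5, eps=0.5):
        """Replace non-positive detector counts with a local mean before taking the log
        needed to form line integrals p = ln(I0 / I)."""
        counts = counts.astype(np.float64)
        local_mean = uniform_filter(counts, size=win)
        repaired = np.where(counts <= 0, np.maximum(local_mean, eps), counts)
        return np.log(air_counts / repaired)

    rng = np.random.default_rng(3)
    I0 = 50.0                                        # very low incident photon count per ray
    raw = rng.poisson(I0 * np.exp(-rng.uniform(0, 4, size=(180, 256))))   # photon-starved sinogram
    raw = raw + rng.normal(0, 2, size=raw.shape)     # electronic noise can push counts negative
    sino = safe_log_sinogram(raw, I0)
    ```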

  14. Agent Based Image Segmentation Method : A Review

    OpenAIRE

    Pooja Mishra; Navita Srivastava; Shukla, K. K.; Achintya Singlal

    2011-01-01

    Image segmentation is an important research area in computer vision, and many segmentation methods have been proposed. This paper attempts to provide a brief overview of elemental segmentation techniques based on boundary or regional approaches, focusing mainly on agent-based image segmentation techniques.

  15. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of image-based rendering approaches and their use in virtual reality, including virtual photography and cinematography, and mobile robot navigation.

  16. Fused analytical and iterative reconstruction (AIR) via modified proximal forward-backward splitting: a FDK-based iterative image reconstruction example for CBCT

    Science.gov (United States)

    Gao, Hao

    2016-10-01

    This work develops a general framework, namely the analytical iterative reconstruction (AIR) method, to incorporate analytical reconstruction (AR) into iterative reconstruction (IR) for enhanced CT image quality and reconstruction efficiency. Specifically, AIR is established based on the modified proximal forward-backward splitting (PFBS) algorithm, and its connection to filtered data fidelity with sparsity regularization is discussed. As a result, AIR decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data-fidelity term while a denoising solver updates the sparsity-regularization term. During the AR-projection step, the image is projected into the data domain to form the data residual, which is then reconstructed by the chosen AR into a residual image; this residual image is weighted together with the previous image iterate to form the next iterate. Intuitively, since the eigenvalues of the AR-projection operator are close to unity, PFBS-based AIR converges quickly. Such an advantage is rigorously established through convergence analysis and numerical computation of the convergence rate. The proposed AIR method is validated in the setting of circular cone-beam CT with FDK as the AR and total-variation sparsity regularization, and improves image quality relative to both AR and IR. For example, AIR improves visual assessment and quantitative measurements in terms of both contrast and resolution, and reduces axial and half-fan artifacts.
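    The two-step scheme described above can be sketched abstractly as follows; the forward projector, analytical reconstructor and denoiser below are hypothetical placeholder callables, since the record does not provide an implementation.

    ```python
    import numpy as np

    def air_reconstruct(y, forward_project, analytic_recon, denoise, n_iters=20, step=1.0):
        """Generic AR-within-IR loop in the spirit of PFBS:
        (1) AR-projection step: reconstruct the data residual analytically and
            blend it with the current image iterate;
        (2) regularization step: apply a denoiser (e.g. a TV proximal operator)."""
        x = analytic_recon(y)                         # initialize from the analytical reconstruction
        for _ in range(n_iters):
            residual = y - forward_project(x)         # data-domain residual
            x = x + step * analytic_recon(residual)   # AR-projection update of the data fidelity
            x = denoise(x)                            # sparsity / regularization update
        return x

    # Toy 1-D example with an identity "projector" and a box-blur "denoiser" (placeholders only).
    y = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.randn(200)
    x_hat = air_reconstruct(
        y,
        forward_project=lambda x: x,
        analytic_recon=lambda r: r,
        denoise=lambda x: np.convolve(x, np.ones(5) / 5, mode="same"),
    )
    ```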

  17. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are often corrupted by noise, which makes data post-processing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filtering algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. First, we use the classical Shearlet transform to decompose the noisy image into subbands at multiple scales and orientations. Second, we assign a weighting factor to each subband. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of the weighted subbands. After that, we design a fast, rough method to evaluate the noise level of the new image; using this measure as the fitness, we adopt PSO to find the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).

  18. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten;

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel-based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains, demonstrating the superiority of the kernel-based MAF method.
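    A small sketch of a kernel-based subspace projection (kernel PCA on hyperspectral pixel spectra) with scikit-learn; kernel MAF is not available off the shelf, so kernel PCA stands in here and the kernel parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rows, cols, bands = 64, 64, 100
    cube = np.random.rand(rows, cols, bands)            # toy hyperspectral cube (rows x cols x bands)

    pixels = cube.reshape(-1, bands)                     # one spectrum per row
    kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5)
    scores = kpca.fit_transform(pixels)                  # nonlinear component scores per pixel

    # Reshape the leading components back into images for visual inspection.
    component_images = scores.reshape(rows, cols, 3)
    ```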

  19. Position sensor based on slit imaging

    Institute of Scientific and Technical Information of China (English)

    Aijun Zeng(曾爱军); Xiangzhao Wang(王向朝); Yang Bu(步扬); Dailin Li(李代林)

    2004-01-01

    A position sensor based on slit imaging is proposed and its measurement principle is described. An imaging slit is illuminated by a collimated laser beam with square-wave modulation and imaged onto a detection double slit through a 4f system. A magnified image of the detection double slit is formed on a bi-cell detector. The position of the imaging slit is obtained by detecting the light intensity on the two parts of the bi-cell detector. In experiments, the feasibility of the sensor was verified. The repeatability was less than 40 nm.

  20. Iris image Segmentation Based on Phase Congruency

    Institute of Scientific and Technical Information of China (English)

    GAO Chao; JIANG Da-qin; Guo Yong-cai

    2006-01-01

    Iris image segmentation is very important for an iris recognition system. Iris images always contain noise such as eyelashes, eyelids, reflections and the pupil. This paper proposes a segmentation method. By locating and normalizing iris images with Gabor filters, we can acquire image texture information over a series of frequencies and orientations. Iris noise regions are determined based on phase congruency using a group of Gabor filters whose kernels are suitable for edge detection. These regions are then filled according to the characteristics of the iris noise. The experimental results show that the proposed method can segment iris images effectively.