WorldWideScience

Sample records for based image denoising

  1. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because noisy pixels are scattered among clean ones, any blanket denoising operation alters the original information of non-noise pixels. This paper proposes a noise detection algorithm based on fractional calculus to guide denoising. The image is first convolved to obtain directional gradient masks; the mean gray level is then calculated to obtain gradient detection maps, and a logical product of these maps yields the noise position image. Comparisons of visual effect and evaluation parameters after processing show that denoising guided by noise detection outperforms traditional methods in both subjective and objective terms.

  2. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising that combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential function. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods place very weak restrictions on the potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
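
    The spectral gradient idea above is easy to prototype. Below is a minimal, illustrative sketch (not the authors' code) that minimizes a Huber-regularized denoising energy with a Barzilai-Borwein (spectral) step length; the per-component (anisotropic) treatment of the gradient, the boundary handling, and all parameter values (lam, delta, iteration count) are simplifying assumptions.

```python
import numpy as np

def huber_deriv(t, delta):
    # phi'(t) for the Huber potential, applied per gradient component
    return np.clip(t / delta, -1.0, 1.0)

def grad(u):
    # forward differences, zero at the far boundary
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def div(px, py):
    # negative adjoint of the forward-difference gradient
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def energy_grad(u, f, lam, delta):
    # gradient of 0.5*||u - f||^2 + lam * sum(phi(u_x) + phi(u_y))
    ux, uy = grad(u)
    return (u - f) - lam * div(huber_deriv(ux, delta), huber_deriv(uy, delta))

def spectral_gradient_denoise(f, lam=0.1, delta=0.05, iters=100):
    u = f.copy()
    g = energy_grad(u, f, lam, delta)
    step = 1.0
    for _ in range(iters):
        u_new = u - step * g
        g_new = energy_grad(u_new, f, lam, delta)
        s, y = (u_new - u).ravel(), (g_new - g).ravel()
        step = max(1e-8, s.dot(s) / max(s.dot(y), 1e-12))  # Barzilai-Borwein step
        u, g = u_new, g_new
    return u

# toy usage on a noisy square
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = spectral_gradient_denoise(noisy)
```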

  3. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), based on a quadratic gray-level function. Meanwhile, a quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  4. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    Science.gov (United States)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning methods has shown excellent performance compared to conventional algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applicability by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiographs as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate the performance of the developed denoising algorithm on chest radiographs acquired from real patients. The results demonstrate that the proposed CDAE-based denoising algorithm achieves a superior noise-reduction effect in chest radiographs compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using the CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiographs. It is expected that the proposed CDAE-based denoising algorithm will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
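
    As a rough illustration of the kind of model described above, the following PyTorch sketch defines a small convolutional denoising autoencoder trained to map noisy patches back to clean ones. The architecture, patch size, and optimizer settings are assumptions for illustration and do not reproduce the paper's network or its chest-radiograph training set.

```python
import torch
import torch.nn as nn

class CDAE(nn.Module):
    """Small convolutional denoising autoencoder (illustrative architecture only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CDAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# stand-in data: the model learns to map a noisy patch back to its clean version
clean = torch.rand(4, 1, 128, 128)
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
print(float(loss))
```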

  5. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    The interest in using fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments based on visual perception and peak signal-to-noise ratio values show that the improvements in the denoising process are competitive with the standard Gaussian filter and the Wiener filter.

  6. Non-local means denoising of dynamic PET images.

    Directory of Open Access Journals (Sweden)

    Joyita Dutta

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities, assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches: Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high-intensity details.

  7. Non-local means denoising of dynamic PET images.

    Science.gov (United States)

    Dutta, Joyita; Leahy, Richard M; Li, Quanzheng

    2013-01-01

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities, assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches: Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high-intensity details.
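
    For reference, a plain 2D non-local means filter can be sketched in a few lines of NumPy, as below. This is the baseline form of NLM: it omits the paper's dynamic-PET modifications (similarities from late low-noise frames, spatiotemporal patches, spatially varying smoothing parameter), and the patch size, search window, and h are illustrative values.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Plain non-local means on a 2D image (O(search^2) work per pixel, so keep images small)."""
    hp, hs = patch // 2, search // 2
    pad = hp + hs
    p = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = p[ci - hp:ci + hp + 1, cj - hp:cj + hp + 1]
            num = den = 0.0
            for di in range(-hs, hs + 1):
                for dj in range(-hs, hs + 1):
                    ni, nj = ci + di, cj + dj
                    cand = p[ni - hp:ni + hp + 1, nj - hp:nj + hp + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)  # patch-similarity weight
                    num += w * p[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```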

  8. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme has two important contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA) after clustering the image into many subsets, which better preserves the local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-learned dictionary and propose an iterative split Bregman method to solve it rapidly. We demonstrate the capabilities with several experiments. The results show that the proposed scheme is competitive with several excellent denoising algorithms.

  9. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the denoising step. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.

  10. Image de-noising based on mathematical morphology and multi-objective particle swarm optimization

    Science.gov (United States)

    Dou, Liyun; Xu, Dan; Chen, Hao; Liu, Yicheng

    2017-07-01

    To address the problem of image de-noising, an efficient approach based on mathematical morphology and multi-objective particle swarm optimization (MOPSO) is proposed in this paper. First, a series-parallel compound morphology filter based on the open-close (OC) operation is constructed, and structural elements of different sizes are selected so as to eliminate as much noise as possible along the series link. Then, MOPSO is applied to determine the parameter settings of the multiple structural elements. Simulation results show that our algorithm achieves superior performance compared with some traditional de-noising algorithms.

  11. Image denoising by exploring external and internal correlations.

    Science.gov (United States)

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme, which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and from web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch matching accuracy in external denoising; the internal denoising is frequency truncation on internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements; for example, it achieves a gain of more than 2 dB over BM3D across a wide range of noise levels.

  12. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    Science.gov (United States)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising has long been an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the widespread use of nonlocal similar patches. However, methods based only on nonlocal self-similar (NSS) patches can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of NSS patches to obtain high denoising performance and high-quality images. To represent the sparse NSS patches at every location of the image well and to solve the image denoising model more efficiently, we obtain dictionaries as a global image prior by running the K-SVD algorithm over the image being processed; the alternating direction method of multipliers (ADMM) is then used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the resulting sparse augmented Lagrangian image denoising (SALID) model achieves state-of-the-art denoising performance and higher-quality images. Moreover, we also give denoising results on clinical X-ray angiogram images.
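
    The ADMM solver mentioned above can be illustrated on the core sparse-coding subproblem. The sketch below solves min_x 0.5*||Dx - y||^2 + lam*||x||_1 for a single vectorized patch with a fixed dictionary; in the paper the dictionary would come from K-SVD training on the image being processed, whereas the random dictionary, lam, and rho here are purely illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_admm(D, y, lam=0.1, rho=1.0, iters=100):
    """ADMM for  min_x 0.5*||D x - y||^2 + lam*||x||_1  with a fixed dictionary D."""
    k = D.shape[1]
    x = np.zeros(k); z = np.zeros(k); u = np.zeros(k)
    A = np.linalg.inv(D.T @ D + rho * np.eye(k))   # reused every iteration
    Dty = D.T @ y
    for _ in range(iters):
        x = A @ (Dty + rho * (z - u))              # quadratic x-update
        z = soft_threshold(x + u, lam / rho)       # l1 proximal (shrinkage) z-update
        u = u + x - z                              # dual update
    return z

# toy usage: random overcomplete dictionary, one noisy patch (vectorized)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
y = rng.standard_normal(64)
code = sparse_code_admm(D, y)
denoised_patch = D @ code
```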

  13. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called the convolutional denoising sparse autoencoder (CDSAE) is proposed based on the theory of the visual attention mechanism and deep learning methods. First, a saliency detection method is utilized to obtain training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Generally, prior knowledge about a specific task is helpful for solving it; therefore, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components, local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization, can be excluded from our proposed approach; they jointly improve image representation and classification performance.

  14. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected, and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
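
    A bare-bones version of wavelet soft-thresholding, which the record above refines for parallel MRI, can be written with PyWavelets as follows. The sketch uses a single global threshold estimated from the finest subband; the paper's edge-aware, spatially varying noise map is deliberately omitted, and the wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet='db4', level=3):
    """Global soft-thresholding of detail coefficients; sigma estimated by MAD."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # finest diagonal subband
    thr = sigma * np.sqrt(2 * np.log(img.size))                 # universal threshold
    new_coeffs = [coeffs[0]]
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thr, mode='soft') for d in details))
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[:img.shape[0], :img.shape[1]]                    # crop possible padding
```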

  15. Medical Image Denoising Using Mixed Transforms

    Directory of Open Access Journals (Sweden)

    Jaleel Sadoon Jameel

    2018-02-01

    In this paper, a mixed transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method applies WT and MWT in cascade to enhance denoising performance. In practice, the first step is to add noise to magnetic resonance imaging (MRI) or computed tomography (CT) images for testing purposes. The noisy image is processed by the WT to obtain four sub-bands, and each sub-band is treated individually using the MWT before the soft/hard denoising stage. Simulation results show that the peak signal-to-noise ratio (PSNR) is improved significantly and the characteristic features are well preserved by employing the mixed WT and MWT transform, owing to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) is decreased accordingly compared to other available methods.

  16. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image, which often correspond to discontinuities (edges). The proposed algorithm has been evaluated using a variety of standard images and its performance has been compared against several de-noising algorithms known from the prior art. Experimental results show that the proposed algorithm preserves the edges better and, in most cases, improves the measured visual quality of the denoised images in comparison to the existing methods known from the literature. The improvement is obtained without excessive computational cost, and the algorithm works well on a wide range of different types of noise.

  17. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of wavelet threshold function restricts the effectiveness of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and wavelet soft-threshold de-noising methods. It is shown that the new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method, and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carry out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation, and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, which lays the foundation for apple harvesting robots working at night.

  18. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    International Nuclear Information System (INIS)

    Yang, Xiaofeng; Fei, Baowei

    2011-01-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics, we apply the Radon transform to the original MR images and use a Gaussian noise model to process the MR sinogram image. A translation-invariant wavelet transform is employed to decompose the MR sinogram into multiple scales in order to effectively denoise the images. Based on the nature of Rician noise, we estimate the noise variance at different scales. For the final denoised sinogram, we apply the inverse Radon transform in order to reconstruct the original MR images. Phantom images, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over traditional methods. Our method can reduce Rician noise while preserving the key image details and features. The wavelet denoising method can have wide applications in MRI as well as other imaging modalities.

  19. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Because conventional remote sensing image classification methods have run into accuracy bottlenecks, a new classification method inspired by deep learning and based on the stacked denoising autoencoder is proposed. First, the deep network model is built from stacked denoising autoencoder layers. Then, with noise-corrupted input, an unsupervised greedy layer-wise training algorithm is used to train each layer in turn for more robust representations; features are then refined by supervised learning with a back-propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation; the overall accuracy and kappa coefficient reach 95.7% and 0.955, respectively, which are higher than those of the support vector machine and the BP neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.

  20. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
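
    The tail-behavior argument above can be checked numerically with a quick simulation: draw samples from a Poisson, a signal-dependent Gaussian, and a two-component Poisson mixture with the same mean, then compare extreme quantiles. The mixture weights and rates below are illustrative, not the fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_signal, n = 50.0, 200_000

poisson = rng.poisson(mean_signal, n)
gaussian = rng.normal(mean_signal, np.sqrt(mean_signal), n)        # signal-dependent Gaussian
use_low = rng.random(n) < 0.9                                      # illustrative mixture weights
pois_mix = np.where(use_low, rng.poisson(45.0, n), rng.poisson(95.0, n))  # same overall mean

for name, x in [("Poisson", poisson), ("Gaussian", gaussian), ("Poisson mixture", pois_mix)]:
    print(f"{name:16s} 99.9% quantile: {np.quantile(x, 0.999):7.1f}")
# the mixture produces a noticeably heavier upper tail at the same mean level
```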

  1. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of the comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.

  2. Patch Similarity Modulus and Difference Curvature Based Fourth-Order Partial Differential Equation for Image Denoising

    Directory of Open Access Journals (Sweden)

    Yunjiao Bai

    2015-01-01

    The traditional fourth-order nonlinear diffusion denoising model suffers from isolated speckles and the loss of fine details in the processed image. For this reason, a new fourth-order partial differential equation based on the patch similarity modulus and the difference curvature is proposed for image denoising. First, based on the intensity similarity of neighboring pixels, this paper presents a new edge indicator called the patch similarity modulus, which is strongly robust to noise. Furthermore, the difference curvature, which can effectively distinguish between edges and noise, is incorporated into the denoising algorithm to control the diffusion process by adaptively adjusting the size of the diffusion coefficient. The experimental results show that the proposed algorithm can not only preserve edges and texture details but also avoid isolated speckles and the staircase effect while filtering out noise, and that it performs better on images with abundant details. Additionally, the subjective visual quality and objective evaluation indices of the denoised images obtained by the proposed algorithm are higher than those of the related methods.

  3. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    Science.gov (United States)

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing and thus inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on the benchmark images and the Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values of the perceptual visual quality of denoised images, and that it also produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  4. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.

  5. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel, and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.

  6. Fringe pattern denoising via image decomposition.

    Science.gov (United States)

    Fu, Shujun; Zhang, Caiming

    2012-02-01

    Filtering off noise from a fringe pattern is one of the key tasks in optical interferometry. In this Letter, using some suitable function spaces to model different components of a fringe pattern, we propose a new fringe pattern denoising method based on image decomposition. In our method, a fringe image is divided into three parts: low-frequency fringe, high-frequency fringe, and noise, which are processed in different spaces. An adaptive threshold in wavelet shrinkage involved in this algorithm improves its denoising performance. Simulation and experimental results show that our algorithm obtains smooth and clean fringes with different frequencies while preserving fringe features effectively.

  7. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of an optical aperture synthesis (OAS) object's Fourier information. Translation-invariant wavelet denoising, built on Donoho's wavelet soft-threshold denoising, is investigated to remove the pseudo-Gibbs artifacts of soft-threshold wavelet images, and OAS object information extraction based on translation-invariant wavelet denoising is studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from an interferogram, and that extraction based on translation-invariant wavelet denoising is better than extraction based on plain soft-threshold wavelet denoising.
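
    The translation-invariant (cycle-spinning) idea referenced above is straightforward to sketch: threshold shifted copies of the image and average the unshifted results, which suppresses the pseudo-Gibbs artifacts of plain decimated wavelet thresholding. The wavelet, level, threshold, and number of shifts below are illustrative assumptions.

```python
import numpy as np
import pywt

def soft_wavelet(img, wavelet='db2', level=3, thr=0.05):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='soft') for d in det)
                            for det in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)[:img.shape[0], :img.shape[1]]

def translation_invariant_denoise(img, shifts=4, **kw):
    """Cycle spinning: average thresholded results over circular shifts to suppress
    the pseudo-Gibbs artifacts of plain decimated wavelet thresholding."""
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(shifts):
        for dx in range(shifts):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            den = soft_wavelet(shifted, **kw)
            acc += np.roll(np.roll(den, -dy, axis=0), -dx, axis=1)
    return acc / shifts**2
```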

  8. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

    This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a signal- and frequency-dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented on real JPEG images and scans of old photographs.

  9. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  10. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  11. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.
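
    A rough 1D sketch of the semi-classical idea is given below, assuming the standard SCSA-style reconstruction in which the signal is used as the potential of a Schrodinger operator and rebuilt from the squared eigenfunctions associated with its negative eigenvalues; the semi-classical parameter h, the grid, and the boundary conditions are assumptions, and the 2D image extension used in these records is not shown.

```python
import numpy as np

def scsa_denoise_1d(y, h=0.3, dx=1.0):
    """Use the signal as the potential of -h^2 d^2/dx^2 - y(x) and rebuild it from the
    squared eigenfunctions of the negative eigenvalues (SCSA-style reconstruction)."""
    n = len(y)
    d2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dx**2                 # Dirichlet second derivative
    H = -h**2 * d2 - np.diag(y)
    vals, vecs = np.linalg.eigh(H)
    neg = vals < 0
    kappa = np.sqrt(-vals[neg])
    psi2 = (vecs[:, neg] / np.sqrt(dx)) ** 2                     # L2-normalized, then squared
    return 4.0 * h * psi2 @ kappa

# toy usage: a noisy positive pulse
x = np.linspace(-5, 5, 200)
clean = np.exp(-x**2)
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(x.size)
rec = scsa_denoise_1d(noisy, h=0.3, dx=x[1] - x[0])
```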

  12. Image denoising using non linear diffusion tensors

    International Nuclear Information System (INIS)

    Benzarti, F.; Amiri, H.

    2011-01-01

    Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering useful structure in the image such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two nonlinear diffusion tensors. One allows diffusion along the orientation of greatest coherence, while the other allows diffusion along orthogonal directions. The idea is to track the local geometry of the degraded image closely and apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effectiveness of our model, we present experimental results on test and real photographic color images.
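
    The tensor-driven diffusion described above steers smoothing along coherent structures. As a much simpler relative of that idea, the sketch below implements scalar Perona-Malik edge-stopping diffusion; it is not the authors' tensor model, and the conductivity function, time step, and periodic boundary handling (via np.roll) are simplifying assumptions.

```python
import numpy as np

def perona_malik(img, iters=50, kappa=0.1, dt=0.2):
    """Scalar edge-stopping diffusion; smoothing is suppressed where |gradient| >> kappa."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)        # conductivity (edge-stopping) function
    for _ in range(iters):
        dn = np.roll(u, 1, axis=0) - u             # differences to the four neighbours
        ds = np.roll(u, -1, axis=0) - u            # (np.roll gives periodic boundaries)
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```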

  13. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation

    International Nuclear Information System (INIS)

    Bao, L J; Zhu, Y M; Liu, W Y; Pu, Z B; Magnin, I E; Croisille, P; Robini, M

    2009-01-01

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced to make the selection of atoms in the dictionary adapted to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated images and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures.

  14. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.

  15. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Bilateral filtering has been widely applied in digital image processing, but in high-gradient regions of the image it may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering; based on this analysis, a mixed image de-noising algorithm combining Gaussian filtering and bilateral filtering is proposed. First, a Gaussian filter is applied to the noisy image to obtain a reference image; then both the reference image and the noisy image are taken as inputs to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, while the noisy image provides its high-frequency information. Comparative experiments between the method in this paper and traditional bilateral filtering show that the mixed de-noising algorithm can effectively overcome the staircase effect; the filtered image is smoother, its textural features are closer to the original image, and it achieves a higher PSNR value, while the computational cost of the two algorithms is basically the same.
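
    The core of the scheme above is a joint (cross) bilateral filter whose range kernel compares intensities in a Gaussian-prefiltered reference image rather than in the raw noisy image. A small NumPy sketch of that idea follows; the kernel widths, window radius, and pre-filter sigma are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral(noisy, sigma_s=2.0, sigma_r=0.1, pre_sigma=1.5, radius=4):
    """Bilateral filter whose range weights are computed on a Gaussian-smoothed reference."""
    ref = gaussian_filter(noisy, pre_sigma)            # low-frequency guide image
    h, w = noisy.shape
    noisy_p = np.pad(noisy, radius, mode='reflect')
    ref_p = np.pad(ref, radius, mode='reflect')
    out = np.zeros_like(noisy)
    acc = np.zeros_like(noisy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
            sh_noisy = noisy_p[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sh_ref = ref_p[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            wgt = spatial * np.exp(-((sh_ref - ref) ** 2) / (2 * sigma_r**2))
            out += wgt * sh_noisy
            acc += wgt
    return out / acc
```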

  16. Image Structure-Preserving Denoising Based on Difference Curvature Driven Fractional Nonlinear Diffusion

    Directory of Open Access Journals (Sweden)

    Xuehui Yin

    2015-01-01

    Traditional image denoising techniques based on integer-order partial differential equations and gradient regularization often suffer from the staircase effect, speckle artifacts, and the loss of image contrast and texture details. To address these issues, a difference-curvature-driven fractional anisotropic diffusion for image noise removal is presented, which uses two techniques, fractional calculus and difference curvature, to describe the intensity variations in images. The fractional-order derivative information of an image can deal well with textures and achieves a good tradeoff between eliminating speckle artifacts and restraining the staircase effect. The difference curvature, constructed from the second-order derivatives along the gradient direction of an image and perpendicular to it, can effectively distinguish between edges and noise. A Fourier transform technique is also proposed to compute the fractional-order derivative. Experimental results demonstrate that the proposed denoising model can avoid speckle artifacts and the staircase effect and preserve important features such as curved edges, straight edges, ramps, corners, and textures; the results are clearly superior to those of traditional integer-order methods. The experimental results also show that our proposed model yields a good visual effect and better MSSIM and PSNR values.
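
    The Fourier route to fractional derivatives mentioned above can be sketched directly: take the FFT along one axis, multiply by (i*omega)^alpha, and invert. The function below is a minimal illustration of that definition only; it does not implement the record's difference-curvature-driven diffusion scheme, and the order alpha is an illustrative value.

```python
import numpy as np

def fractional_derivative(img, alpha=1.5, axis=0):
    """Fractional-order derivative along one axis: F^-1[ (i*omega)^alpha * F[f] ]."""
    n = img.shape[axis]
    omega = 2.0 * np.pi * np.fft.fftfreq(n)
    shape = [1] * img.ndim
    shape[axis] = n
    multiplier = (1j * omega.reshape(shape)) ** alpha
    return np.real(np.fft.ifft(multiplier * np.fft.fft(img, axis=axis), axis=axis))
```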

  17. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel method for low-light image denoising of low-frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results; low and very low frequency noise are dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level removes mixed noise by histogram equalization (HE) to improve overall contrast. The second level removes low-frequency noise by a logarithmic transformation (LOG) to enhance image detail. The third level removes residual very low frequency noise by high-pass filtering to recover more features of the true images. The PCA (principal component analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient enough for real-time applications.

  18. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied; it is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, by computing on NVIDIA multi-GPU platforms.

  19. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered; the most crucial problem with 3D US denoising is that the computational complexity increases tremendously. Nonlocal means (NLM) provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which can reliably capture the image statistics of log-compressed ultrasound images, is used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.

  20. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of the graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting used for the eigenvectors in the traditional EGL method, in our method the eigenvectors are adaptively selected throughout the denoising procedure. In detail, a rough image is first built with eigenvectors from the noisy image, where the eigenvectors are selected by using a deviation estimate of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the averaging coefficient is adaptively obtained so as to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern taken from the guided image, where the eigenvectors are chosen under the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group-sparse model. The experiments show that our method not only improves the practicality of EGL methods by reducing their dependence on parameter settings, but can also outperform some well-developed denoising methods, especially for noise with large deviations.

  1. Adaptive image denoising based on support vector machine and wavelet description

    Science.gov (United States)

    An, Feng-Ping; Zhou, Xian-Wei

    2017-12-01

    The adaptive image denoising method decomposes the original image into a series of basic pattern feature images on the basis of a wavelet description and constructs a support vector machine (SVM) regression function to realize the wavelet description of the original image. The SVM method allows the linear expansion of the signal to be expressed as a nonlinear function of the parameters associated with the SVM. Using the radial basis kernel function of the SVM, the original image can be expanded into a MEXICAN function and a residual trend; this MEXICAN function represents a basic image feature pattern. If the residual does not fluctuate, it can also be represented as a characteristic pattern. If the residuals fluctuate significantly, the residual is treated as a new image and the same decomposition process is repeated until the residuals obtained by the decomposition no longer fluctuate significantly. Experimental results show that the proposed method performs well; in particular, it satisfactorily solves the problem of image noise removal. It may provide a new tool and method for image denoising.

  2. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    Science.gov (United States)

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects their quality and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional-order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, geometric filter, bilateral, non-local means, wavelet, Perona et al., total variation (TV), the Global Adaptive Fractional Integral Algorithm (GAFIA), and the Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like peak signal-to-noise ratio (PSNR), speckle suppression index (SSI), structural similarity (SSIM), edge preservation index (β), and correlation coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques.

  3. Image denoising by sparse 3-D transform-domain collaborative filtering.

    Science.gov (United States)

    Dabov, Kostadin; Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-08-01

    We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
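
    A heavily simplified, toy version of one grouping and collaborative-filtering step is sketched below: find the blocks most similar to a reference block, stack them, apply a 3D DCT, hard-threshold the spectrum, and invert. Real BM3D additionally aggregates overlapping estimates with weights and adds a collaborative Wiener stage; the block size, search radius, group size, and threshold here are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def match_and_filter(img, ref_ij, block=8, search=16, n_blocks=16, thr=0.1):
    """One toy grouping + collaborative-filtering step: gather similar blocks,
    3D-DCT the stack, hard-threshold, invert.  No aggregation or Wiener stage."""
    bi, bj = ref_ij
    ref = img[bi:bi + block, bj:bj + block]
    cands = []
    for i in range(max(0, bi - search), min(img.shape[0] - block, bi + search) + 1):
        for j in range(max(0, bj - search), min(img.shape[1] - block, bj + search) + 1):
            blk = img[i:i + block, j:j + block]
            cands.append((np.sum((blk - ref) ** 2), i, j))
    cands.sort(key=lambda c: c[0])
    keep = cands[:n_blocks]
    group = np.stack([img[i:i + block, j:j + block] for _, i, j in keep])
    spec = dctn(group, norm='ortho')          # 3D transform of the group
    spec[np.abs(spec) < thr] = 0.0            # hard thresholding (collaborative filtering)
    return idctn(spec, norm='ortho'), [(i, j) for _, i, j in keep]

# toy usage on a random image
img = np.random.default_rng(0).random((64, 64))
filtered_group, positions = match_and_filter(img, (20, 20))
```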

  4. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    Full Text Available Abstract In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding the optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still obtain minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with a couple of PDE-based methods, where we obtain significantly better denoising results, especially in oscillatory regions.

  5. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...

  6. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe with simple distributions such as the Gaussian distribution, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.

  7. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, so image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation of the local observed variance, using a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  8. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method.

    Science.gov (United States)

    Meiniel, William; Olivo-Marin, Jean-Christophe; Angelini, Elsa D

    2018-08-01

    This paper reviews the state-of-the-art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators, and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed) without any a priori model and with a single set of parameter values. An extended comparison is also presented that evaluates the denoising performance of thirteen state-of-the-art denoising methods (including ours) specifically designed to handle the different types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the others in the majority of the tested scenarios.

  9. Edge-preserving image denoising via group coordinate descent on the GPU.

    Science.gov (United States)

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  10. Image fusion and denoising using fractional-order gradient information

    DEFF Research Database (Denmark)

    Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu

    Image fusion and denoising are significant in image processing because of the availability of multi-sensor and the presence of the noise. The first-order and second-order gradient information have been effectively applied to deal with fusing the noiseless source images. In this paper, due to the adv...... show that the proposed method outperforms the conventional total variation in methods for simultaneously fusing and denoising....

  11. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.

  12. The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm

    Science.gov (United States)

    Tan, Linglong; Li, Changkai; Wang, Yueqin

    2018-04-01

    SAR images often suffer noise interference during acquisition and transmission, which can greatly reduce image quality and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast, but its denoising effect is poor. In this paper, to address this poor denoising, the K-SVD (K-means and singular value decomposition) algorithm is applied to image noise suppression. Firstly, the sparse dictionary structure is introduced in detail; the dictionary has a compact representation and can be trained effectively on the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation over the dictionary. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm can remove speckle noise more effectively than the complete DCT dictionary and retains edge details better.

  13. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
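
    The paper's improved regression/Monte-Carlo correction formula is not reproduced here; the sketch below shows only the conventional second-moment correction (E[M^2] = A^2 + 2*sigma^2) that such methods aim to improve upon, to indicate where bias correction enters a denoising pipeline.

        import numpy as np

        def conventional_rician_correction(denoised_mag, sigma):
            """Classical bias correction for Rician magnitude data: since
            E[M^2] = A^2 + 2*sigma^2, the corrected intensity is
            sqrt(max(M^2 - 2*sigma^2, 0)). This is the conventional formula,
            not the improved correction proposed in the paper."""
            corrected_sq = np.maximum(denoised_mag.astype(np.float64) ** 2 - 2.0 * sigma ** 2, 0.0)
            return np.sqrt(corrected_sq)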

  14. Hand Depth Image Denoising and Superresolution via Noise-Aware Dictionaries

    Directory of Open Access Journals (Sweden)

    Huayang Li

    2016-01-01

    Full Text Available This paper proposes a two-stage method for hand depth image denoising and superresolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP) based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data using neighborhood pixels on which both depth-difference and RGB-similarity restrictions are imposed. The dictionary learning phase uses NAOMP to train dictionaries that separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual reduction step which effectively weakens the noise term within the residual during its decomposition in terms of atoms. Experimental results demonstrate that the bilateral phase and the NAOMP-based dictionary learning phase cooperatively denoise both virtual and real depth images effectively.

  15. Image Denoising Using Singular Value Difference in the Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Min Wang

    2018-01-01

    Full Text Available Singular value (SV difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform and the noise variance of a new image is used to make the calculation by the diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.

  16. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, filtered back-projection (FBP) is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean-filter FBP and median-filter FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation metrics, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best, with a lower MSE and a higher PSNR than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
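
    A minimal sketch of the pipeline discussed above, assuming a db2 wavelet at decomposition level 2 with soft thresholding of the projection data followed by standard FBP; the universal-threshold rule and the scikit-image iradon call (with a Hann filter) are implementation choices for illustration, not the paper's exact procedure.

        import numpy as np
        import pywt
        from skimage.transform import iradon

        def denoise_and_reconstruct(sinogram, theta, wavelet='db2', level=2):
            """Wavelet-denoise the sinogram (detector bins x angles), then apply FBP."""
            coeffs = pywt.wavedec2(sinogram.astype(np.float64), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise scale from finest diagonal band
            t = sigma * np.sqrt(2.0 * np.log(sinogram.size))     # universal threshold (assumed rule)
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(d, t, mode='soft') for d in details)
                for details in coeffs[1:]
            ]
            clean = pywt.waverec2(denoised, wavelet)[:sinogram.shape[0], :sinogram.shape[1]]
            return iradon(clean, theta=theta, filter_name='hann')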

  17. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.

  18. Image denoising via collaborative support-agnostic recovery

    KAUST Repository

    Behzad, Muzammil; Masood, Mudassir; Ballal, Tarig; Shadaydeh, Maha; Al-Naffouri, Tareq Y.

    2017-01-01

    In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.

  19. Image denoising via collaborative support-agnostic recovery

    KAUST Repository

    Behzad, Muzammil

    2017-06-20

    In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.

  20. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  1. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle well but also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion. Restoration of the degraded object in the block-thresholded US image is carried out through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by the AMMA hospital radiology labs at Vijayawada, India.

  2. Denoising time-resolved microscopy image sequences with singular value thresholding

    Energy Technology Data Exchange (ETDEWEB)

    Furnival, Tom, E-mail: tjof2@cam.ac.uk; Leary, Rowan K., E-mail: rkl26@cam.ac.uk; Midgley, Paul A., E-mail: pam33@cam.ac.uk

    2017-07-15

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. - Highlights: • Correlations in space and time are harnessed to denoise microscopy image sequences. • A robust estimator provides automated selection of the denoising parameter. • Motion tracking and automated noise estimation provides a versatile algorithm. • Application to time-resolved STEM enables study of atomic and nanoparticle dynamics.
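
    The low-rank idea can be sketched as plain singular value thresholding of the unfolded image sequence; the automatic threshold selection via an unbiased risk estimator used in the paper is not reproduced, so tau is a user-supplied parameter here.

        import numpy as np

        def svt_denoise_sequence(frames, tau):
            """Unfold a (T, H, W) sequence into a T x (H*W) matrix, soft-threshold
            its singular values by tau, and fold the low-rank estimate back."""
            T, H, W = frames.shape
            X = frames.reshape(T, H * W).astype(np.float64)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s_thr = np.maximum(s - tau, 0.0)              # soft-thresholding of singular values
            return ((U * s_thr) @ Vt).reshape(T, H, W)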

  3. Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2018-01-01

    Full Text Available SGK (sequential generalization of K-means) dictionary learning is a denoising algorithm with fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm that combines SGK dictionary learning with principal component analysis (PCA) noise estimation. First, the noise standard deviation of the image is estimated using the PCA noise estimation algorithm; it is then passed to the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm achieves a higher PSNR and better denoising performance.
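
    As a stand-in for the noise-estimation step, the sketch below uses the classical median-absolute-deviation estimate from the finest diagonal wavelet subband rather than the PCA estimator of the paper; it merely illustrates how an automatic sigma estimate can replace a manually supplied noise standard deviation before dictionary-learning denoising.

        import numpy as np
        import pywt

        def estimate_noise_std(img):
            """Donoho-style noise estimate: median absolute deviation of the
            finest diagonal wavelet coefficients divided by 0.6745. This is a
            stand-in for the PCA-based estimator, not the paper's method."""
            _, (_, _, hh) = pywt.dwt2(img.astype(np.float64), 'db1')
            return np.median(np.abs(hh)) / 0.6745

        # the estimate would then be handed to the dictionary-learning denoiser
        # in place of a manually supplied noise standard deviation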

  4. Application of Improved Wavelet Thresholding Function in Image Denoising Processing

    Directory of Open Access Journals (Sweden)

    Hong Qi Zhang

    2014-07-01

    Full Text Available Wavelet analysis is a time-frequency analysis method that solves time-frequency localization problems well. This paper analyzes the basic principles of the wavelet transform and the relationship between the Lipschitz exponent of signal singularities and the local maxima of the wavelet-transform coefficient modulus. The principles of the wavelet transform in image denoising are analyzed, the disadvantages of the traditional wavelet thresholding functions are studied, and the threshold function is improved: the discontinuity of the hard threshold and the constant deviation of the soft threshold are addressed, and the image is denoised using the improved threshold function.
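
    For reference, the hard and soft rules and one common improved ("firm"/semisoft) rule are sketched below; the firm rule is an assumed example of a function that removes the hard-threshold discontinuity and reduces the constant bias of the soft threshold, not necessarily the function proposed in the paper.

        import numpy as np

        def hard_threshold(w, t):
            """Keep coefficients above the threshold, zero the rest (discontinuous at +-t)."""
            return w * (np.abs(w) > t)

        def soft_threshold(w, t):
            """Shrink all coefficients toward zero by t (continuous, but biased by t)."""
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def firm_threshold(w, t1, t2):
            """Firm/semisoft rule (requires t2 > t1): zero below t1, identity above t2,
            linear interpolation in between; one common improved function, given here
            as an example rather than the paper's exact formula."""
            aw = np.abs(w)
            mid = np.sign(w) * t2 * (aw - t1) / (t2 - t1)
            return np.where(aw <= t1, 0.0, np.where(aw >= t2, w, mid))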

  5. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  6. Performance tuning for CUDA-accelerated neighborhood denoising filters

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Ziyi; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing, Computer Science; Xu, Wei

    2011-07-01

    Neighborhood denoising filters are powerful techniques in image processing and can effectively enhance the image quality in CT reconstructions. In this study, by taking the bilateral filter and the non-local mean filter as two examples, we discuss their implementations and perform fine-tuning on the targeted GPU architecture. Experimental results show that the straightforward GPU-based neighborhood filters can be further accelerated by pre-fetching. The optimized GPU-accelerated denoising filters are ready for plug-in into reconstruction framework to enable fast denoising without compromising image quality. (orig.)
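
    A CPU reference of the bilateral filter is sketched below; because every output pixel is an independent weighted mean over its neighbourhood, the GPU mapping discussed above is essentially one thread per pixel, with pre-fetching of the shared neighbourhood data.

        import numpy as np

        def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=25.0):
            """Bilateral filter reference: weights combine spatial closeness and
            intensity similarity, so edges are preserved while flat regions smooth."""
            img = img.astype(np.float64)
            pad = np.pad(img, radius, mode='reflect')
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
            out = np.empty_like(img)
            H, W = img.shape
            for i in range(H):
                for j in range(W):
                    win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rangew = np.exp(-((win - img[i, j]) ** 2) / (2.0 * sigma_r ** 2))
                    wgt = spatial * rangew
                    out[i, j] = np.sum(wgt * win) / np.sum(wgt)
            return out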

  7. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserve relevant information and saliency, and be fast, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its trade-off between accuracy and computational complexity, the best among state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have a similar noise distribution and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.

  8. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.

  9. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    Science.gov (United States)

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-tempo-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.

  10. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    Science.gov (United States)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.

  11. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Yuan Mei; Zhao Jianbin; Cui Lei

    2011-01-01

    Variable noise appears on the images in a video once the playback device is irradiated by γ-rays, affecting image clarity. In order to eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the hardware-and-software video de-noising scheme can effectively improve the PSNR by 87.5 dB. (authors)

  12. Denoising of MR images using FREBAS collaborative filtering

    International Nuclear Information System (INIS)

    Ito, Satoshi; Hizume, Masayuki; Yamada, Yoshifumi

    2011-01-01

    We propose a novel image denoising strategy based on the correlation in the FREBAS transformed domain. FREBAS transform is a kind of multi-resolution image analysis which consists of two different Fresnel transforms. It can decompose images into down-scaled images of the same size with a different frequency bandwidth. Since these decomposed images have similar distributions for the same directions from the center of the FREBAS domain, even when the FREBAS signal is hidden by noise in the case of a low-signal-to-noise ratio (SNR) image, the signal distribution can be estimated using the distribution of the FREBAS signal located near the position of interest. We have developed a collaborative Wiener filter in the FREBAS transformed domain which implements collaboration of the standard deviation of the position of interest and that of analogous positions. The experimental results demonstrated that the proposed algorithm improves the SNR in terms of both the total SNR and the SNR at the edges of images. (author)

  13. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper, an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding of wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which the WTD method cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is easier to operate. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series shows purely random rather than autocorrelated behaviour, so de-noising is no longer needed. PMID:25360533
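
    A hedged sketch of the energy-comparison idea: detail levels whose energy does not exceed a chosen percentile of the white-noise (Monte Carlo) background energy are treated as noise and removed. The noise-scale estimate, the percentile and the surrogate construction are illustrative assumptions, not the paper's exact confidence-interval procedure.

        import numpy as np
        import pywt

        def energy_based_denoise(series, wavelet='db4', level=4, n_mc=500, q=95, seed=0):
            """Remove detail levels whose energy is indistinguishable from a
            Monte Carlo white-noise background (illustrative parameters)."""
            rng = np.random.default_rng(seed)
            series = np.asarray(series, dtype=np.float64)
            coeffs = pywt.wavedec(series, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise scale from finest details
            bg = np.zeros((n_mc, level))                      # background energies per level
            for m in range(n_mc):
                noise = rng.normal(0.0, sigma, size=series.size)
                c = pywt.wavedec(noise, wavelet, level=level)
                bg[m] = [np.sum(d ** 2) for d in c[1:]]
            thresholds = np.percentile(bg, q, axis=0)
            for k in range(1, level + 1):
                if np.sum(coeffs[k] ** 2) <= thresholds[k - 1]:
                    coeffs[k] = np.zeros_like(coeffs[k])      # level judged to be noise only
            return pywt.waverec(coeffs, wavelet)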

  14. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    International Nuclear Information System (INIS)

    Karimi, Davood; Ward, Rabab K

    2016-01-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. (paper)

  15. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.

  16. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    Science.gov (United States)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising as a prospective tool for improved characterization of biological tissue using OCT.

  17. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    International Nuclear Information System (INIS)

    Othman, Ahmed E.; Brockmann, Carolin; Afat, Saif; Pjontek, Rastislav; Nikoubashman, Omid; Brockmann, Marc A.; Wiesmann, Martin; Yang, Zepa; Kim, Changwon; Nikolaou, Konstantin; Kim, Jong Hyo

    2016-01-01

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  18. Comparison of de-noising techniques of scintigraphic images; Comparaison de techniques de debruitage des images scintigraphiques

    Energy Technology Data Exchange (ETDEWEB)

    Kirkove, M.; Seret, A. [Liege Univ., Imagerie Medicale Experimentale, Institut de Physique (Belgium)

    2007-05-15

    Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between de-noising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and the visual quality. The wavelet techniques recently developed to de-noise Poisson-noise-limited images are divided into two groups, based on: (1) the Haar representation, and (2) the transformation of Poisson noise into white Gaussian noise by the Haar-Fisz transform followed by de-noising. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal values for the parameters were highlighted as a function of the aforementioned criteria. The intersection of ranges for the wavelet methods without thresholding was empty, and these methods were therefore not compared further quantitatively. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained by the filters. Ideally, these filters should be accurately defined for each image, which is difficult in the clinical context; moreover, they generate oscillation artefacts. In addition, the wavelet techniques did not bring significant improvements and are rather slow. Therefore, Shine, which is fast and works automatically, appears to be an interesting alternative. (authors)

  19. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    Science.gov (United States)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised by selecting a proper denoising scheme in order to prevent the loss of signal information along with the noise. An approach has been made in this work to show the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and Neighcoeff Coefficient (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above-mentioned four denoising schemes. The fault identification capability as well as the SNR, kurtosis and RMSE for the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models. The best denoising scheme has been identified based on the classification accuracy results. PCA is effective in all these regards and emerges as the best denoising scheme.

  20. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising...... procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We...

  1. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    Full Text Available K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
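
    A compact numpy sketch of the two alternating K-SVD steps (OMP sparse coding and rank-1 atom updates) is given below; the initialization, stopping rule and all parameter values are illustrative and differ from the optimized implementation analyzed in the paper.

        import numpy as np

        def omp(D, y, n_nonzero):
            """Greedy orthogonal matching pursuit for a single signal y over dictionary D."""
            residual, support = y.copy(), []
            x = np.zeros(D.shape[1])
            for _ in range(n_nonzero):
                k = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
                if k not in support:
                    support.append(k)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef           # re-fit over the current support
            x[support] = coef
            return x

        def ksvd(Y, n_atoms=64, n_nonzero=4, n_iter=10, seed=0):
            """K-SVD on a column-signal matrix Y (d x N): alternate OMP coding and
            atom-by-atom updates from the rank-1 SVD of the restricted residual."""
            rng = np.random.default_rng(seed)
            D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].astype(np.float64)
            D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
            for _ in range(n_iter):
                X = np.column_stack([omp(D, Y[:, i], n_nonzero) for i in range(Y.shape[1])])
                for k in range(n_atoms):
                    users = np.nonzero(X[k])[0]               # signals that use atom k
                    if users.size == 0:
                        continue
                    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
                    U, s, Vt = np.linalg.svd(E, full_matrices=False)
                    D[:, k] = U[:, 0]                         # updated atom
                    X[k, users] = s[0] * Vt[0]                # updated coefficients
            return D, X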

  2. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de...

  3. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform is proposed. A moving data window realizes real-time wavelet denoising, employing a lifting-scheme wavelet transform to reduce computational complexity. The hyperbolic threshold function and recursive threshold computation ensure the dynamic characteristics of the system and also improve real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs well in comparison with the traditional methods, namely soft thresholding and hard thresholding. Therefore, this method can solve more practical engineering problems.

  4. Nonlinear Image Denoising Methodologies

    National Research Council Canada - National Science Library

    Yufang, Bao

    2002-01-01

    In this thesis, we propose a theoretical as well as practical framework to combine geometric prior information to a statistical/probabilistic methodology in the investigation of a denoising problem...

  5. A Hybrid Technique for De-Noising Multi-Modality Medical Images by Employing Cuckoo’s Search with Curvelet Transform

    Directory of Open Access Journals (Sweden)

    Qaisar Javaid

    2018-01-01

    Full Text Available De-noising of medical images is a very difficult task. To improve the overall visual representation, contrast enhancement techniques need to be applied; this representation provides physicians and clinicians with better diagnostic results. Various de-noising and contrast enhancement methods have been developed. However, some of these methods do not provide good results in terms of accuracy and efficiency. In this paper, we de-noise and enhance medical images without any loss of information. We use the curvelet transform in combination with the ridgelet transform, along with the CS (Cuckoo Search) algorithm. The curvelet transform adaptively represents sparse pixel information along with all edges. Edges play a very important role in the understanding of images. The curvelet transform computes edges very efficiently where wavelets fail. We use CS to optimize the de-noising coefficients without loss of structural and morphological information. The designed method is accurate and efficient in de-noising medical images. The method attempts to remove multiplicative and additive noise. The proposed method proves to be efficient and reliable in removing all kinds of noise from medical images. Results indicate that the proposed approach is better than other approaches at removing impulse, Gaussian and speckle noise.

  6. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise in an image is presented, applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
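
    A minimal sketch of the described filter, assuming the outlier fence is simply the local [Q1, Q3] interval; the paper may use a different fence (for example Tukey's Q1 - 1.5*IQR and Q3 + 1.5*IQR), and the window size k is a free parameter.

        import numpy as np

        def iqr_denoise(img, k=5):
            """Flag pixels outside the local interquartile range of their k x k
            window as noisy and replace them by the mean of the inlier neighbours."""
            img = img.astype(np.float64)
            r = k // 2
            pad = np.pad(img, r, mode='reflect')
            out = img.copy()
            H, W = img.shape
            for i in range(H):
                for j in range(W):
                    win = pad[i:i + k, j:j + k]
                    q1, q3 = np.percentile(win, [25, 75])
                    if not (q1 <= img[i, j] <= q3):           # pixel looks like an outlier
                        inliers = win[(win >= q1) & (win <= q3)]
                        if inliers.size:
                            out[i, j] = inliers.mean()        # local averaging of inlier pixels
            return out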

  7. Edge-preserving image denoising via group coordinate descent on the GPU

    OpenAIRE

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This pape...

  8. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  9. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    International Nuclear Information System (INIS)

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-01-01

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from the noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T

  10. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach to image denoising in the wavelet domain. In recent research the wavelet transform has been used as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation image to obtain a better denoised estimate; this naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate. Increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain. 100 images of the LIVE dataset were tested, comparing signal-to-noise ratios (SNR): soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.

  11. Unmixing-Based Denoising as a Pre-Processing Step for Coral Reef Analysis

    Science.gov (United States)

    Cerra, D.; Traganos, D.; Gege, P.; Reinartz, P.

    2017-05-01

    Coral reefs, among the world's most biodiverse and productive submerged habitats, have faced several mass bleaching events due to climate change during the past 35 years. In the course of this century, global warming and ocean acidification are expected to cause corals to become increasingly rare on reef systems. This will result in a sharp decrease in the biodiversity of reef communities and carbonate reef structures. Coral reefs may be mapped, characterized and monitored through remote sensing. Hyperspectral images are particularly well suited to coral monitoring, as they are characterized by very rich spectral information, which results in a strong discrimination power to characterize a target of interest and separate healthy corals from bleached ones. Being submerged habitats, coral reef systems are difficult to analyse in airborne or satellite images, as relevant information is conveyed in bands in the blue range which exhibit lower signal-to-noise ratio (SNR) with respect to other spectral ranges; furthermore, water absorbs most of the incident solar radiation, further decreasing the SNR. Derivative features, which are important in coral analysis, are greatly affected by the noise present in the relevant spectral bands, justifying the need for new denoising techniques able to keep local spatial and spectral features. In this paper, Unmixing-based Denoising (UBD) is used to enable analysis of a hyperspectral image acquired over a coral reef system in the Red Sea based on derivative features. UBD reconstructs pixelwise a dataset with reduced noise effects, by forcing each spectrum to be a linear combination of other reference spectra, exploiting the high dimensionality of hyperspectral datasets. Results show clear enhancements with respect to traditional denoising methods based on spatial and spectral smoothing, facilitating the coral detection task.

  12. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Kaisserli, Zineb

    2015-01-01

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter

  13. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-01-01

    Full Text Available An Intensified Charge-Coupled Device (ICCD) image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics. (a) Different from the independent identically distributed (i.i.d.) noise in natural images, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) The pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapped patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in the pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserved sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process by changing relevant parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the reconstructed image by merging all the generated images into one. Experiments are conducted on an ICCD sensing image dataset, which verifies its subjective performance in removing the randomly clustered noise and preserving the real structure information in the ICCD sensing image.

  14. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    Science.gov (United States)

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques mainly concentrate on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while retaining a simple structure and high efficiency. Furthermore, our proposed model comes with the additional advantage that the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  15. Computed tomography perfusion imaging denoising using Gaussian process regression

    International Nuclear Information System (INIS)

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna

    2012-01-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. Consequently, the development of methods for improving the CNR is valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study. (note)
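
    As a hedged illustration of the idea, the sketch below denoises a single voxel's synthetic time-concentration curve with Gaussian process regression using scikit-learn; the kernel, time axis and noise level are assumptions, not values from the study.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Denoise one voxel's time-concentration curve with GPR (all values illustrative).
        t = np.linspace(0, 60, 120)[:, None]                          # acquisition times in seconds
        true_curve = 80.0 * np.exp(-(t.ravel() - 20.0) ** 2 / 50.0)   # synthetic bolus passage
        noisy = true_curve + np.random.normal(0.0, 8.0, t.shape[0])   # low-CNR measurement

        kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=50.0)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, noisy)
        denoised = gpr.predict(t)                                     # posterior mean = denoised curve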

  16. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  17. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    Science.gov (United States)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it has the ability to provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, which is based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, segments the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of simulated signal tests and a real dual field-of-view lidar signal show the feasibility of the universal de-noising algorithm.

  18. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    Science.gov (United States)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
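
    The role of the variance-stabilizing transformation can be sketched as follows (an illustrative pipeline, not the authors' exact implementation): apply the Anscombe VST so the signal-dependent quantum noise becomes approximately Gaussian, run any Gaussian-noise denoiser, then invert the transform. A plain Gaussian filter stands in for the state-of-the-art denoiser tested in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def anscombe(x):
            """Variance-stabilizing transform for Poisson-dominated (quantum) noise."""
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse (an exact unbiased inverse is preferable in practice)."""
            return (y / 2.0) ** 2 - 3.0 / 8.0

        def vst_denoise(raw):
            """Sketch of the VST pipeline: stabilize, denoise as Gaussian noise, invert."""
            stabilized = anscombe(raw.astype(np.float64))
            denoised = gaussian_filter(stabilized, sigma=1.0)   # stand-in Gaussian denoiser
            return inverse_anscombe(denoised)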

  19. Fractional Diffusion, Low Exponent Lévy Stable Laws, and 'Slow Motion' Denoising of Helium Ion Microscope Nanoscale Imagery.

    Science.gov (United States)

    Carasso, Alfred S; Vladár, András E

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
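
    A compact sketch of the spectral mechanism described above, under assumed parameter values: integrating the linear fractional diffusion equation forward in time multiplies each Fourier mode of the image by exp(-t |k|^alpha), which is straightforward to implement with FFTs.

        import numpy as np

        def fractional_diffusion_denoise(img, alpha=1.2, t=0.5):
            """'Slow motion' smoothing by integrating u_t = -(-Laplacian)^(alpha/2) u forward in time.

            Implemented spectrally: each Fourier mode decays by exp(-t * |k|^alpha).
            alpha < 2 gives gentler, detail-preserving smoothing than ordinary (alpha = 2) diffusion;
            alpha and t here are illustrative choices, not the paper's settings."""
            ny, nx = img.shape
            ky = np.fft.fftfreq(ny) * 2 * np.pi
            kx = np.fft.fftfreq(nx) * 2 * np.pi
            k2 = ky[:, None] ** 2 + kx[None, :] ** 2
            decay = np.exp(-t * k2 ** (alpha / 2.0))
            return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))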

  20. Effect of denoising on supervised lung parenchymal clusters

    Science.gov (United States)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, lack of quantitative measures of their efficacy to enhance the clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, a simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  1. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    Science.gov (United States)

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. On the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster the atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies can greatly reduce the computational complexity of the MCP and help it obtain a better sparse solution. On the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed that uses the learning capabilities of the presented algorithm to simultaneously capture the correlations within each slice and the correlations across nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE

  2. Echocardiogram enhancement using supervised manifold denoising.

    Science.gov (United States)

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. [Research on electrocardiogram de-noising algorithm based on wavelet neural networks].

    Science.gov (United States)

    Wan, Xiangkui; Zhang, Jun

    2010-12-01

    In this paper, ECG de-noising technology based on wavelet neural networks (WNN) is used to deal with the noise in the electrocardiogram (ECG) signal. The WNN, which has an outstanding nonlinear mapping capability, is designed as a nonlinear filter for the ECG to cancel baseline wander, electromyographical interference and powerline interference. The network training algorithm and de-noising experiment results are presented, and some key points of using the WNN filter for ECG de-noising are discussed.

  4. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation for a long exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure time, which resulted in the emergence of a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images when they are acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomography reconstructions. We have run multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e. with different values of SNR) and equipped with gold beads to help in the assessment step. We herein propose a structure to combine multiple noisy copies of the TEM images. The structure is based on four different denoising methods used to combine the multiple noisy TEM image copies: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to maintain edges neatly, and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we have made sure to use the appropriate wavelet family at the appropriate level; we have chosen the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filtering, many tests were done in order to determine the proper filter parameters, represented by the size of the filter, the range parameter and the

  5. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM based wireless communication receiver. Frequency domain pilot aided channel estimation techniques are either least squares (LS based or minimum mean square error (MMSE based. LS based techniques are computationally less complex. Unlike MMSE ones, they do not require a priori knowledge of channel statistics (KCS. However, the mean square error (MSE performance of the channel estimator incorporating MMSE based techniques is better compared to that obtained with the incorporation of LS based techniques. To enhance the MSE performance using LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied on the LS estimated channel impulse response (CIR. The advantage of denoising threshold based LS techniques is that, they do not require KCS but still render near optimal MMSE performance similar to MMSE based techniques. In this paper, a detailed survey on various existing denoising strategies, with a comparative discussion of these strategies is presented.

  6. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-05-01

    Full Text Available Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems, see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
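
    For reference, a minimal usage sketch of ROF/TV denoising with a split Bregman solver, using scikit-image's implementation; the noise level and the regularization weight are illustrative assumptions.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_tv_bregman

        # Minimal usage sketch of ROF/TV denoising solved with split Bregman.
        clean = img_as_float(data.camera())
        noisy = clean + np.random.normal(0.0, 0.08, clean.shape)   # AWGN, sigma assumed
        # 'weight' balances the data-fidelity and TV terms (larger = less smoothing).
        denoised = denoise_tv_bregman(noisy, weight=10.0, isotropic=True)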

  7. Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

    Science.gov (United States)

    Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena

    2011-03-01

    Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that the ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing using corresponding high resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error reduction of up to 36% compared to the error of the original ToF surfaces.
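
    A hedged sketch of the general idea, not the authors' camera-specific noise model: a bilateral filter whose range (intensity) parameter is tied to an estimated noise level of the range image. The scale factor k and the window settings are assumptions.

        import numpy as np
        from skimage.restoration import denoise_bilateral, estimate_sigma

        def denoise_range_image(depth, k=2.0):
            """Bilateral smoothing of a ToF range image with the range sigma tied to a
            rough noise estimate; depth is a 2D float array of range values."""
            sigma_n = estimate_sigma(depth)                  # crude noise level estimate
            return denoise_bilateral(depth,
                                     sigma_color=k * sigma_n,  # edge-stopping on range differences
                                     sigma_spatial=3)          # spatial neighbourhood scale (pixels)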

  8. LSTM-Based Hierarchical Denoising Network for Android Malware Detection

    Directory of Open Access Journals (Sweden)

    Jinpei Yan

    2018-01-01

    Full Text Available Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which still struggles to fully describe malware. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to directly learn from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM to train due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM computes in parallel on opcode subsequences (we call them method blocks) to learn dense representations; then the second-level LSTM can learn and detect malware through method block sequences. Considering that malicious behavior only appears in partial sequence segments, HDN uses a method block denoise module (MBDM) for data denoising by an adaptive gradient scaling strategy based on loss cache. We evaluate and compare HDN with the latest mainstream approaches on three datasets. The results show that HDN outperforms these Android malware detection methods, and it is able to capture longer sequence features and has better detection efficiency than N-gram-based malware detection, which is similar to our method.

  9. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    Directory of Open Access Journals (Sweden)

    Vijay G. S.

    2012-01-01

    Full Text Available The wavelet based denoising has proven its ability to denoise the bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), for the bearing condition classification. The work consists of two parts, the first part in which a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes. The best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). Extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.

  10. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Full Text Available Abstract Background Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally-invasive surgeries of the kidney, such as Percutaneous Nephrolithotomy (PCNL) and Radio-Frequency Ablation (RFA) of kidney tumors, which refer to surgical procedures where access to a target inside the kidney is gained by a needle puncture of the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods In this paper, we proposed a novel framework for kidney segmentation of ultrasound (US) images combining nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. Firstly, a denoised US image was obtained by NLTV image denoising. Secondly, DRLSE was applied in the kidney segmentation to get a binary image. In this case, the black and white regions represented the kidney and the background, respectively. In the last stage, the shape prior was applied to get a shape with a smooth boundary from the kidney shape space, which was used to optimize the segmentation result of the second step. The alignment model was used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results We applied our segmentation framework on synthetic and real US images to demonstrate the better segmentation results of our method. From the qualitative results, the experiments show that the segmentation results are much closer to the manual segmentations. The sensitivity (SN), specificity (SP) and positive predictive value

  11. Poisson denoising on the sphere

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST on the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.

  12. Example-based human motion denoising.

    Science.gov (United States)

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.

  13. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important signals. However, the process of acquiring the heart sound signal can be interfered with by many external factors. The heart sound is a weak signal, and even weak external noise may lead to the misjudgment of pathological and physiological information in this signal, thus causing errors in disease diagnosis. As a result, it is key to remove the noise mixed with the heart sound. In this paper, a more systematic study and analysis of heart sound denoising based on MATLAB has been carried out. The study first uses the powerful processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain through multi-level wavelet decomposition. Then, for the detail coefficients, soft thresholding is applied to eliminate noise, so that the signal denoising is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction of the processed detail coefficients. Lastly, the 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
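
    A rough sketch of the described pipeline in Python rather than MATLAB (illustrative only): multi-level wavelet decomposition, soft thresholding of the detail coefficients, reconstruction, and a 50 Hz notch filter. The sampling rate, wavelet and decomposition level are assumed values, not those of the paper.

        import numpy as np
        import pywt
        from scipy.signal import iirnotch, filtfilt

        def denoise_heart_sound(x, fs=2000, wavelet='db6', level=5):
            """Wavelet soft-threshold denoising of a 1-D heart sound, then a 50 Hz notch."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest details
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))               # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            y = pywt.waverec(coeffs, wavelet)[:len(x)]
            b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)                   # remove 50 Hz power-line hum
            return filtfilt(b, a, y)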

  14. Image matching in Bayer raw domain to de-noise low-light still images, optimized for real-time implementation

    Science.gov (United States)

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-03-01

    Temporal accumulation of images is a well-known approach to improve signal-to-noise ratios of still images taken in low-light conditions. However, the complexity of known algorithms often leads to high hardware resource usage, increased memory bandwidth and computational complexity, making their practical use impossible. In our research we attempt to solve this problem with an implementation of a practical spatial-temporal de-noising algorithm, based on image accumulation. Image matching and spatial-temporal filtering were performed in Bayer RAW data space, which allowed us to benefit from predictable sensor noise characteristics, thus allowing a range of algorithmic optimizations. The proposed algorithm accurately compensates for global and local motion and efficiently removes different kinds of noise in noisy images taken in low-light conditions. In our algorithm we were able to perform global and local motion compensation in Bayer RAW data space, while preserving the resolution and effectively improving signal-to-noise ratios of moving objects as well as of the non-stationary background. The proposed algorithm is suitable for implementation in commercial-grade FPGAs and capable of processing 16 MP images at capture rate (10 frames per second). The main challenge for matching between still images is the compromise between the quality of the motion prediction and the complexity of the algorithm and required memory bandwidth. Still images taken in a burst sequence must be aligned to compensate for background motion and foreground object movements in a scene. High resolution still images coupled with significant time between successive frames can produce large displacements between images, which creates additional difficulty for image matching algorithms. In photo applications it is very important that the noise is efficiently removed in both static and non-static backgrounds as well as in moving objects, maintaining the resolution of the image. In our proposed

  15. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.

    Science.gov (United States)

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically extract the structural components of the microvascular system with accuracy from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods, leading to better microvasculature segmentation.

  16. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    Science.gov (United States)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise which interferes in the analysis and interpretation of images. Therefore, it is extremely important to expand the filters' capacity to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper aims to develop two approaches using non-local means (NLM), originally developed for AWGN. In our research, we extended its capability to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution without transforming the data to the logarithmic domain, as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method uses a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.

  17. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) is utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be contaminated by various noises such as baseline wander, power-line interference, and electromagnetic interference during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has been found to be an effective tool to remove noise from corrupted signals. A new compromise threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals. It overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising can be shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the waves, including the P, Q, R, and S waves of the ECG signals after denoising, coincide with the original ECG signals when employing the new proposed method.
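
    The compromise between hard and soft thresholding can be illustrated with a simple sigmoid-weighted shrinkage rule; this is an illustrative form under assumed parameters, not the exact function proposed in the paper.

        import numpy as np

        def hard_threshold(w, t):
            """Keep coefficients above the threshold, zero the rest (discontinuous at |w| = t)."""
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            """Shrink all surviving coefficients by t (introduces a fixed bias)."""
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def sigmoid_threshold(w, t, alpha=10.0):
            """Compromise rule: a sigmoid gate shrinks coefficients near the threshold smoothly
            (no jump at |w| = t) while leaving large coefficients nearly unbiased."""
            gate = 1.0 / (1.0 + np.exp(-alpha * (np.abs(w) - t)))
            return w * gate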

  18. Enhancement and denoising of mammographic images for breast disease detection

    International Nuclear Information System (INIS)

    Yazdani, S.; Yusof, R.; Karimian, A.; Hematian, A.; Yousefi, M.

    2012-01-01

    In the past two decades, breast cancer has been one of the leading causes of death among women. In breast cancer research, mammographic imaging is being assessed as a potential tool for detecting breast disease and investigating response to chemotherapy. In the first stage of breast disease discovery, the density measurement of the breast in mammographic images provides very useful information. Because of the important role of mammographic images, the need for accurate and robust automated image enhancement techniques is becoming clear. Mammographic images have some disadvantages, such as the high dependence of contrast upon the way the image is acquired, weak distinction between cyst and tumor, intensity non-uniformity, the existence of noise, etc. These limitations make it difficult to detect typical signs such as masses and microcalcifications. For this reason, denoising and enhancing the quality of mammographic images is very important. The method used in this paper operates in the spatial domain; its input includes high, intermediate and even very low contrast mammographic images, based on the specialist physician's view, while its output consists of processed images that show the input images with higher quality, more contrast and more details. In this research, 38 mammographic images have been used. The result of the proposed method shows details of abnormal zones and areas with defects so that the specialist can explore these zones more accurately, and it could serve as an index for cancer diagnosis. In this study, mammographic images are initially converted into digital images; then, to increase the spatial resolution power, their noise is reduced and consequently their contrast is improved. The results demonstrate the effectiveness and efficiency of the proposed methods. (authors)

  19. 3D seismic denoising based on a low-redundancy curvelet transform

    International Nuclear Information System (INIS)

    Cao, Jingjie; Zhao, Jingtao; Hu, Zhiying

    2015-01-01

    Contamination of seismic signal with noise is one of the main challenges during seismic data processing. Several methods exist for eliminating different types of noises, but optimal random noise attenuation remains difficult. Based on multi-scale, multi-directional locality of curvelet transform, the curvelet thresholding method is a relatively new method for random noise elimination. However, the high redundancy of a 3D curvelet transform makes its computational time and memory for massive data processing costly. To improve the efficiency of the curvelet thresholding denoising, a low-redundancy curvelet transform was introduced. The redundancy of the low-redundancy curvelet transform is approximately one-quarter of the original transform and the tightness of the original transform is also kept, thus the low-redundancy curvelet transform calls for less memory and computational resource compared with the original one. Numerical results on 3D synthetic and field data demonstrate that the low-redundancy curvelet denoising consumes one-quarter of the CPU time compared with the original curvelet transform using iterative thresholding denoising when comparable results are obtained. Thus, the low-redundancy curvelet transform is a good candidate for massive seismic denoising. (paper)

  20. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as ”the pre-image problem”. Since the feature space mapping is typi...

  1. Denoising GPS-Based Structure Monitoring Data Using Hybrid EMD and Wavelet Packet

    Directory of Open Access Journals (Sweden)

    Lu Ke

    2017-01-01

    Full Text Available High-frequency components are often discarded for data denoising when applying pure wavelet multiscale or empirical mode decomposition (EMD) based approaches. However, this may raise the problem of energy leakage in vibration signals. A hybrid EMD and wavelet packet (EMD-WP) method is proposed to denoise Global Positioning System (GPS) based structure monitoring data. First, field observables are decomposed into a collection of intrinsic mode functions (IMFs) with different characteristics. Second, high-frequency IMFs are denoised using the wavelet packet; then the monitoring data are reconstructed using the denoised IMFs together with the remaining low-frequency IMFs. Our algorithm is demonstrated on a synthetic displacement response of a 3-story frame excited by the El Centro earthquake, with Gaussian white noise added at different levels. We find that the hybrid method can effectively weaken the low-frequency multipath effect and can potentially extract vibration features. However, false modes may still arise from the residual noise contained in the high-frequency IMFs and when the noise frequency lies in the same band as that of the effective vibration. Finally, real GPS observables are used to evaluate the efficiency of the EMD-WP method in mitigating low-frequency multipath.

  2. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    International Nuclear Information System (INIS)

    Han, G.; Lin, B.; Xu, Z.

    2017-01-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal becomes a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising but not perfect method for processing nonlinear and non-stationary signals such as the ECG signal. EMD combined with other algorithms is a good solution to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are outlined.

  3. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    Science.gov (United States)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary, weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal becomes a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. The EMD technique is a promising but not perfect method for processing nonlinear and non-stationary signals such as the ECG signal. EMD combined with other algorithms is a good solution to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are outlined.

  4. OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform

    Science.gov (United States)

    Nan, F.; Xu, Y.

    2017-12-01

    OBS (Ocean Bottom Seismometer) data denoising is an important step of OBS data processing and inversion. It is necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include the band-pass filter, the Wiener filter and deconvolution, etc. (Liu, 2015). Most of these filtering methods are based on the Fourier Transform (FT). Recently, multi-scale transform methods such as the wavelet transform (WT) and the Curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can represent signals sparsely and separate noise in the transform domain, and they can be used in different cases. Compared with the Curvelet transform, the FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multi-scale, but it has poor orientation selectivity and cannot handle curve discontinuities well. The CvT is a multiscale directional transform that can represent curves with only a small number of coefficients. It provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. As we know, different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze the OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem by using modified iterative thresholding. Results show that the proposed method can suppress the noise well and give sparse results in the Curvelet domain. Figure 1 compares the Curvelet denoising method with the wavelet method for the same iterations and threshold on a synthetic example: (a) original data; (b) noise-added data; (c) denoised data using CvT; (d) denoised data using WT. The CvT can eliminate the noise well and gives better results than the WT. Further, we applied the CvT denoising method to the OBS data processing. Figure 2a

  5. Quantification of GABAA receptors in the rat brain with [123I]Iomazenil SPECT from factor analysis-denoised images

    International Nuclear Information System (INIS)

    Tsartsalis, Stergios; Moulin-Sallanon, Marcelle; Dumas, Noé; Tournier, Benjamin B.; Ghezzi, Catherine; Charnay, Yves; Ginovart, Nathalie; Millet, Philippe

    2014-01-01

    Purpose: In vivo imaging of GABAA receptors is essential for the comprehension of psychiatric disorders in which the GABAergic system is implicated. Small animal SPECT provides a modality for in vivo imaging of the GABAergic system in rodents using [123I]Iomazenil, an antagonist of the GABAA receptor. The goal of this work is to describe and evaluate different quantitative reference tissue methods that enable reliable binding potential (BP) estimations in the rat brain to be obtained. Methods: Five male Sprague–Dawley rats were used for [123I]Iomazenil brain SPECT scans. Binding parameters were obtained with a one-tissue compartment model (1TC), a constrained two-tissue compartment model (2TCc), the two-step Simplified Reference Tissue Model (SRTM2), Logan graphical analysis and analysis of delayed-activity images. In addition, we employed factor analysis (FA) to deal with noise in data. Results: BPND obtained with SRTM2, Logan graphical analysis and delayed-activity analysis was highly correlated with BPF values obtained with 2TCc (r = 0.954 and 0.945 respectively, p c and SRTM2 in raw and FA-denoised images (r = 0.961 and 0.909 respectively, p ND values from raw images while scans of only 70 min are sufficient from FA-denoised images. These images are also associated with significantly lower standard errors of 2TCc and SRTM2 BP values. Conclusion: Reference tissue methods such as SRTM2 and Logan graphical analysis can provide equally reliable BPND values from rat brain [123I]Iomazenil SPECT. Acquisitions, however, can be much less time-consuming either with analysis of delayed activity obtained from a 20-minute scan 50 min after tracer injection or with FA-denoising of images

  6. Machinery vibration signal denoising based on learned dictionary and sparse representation

    International Nuclear Information System (INIS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-01-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods; however, those methods are based on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal. In order to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the best-matching atoms from the dictionary. Finally, the denoised signal is computed from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient. (paper)
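
    A hedged sketch of patch-based dictionary learning denoising for a 1-D vibration signal, using scikit-learn's mini-batch dictionary learner with OMP sparse coding; the patch length, dictionary size and sparsity level are assumptions, and this online learner only approximates the paper's algorithm.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        def dictionary_denoise(signal, patch_len=64, n_atoms=128, n_nonzero=5):
            """Learn atoms from the noisy signal itself, sparse-code each patch with OMP,
            and reconstruct by averaging the overlapping patch estimates."""
            patches = np.array([signal[i:i + patch_len]
                                for i in range(len(signal) - patch_len + 1)])
            mean = patches.mean(axis=1, keepdims=True)
            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm='omp',
                                               transform_n_nonzero_coefs=n_nonzero,
                                               random_state=0)
            codes = dico.fit(patches - mean).transform(patches - mean)
            recon = codes @ dico.components_ + mean
            # overlap-add the denoised patches back into a full-length signal
            out = np.zeros(len(signal))
            cnt = np.zeros(len(signal))
            for i, p in enumerate(recon):
                out[i:i + patch_len] += p
                cnt[i:i + patch_len] += 1
            return out / cnt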

  7. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    Science.gov (United States)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology has a better performance than state-of-the-art techniques.

  8. A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis

    DEFF Research Database (Denmark)

    Galletti, Ardelio; Marcellino, Livia; Montella, Raffaele

    2017-01-01

    Abstract Generic Virtualization Service (GVirtuS) is a new solution for enabling GPGPU on Virtual Machines or low powered devices. This paper focuses on the performance analysis that can be obtained using a GPGPU virtualized software. Recently, GVirtuS has been extended in order to support CUDA...... ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem, which uses the NVIDIA cuFFT library. As case study we consider a simple denoising algorithm, implementing a virtualized GPU-parallel software based on the convolution theorem...
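
    As a minimal, CPU-only sketch of the convolution-theorem denoising mentioned above (the paper offloads the FFTs to the NVIDIA cuFFT library through GVirtuS, which is not reproduced here), a Gaussian blur can be applied by multiplying spectra in the Fourier domain with plain NumPy:

    ```python
    # Convolution theorem: conv(image, kernel) = IFFT(FFT(image) * FFT(kernel)).
    import numpy as np

    def fft_gaussian_denoise(image, sigma=2.0):
        h, w = image.shape
        y, x = np.mgrid[:h, :w]
        kernel = np.exp(-((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2 * sigma ** 2))
        kernel /= kernel.sum()
        kernel = np.fft.ifftshift(kernel)                 # center the kernel at (0, 0)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

    rng = np.random.default_rng(1)
    noisy = np.zeros((128, 128)); noisy[32:96, 32:96] = 1.0
    noisy += 0.2 * rng.standard_normal(noisy.shape)
    smoothed = fft_gaussian_denoise(noisy, sigma=1.5)
    ```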

  9. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in the pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.

  10. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    Science.gov (United States)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  11. GPU Performance and Power Consumption Analysis: A DCT based denoising application

    OpenAIRE

    Pi Puig, Martín; De Giusti, Laura Cristina; Naiouf, Marcelo; De Giusti, Armando Eduardo

    2017-01-01

    It is known that energy and power consumption are becoming serious metrics in the design of high performance workstations because of heat dissipation problems. In recent years, GPU accelerators have been integrated into many of these expensive systems, even though they embed more and more transistors on their chips, producing a rapid increase in power consumption requirements. This paper analyzes an image processing application, in particular a Discrete Cosine Transform denoising algorithm, i...

  12. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    Directory of Open Access Journals (Sweden)

    Qiyang Xiao

    2016-12-01

    Full Text Available In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small leaks in the pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.

  13. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
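
    A hedged sketch of the underlying smoothing step, using SciPy's thin-plate spline RBF on a noisy height-field point set, is shown below; the bootstrap test-error selection of the smoothing parameter and the k-nearest-neighbour handling from the paper are not reproduced, and the `smooth` value is only an illustrative guess.

    ```python
    # Fit a thin-plate spline RBF to noisy points z = f(x, y) and project the
    # points onto the fitted surface (smoothing parameter fixed by hand).
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(2)
    x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
    z_clean = np.sin(np.pi * x) * np.cos(np.pi * y)
    z_noisy = z_clean + 0.1 * rng.standard_normal(x.size)

    tps = Rbf(x, y, z_noisy, function='thin_plate', smooth=1e-2)
    z_denoised = tps(x, y)                      # points projected onto the surface
    print('RMSE before:', np.sqrt(np.mean((z_noisy - z_clean) ** 2)),
          'after:', np.sqrt(np.mean((z_denoised - z_clean) ** 2)))
    ```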

  14. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    Science.gov (United States)

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance details and layered structures of a human retina image, we propose a collaborative shock filtering approach for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and then the denoised image is sharpened by shock-type filtering for edge and detail enhancement. For dim OCT images, in order to improve image contrast for the detection of tiny lesions, a gamma transformation is first used to enhance the images within proper gray levels. The proposed method, which integrates image smoothing and sharpening, obtains better visual results in experiments.

  15. Image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in ESPI fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Su, Yonggang; Li, Biyuan; Lei, Zhenkun

    2018-02-01

    In this paper, we propose an image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in electronic speckle pattern interferometry (ESPI) fringe patterns. In our model, the low-density fringes, high-density fringes, and noise are, respectively, described by shearlet smoothness spaces, adaptive Hilbert space, and L2 space and processed individually. Because the shearlet transform has superior directional sensitivity, our proposed Shearlet-Hilbert-L2 model achieves commendable filtering results for various types of ESPI fringe patterns, including uniform density fringe patterns, moderately variable density fringe patterns, and greatly variable density fringe patterns. We evaluate the performance of our proposed Shearlet-Hilbert-L2 model via application to two computer-simulated and nine experimentally obtained ESPI fringe patterns with various densities and poor quality. Furthermore, we compare our proposed model with windowed Fourier filtering and coherence-enhancing diffusion, which are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. We also compare our proposed model with the previous image decomposition model BL-Hilbert-L2.

  16. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time-frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time-frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation, and experiments including a rolling

  17. LSTM-Based Hierarchical Denoising Network for Android Malware Detection

    OpenAIRE

    Yan, Jinpei; Qi, Yong; Rao, Qifan

    2018-01-01

    Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which still makes it difficult to fully describe malware. In this paper, we present LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to directly learn from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequence...

  18. Raman spectroscopy denoising based on smoothing filter combined with EEMD algorithm

    Science.gov (United States)

    Tian, Dayong; Lv, Xiaoyi; Mo, Jiaqing; Chen, Chen

    2018-02-01

    In the extraction of Raman spectra, the signal is affected by a variety of background noise, so the effective information of the Raman spectrum is weakened or even submerged in noise; spectral denoising is therefore very important. The traditional ensemble empirical mode decomposition (EEMD) method removes noise by discarding the IMF components that mainly contain noise; however, this also loses some details of the Raman signal. To address this problem of the EEMD algorithm, a denoising method combining a smoothing filter with EEMD is proposed in this paper. First, EEMD is used to decompose the noisy Raman signal into several IMF components. Then, the components mainly containing noise are selected using the autocorrelation function, and the smoothing filter is used to remove the noise from these components. Finally, the sum of the denoised components is added to the remaining components to obtain the final denoised signal. The experimental results show that, compared with the traditional denoising algorithm, the signal-to-noise ratio (SNR), the root mean square error (RMSE) and the correlation coefficient are significantly improved by the proposed smoothing filter combined with EEMD.
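
    A rough sketch of this idea is given below; it assumes the third-party PyEMD package ("EMD-signal" on PyPI) for EEMD and uses a Savitzky-Golay smoother as the smoothing filter, with a simple lag-1 autocorrelation rule standing in for the paper's noise-component selection.

    ```python
    # EEMD + smoothing-filter denoising sketch: noise-dominated IMFs are smoothed
    # rather than discarded. Selection rule and filter settings are illustrative.
    import numpy as np
    from scipy.signal import savgol_filter
    from PyEMD import EEMD          # assumes the PyEMD ("EMD-signal") package

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 2000)
    raman = np.exp(-((x - 0.5) / 0.02) ** 2) + 0.5 * np.exp(-((x - 0.7) / 0.03) ** 2)
    noisy = raman + 0.05 * rng.standard_normal(x.size)

    imfs = EEMD(trials=50).eemd(noisy)          # array of IMF components

    def lag1_autocorr(c):
        c = c - c.mean()
        return np.dot(c[:-1], c[1:]) / np.dot(c, c)

    denoised = np.zeros_like(noisy)
    for imf in imfs:
        if lag1_autocorr(imf) < 0.5:            # weakly correlated -> noise dominated
            denoised += savgol_filter(imf, window_length=31, polyorder=3)
        else:                                   # signal-dominated IMF kept as is
            denoised += imf
    ```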

  19. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    Science.gov (United States)

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, although the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), RWE-based adaptive KF with gain correction (RWE-AKFG), and AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
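
    For reference, the conventional Kalman filter baseline (the "CKF" above) reduces to a few lines when the drift is modeled as a scalar random walk; the adaptive AMA/RWE factors of the proposed method are not reproduced, and the process/measurement variances below are illustrative only.

    ```python
    # Minimal scalar Kalman filter applied to a noisy FOG-like drift signal.
    import numpy as np

    def kalman_denoise(z, q=1e-6, r=1e-2):
        x_est, p = z[0], 1.0                       # initial state and covariance
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                              # predict: x_k = x_{k-1} + w, w ~ N(0, q)
            gain = p / (p + r)                     # update with measurement zk ~ N(x_k, r)
            x_est = x_est + gain * (zk - x_est)
            p = (1.0 - gain) * p
            out[k] = x_est
        return out

    rng = np.random.default_rng(4)
    drift = np.cumsum(1e-4 * rng.standard_normal(5000))      # slowly varying drift
    measured = drift + 0.05 * rng.standard_normal(5000)      # noisy FOG output
    filtered = kalman_denoise(measured)
    ```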

  20. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    Full Text Available This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  1. Wiener discrete cosine transform-based image filtering

    Science.gov (United States)

    Pogrebnyak, Oleksiy; Lukin, Vladimir V.

    2012-10-01

    A classical problem of additive white (spatially uncorrelated) Gaussian noise suppression in grayscale images is considered. The main attention is paid to discrete cosine transform (DCT)-based denoising, in particular, to image processing in blocks of a limited size. The efficiency of DCT-based image filtering with hard thresholding is studied for different sizes of overlapped blocks. A multiscale approach that aggregates the outputs of DCT filters having different overlapped block sizes is proposed. Later, a two-stage denoising procedure that presumes the use of the multiscale DCT-based filtering with hard thresholding at the first stage and a multiscale Wiener DCT-based filtering at the second stage is proposed and tested. The efficiency of the proposed multiscale DCT-based filtering is compared to the state-of-the-art block-matching and three-dimensional filter. Next, the potentially reachable multiscale filtering efficiency in terms of output mean square error (MSE) is studied. The obtained results are of the same order as those obtained by Chatterjee's approach based on nonlocal patch processing. It is shown that the ideal Wiener DCT-based filter potential is usually higher when noise variance is high.
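
    The basic building block of such filters, single-size blockwise DCT denoising with hard thresholding, can be sketched as follows; the overlapped blocks, the multiscale aggregation and the second Wiener stage of the paper are omitted, and the threshold factor is a commonly used setting rather than the paper's.

    ```python
    # Blockwise DCT hard-threshold denoising on non-overlapping 8x8 blocks.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_hard_threshold_denoise(img, sigma, block=8, beta=2.7):
        out = np.zeros_like(img, dtype=float)
        thr = beta * sigma
        h, w = img.shape
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                coeffs = dctn(img[i:i + block, j:j + block], norm='ortho')
                coeffs[np.abs(coeffs) < thr] = 0.0          # hard thresholding
                out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
        return out

    rng = np.random.default_rng(5)
    clean = np.kron(rng.uniform(size=(16, 16)), np.ones((8, 8)))   # piecewise-flat image
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    denoised = dct_hard_threshold_denoise(noisy, sigma=0.1)
    ```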

  2. A SVD Based Image Complexity Measure

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John; Pedersen, Kim Steenstrup; Nielsen, Mads

    2009-01-01

    Images are composed of geometric structures and texture, and different image processing tools - such as denoising, segmentation and registration - are suitable for different types of image contents. Characterization of the image content in terms of geometric structure and texture is an important...... problem that one is often faced with. We propose a patch based complexity measure, based on how well the patch can be approximated using singular value decomposition. As such the image complexity is determined by the complexity of the patches. The concept is demonstrated on sequences from the newly...... collected DIKU Multi-Scale image database....
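
    A small illustration of a patch complexity score in this spirit (the paper's exact measure may differ) is the fraction of singular values needed to capture most of a patch's energy: geometric patches need few, textured patches need many.

    ```python
    # Patch complexity from the SVD: fraction of singular values needed for 95% energy.
    import numpy as np

    def svd_complexity(patch, energy=0.95):
        s = np.linalg.svd(patch - patch.mean(), compute_uv=False)
        if not np.any(s):
            return 0.0
        cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
        k = np.searchsorted(cumulative, energy) + 1
        return k / s.size                       # in (0, 1]: higher = more texture-like

    rng = np.random.default_rng(6)
    edge_patch = np.zeros((16, 16)); edge_patch[:, 8:] = 1.0     # simple geometry
    texture_patch = rng.standard_normal((16, 16))                # pure texture
    print(svd_complexity(edge_patch), svd_complexity(texture_patch))
    ```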

  3. Mesh Denoising based on Normal Voting Tensor and Binary Optimization

    OpenAIRE

    Yadav, S. K.; Reitebuch, U.; Polthier, K.

    2016-01-01

    This paper presents a tensor multiplication based smoothing algorithm that follows a two step denoising method. Unlike other traditional averaging approaches, our approach uses an element based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stoc...

  4. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
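
    A minimal sketch of plain kernel PCA denoising via pre-images is shown below with scikit-learn (the semi-supervised aspect of the paper is not reproduced); noisy samples are projected onto a few kernel principal components and mapped back to input space with the learned inverse transform.

    ```python
    # Kernel PCA denoising by pre-image reconstruction (scikit-learn).
    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.decomposition import KernelPCA

    X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
    rng = np.random.default_rng(7)
    X_noisy = X + 0.1 * rng.standard_normal(X.shape)

    kpca = KernelPCA(n_components=4, kernel='rbf', gamma=10.0,
                     fit_inverse_transform=True, alpha=0.1)
    X_denoised = kpca.inverse_transform(kpca.fit_transform(X_noisy))
    print('mean distance to clean data, noisy vs denoised:',
          np.mean(np.linalg.norm(X_noisy - X, axis=1)),
          np.mean(np.linalg.norm(X_denoised - X, axis=1)))
    ```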

  5. Speckle reduction in optical coherence tomography images based on wave atoms

    Science.gov (United States)

    Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping

    2014-01-01

    Abstract. Optical coherence tomography (OCT) is an emerging noninvasive imaging technique, which is based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on wave atoms transform is proposed for speckle reduction in OCT images. Wave atoms transform is a new multiscale geometric analysis tool that offers sparser expansion and better representation for images containing oscillatory patterns and textures than other traditional transforms, such as wavelet and curvelet transforms. Cycle spinning-based technology is introduced to avoid visual artifacts, such as Gibbs-like phenomenon, and to develop a translation invariant wave atoms denoising scheme. The speckle suppression degree in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove the speckle noise and improve the OCT image quality. The signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are also compared with the wavelet and curvelet thresholding techniques. PMID:24825507

  6. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  7. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schönlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
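
    The forward problem these two records build on, TV denoising with a fixed fidelity weight, is available off the shelf; the sketch below uses scikit-image with a hand-picked weight, whereas the papers above learn the weights for different noise models by PDE-constrained optimization.

    ```python
    # Plain total variation (Chambolle) denoising with a fixed weight.
    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(8)
    clean = img_as_float(data.camera())
    noisy = np.clip(clean + 0.1 * rng.standard_normal(clean.shape), 0, 1)
    denoised = denoise_tv_chambolle(noisy, weight=0.1)   # weight chosen by hand
    ```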

  8. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    Science.gov (United States)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with soft or hard threshold functions, the improved function, which is first-order derivable and has a smooth transitional region between noise and signal, preserves more edge detail and weakens pseudo-Gibbs artifacts. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and artificial noise that may have been introduced during the spatial denoising. By selecting the filter window width appropriately according to prior knowledge, this algorithm performs effectively in smoothing the spectral curve. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
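
    A simplified two-domain sketch of this idea is given below: each band is denoised spatially with off-the-shelf BayesShrink wavelet shrinkage, and each pixel's spectrum is then smoothed with a cubic Savitzky-Golay filter; the stationary wavelet transform and the improved threshold function of the paper are not reproduced.

    ```python
    # Spatial step: per-band wavelet shrinkage. Spectral step: Savitzky-Golay filter.
    import numpy as np
    from scipy.signal import savgol_filter
    from skimage.restoration import denoise_wavelet

    rng = np.random.default_rng(9)
    cube = rng.uniform(size=(64, 64, 1)) * np.linspace(0.2, 1.0, 50)  # rows x cols x bands
    noisy = np.clip(cube + 0.05 * rng.standard_normal(cube.shape), 0, 1)

    spatial = np.stack([denoise_wavelet(noisy[:, :, b], method='BayesShrink',
                                        mode='soft', rescale_sigma=True)
                        for b in range(noisy.shape[2])], axis=2)

    denoised = savgol_filter(spatial, window_length=11, polyorder=3, axis=2)
    ```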

  9. An NMR log echo data de-noising method based on the wavelet packet threshold algorithm

    International Nuclear Information System (INIS)

    Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan

    2015-01-01

    To improve the de-noising of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the scope of its maximum, the modulus maxima and Shannon entropy minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm, which shows higher decomposition accuracy and a better de-noising effect, is much more suitable for de-noising low SNR NMR log echo data. (paper)
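
    A compact sketch of wavelet packet threshold de-noising with PyWavelets, using the 'sym7' basis singled out above, is shown below; a fixed decomposition level and the universal threshold are used instead of the paper's modulus-maxima and Shannon-entropy scale selection.

    ```python
    # Wavelet packet de-noising: decompose with 'sym7', soft-threshold the detail
    # nodes at level 4, and reconstruct. Threshold and level are illustrative.
    import numpy as np
    import pywt

    rng = np.random.default_rng(10)
    t = np.linspace(0, 1, 1024)
    echo = np.exp(-t / 0.2) + 0.5 * np.exp(-t / 0.05)     # toy multi-exponential echo
    noisy = echo + 0.1 * rng.standard_normal(t.size)

    wp = pywt.WaveletPacket(noisy, wavelet='sym7', mode='symmetric', maxlevel=4)
    sigma = np.median(np.abs(wp['d'].data)) / 0.6745      # noise level from level-1 detail
    thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
    for node in wp.get_level(4, order='natural'):
        if node.path == 'aaaa':                           # keep the approximation node
            continue
        wp[node.path] = pywt.threshold(node.data, thr, mode='soft')
    denoised = wp.reconstruct(update=False)[:noisy.size]
    ```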

  10. Random Correlation Matrix and De-Noising

    OpenAIRE

    Ken-ichi Mitsui; Yoshio Tabata

    2006-01-01

    In Finance, the modeling of a correlation matrix is one of the important problems. In particular, the correlation matrix obtained from market data has the noise. Here we apply the de-noising processing based on the wavelet analysis to the noisy correlation matrix, which is generated by a parametric function with random parameters. First of all, we show that two properties, i.e. symmetry and ones of all diagonal elements, of the correlation matrix preserve via the de-noising processing and the...

  11. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Lee, Seongjun; Kim, Jonghoon

    2015-01-01

    Sophisticated data of the experimental DCV (discharging/charging voltage) of a lithium-ion battery are required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when sensing noisy DCV signals, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noisy DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, An and Dn). Specifically, signal processing of the high-frequency components Dn, which focus on short time intervals, is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detailed coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data of the experimental DCV is required for high-accuracy SOC. • DWT (discrete wavelet transform)-based denoising technique is newly investigated. • Three steps for denoising a noisy DCV measurement in this work are implemented. • Experimental results indicate the robustness of the proposed work for reliable SOC
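
    The three de-noising steps described above can be sketched compactly with PyWavelets; wavelet, decomposition level and threshold below are illustrative choices, and the downstream EKF-based SOC estimator is not shown.

    ```python
    # DWT de-noising of a toy discharge-voltage curve: decompose, hard-threshold
    # the detail coefficients, reconstruct with the inverse DWT.
    import numpy as np
    import pywt

    rng = np.random.default_rng(11)
    t = np.linspace(0, 1, 2048)
    dcv = 4.2 - 0.8 * t - 0.1 * np.exp(-t / 0.05)          # toy discharge voltage
    noisy = dcv + 0.01 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(noisy, 'db4', level=5)            # [A5, D5, ..., D1]
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from D1
    thr = sigma * np.sqrt(2 * np.log(noisy.size))
    coeffs = [coeffs[0]] + [pywt.threshold(d, thr, mode='hard') for d in coeffs[1:]]
    denoised = pywt.waverec(coeffs, 'db4')[:noisy.size]
    ```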

  12. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    Science.gov (United States)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, an adaptive interval thresholding is conducted to set to zero all the components in the intrinsic mode functions which are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The result shows that the proposed method has a good capability in denoising and detail preservation.

  13. A procedure for denoising dual-axis swallowing accelerometry signals

    International Nuclear Information System (INIS)

    Sejdić, Ervin; Chau, Tom; Steele, Catriona M

    2010-01-01

    Dual-axis swallowing accelerometry is an emerging tool for the assessment of dysphagia (swallowing difficulties). These signals however can be very noisy as a result of physiological and motion artifacts. In this note, we propose a novel scheme for denoising those signals, i.e. a computationally efficient search for the optimal denoising threshold within a reduced wavelet subspace. To determine a viable subspace, the algorithm relies on the minimum value of the estimated upper bound for the reconstruction error. A numerical analysis of the proposed scheme using synthetic test signals demonstrated that the proposed scheme is computationally more efficient than minimum noiseless description length (MNDL)-based denoising. It also yields smaller reconstruction errors than MNDL, SURE and Donoho denoising methods. When applied to dual-axis swallowing accelerometry signals, the proposed scheme exhibits improved performance for dry, wet and wet chin tuck swallows. These results are important for the further development of medical devices based on dual-axis swallowing accelerometry signals. (note)

  14. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Full Text Available Due to simple calculation and good denoising effect, wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve the denoising effect of the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method can achieve a good denoising effect under various signal types, noise intensities, and thresholding functions.

  15. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    International Nuclear Information System (INIS)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-01-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)
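
    The second (frequency-splitting) approach can be illustrated with a 2D toy example: low frequencies are taken from the noisy MC-like image and high frequencies from the smooth CS-like image through complementary Butterworth filters in the Fourier domain. The sketch below is a simplified stand-in for the 3D quadrature filtering used in the paper; cutoff and order are illustrative.

    ```python
    # Frequency splitting: combine FFT(MC) * lowpass with FFT(CS) * (1 - lowpass).
    import numpy as np

    def butterworth_lowpass(shape, cutoff=0.1, order=4):
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        r = np.sqrt(fx ** 2 + fy ** 2)
        return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

    rng = np.random.default_rng(12)
    yy, xx = np.mgrid[:128, :128]
    dose = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 20 ** 2))  # smooth dose blob
    mc = dose + 0.05 * rng.standard_normal(dose.shape)       # noisy MC-like image
    cs = 0.97 * dose                                         # smooth, slightly biased CS-like image

    lp = butterworth_lowpass(dose.shape, cutoff=0.08)
    combined = np.real(np.fft.ifft2(np.fft.fft2(mc) * lp + np.fft.fft2(cs) * (1.0 - lp)))
    ```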

  16. A Novel Partial Discharge Ultra-High Frequency Signal De-Noising Method Based on a Single-Channel Blind Source Separation Algorithm

    Directory of Open Access Journals (Sweden)

    Liangliang Wei

    2018-02-01

    Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method can effectively remove the noise interference, and the distortion of the de-noised PD signal is smaller. Firstly, the PD UHF signal is time-frequency analyzed by the S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. At last, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to simulation test and field test signals, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.

  17. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, i.e. the one that yields a segmentation with the largest area in the cell. We study different wavelet families and we conclude that the wavelet db1 is the best and can serve for subsequent work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.

  18. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    Science.gov (United States)

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is important, yet very difficult, to estimate the lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment including a battery no. 5 capacity prognostics case and a battery no. 18 capacity prognostics case is conducted and validates that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  19. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method. Network models were trained to keep the quality of the output image close to that of the ground-truth image, starting from the input image without image processing. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality; however, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Image features dependant correlation-weighting function for efficient PRNU based source camera identification.

    Science.gov (United States)

    Tiwari, Mayank; Gupta, Bhupendra

    2018-04-01

    For source camera identification (SCI), photo response non-uniformity (PRNU) has been widely used as the fingerprint of the camera. The PRNU is extracted from the image by applying a de-noising filter and then taking the difference between the original image and the de-noised image. However, it is observed that intensity-based features and high-frequency details (edges and texture) of the image affect the quality of the extracted PRNU. This affects the correlation calculation and creates problems in SCI. To solve this problem, we propose a weighting function based on image features. We have experimentally identified how image features (intensity and high-frequency content) affect the estimated PRNU, and then developed a weighting function which gives higher weights to image regions that give reliable PRNU and comparatively lower weights to image regions that do not. Experimental results show that the proposed weighting function is able to improve the accuracy of SCI to a great extent. Copyright © 2018 Elsevier B.V. All rights reserved.
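
    The basic PRNU pipeline that this record builds on can be sketched as follows (a generic wavelet denoiser stands in for "the de-noising filter", and the proposed intensity/edge-dependent weighting function itself is not reproduced): the noise residual is the image minus its denoised version, the camera fingerprint is the average residual over several images, and source attribution uses the correlation between a query residual and the fingerprint.

    ```python
    # PRNU-style fingerprinting sketch with a synthetic multiplicative pattern.
    import numpy as np
    from skimage.restoration import denoise_wavelet

    def noise_residual(img):
        return img - denoise_wavelet(img, rescale_sigma=True)

    def camera_fingerprint(images):
        return np.mean([noise_residual(im) for im in images], axis=0)

    def correlation(residual, fingerprint):
        a, b = residual - residual.mean(), fingerprint - fingerprint.mean()
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    rng = np.random.default_rng(13)
    prnu = 0.02 * rng.standard_normal((64, 64))               # synthetic sensor pattern
    shots = [np.clip(np.clip(0.5 + 0.1 * rng.standard_normal((64, 64)), 0, 1)
                     * (1 + prnu), 0, 1) for _ in range(8)]
    K = camera_fingerprint(shots[:6])
    print(correlation(noise_residual(shots[7]), K))           # same-camera query image
    ```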

  1. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.

  2. Vibration sensor data denoising using a time-frequency manifold for machinery fault diagnosis.

    Science.gov (United States)

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2013-12-27

    Vibration sensor data from a mechanical system often carry important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method realizes data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Owing to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal achieves a satisfactory denoising effect while keeping the inherent time-frequency structure. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods.

  3. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    Science.gov (United States)

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  4. Electrocardiogram de-noising based on forward wavelet transform ...

    Indian Academy of Sciences (India)

    Ratio (SNR) and Mean Square Error (MSE) computations showed that our proposed ... This technique permits to cancel noises and retain the information of the ... Wavelet analysis is used for transforming the signal under investigation into joined temporal and ... introduced the BWT in our proposed ECG de-noising system.

  5. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
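
    A hedged sketch of the statistical interpolation idea (the naive O(N^3) GP, not the paper's fast grid-exploiting O(N^(3/2)) algorithm) fits one polarization channel observed on a sparse mosaic grid with an RBF-plus-white-noise kernel and evaluates the posterior mean on the full grid; kernel settings below are illustrative.

    ```python
    # GP interpolation/denoising of one polarization channel on a 2D grid.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(14)
    yy, xx = np.mgrid[:32, :32]
    truth = np.sin(xx / 5.0) * np.cos(yy / 7.0)
    mask = (yy % 2 == 0) & (xx % 2 == 0)                # pixels carrying this filter
    obs = truth[mask] + 0.05 * rng.standard_normal(mask.sum())

    X_obs = np.column_stack([xx[mask], yy[mask]])
    X_all = np.column_stack([xx.ravel(), yy.ravel()])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0)
                                  + WhiteKernel(noise_level=0.05 ** 2),
                                  normalize_y=True)
    gp.fit(X_obs, obs)
    full_channel = gp.predict(X_all).reshape(truth.shape)   # interpolated + denoised
    ```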

  6. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that the SDAE pretraining in conjunction with the LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  7. Wavelet denoising of multiframe optical coherence tomography data.

    Science.gov (United States)

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.

  8. ECG denoising with adaptive bionic wavelet transform.

    Science.gov (United States)

    Sayadi, Omid; Shamsollahi, Mohammad Bagher

    2006-01-01

    In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. There are some outstanding features of the BWT, such as nonlinearity, high sensitivity and frequency selectivity, concentrated energy distribution and its ability to reconstruct the signal via the inverse transform, but the most distinguishing characteristic of BWT is that its resolution in the time-frequency domain can be adaptively adjusted not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. Besides, by optimizing the BWT parameters in parallel with modifying a new threshold value, one can handle ECG denoising with results comparable to those of the wavelet transform (WT). Preliminary tests of BWT application to ECG denoising were conducted on signals from the MIT-BIH database and showed high noise-reduction performance.

  9. Wavelet Based Denoising for the Estimation of the State of Charge for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    2018-05-01

    Full Text Available In practical electric vehicle applications, noise in the original discharging/charging voltage (DCV) signals is inevitable; it comes from electromagnetic interference and the measurement noise of the sensors. To solve such problems, a Discrete Wavelet Transform (DWT)-based state of charge (SOC) estimation method is proposed in this paper. Through a multi-resolution analysis, the original DCV signals with noise are decomposed into different frequency sub-bands. The desired de-noised DCV signals are then reconstructed by utilizing the inverse discrete wavelet transform, based on the SURE rule. With the de-noised DCV signal, the SOC and the parameters are obtained using the adaptive extended Kalman filter algorithm and the adaptive forgetting factor recursive least squares method. Simulation and experimental results show that the SOC estimation error is less than 1%, which indicates an effective improvement in SOC estimation accuracy.

  10. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound directly collected from the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is firstly decomposed by the wavelet transform (WT) to obtain coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first- and second-order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced into the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory, and Gaussian white noise was added into the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.

  11. Denoising PCR-amplified metagenome data

    Directory of Open Access Journals (Sweden)

    Rosen Michael J

    2012-10-01

    Full Text Available Abstract Background PCR amplification and high-throughput sequencing theoretically enable the characterization of the finest-scale diversity in natural microbial and viral populations, but each of these methods introduces random errors that are difficult to distinguish from genuine biological diversity. Several approaches have been proposed to denoise these data but lack either speed or accuracy. Results We introduce a new denoising algorithm that we call DADA (Divisive Amplicon Denoising Algorithm). Without training data, DADA infers both the sample genotypes and error parameters that produced a metagenome data set. We demonstrate performance on control data sequenced on Roche’s 454 platform, and compare the results to the most accurate denoising software currently available, AmpliconNoise. Conclusions DADA is more accurate and over an order of magnitude faster than AmpliconNoise. It eliminates the need for training data to establish error parameters, fully utilizes sequence-abundance information, and enables inclusion of context-dependent PCR error rates. It should be readily extensible to other sequencing platforms such as Illumina.

  12. A Denoising Based Autoassociative Model for Robust Sensor Monitoring in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Ahmad Shaheryar

    2016-01-01

    Full Text Available Sensor health monitoring is essentially important for reliable functioning of safety-critical chemical and nuclear power plants. Autoassociative neural network (AANN)-based empirical sensor models have widely been reported for sensor calibration monitoring. However, such ill-posed data-driven models may result in poor generalization and robustness. To address the above-mentioned issues, several regularization heuristics such as training with jitter, weight decay, and cross-validation are suggested in the literature. Apart from these regularization heuristics, traditional error-gradient-based supervised learning algorithms for multilayered AANN models are highly susceptible to being trapped in a local optimum. In order to address poor regularization and robust learning issues, we propose here a denoised autoassociative sensor model (DAASM) based on a deep learning framework. The proposed DAASM model comprises multiple hidden layers which are pretrained greedily in an unsupervised fashion under a denoising autoencoder architecture. In order to improve robustness, a dropout heuristic and domain-specific data corruption processes are exercised during the unsupervised pretraining phase. The proposed sensor model is trained and tested on sensor data from a PWR type nuclear power plant. Accuracy, autosensitivity, spillover, and sequential probability ratio test (SPRT)-based fault detectability metrics are used for performance assessment and comparison with the extensively reported five-layer AANN model by Kramer.

  13. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Science.gov (United States)

    Jiang, M.; Cui, B.-Y.; Schmid, N. A.; McLaughlin, M. A.; Cao, Z.-C.

    2017-09-01

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method including selective wavelet reconstruction and wavelet shrinkage to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation error of most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRATs signal.

  14. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, M.; Schmid, N. A.; Cao, Z.-C. [Lane Department of Computer Science and Electrical Engineering West Virginia University Morgantown, WV 26506 (United States); Cui, B.-Y.; McLaughlin, M. A. [Department of Physics and Astronomy West Virginia University Morgantown, WV 26506 (United States)

    2017-09-20

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method including selective wavelet reconstruction and wavelet shrinkage to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation error of most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRATs signal.

  15. Partial discharge signal denoising with spatially adaptive wavelet thresholding and support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Mota, Hilton de Oliveira; Rocha, Leonardo Chaves Dutra da [Department of Computer Science, Federal University of Sao Joao del-Rei, Visconde do Rio Branco Ave., Colonia do Bengo, Sao Joao del-Rei, MG, 36301-360 (Brazil); Salles, Thiago Cunha de Moura [Department of Computer Science, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil); Vasconcelos, Flavio Henrique [Department of Electrical Engineering, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil)

    2011-02-15

    In this paper an improved method to denoise partial discharge (PD) signals is presented. The method is based on the wavelet transform (WT) and support vector machines (SVM) and is distinct from other WT-based denoising strategies in the sense that it exploits the high spatial correlations presented by PD wavelet decompositions as a way to identify and select the relevant coefficients. PD spatial correlations are characterized by WT modulus maxima propagation along decomposition levels (scales), which are a strong indication of their time of occurrence. Denoising is performed by identification and separation of PD-related maxima lines by an SVM pattern classifier. The results obtained confirm that this method has superior denoising capabilities when compared to other WT-based methods found in the literature for the processing of Gaussian and discrete spectral interferences. Moreover, its greatest advantages become clear when the interference has a pulsating or localized shape, a situation in which traditional methods usually fail. (author)

  16. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita; Fonseca, Irene; Mascarenhas, M. Luí sa

    2017-01-01

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  17. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita

    2017-09-11

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  18. Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)

    Science.gov (United States)

    Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald

    2017-08-01

    HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies
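
    The HYPR operator referred to in this record weights a high-count composite image by the ratio of low-pass-filtered target and composite images. A minimal sketch of that operator on its own (outside any OSEM loop) might look like the following; the Gaussian filter width and the synthetic test images are illustrative assumptions.

```python
# Minimal HYPR-style de-noising operator: composite image times a smoothed ratio.
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_operator(target, composite, sigma=3.0):
    eps = 1e-8
    weight = gaussian_filter(target, sigma) / (gaussian_filter(composite, sigma) + eps)
    return composite * weight   # de-noised estimate inherits the composite's lower noise

# Example: a noisy low-count image and a higher-count composite image.
composite = np.random.poisson(100.0, size=(128, 128)).astype(float)
target = np.random.poisson(composite / 10.0).astype(float) * 10.0
denoised = hypr_operator(target, composite)
```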

  19. TERRESTRIAL LASER SCANNER DATA DENOISING BY DICTIONARY LEARNING OF SPARSE CODING

    Directory of Open Access Journals (Sweden)

    E. Smigiel

    2013-07-01

    Full Text Available Point cloud processing is basically a signal processing issue. The huge amount of data collected with Terrestrial Laser Scanners or photogrammetry techniques faces the classical questions of signal and image processing. Among others, denoising and compression are questions which have to be addressed in this context. That is why one has to turn to signal theory, which can guide good practice or inspire new ideas from the latest developments of the field. The literature has shown for decades how strong and dynamic the theoretical field is and how efficient the derived algorithms have become. For about ten years, a new technique has appeared: known as compressive sensing or compressive sampling, it is based first on sparsity, which is an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have shown their efficiency. Sparsity can also be seen as redundancy removal of natural signals. Taken along with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very early stage of sampling. Hence, instead of sampling the signal at a high sampling rate and removing redundancy as a second stage, the acquisition stage itself may be run with redundancy removal. This paper gives some theoretical aspects of these ideas with simple mathematics first. Then, the idea of compressive sensing for a Terrestrial Laser Scanner is examined as a potential research question and finally, a denoising scheme based on dictionary learning of sparse coding is tested. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to take benefit of its latest developments.
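
    As a concrete illustration of dictionary learning of sparse coding for denoising, the sketch below learns a patch dictionary with scikit-learn and reconstructs an image from its sparse codes. It is a generic patch-based example under assumed patch and dictionary sizes, not the processing chain used in the paper.

```python
# Minimal patch-based dictionary-learning denoising sketch with scikit-learn.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

noisy = np.random.rand(128, 128)                      # stand-in noisy range/intensity image
patches = extract_patches_2d(noisy, (8, 8))
X = patches.reshape(patches.shape[0], -1)
mean = X.mean(axis=1, keepdims=True)
X = X - mean                                          # learn on zero-mean patches

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, batch_size=256,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5)
codes = dico.fit(X).transform(X)                      # sparse code for every patch
denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)
```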

  20. Radar Target Recognition Based on Stacked Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Zhao Feixiang

    2017-04-01

    Full Text Available Feature extraction is a key step in radar target recognition. The quality of the extracted features determines the performance of target recognition. However, obtaining the deep nature of the data is difficult using the traditional method. The autoencoder can learn features by making use of data and can obtain feature expressions at different levels of data. To eliminate the influence of noise, the method of radar target recognition based on stacked denoising sparse autoencoder is proposed in this paper. This method can extract features directly and efficiently by setting different hidden layers and numbers of iterations. Experimental results show that the proposed method is superior to the K-nearest neighbor method and the traditional stacked autoencoder.

  1. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster ( Mesocricetus auratus ) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation that allows us to extract the underlying structural and compositional information contained in the Raman microspectra.
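
    The clustering stage of such a pipeline can be sketched as follows: each pixel's spectrum is smoothed in the spatial domain and then assigned to a cluster with k-means. The median filter here is only a simple stand-in for the paper's edge-preserving denoising, and the cube dimensions and number of clusters are illustrative.

```python
# Minimal sketch: spatial smoothing per wavelength channel, then k-means segmentation.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

cube = np.random.rand(64, 64, 200)                  # stand-in CRM cube: x, y, wavelength
smoothed = np.stack([median_filter(cube[:, :, i], size=3)
                     for i in range(cube.shape[2])], axis=2)

pixels = smoothed.reshape(-1, cube.shape[2])         # one spectrum per pixel
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
segmentation = labels.reshape(cube.shape[:2])        # spatial segmentation map
```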

  2. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising with its applications to noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuits. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals into the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals can be well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz and the power consumption was only 17.4 mW, when operated at 200 MHz.

  3. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    Directory of Open Access Journals (Sweden)

    Qingjiao Sun

    2016-01-01

    Full Text Available Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR pathological image enhancement method based on improved bias field correction and guided image filter (GIF. Firstly, a preprocessing including stain normalization and wavelet denoising is performed for Haematoxylin and Eosin (H and E stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light for high-frequency part in image and correct the intensity inhomogeneity and detail discontinuity of image. Next, HDR pathological image is generated based on least square method using low dynamic range (LDR image, H and E channel images. Finally, the fine enhanced image is acquired after the detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method as compared with related work.

  4. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    Science.gov (United States)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines and, due to their critical role, it is of great importance to monitor their condition under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability in identifying the faults makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, a Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to the ball bearing time domain vibration signals as well as to their spectrums for the elimination of the background noise and the improvement of the reliability of the fault detection process. The test cases conducted using experimental as well as simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for ball bearing fault detection.
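
    A minimal sketch of the Hankel/SVD de-noising idea described in this record: the signal is embedded in a Hankel matrix, a low-rank approximation is kept, and the signal is recovered by anti-diagonal averaging. The embedding window, retained rank, and test signal are illustrative assumptions.

```python
# Minimal Hankel-matrix SVD de-noising sketch for a 1-D vibration signal.
import numpy as np
from scipy.linalg import hankel

def hankel_svd_denoise(x, window=None, rank=10):
    n = len(x)
    window = window or n // 2
    H = hankel(x[:window], x[window - 1:])            # H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # low-rank (signal) approximation
    # Anti-diagonal averaging (Hankelization) back to a 1-D signal.
    y = np.zeros(n)
    counts = np.zeros(n)
    rows, cols = H_low.shape
    for i in range(rows):
        for j in range(cols):
            y[i + j] += H_low[i, j]
            counts[i + j] += 1
    return y / counts

t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 35 * t) + 0.8 * np.random.randn(t.size)
denoised = hankel_svd_denoise(signal, rank=4)
```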

  5. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    Full Text Available In this paper, a discrete wavelet transform (DWT based de-noising with its applications into the noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT modular circuits. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on both the architectural designs of the software and. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG signals into the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals can be well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan 40 nm standard cell library. The integrated circuit (IC synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz and the power consumption was only 17.4 mW, when operated at 200 MHz.

  6. A SAR IMAGE REGISTRATION METHOD BASED ON SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    W. Lu

    2017-09-01

    Full Text Available In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method was presented. Firstly, adaptive smoothing filtering based on Wallis filtering was employed for image denoising, to avoid amplifying noise in the subsequent processing. Secondly, feature points were extracted by a simplified SIFT algorithm. Finally, the exact matching of the images was achieved with these points. Compared with the existing methods, it not only maintains the richness of features, but also reduces the noise of the image. The simulation results show that the proposed algorithm can achieve a better matching effect.
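
    The feature-matching step can be sketched with OpenCV as below (SIFT is available in the main module from OpenCV 4.4 onward). A median blur stands in for the paper's Wallis-based adaptive smoothing, and the file names and ratio-test threshold are illustrative assumptions.

```python
# Minimal SIFT matching sketch between two image tiles with OpenCV.
import cv2

img1 = cv2.medianBlur(cv2.imread("sar_reference.png", cv2.IMREAD_GRAYSCALE), 3)
img2 = cv2.medianBlur(cv2.imread("sar_sensed.png", cv2.IMREAD_GRAYSCALE), 3)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep nearest-neighbour matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences retained")
```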

  7. Sparse Representation Denoising for Radar High Resolution Range Profiling

    Directory of Open Access Journals (Sweden)

    Min Li

    2014-01-01

    Full Text Available Radar high resolution range profile has attracted considerable attention in radar automatic target recognition. In practice, radar return is usually contaminated by noise, which results in profile distortion and recognition performance degradation. To deal with this problem, in this paper, a novel denoising method based on sparse representation is proposed to remove the Gaussian white additive noise. The return is sparsely described in the Fourier redundant dictionary and the denoising problem is described as a sparse representation model. Noise level of the return, which is crucial to the denoising performance but often unknown, is estimated by performing subspace method on the sliding subsequence correlation matrix. Sliding window process enables noise level estimation using only one observation sequence, not only guaranteeing estimation efficiency but also avoiding the influence of profile time-shift sensitivity. Experimental results show that the proposed method can effectively improve the signal-to-noise ratio of the return, leading to a high-quality profile.

  8. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that the texture images carry emotion-related information. The feature extraction is derived from time-frequency representation of spectrogram images. First, we transform the spectrogram as a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws’ masks to characterize emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases including the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus and one self-recorded database (KHUSC-EmoDB), to evaluate the performance cross-corpora. The results of the proposed ESS system are presented using support vector machine (SVM) as a classifier. Experimental results show that the proposed TII-based feature extraction inspired by visual perception can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can provide the discrimination between different emotions in visual expressions except for the conveyance pitch and formant tracks. In addition, the de-noising in 2-D images can be more easily completed than de-noising in 1-D speech.

  9. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
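
    Chambolle's projection algorithm itself is compact enough to sketch in a few lines: the dual variable p is updated by a fixed-point iteration and the denoised image is recovered as u = f - λ div p. The step size τ = 0.25, the iteration count, and the fidelity weight below are standard but illustrative choices.

```python
# Minimal sketch of Chambolle's projection algorithm for grayscale TV denoising.
import numpy as np

def grad(u):
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    gx[:, -1] = 0.0                      # Neumann boundary: zero forward difference
    gy[-1, :] = 0.0
    return gx, gy

def div(px, py):
    dx = px - np.roll(px, 1, axis=1)
    dy = py - np.roll(py, 1, axis=0)
    dx[:, 0] = px[:, 0]; dx[:, -1] = -px[:, -2]   # adjoint boundary handling
    dy[0, :] = py[0, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def chambolle_tv(f, lam=15.0, n_iter=100, tau=0.25):
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm       # projected dual update
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)          # u = f - lambda * div(p)

noisy = np.clip(100.0 + 20.0 * np.random.randn(128, 128), 0, 255)
denoised = chambolle_tv(noisy, lam=20.0)
```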

  10. Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network.

    Science.gov (United States)

    Yi, Xin; Babyn, Paul

    2018-02-20

    Low-dose computed tomography (LDCT) has offered tremendous benefits in radiation-restricted applications, but the quantum noise as resulted by the insufficient number of photons could potentially harm the diagnostic performance. Current image-based denoising methods tend to produce a blur effect on the final reconstructed results especially in high noise levels. In this paper, a deep learning-based approach was proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were trained to guide the training process. Experiments on both simulated and real dataset show that the results of the proposed method have very small resolution loss and achieves better performance relative to state-of-the-art methods both quantitatively and visually.

  11. Parameters optimization for wavelet denoising based on normalized spectral angle and threshold constraint machine learning

    Science.gov (United States)

    Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui

    2012-01-01

    Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of the wavelet denoising algorithm in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as spectral angle (SA). However, the method to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named normalized spectral angle (NSA) is proposed. By comparing NSA, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on threshold constraint and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that constantly surpasses a threshold is selected. The experiments proved that by using the NSA criterion, the SA values decreased significantly, and the fast algorithm could save 80% time consumption, while the denoising performance was not obviously impaired.
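
    A minimal sketch of this parameter-selection idea: denoise every spectrum in a library with each candidate wavelet/level combination, score the combination by an averaged (normalized) spectral angle between original and denoised spectra, and keep the best one. The candidate set, the simple universal-threshold denoiser, and the normalization by π are illustrative assumptions rather than the paper's exact NSA definition or its threshold-constraint learning step.

```python
# Minimal sketch of spectral-angle-based selection of wavelet denoising parameters.
import numpy as np
import pywt

def spectral_angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def denoise(spectrum, wavelet, level):
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

library = np.random.rand(50, 512)                       # stand-in spectral library
candidates = [(w, lvl) for w in ("db4", "sym8", "coif3") for lvl in (3, 4, 5)]
scores = {c: np.mean([spectral_angle(s, denoise(s, *c)) for s in library]) / np.pi
          for c in candidates}
best = min(scores, key=scores.get)                      # best (wavelet, level) pair
```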

  12. Comment on ‘A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry’

    International Nuclear Information System (INIS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Kottler, Christian

    2015-01-01

    In a recent paper (Scholkamm et al 2014 Phys. Med. Biol. 59 1425–40) we presented a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast, differential phase contrast and dark-field contrast images retrieved from x-ray Talbot–Lau grating interferometry. In this comment we give additional information and report about the application of our framework to breast cancer tissue which we presented in our paper as an example. The applied procedure is suitable for a qualitative comparison of different algorithms. For a quantitative juxtaposition original data would however be needed as an input. (comment and reply)

  13. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient.

    Science.gov (United States)

    Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing

    2017-12-26

    As the sound signal of ships obtained by sensors contains many other significant characteristics of ships and is called ship-radiated noise (SN), research into a denoising algorithm and its application has obtained great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and numbers of decompositions by VMD. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.

  14. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Yuxing Li

    2017-12-01

    Full Text Available As the sound signal of ships obtained by sensors contains many other significant characteristics of ships and is called ship-radiated noise (SN), research into a denoising algorithm and its application has obtained great significance. Using the advantage of variational mode decomposition (VMD) combined with the correlation coefficient for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with a correlation coefficient (CC). First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number by VMD is equal to the number by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated respectively. The noise IMFs are identified by the CC threshold and the rest of the IMFs are reconstructed in order to realize the first denoising process. Finally, secondary denoising of the simulation signal can be accomplished by repeating the above steps of decomposition, screening and reconstruction. The final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and numbers of decompositions by VMD. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two denoising algorithms presented recently. The proposed denoising algorithm is applied to feature extraction and classification for SN signals, which can effectively improve the recognition rate of different kinds of ships.
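
    The correlation-coefficient screening step at the heart of this algorithm can be sketched independently of any particular VMD implementation: modes whose correlation with the original signal falls below a threshold are discarded before reconstruction, and applying the procedure twice gives the secondary denoising. The stand-in modes and the 0.2 threshold below are illustrative assumptions.

```python
# Minimal correlation-coefficient screening of decomposition modes (IMFs).
import numpy as np

def cc_screen(signal, imfs, threshold=0.2):
    keep = []
    for imf in imfs:
        cc = np.corrcoef(signal, imf)[0, 1]       # correlation with the raw signal
        if abs(cc) >= threshold:
            keep.append(imf)                      # treated as signal-dominated mode
    return np.sum(keep, axis=0) if keep else np.zeros_like(signal)

# Stand-in decomposition: pretend three "modes" were produced by a VMD routine.
t = np.linspace(0.0, 1.0, 4096)
signal = np.sin(2 * np.pi * 60 * t) + 0.7 * np.random.randn(t.size)
imfs = [np.sin(2 * np.pi * 60 * t),
        0.7 * np.random.randn(t.size),
        0.05 * np.random.randn(t.size)]
denoised_once = cc_screen(signal, imfs)           # repeat on a new decomposition for "secondary" denoising
```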

  15. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    Science.gov (United States)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm learns directly from an end-to-end mapping from the low-dose Computed Tomography images for denoising the normal-dose CT images. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of the peak signal to noise ratio, root mean square error, and a structural similarity index.

  16. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)
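
    A rough sketch of gradient-plus-watershed segmentation in the same spirit, using scikit-image: compute a gradient magnitude on a smoothed image and flood it from background and lesion markers. The Gaussian smoothing and percentile-based markers are simple stand-ins for the paper's edge-preserving filtering, deconvolution, and hierarchical clustering.

```python
# Minimal gradient-based watershed segmentation sketch.
import numpy as np
from skimage.filters import sobel, gaussian
from skimage.segmentation import watershed

image = np.random.rand(128, 128)                      # stand-in PET slice
smoothed = gaussian(image, sigma=2.0)                 # crude denoising stand-in
gradient = sobel(smoothed)                            # gradient magnitude image

markers = np.zeros_like(image, dtype=int)
markers[smoothed < np.percentile(smoothed, 20)] = 1   # background seed
markers[smoothed > np.percentile(smoothed, 95)] = 2   # lesion seed
labels = watershed(gradient, markers)                 # flood the gradient from the seeds
lesion_mask = labels == 2
```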

  17. Experimental study on a de-noising system for gas and oil pipelines based on an acoustic leak detection and location method

    International Nuclear Information System (INIS)

    Liu, Cuiwei; Li, Yuxing; Fang, Liping; Xu, Minghai

    2017-01-01

    To protect the pipelines from significant danger, the acoustic leak detection and location method for oil and gas pipelines is studied, and a de-noising system is established to extract leakage characteristics from signals. A test loop for gas and oil is established to carry out experiments. First, according to the measured signals, fitting leakage signals are obtained, and then, the objective signals are constructed by adding noises to the fitting signals. Based on the proposed evaluation indexes, the filtering methods are then applied to process the constructed signals and the de-noising system is established. The established leakage extraction system is validated and then applied to process signals measured in gas pipelines that include a straight pipe, elbow pipe and reducing pipe. The leak detection and location is carried out effectively. Finally, the system is applied to process signals measured in water pipelines. The results demonstrate that the proposed de-noising system is effective at extracting leakage signals from measured signals and that the proposed leak detection and location method has a higher detection sensitivity and localization accuracy. For a pipeline with an inner diameter of 42 mm, the smallest leakage orifice that can be detected is 0.1 mm for gas and water and the largest location error is 0.874% for gas and 0.176% for water. - Highlights: • Three evaluation indexes are proposed: SNR, RMSE and ALPD. • The de-noising system is established in the gas and oil pipelines. • The established system is used for gas pipeline effectively, including interference pipes. • The established de-noising system is used for water pipeline effectively.

  18. HARDI denoising using nonlocal means on S2

    Science.gov (United States)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising of HARDI data be of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant to both spatial rotations as well as to a particular sampling scheme in use. As well, we provide a detailed description of the proposed filtering procedure, its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  19. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    Full Text Available The exploration of retinal vessel structure is colossally important on account of numerous diseases including stroke, Diabetic Retinopathy (DR) and coronary heart diseases, which can damage the retinal vessel structure. The retinal vascular network is very hard to be extracted due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, an adaptive histogram equalization enhances dissimilarity between the vessels and the background and morphological top-hat filters are employed to eliminate macula and optic disc, etc. To remove local noise, the difference of images is computed from the top-hat filtered image and the high-boost filtered image. Frangi filter is applied at multi scale for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster to vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using pixel-by-pixel AND operation between VLM and Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
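
    The vesselness-filtering core of such a pipeline can be sketched with scikit-image as below: contrast enhancement, multiscale Frangi filtering, Otsu thresholding, and small-object removal. The VLM construction and the final AND fusion of the paper are not reproduced, and the stand-in image and parameters are illustrative.

```python
# Minimal Frangi-based vessel extraction sketch.
import numpy as np
from skimage import exposure, filters, morphology

green = np.random.rand(256, 256)                         # stand-in green channel of a fundus image
enhanced = exposure.equalize_adapthist(green)            # adaptive histogram equalization
vesselness = filters.frangi(enhanced, black_ridges=True) # vessels darker than background
binary = vesselness > filters.threshold_otsu(vesselness) # improved/plain Otsu thresholding stand-in
vessels = morphology.remove_small_objects(binary, min_size=50)  # reject small spurious blobs
```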

  20. Single image super resolution algorithm based on edge interpolation in NSCT domain

    Science.gov (United States)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve the texture and edge information and to improve the space resolution of single frame, a superresolution algorithm based on Contourlet (NSCT) is proposed. The original low resolution image is transformed by NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high frequency sub-band coefficients are amplified by the interpolation method based on the edge direction to the desired resolution. For high frequency sub-band coefficients with noise and weak targets, Bayesian shrinkage is used to calculate the threshold value. The coefficients below the threshold are determined by the correlation among the sub-bands of the same scale to determine whether it is noise and de-noising. The anisotropic diffusion filter is used to effectively enhance the weak target in the low contrast region of the target and background. Finally, the high-frequency sub-band is amplified by the bilinear interpolation method to the desired resolution, and then combined with the high-frequency subband coefficients after de-noising and small target enhancement, the NSCT inverse transform is used to obtain the desired resolution image. In order to verify the effectiveness of the proposed algorithm, the proposed algorithm and several common image reconstruction methods are used to test the synthetic image, motion blurred image and hyperspectral image, the experimental results show that compared with the traditional single resolution algorithm, the proposed algorithm can obtain smooth edges and good texture features, and the reconstructed image structure is well preserved and the noise is suppressed to some extent.

  1. Multiview point clouds denoising based on interference elimination

    Science.gov (United States)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potentials for three-dimensional (3-D) modeling, but existing high noise restricts these sensors from obtaining accurate results. Thus, we proposed a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study provides the feasibility of obtaining fine 3-D models with high-noise devices, especially for depth sensors, such as Kinect.

  2. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time-frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  3. Improvement image in tomosynthesis

    International Nuclear Information System (INIS)

    Gomi, Tsutomu; Umeda, Tokuo; Takeda, Tohoru; Saito, Kyouko; Sakaguchi, Kazuya; Nakajima, Masahiro; Koshida, Kichirou

    2012-01-01

    We evaluated the X-ray digital tomosynthesis (DT) reconstruction processing method for metal artifact reduction and the application of wavelet denoising to selectively remove quantum noise, and suggest the possibility of image quality improvement using a novel application for the chest. In orthopedic DT imaging, we developed artifact reduction methods based on a modified Shepp and Logan reconstruction filter kernel realized by taking into account additional weighting by direct current (DC) components in frequency domain space. Processing leads to an increase in the ratio of low-frequency components in an image. The effectiveness of the method in enhancing the visibility of a prosthetic case was quantified in terms of removal of ghosting artifacts. In chest DT imaging, the technique was implemented on a DT system and experimentally evaluated through chest phantom measurements and spatial resolution, and compared with an existing post-reconstruction wavelet denoising algorithm by Badea et al. Our balanced sparsity-norm wavelet technique effectively decreased quantum noise, in terms of contrast-to-noise ratio (CNR), in the reconstructed images, with an improvement when applied to the pre-reconstruction images rather than post-reconstruction. The results of our technique showed that although the modulation transfer function (MTF) did not vary (preserving spatial resolution), the existing wavelet denoising algorithm caused MTF deterioration. (author)

  4. Fringe pattern denoising using coherence-enhancing diffusion.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian; Gao, Wenjing; Lin, Feng; Seah, Hock Soon

    2009-04-15

    Electronic speckle pattern interferometry is one of the methods for measuring the displacement of object surfaces, in which fringe patterns need to be evaluated. Noise is one of the key problems affecting further processing and reducing measurement quality. We propose an application of coherence-enhancing diffusion to fringe-pattern denoising. It smoothes a fringe pattern along directions both parallel and perpendicular to the fringe orientation with suitable diffusion speeds to more effectively reduce noise and improve fringe-pattern quality. It generalizes the model of Tang et al. [Opt. Lett. 33, 2179 (2008)], which only smoothes a fringe pattern along the fringe orientation. Since our model diffuses a fringe pattern along an additional direction, it is able to denoise low-density fringes as well as improve denoising effectiveness for high-density fringes. Theoretical analysis as well as simulation and experimental verifications are addressed.

  5. Split-Bregman-based sparse-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vandeghinste, Bert; Vandenberghe, Stefaan [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Goossens, Bart; Pizurica, Aleksandra; Philips, Wilfried [Ghent Univ. (Belgium). Image Processing and Interpretation Research Group (IPI); Beenhouwer, Jan de [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Wilrijk (Belgium). The Vision Lab; Staelens, Steven [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Edegem (Belgium). Molecular Imaging Centre Antwerp

    2011-07-01

    Total variation minimization has been extensively researched for image denoising and sparse view reconstruction. These methods show superior denoising performance for simple images with little texture, but result in texture information loss when applied to more complex images. It could thus be beneficial to use other regularizers within medical imaging. We propose a general regularization method, based on a split-Bregman approach. We show results for this framework combined with a total variation denoising operator, in comparison to ASD-POCS. We show that sparse-view reconstruction and noise regularization is possible. This general method will allow us to investigate other regularizers in the context of regularized CT reconstruction, and decrease the acquisition times in μCT. (orig.)

  6. Methodological improvements in voxel-based analysis of diffusion tensor images: applications to study the impact of apolipoprotein E on white matter integrity.

    Science.gov (United States)

    Newlander, Shawn M; Chu, Alan; Sinha, Usha S; Lu, Po H; Bartzokis, George

    2014-02-01

    To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing before voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (apolipoprotein [APO] E ε2) and that increase (APOE ε4) the risk of Alzheimer's disease. Diffusion tensor images (DTI) acquired at 1.5 Tesla were denoised with a total variation tensor regularization algorithm before affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data before VBA to determine regional differences between cohorts segregated by allele status. VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 compared with the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing before statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values for higher kernel sizes. Voxel-based DTI analysis can be performed on low signal to noise ratio images to detect subtle regional differences in cohorts using the proposed preprocessing techniques. Copyright © 2013 Wiley Periodicals, Inc.

  7. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with determined analytic expression and the pseudo-inverse method, the object is illuminated by a presetting light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on discrete Fourier transform measurement matrix is deduced theoretically, and compared with the algorithm of compressive computational ghost imaging based on random measurement matrix. The reconstruction process and the reconstruction error are analyzed. On this basis, the simulation is done to verify the theoretical analysis. When the sampling measurement number is similar to the number of object pixel, the rank of discrete Fourier transform matrix is the same as the one of the random measurement matrix, the PSNR of the reconstruction image of FGI algorithm and PGI algorithm are similar, the reconstruction error of the traditional CGI algorithm is lower than that of reconstruction image based on FGI algorithm and PGI algorithm. As the decreasing of the number of sampling measurement, the PSNR of reconstruction image based on FGI algorithm decreases slowly, and the PSNR of reconstruction image based on PGI algorithm and CGI algorithm decreases sharply. The reconstruction time of FGI algorithm is lower than that of other algorithms and is not affected by the number of sampling measurement. The FGI algorithm can effectively filter out the random white noise through a low-pass filter and realize the reconstruction denoising which has a higher denoising capability than that of the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
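
    The reconstruction idea itself reduces to a small linear-algebra sketch: illuminate with deterministic patterns taken from rows of a DFT matrix, record bucket values, and invert with the pseudo-inverse. The object size, sampling ratio, and the use of the real (cosine) part of the DFT rows below are illustrative assumptions.

```python
# Minimal sketch of ghost-imaging reconstruction with a deterministic DFT
# measurement matrix and a pseudo-inverse solve.
import numpy as np

n = 32                                  # object is n x n pixels
N = n * n
obj = np.zeros(N)
obj[N // 3: N // 3 + 40] = 1.0          # stand-in binary object (flattened)

F = np.fft.fft(np.eye(N))               # full DFT matrix; each row defines one pattern
m = N // 2                              # number of measurements (sampling ratio 0.5)
A = np.real(F[:m])                      # presetting cosine illumination patterns

y = A @ obj                             # bucket detector measurements
obj_hat = np.linalg.pinv(A) @ y         # pseudo-inverse reconstruction
image = obj_hat.reshape(n, n)
```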

  8. Quantitative accuracy of denoising techniques applied to dynamic 82Rb myocardial blood flow PET/CT scans

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Bouchelouche, Kirsten

    with suspected ischemic heart disease underwent a dynamic 7 minute 82Rb scan under resting and adenosine induced hyperaemic conditions after injection of 1100 MBq of 82Rb on a GE Discovery 690 PET/CT. Dynamic images were filtered using HighlY constrained backPRojection (HYPR) and a Hotelling filter of which...... the latter was evaluated using a range of 4 to 7 included factors and for both 2D and 3D filtering. Data were analyzed using Cardiac VUer and obtained MBF values were compared with those obtained when no denoising of the dynamic data was performed. Results: Both HYPR and Hotelling denoising could...

  9. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.

  10. Denoising of gravitational wave signals via dictionary learning algorithms

    Science.gov (United States)

    Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.

    2016-12-01

    Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is, hence, a most crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.

  11. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    In order to improve denoising performance, this paper reviews the basic principles of wavelet threshold denoising and the traditional threshold functions, and proposes an improved wavelet threshold function together with an improved fixed-threshold formula. First, the paper examines the problems of the traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the number of wavelet decomposition layers to design a new fixed-threshold formula. Finally, the hard, soft, Garrote, and improved threshold functions are used to denoise different signals, and the signal-to-noise ratio (SNR) and mean square error (MSE) after denoising are computed for each. Theoretical analysis and experimental results show that the proposed approach overcomes the constant bias of the soft threshold function and the discontinuity of the hard threshold function, handles the problem of applying the same threshold value at different decomposition scales, effectively filters the noise in the signals, and improves the SNR while reducing the MSE of the output signals.
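
    For reference, the sketch below implements the standard hard, soft and Garrote threshold functions compared in the paper, plus one possible adjustable soft threshold with a tuning factor; the paper's exact improved function and its layer-dependent fixed-threshold formula are not reproduced here, so the adjustable variant is only an assumption about the general idea.

```python
# Classical wavelet threshold functions and one adjustable soft-threshold variant.
import numpy as np

def hard(w, t):
    return np.where(np.abs(w) > t, w, 0.0)

def soft(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def garrote(w, t):
    safe = np.where(w == 0, 1.0, w)                    # avoid division by zero; masked out below
    return np.where(np.abs(w) > t, w - t ** 2 / safe, 0.0)

def adjustable_soft(w, t, alpha=0.5):
    # shrink retained coefficients by alpha*t instead of t, reducing the constant bias of soft thresholding
    return np.where(np.abs(w) > t, np.sign(w) * (np.abs(w) - alpha * t), 0.0)

w = np.linspace(-3, 3, 13)                             # sample coefficients
t = 1.0                                                # e.g. the universal threshold sigma*sqrt(2*log(N))
for f in (hard, soft, garrote, adjustable_soft):
    print(f.__name__.ljust(15), np.round(f(w, t), 2))
```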

  12. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    International Nuclear Information System (INIS)

    Wang Wen-Bo; Zhang Xiao-Dong; Chang Yuchan; Wang Xiang-Li; Wang Zhao; Chen Xi; Zheng Lei

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signal and construct multidimensional input vectors based on EMD and its translation invariance; second, to perform independent component analysis on the input vectors, which amounts to a self-adaptive denoising of the intrinsic mode functions (IMFs) of the chaotic signal; and finally, to compose the new denoised chaotic signal from all IMFs. Experiments on a Lorenz chaotic signal corrupted by Gaussian noise of different levels and on the monthly observed chaotic sunspot sequence were carried out. The results show that the proposed method is effective in denoising chaotic signals. Moreover, it can correct the centre point in the phase space effectively, which makes the result approach the real track of the chaotic attractor. (paper)

  13. A Novel Hybrid Model Based on Extreme Learning Machine, k-Nearest Neighbor Regression and Wavelet Denoising Applied to Short-Term Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Weide Li

    2017-05-01

    Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve a satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a low-frequency main signal and several detailed signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, the ELM is used to capture the non-linear relationship between these variables and produce the final prediction of the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM) and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the model proposed in this paper improves the accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region. The accurate prediction has significant practical value.

  14. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Chen, Dali; Chen, YangQuan; Xue, Dingyu

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, convergence rate, and blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulation are constructed in theory. In order to guarantee $O(1/{N}^{2})$ conv...

  15. Comparative study on γ energy spectrum denoise by fourier and wavelet transforms

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2007-01-01

    This paper introduces the basic principles of the wavelet and Fourier transforms, applies the wavelet transform method to denoise the γ energy spectrum of 60Co, and compares it with the Fourier transform method. Simulation results obtained with the MATLAB software tool showed that, compared with the traditional Fourier transform, the wavelet transform achieves higher accuracy in γ energy spectrum denoising and is more suitable for this task. (authors)

  16. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, based on the characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoised DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  17. RADIANCE DOMAIN COMPOSITING FOR HIGH DYNAMIC RANGE IMAGING

    Directory of Open Access Journals (Sweden)

    M.R. Renu

    2013-02-01

    High dynamic range imaging aims at creating an image with a range of intensity variations larger than the range supported by a camera sensor. Most commonly used methods combine multiple-exposure low dynamic range (LDR) images to obtain the high dynamic range (HDR) image. Available methods typically neglect the noise term while finding appropriate weighting functions to estimate the camera response function as well as the radiance map. We look at the HDR imaging problem in a denoising framework and aim at reconstructing a low-noise radiance map from noisy low dynamic range images, which is tone mapped to get the LDR equivalent of the HDR image. We propose a maximum a posteriori probability (MAP) based reconstruction of the HDR image using a Gibbs prior to model the radiance map, with total variation (TV) as the prior to avoid unnecessary smoothing of the radiance field. To make the computation with the TV prior efficient, we extend the majorize-minimize method of upper bounding the total variation by a quadratic function to our case, which has a nonlinear term arising from the camera response function. A theoretical justification for doing radiance domain denoising as opposed to image domain denoising is also provided.

  18. Three-Dimensional Velocity Field De-Noising using Modal Projection

    Science.gov (United States)

    Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn

    2017-11-01

    PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g. blood, air, etc.), and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, measured data remain noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through ``truncation boundaries'', and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.

  19. Accelerometer North Finding System Based on the Wavelet Packet De-noising Algorithm and Filtering Circuit

    Directory of Open Access Journals (Sweden)

    LU Yongle

    2014-07-01

    This paper demonstrates a method and system for north finding with a low-cost piezoelectric accelerometer based on the Coriolis acceleration principle. The proposed setup is based on an accelerometer with residual noise of 35 ng·Hz-1/2. The plane of the north finding system is aligned parallel to the local level, which helps to eliminate the effect of plane error. The Coriolis acceleration caused by the earth's rotation and the instantaneous velocity of the accelerometer is much weaker than the g-sensitivity acceleration. To achieve high accuracy and a shorter north finding time, a filtering circuit and a wavelet packet de-noising algorithm are used as follows. First, the hardware is designed so that the alternating current passes through the filtering circuit, isolating the DC component and amplifying the weak AC signal; the DC component is an interfering signal generated by the earth's gravity. Then, a wavelet packet is used to filter the signal that has passed through the filtering circuit. Finally, the north finding results obtained with wavelet packet filtering are compared with those obtained with a low-pass filter. The de-noised data show that both wavelet packet filtering and wavelet filtering give high accuracy, and that wavelet packet filtering has a stronger ability to remove burst noise and better adaptability to engineering environments than wavelet filtering. Experimental results prove the effectiveness and practical feasibility of the accelerometer north finding method based on the wavelet packet de-noising algorithm.

  20. RESTORATION TECHNIQUE FOR PLEIADES-HR PANCHROMATIC IMAGES

    Directory of Open Access Journals (Sweden)

    C. Latry

    2012-07-01

    Pleiades-HR was launched on the 17th of December 2011 from Kourou Space Centre, French Guiana. Like other high resolution optical satellites, it acquires both panchromatic images, with 70 cm spatial resolution, and lower resolution multispectral images with 2.8 m spatial resolution. Pleiades-HR is an optimized system, which means that the Modulation Transfer Function has a low value at the Nyquist frequency, in order to reduce both the telescope diameter and aliasing effects. The Shannon sampling condition is thus met at first order, which also makes classical ground processing, such as image matching or resampling, better justified from a mathematical point of view. Raw images are thus blurry, which implies a deconvolution stage that restores sharpness but also increases the noise level in the high frequency domain. A denoising step, based upon a wavelet packet coefficient thresholding/shrinkage technique, allows controlling the final noise level. Each of these methods includes numerous parameters that have to be assessed during the in-flight commissioning period: the deconvolution filter, which depends on MTF assessment, the instrumental noise model, the noise level target for denoised images, and the wavelet packet decomposition level. This paper aims to describe precisely the deconvolution/denoising algorithms and how their main parameters have been set up during the in-flight commissioning stage. Special attention is given to structured noise induced by the Pleiades-HR on-board wavelet-based compression algorithm.

  1. Comments on "Image denoising by sparse 3-D transform-domain collaborative filtering".

    Science.gov (United States)

    Hou, Yingkun; Zhao, Chunxia; Yang, Deyun; Cheng, Yong

    2011-01-01

    In order to resolve the problem that the denoising performance drops sharply when the noise standard deviation reaches 40, the original authors proposed to replace the wavelet transform by the DCT. In this comment, we argue that this replacement is unnecessary and that the problem can be solved by adjusting some numerical parameters, and we present this parameter modification approach. Experimental results demonstrate that the proposed modification achieves better results in terms of both peak signal-to-noise ratio and subjective visual quality than the original method for strong noise.

  2. Data-adaptive image-denoising for detecting and quantifying nanoparticle entry in mucosal tissues through intravital 2-photon microscopy

    Directory of Open Access Journals (Sweden)

    Torsten Bölke

    2014-11-01

    Intravital 2-photon microscopy of mucosal membranes, across which nanoparticles enter the organism, typically generates noisy images. Because the noise results from the random statistics of only very few photons detected per pixel, it cannot be avoided by technical means. Fluorescent nanoparticles contained in the tissue may be represented by a few bright pixels which closely resemble the noise structure. We here present a data-adaptive method for digital denoising of datasets obtained by 2-photon microscopy. The algorithm exploits both local and non-local redundancy of the underlying ground-truth signal to reduce noise. Our approach automatically adapts the strength of noise suppression in a data-adaptive way by using a Bayesian network. The results show that the specific adaptation to both signal and noise characteristics improves the preservation of fine structures such as nanoparticles, while fewer artefacts are produced as compared to reference algorithms. Our method is applicable to other imaging modalities as well, provided the specific noise characteristics are known and taken into account.

  3. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Science.gov (United States)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  4. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Fault diagnosis in rotating machinery is significant to avoid serious accidents; thus, an accurate and timely diagnosis method is necessary. With the breakthrough in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and deep convolution neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods properly deal with the noise that exists in practical situations, and conventional denoising methods require extensive professional experience. Accordingly, rethinking the fault diagnosis method based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. In this study, an integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE) is applied both to denoise random noise in the raw signals and to represent fault features in fault pattern diagnosis, for both bearing rolling faults and gearbox faults, and is trained in a greedy layer-wise fashion. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, particularly in the presence of noise, with an advantage of approximately 7% in fault diagnosis accuracy.

  5. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

    The Electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference while being recorded. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising of noisy EOG signals using the Stationary Wavelet Transform (SWT) technique by two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For the segmental denoising analysis, an EOG signal is simulated and corrupted with controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are much lower than those for 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise noisy EOG signals with noise powers ranging from 5 dB to 15 dB. This technique might be useful in quality diagnosis of various neurological or eye disorders.

  6. SEM Image Processing Based on Third-order B-spline Function

    Institute of Scientific and Technical Information of China (English)

    张健

    2011-01-01

    Because of their particular role in practical testing, SEM images need denoising that also preserves edges and allows accurate edge extraction and localization. This paper therefore adopts an edge-preserving partial differential denoising method and the widely used multi-scale wavelet analysis for edge detection, both based on the third-order B-spline function as the core operator, to process SEM images used for line-width testing. The algorithm achieves a good denoising effect while maintaining edge features and yields clear edge detection results for SEM images.

  7. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is an important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power, photoacoustic signals containing the BGC information are generated through the thermo-elastic mechanism, and the BGC level is then interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by a variety of noises, e.g., the interference of background sounds and the multi-component nature of blood. The quality of the photoacoustic signal of BGC directly impacts the precision of the BGC measurement. Therefore, an improved wavelet denoising method is proposed to eliminate the noise contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising performance of the improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.
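
    A baseline wavelet-threshold denoising pipeline of the kind the paper improves upon is sketched below with PyWavelets. The improved dual-threshold function itself is not given in the abstract, so standard soft thresholding with the universal threshold is used as a stand-in, and the synthetic "photoacoustic-like" pulse, wavelet choice and decomposition level are illustrative assumptions.

```python
# Baseline wavelet-threshold denoising of a synthetic pulse (stand-in for a photoacoustic signal).
import numpy as np
import pywt

fs = 10000
t = np.arange(0, 0.1, 1.0 / fs)
clean = np.exp(-((t - 0.05) ** 2) / (2 * 0.001 ** 2)) * np.sin(2 * np.pi * 2000 * t)
noisy = clean + 0.2 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, 'sym8', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest detail band
thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'sym8')[:noisy.size]

snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
print("output SNR:", round(snr, 1), "dB")
```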

  8. Assessing denoising strategies to increase signal to noise ratio in spinal cord and in brain cortical and subcortical regions

    Science.gov (United States)

    Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.

    2018-02-01

    Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to a limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at acquisition stage, and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools respectively. The study, performed in healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal to noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain region.

  9. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR…

  10. MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images

    Science.gov (United States)

    Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.

    2018-04-01

    In 2D ultrasound images of ultrasound-guided percutaneous needle insertions, it is difficult to determine the positions of the needle axis and tip because of artifacts and other noise. In this work the speckle is regarded as the noise of the ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. Firstly, the wavelet soft thresholding technique based on the BayesShrink rule is used to reduce the speckle in the ultrasound image. Secondly, Otsu's thresholding method and morphological operations are applied to pre-process the ultrasound image. Finally, the needle is localized in the 2D ultrasound image using the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm provides valid estimates of the needle axis and tip positions in ultrasound images. This work is expected to be useful in path planning and robot-assisted needle insertion procedures.

  11. Patch-based anisotropic diffusion scheme for fluorescence diffuse optical tomography--part 2: image reconstruction.

    Science.gov (United States)

    Correia, Teresa; Koch, Maximilian; Ale, Angelique; Ntziachristos, Vasilis; Arridge, Simon

    2016-02-21

    Fluorescence diffuse optical tomography (fDOT) provides 3D images of fluorescence distributions in biological tissue, which represent molecular and cellular processes. The image reconstruction problem is highly ill-posed and requires regularisation techniques to stabilise and find meaningful solutions. Quadratic regularisation tends to either oversmooth or generate very noisy reconstructions, depending on the regularisation strength. Edge preserving methods, such as anisotropic diffusion regularisation (AD), can preserve important features in the fluorescence image and smooth out noise. However, AD has limited ability to distinguish an edge from noise. We propose a patch-based anisotropic diffusion regularisation (PAD), where regularisation strength is determined by a weighted average according to the similarity between patches around voxels within a search window, instead of a simple local neighbourhood strategy. However, this method has higher computational complexity and, hence, we wavelet compress the patches (PAD-WT) to speed it up, while simultaneously taking advantage of the denoising properties of wavelet thresholding. Furthermore, structural information can be incorporated into the image reconstruction with PAD-WT to improve image quality and resolution. In this case, the weights used to average voxels in the image are calculated using the structural image, instead of the fluorescence image. The regularisation strength depends on both structural and fluorescence images, which guarantees that the method can preserve fluorescence information even when it is not structurally visible in the anatomical images. In part 1, we tested the method using a denoising problem. Here, we use simulated and in vivo mouse fDOT data to assess the algorithm performance. Our results show that the proposed PAD-WT method provides high quality and noise free images, superior to those obtained using AD.

  12. Image denoising using new pixon representation based on fuzzy filtering and partial differential equations

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Nikpour, Mohsen

    2012-01-01

    In this paper, we have proposed two extensions to pixon-based image modeling. The first one is using bicubic interpolation instead of bilinear interpolation and the second one is using fuzzy filtering method, aiming to improve the quality of the pixonal image. Finally, partial differential...

  13. Intra-Day Trading System Design Based on the Integrated Model of Wavelet De-Noise and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Hongguang Liu

    2016-12-01

    Technical analysis has been proved capable of exploiting short-term fluctuations in financial markets. Recent results indicate that the market timing approach beats many traditional buy-and-hold approaches in most short-term trading periods. Genetic programming (GP) has been used to generate short-term trading rules for stock markets during the last few decades. However, few of the related studies on the analysis of financial time series with genetic programming considered the non-stationary and noisy characteristics of the series. In this paper, to de-noise the original financial time series and to search for profitable trading rules, an integrated method is proposed based on the Wavelet Threshold (WT) de-noising method and GP. Since relevant information that affects the movement of the time series is assumed to be fully digested during the periods when the market is closed, intra-day high-frequency time series are used here, rather than daily or monthly data with their jump points, to fully exploit the short-term forecasting advantage of technical analysis. To validate the proposed integrated approach, an empirical study is conducted on the China Securities Index (CSI) 300 futures in the emerging China Financial Futures Exchange (CFFEX) market. The analysis outcomes show that the wavelet de-noise approach outperforms many comparative models.

  14. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    Science.gov (United States)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

    In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations but instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, an analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.

  15. Implementation of dictionary pair learning algorithm for image quality improvement

    Science.gov (United States)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    This paper proposes an image denoising method based on a dictionary pair learning algorithm. Visual information transmitted in the form of digital images is becoming a major method of communication in the modern age, but the image obtained after transmission is often corrupted with noise. The received image needs processing before it can be used in applications. Image denoising involves the manipulation of the image data to produce a visually high quality image.

  16. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem. However, the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, these methods always demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. Especially, our proposal is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    Science.gov (United States)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
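
    The structural idea (embed, decompose, re-weight, reconstruct) can be sketched as below; the paper's periodic-modulation-intensity index is replaced here by a simple keep-the-strongest weighting, so this is only an energy-based stand-in for RSVD, with illustrative signal and embedding dimensions.

```python
# SVD denoising of a 1D signal via Hankel embedding and re-weighted singular components.
import numpy as np

def hankel(x, L):
    K = x.size - L + 1
    return np.array([x[i:i + K] for i in range(L)])        # L x K trajectory matrix

def dehankel(H):
    L, K = H.shape
    out, cnt = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for i in range(L):                                      # average along anti-diagonals
        out[i:i + K] += H[i]
        cnt[i:i + K] += 1
    return out / cnt

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 13 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
noisy = clean + 0.8 * np.random.randn(t.size)

U, s, Vt = np.linalg.svd(hankel(noisy, L=100), full_matrices=False)
w = np.zeros_like(s); w[:4] = 1.0                           # RSVD would instead weight SCs by information content
denoised = dehankel((U * (w * s)) @ Vt)
print("rms error:", round(float(np.sqrt(np.mean((denoised - clean) ** 2))), 3))
```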

  18. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests image processing tools for improved visualization and better analysis of remotely sensed images. Methods are already available in the literature for this purpose, but their most important limitation is a lack of robustness. We propose an optimal method for enhancement of the images using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  19. Latent fingerprint wavelet transform image enhancement technique for optical coherence tomography

    CSIR Research Space (South Africa)

    Makinana, S

    2016-09-01

    (FMR) and Equal Error Rate (EER) were used. The results of these two measures give an FMR of 3% and an EER of 1.9% for denoised images, which is better than for non-denoised images, where the EER is 8.7%....

  20. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
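
    The feature-extraction step described above can be sketched roughly as follows: gradient-magnitude and Laplacian-of-Gaussian (LOG) response maps are computed and their marginal statistics collected into a feature vector; the subsequent mapping from features to a noise-level estimate is learned in the paper from training images with known noise levels, and is only indicated here by a comment. The specific statistics and filter scales below are assumptions.

```python
# Quality-aware feature extraction from gradient-magnitude and LOG response maps.
import numpy as np
from scipy import ndimage

def quality_aware_features(img, sigma=0.5):
    gx = ndimage.sobel(img, axis=0, mode='reflect')
    gy = ndimage.sobel(img, axis=1, mode='reflect')
    gm = np.hypot(gx, gy)                                    # gradient magnitude map
    log = ndimage.gaussian_laplace(img, sigma=sigma)         # Laplacian-of-Gaussian map
    feats = []
    for m in (gm, log):
        feats += [m.mean(), m.std(), np.mean(np.abs(m)), np.percentile(np.abs(m), 90)]
    return np.array(feats)

# In the full method these features feed a trained regressor that predicts the noise level,
# which would then be passed to BM3D in place of a user-supplied sigma.
rng = np.random.default_rng(0)
for sigma_n in (5.0, 25.0):                                  # two noise levels on a flat patch
    patch = 128.0 + sigma_n * rng.standard_normal((64, 64))
    print(sigma_n, np.round(quality_aware_features(patch), 2))
```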

  1. Image registration for a UV-Visible dual-band imaging system

    Science.gov (United States)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way for early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots in all weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithm details of the UV image preprocessing and the affine transformation model establishment, together with experiments verifying their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask, and the transformation model is established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge and the equipment fault at distances in the range 2.5 m to 20 m are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resultant protocol is expected not only to improve the efficiency and accuracy of such imaging systems in locating corona discharge spots, but also to provide a more general reference for the calibration of various dual-band imaging systems in practice.

  2. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    Science.gov (United States)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the radiation exposure of X-rays in the population. It is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because the quantum noise can be modeled by a Poisson process, we first transform the quantum noise by using the Anscombe variance stabilizing transform (VST), producing approximately Gaussian noise with unitary variance. Second, the non-noise shearlet coefficients are obtained by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, which produces the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated from high frequency sub-bands effectively. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing the subtle details. It has certain value in clinical application.
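
    The variance-stabilization step is simple enough to sketch: Poisson-distributed counts are passed through the Anscombe transform, denoised as if the noise were unit-variance Gaussian, and inverse-transformed. A Gaussian smoothing is used below as a stand-in for the shearlet hard-thresholding stage, which would require a dedicated shearlet toolbox; the phantom and the algebraic (rather than unbiased) inverse are illustrative simplifications.

```python
# Anscombe VST -> Gaussian-domain denoising -> inverse VST, on a toy low-count phantom.
import numpy as np
from scipy import ndimage

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0            # simple algebraic inverse (an unbiased inverse also exists)

rng = np.random.default_rng(1)
lam = np.full((64, 64), 5.0); lam[16:48, 16:48] = 20.0     # low-dose "intensity" phantom
counts = rng.poisson(lam).astype(float)                     # quantum (Poisson) noise

stabilized = anscombe(counts)                               # noise now approximately N(0, 1)
denoised = ndimage.gaussian_filter(stabilized, sigma=1.0)   # stand-in for shearlet hard-thresholding
estimate = inverse_anscombe(denoised)
print("rmse:", round(float(np.sqrt(np.mean((estimate - lam) ** 2))), 2))
```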

  3. Study of Denoising in TEOAE Signals Using an Appropriate Mother Wavelet Function

    Directory of Open Access Journals (Sweden)

    Habib Alizadeh Dizaji

    2007-06-01

    Background and Aim: Matching a mother wavelet to a class of signals can be of interest in signal analysis and denoising based on wavelet multiresolution analysis and decomposition. As transient evoked otoacoustic emissions (TEOAEs) are contaminated with noise, the aim of this work was to provide a quantitative approach to the problem of matching a mother wavelet to TEOAE signals by using tuning curves, and to use it for analysis and denoising of TEOAE signals. An approximated mother wavelet for TEOAE signals was calculated using an algorithm for designing a wavelet to match a specified signal. Materials and Methods: In this paper a tuning curve is used as a template for designing a mother wavelet that has maximum matching to the tuning curve. The mother wavelet matching was performed on the tuning curve's spectrum magnitude and phase independently of one another. The scaling function was calculated from the matched mother wavelet, and from these functions lowpass and highpass filters were designed for a filter bank and for otoacoustic emission signal analysis and synthesis. After analyzing the signal, denoising was performed by time windowing of the signal's time-frequency components. Results: The analysis indicated greater signal reconstruction improvement in comparison with the coiflet mother wavelet, and by using the proposed denoising algorithm it is possible to enhance the signal-to-noise ratio up to dB. Conclusion: The wavelet generated from this algorithm was remarkably similar to the biorthogonal wavelets. Therefore, by matching a biorthogonal wavelet to the tuning curve and using wavelet packet analysis, a high resolution time-frequency analysis of otoacoustic emission signals is possible.

  4. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain signals, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated in the wavelet transform. Among the wavelet denoising methods proposed up to now, the wavelets by Mallat and Zhong can best reconstruct the pure transient signal from a highly corrupted signal. But there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method is proposed for noise discrimination, which can be implemented easily in a digital system. To demonstrate the potential role of the wavelet denoising method in the nuclear field, this method is applied to the steam or feedwater flow rate estimation of the secondary loop, and a configuration of the S/G water level control system is proposed that incorporates the wavelet denoising method in estimating the flow rate value at low operating powers.

  5. An enhanced approach for biomedical image restoration using image fusion techniques

    Science.gov (United States)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use the wavelet transform to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from the above denoising techniques using an image addition method. Quantitative performance metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and Mean Square Error (MSE) are then computed, since these statistical measurements help in the assessment of fidelity and image quality. The results show that our approach can be applied to different image types and color spaces for biomedical images.

  6. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    Science.gov (United States)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing to `cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts appearing during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  7. Wavelets in medical imaging

    International Nuclear Information System (INIS)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-01-01

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computer tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and wavelet transform, analysis and denoising of the EEG, one of the important biomedical signals, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizure in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  8. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computer tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and wavelet transform, analysis and denoising of the EEG, one of the important biomedical signals, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizure in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  9. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Considering the pros and cons of the contourlet transform and the characteristics of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse medical images. The results strongly suggest that the proposed algorithm can improve the visual effects of medical image fusion as well as image quality, image denoising, and enhancement.

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality available for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise in the measured data is filtered using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that stationary white noise and non-stationary electromagnetic noise in GREATEM signals can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
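
    The exponential window-fitting step can be illustrated as follows: a transient decay polluted by a burst of non-stationary noise is repaired, inside the flagged window, by the value of a fitted exponential. The window position, the way contaminated samples are excluded from the fit, and the omission of the wavelet pre-filtering stage are all simplifying assumptions made for this sketch.

```python
# Replace a noise-polluted segment of a transient decay with an exponential fit of the window.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau):
    return a * np.exp(-t / tau)

t = np.linspace(1e-4, 1e-2, 500)
clean = decay(t, 1.0, 2e-3)
noisy = clean + 0.005 * np.random.randn(t.size)
noisy[200:260] += 0.2 * np.sin(2 * np.pi * 5e3 * t[200:260])   # sferics-like burst

win = slice(180, 280)                                          # window flagged by energy detection
good = np.ones(t.size, bool); good[200:260] = False            # fit only on uncontaminated samples
popt, _ = curve_fit(decay, t[win][good[win]], noisy[win][good[win]], p0=(1.0, 1e-3))

repaired = noisy.copy()
repaired[200:260] = decay(t[200:260], *popt)                   # replace polluted samples with the fit
print("fitted amplitude and tau:", np.round(popt, 4))
```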

  11. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.
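
    A minimal projected-Landweber loop of the kind named above is sketched below: gradient steps on the data-fidelity term alternate with a projection onto [0, 1] and a smoothing step. The Gaussian smoothing is only a stand-in for the guided filter (which needs a guidance image and an extra dependency), and the pattern count, step size and iteration budget are illustrative.

```python
# Projected Landweber reconstruction of a toy object from undersampled ghost-imaging measurements.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
n = 32
obj = np.zeros((n, n)); obj[10:22, 10:22] = 1.0
x_true = obj.ravel()

m = n * n // 3                                        # undersampled number of speckle patterns
A = rng.standard_normal((m, n * n)) / np.sqrt(m)      # random illumination patterns
y = A @ x_true                                        # bucket-detector measurements

x = np.zeros(n * n)
step = 1.0 / np.linalg.norm(A, 2) ** 2                # safe Landweber step size
for _ in range(200):
    x = x + step * (A.T @ (y - A @ x))                # gradient step on the data-fidelity term
    x = np.clip(x, 0.0, 1.0)                          # projection onto the physical range
    x = ndimage.gaussian_filter(x.reshape(n, n), 0.5).ravel()   # stand-in for guided filtering
print("relative error:", round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))
```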

  12. Statistical x-ray computed tomography imaging from photon-starved measurements

    Science.gov (United States)

    Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-03-01

    Dose reduction in clinical X-ray computed tomography (CT) causes a low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they meet their limits of practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise can drive measured photon counts negative, posing a problem for the log() operation in the preprocessing of the data. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.
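
    The abstract does not specify the adaptive filter or the Bayesian prior, so the following is only a toy stand-in illustrating the preprocessing issue it addresses: electronic noise can push measured counts to zero or below, which breaks the log() conversion to line integrals, so photon-starved channels are replaced by a local mean before the logarithm. The function name, window size and floor value are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def safe_log_sinogram(counts, I0, window=5, floor=0.5):
    """Replace non-positive / photon-starved counts with a local mean along
    the detector channel before taking the log (toy stand-in for an
    adaptive projection-correction filter)."""
    counts = np.asarray(counts, dtype=float)
    local_mean = uniform_filter1d(np.clip(counts, 0, None), size=window, axis=-1)
    starved = counts < floor
    repaired = np.where(starved, np.maximum(local_mean, floor), counts)
    return np.log(I0 / repaired)   # line integrals p = ln(I0 / I)

# Electronic noise drives some measured counts to zero or below
counts = np.array([1200., 8., 0., -3., 15., 900.])
p = safe_log_sinogram(counts, I0=2000.)
```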

  13. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    Science.gov (United States)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is an important part of logging interpretation. Noise is inevitably mixed in during data collection due to the equipment and other external factors, and this affects subsequent lithological identification and other logging interpretation. Therefore, to obtain a more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which integrates the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) at N different scales and one residual. Moreover, a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing problem between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to the analysis of the simulated and the actual data. Results show diverse performance of these de-noising methods with regard to accuracy of lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
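
    A minimal sketch of the reconstruction stage, assuming the IMFs and residual have already been produced by an improved-CEEMDAN implementation (for example the PyEMD package) and that the reconstruction scale k has been chosen by the self-adaptive selection step; the wavelet-threshold rule shown is the standard universal soft threshold, not necessarily the exact rule used by the authors.

```python
import numpy as np
import pywt

def wavelet_threshold(x, wavelet='db4', level=3):
    """Soft-threshold the detail coefficients with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

def reconstruct_log(imfs, residual, k):
    """Rebuild the log from the denoised (k-1)th IMF, IMFs k..N and the residual.

    `imfs` is an (N, n_samples) array with imfs[0] = IMF 1 (1-based numbering
    as in the paper); IMFs 1..k-2, treated as noise-dominated, are discarded.
    """
    denoised_imf = wavelet_threshold(imfs[k - 2])   # the (k-1)th IMF
    low_freq = imfs[k - 1:].sum(axis=0)             # IMFs k..N
    return denoised_imf + low_freq + residual
```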

  14. Iris image recognition wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results on wavelet filter bank based feature extraction and on classifier design in the field of iris image recognition. It gives a broad treatment of the design of separable and non-separable wavelet filter banks and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together three strands of research (wavelets, iris image analysis, and classification) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids the need to consult many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc.  that will...

  15. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach.

    Directory of Open Access Journals (Sweden)

    Pasquale Borrelli

    Full Text Available In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.

  16. Speckle Reduction on Ultrasound Liver Images Based on a Sparse Representation over a Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Mohamed Yaseen Jabarulla

    2018-05-01

    Full Text Available Ultrasound images are corrupted with multiplicative noise known as speckle, which reduces the effectiveness of image processing and hampers interpretation. This paper proposes a multiplicative speckle suppression technique for ultrasound liver images, based on a new signal reconstruction model known as sparse representation (SR) over dictionary learning. In the proposed technique, the non-uniform multiplicative signal is first converted into additive noise using an enhanced homomorphic filter. This is followed by pixel-based total variation (TV) regularization and patch-based SR over a dictionary trained using K-singular value decomposition (KSVD). Finally, the split Bregman algorithm is used to solve the optimization problem and estimate the de-speckled image. In simulations performed on both synthetic and clinical ultrasound images for speckle reduction, the proposed technique achieved peak signal-to-noise ratios of 35.537 dB for the dictionary trained on noisy image patches and 35.033 dB for the dictionary trained using a set of reference ultrasound image patches. Further, the evaluation results show that the proposed method performs better than other state-of-the-art denoising algorithms in terms of both peak signal-to-noise ratio and subjective visual quality assessment.
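
    Only the first two stages (homomorphic conversion of multiplicative speckle to additive noise, followed by TV regularization) are sketched here with scikit-image; the enhanced homomorphic filter, the K-SVD-trained dictionary and the split Bregman solver of the paper are not reproduced, and the speckle model and TV weight are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def homomorphic_tv_despeckle(img, weight=0.1, eps=1e-6):
    """Convert multiplicative speckle to (approximately) additive noise via a
    log transform, apply TV regularization, and map back."""
    img = np.asarray(img, dtype=float)
    log_img = np.log(img + eps)                       # multiplicative -> additive
    log_denoised = denoise_tv_chambolle(log_img, weight=weight)
    return np.exp(log_denoised) - eps

# Toy example: constant-intensity phantom corrupted by gamma-distributed speckle
rng = np.random.default_rng(1)
clean = np.full((128, 128), 0.6)
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # mean-1 speckle
despeckled = homomorphic_tv_despeckle(speckled)
```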

  17. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm improved by the EM algorithm, which jointly restores multiframe adaptive optics images based on expectation-maximization theory. Firstly, a mathematical model is built for the degraded multiframe adaptive optics images. A time-varying point spread function model is deduced based on the phase error. The AO images are denoised using the image power spectral density and support constrain...

  18. Self-adapting denoising, alignment and reconstruction in electron tomography in materials science

    Energy Technology Data Exchange (ETDEWEB)

    Printemps, Tony, E-mail: tony.printemps@cea.fr [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France); Mula, Guido [Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, S.P. 8km 0.700, 09042 Monserrato (Italy); Sette, Daniele; Bleuet, Pierre; Delaye, Vincent; Bernier, Nicolas; Grenier, Adeline; Audoit, Guillaume; Gambacorti, Narciso; Hervé, Lionel [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2016-01-15

    An automatic procedure for electron tomography is presented. This procedure is adapted for specimens that can be fashioned into a needle-shaped sample and has been evaluated on inorganic samples. It consists of self-adapting denoising, automatic and accurate alignment including detection and correction of tilt axis, and 3D reconstruction. We propose the exploitation of a large amount of information of an electron tomography acquisition to achieve robust and automatic mixed Poisson–Gaussian noise parameter estimation and denoising using undecimated wavelet transforms. The alignment is made by mixing three techniques, namely (i) cross-correlations between neighboring projections, (ii) common line algorithm to get a precise shift correction in the direction of the tilt axis and (iii) intermediate reconstructions to precisely determine the tilt axis and shift correction in the direction perpendicular to that axis. Mixing alignment techniques turns out to be very efficient and fast. Significant improvements are highlighted in both simulations and real data reconstructions of porous silicon in high angle annular dark field mode and agglomerated silver nanoparticles in incoherent bright field mode. 3D reconstructions obtained with minimal user-intervention present fewer artefacts and less noise, which permits easier and more reliable segmentation and quantitative analysis. After careful sample preparation and data acquisition, the denoising procedure, alignment and reconstruction can be achieved within an hour for a 3D volume of about a hundred million voxels, which is a step toward a more routine use of electron tomography. - Highlights: • Goal: perform a reliable and user-independent 3D electron tomography reconstruction. • Proposed method: self-adapting denoising and alignment prior to 3D reconstruction. • Noise estimation and denoising are performed using wavelet transform. • Tilt axis determination is done automatically as well as projection alignment.

  19. Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising

    Science.gov (United States)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2018-04-01

    As one of the key components of railway vehicles, the axle box bearing has an operating condition with a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by the train axle box bearing is an amplitude-modulated and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from serious trackside acoustic background noise using those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key indicator of a bearing fault signal in the time domain. Firstly, a geometry-based Doppler correction is applied to the signals of each sensor, and through the signal superposition of multiple sensors, random noise and impulse noise, which interfere with the kurtosis indicator, are suppressed. Then, the KWP is conducted. At last, EMD and the Hilbert transform are applied to extract the fault feature. Experimental results indicate that the proposed method consisting of KWP and EMD is superior to EMD alone.
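
    A simplified, hypothetical version of the kurtosis-guided wavelet-packet idea: decompose the signal, keep only the terminal nodes whose kurtosis is high (impulsive bearing signatures raise kurtosis), and reconstruct. The Doppler correction, multi-sensor superposition and the EMD/Hilbert envelope stage are omitted, and the wavelet, level and threshold are illustrative choices.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def kurtosis_wp_denoise(signal, wavelet='db8', level=4, kurt_min=3.0):
    """Keep only wavelet-packet nodes with high kurtosis (impulsive content)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    kept = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
    any_kept = False
    for node in wp.get_level(level, order='natural'):
        if kurtosis(node.data, fisher=False) > kurt_min:
            kept[node.path] = node.data
            any_kept = True
    if not any_kept:
        return signal                       # nothing impulsive found; leave signal as is
    return kept.reconstruct(update=False)[:len(signal)]

# Toy bearing-like signal: periodic impulses buried in broadband noise
fs = 20000
t = np.arange(0, 0.5, 1 / fs)
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 1 / 90.0) < 0.0005)  # ~90 Hz impacts
noisy = impulses + 0.5 * np.random.randn(t.size)
enhanced = kurtosis_wp_denoise(noisy)
```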

  20. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so the invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped. The larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images that provides a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  1. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so the invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped. The larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images that provides a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable-pixel evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  2. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    KAUST Repository

    Cannistraci, Carlo Vittorio

    2015-01-26

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis.

  3. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    KAUST Repository

    Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin

    2015-01-01

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis.
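
    Neither MMWF nor MMWF* is available in standard libraries, so the sketch below only shows the two classical spatial filters the family builds on, the median filter and the adaptive Wiener filter, applied to a toy 3D array with SciPy; window_size is the single tuning parameter mentioned in the abstract, and the toy data are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener

def spatial_denoise(spectrum, window_size=5):
    """Apply the two classical spatial filters the MMWF family builds on."""
    med = median_filter(spectrum, size=window_size)   # rank-based, robust to spikes
    wie = wiener(spectrum, mysize=window_size)        # adaptive mean-based filter
    return med, wie

# Toy 3D "spectrum": Gaussian noise, one genuine peak, and impulsive artefacts
rng = np.random.default_rng(2)
cube = rng.normal(0.0, 0.05, (32, 32, 32))
cube[16, 16, 16] = 5.0                                # a genuine peak
spikes = (rng.integers(0, 32, 30), rng.integers(0, 32, 30), rng.integers(0, 32, 30))
cube[spikes] += 3.0                                   # impulsive artefacts
median_out, wiener_out = spatial_denoise(cube)
```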

  4. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    Science.gov (United States)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signals and construct multidimensional input vectors, on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which means that a self-adapting denoising is carried out for the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on the Lorenz chaotic signal contaminated by different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results show that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No.2015III015-B02).

  5. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  6. Underwater image quality enhancement of sea cucumbers based on improved histogram equalization and wavelet transform

    Directory of Open Access Journals (Sweden)

    Xi Qiao

    2017-09-01

    Full Text Available Sea cucumbers usually live in an environment where lighting and visibility are generally not controllable, which causes underwater images of sea cucumbers to be distorted, blurred, and severely attenuated. Therefore, the valuable information in such images cannot be fully extracted for further processing. To solve the problems mentioned above and improve the quality of underwater images of sea cucumbers, pre-processing of sea cucumber images is attracting increasing interest. This paper presents a new method based on contrast limited adaptive histogram equalization and wavelet transform (CLAHE-WT) to enhance sea cucumber image quality. CLAHE was used to process the underwater image to increase contrast based on the Rayleigh distribution, and WT was used for de-noising based on a soft threshold. Qualitative analysis indicated that the proposed method exhibited better performance in enhancing the quality and retaining the image details. For quantitative analysis, the test with 120 underwater images showed that for the proposed method, the mean square error (MSE), peak signal to noise ratio (PSNR), and entropy were 49.2098, 13.3909, and 6.6815, respectively. The proposed method outperformed three established methods in enhancing the visual quality of sea cucumber underwater gray images.
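
    A minimal CLAHE-plus-wavelet pipeline for a grayscale image using scikit-image and PyWavelets; the Rayleigh-distribution mapping and the authors' specific soft-threshold rule are not reproduced, and the clip limit, wavelet and decomposition level are illustrative choices.

```python
import numpy as np
import pywt
from skimage import data, exposure

def clahe_wt_enhance(gray, clip_limit=0.02, wavelet='db4', level=2):
    """Contrast-limited adaptive histogram equalization followed by
    soft wavelet thresholding of the detail coefficients."""
    gray = np.asarray(gray, dtype=float)
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12)   # CLAHE expects [0, 1]
    equalized = exposure.equalize_adapthist(gray, clip_limit=clip_limit)

    coeffs = pywt.wavedec2(equalized, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal subband
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(equalized.size))
    coeffs[1:] = [tuple(pywt.threshold(c, thr, mode='soft') for c in lvl)
                  for lvl in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)[:gray.shape[0], :gray.shape[1]]

# Illustrative run on a stock grayscale image
enhanced = clahe_wt_enhance(data.camera())
```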

  7. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Du, Hongchu

    2015-01-01

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM) either for a higher visual quality or for a more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviation at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on widely used techniques of low-pass filter and Wiener filter, which can efficiently reduce noise without noticeable artifacts even in HR(S)TEM micrographs with contrast of variation of background and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM

  8. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    International Nuclear Information System (INIS)

    Pita-Machado, Reinado; Perez-Diaz, Marlen; Lorenzo-Ginori, Juan V.; Bravo-Pino, Rolando

    2014-01-01

    Wavelet transform based de-noising, like wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods which work in the sinogram space do not have this problem, because at that point they always work on a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure. This can lead to false positive evaluations. The purpose of our present work is to compare different wavelet shrinkage de-noising filters applied in the sinogram space to reduce noise, particularly in images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.

  9. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    Science.gov (United States)

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing using real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.

  10. Portfolio Value at Risk Estimate for Crude Oil Markets: A Multivariate Wavelet Denoising Approach

    Directory of Open Access Journals (Sweden)

    Kin Keung Lai

    2012-04-01

    Full Text Available In the increasingly globalized economy, the major crude oil markets worldwide are seeing a higher level of integration, which results in a higher level of dependency and transmission of risks among different markets. Thus the risk of the typical multi-asset crude oil portfolio is influenced by the dynamic correlation among different assets, which has both normal and transient behaviors. This paper proposes a novel multivariate wavelet denoising based approach for estimating Portfolio Value at Risk (PVaR). The multivariate wavelet analysis is introduced to analyze the multi-scale behaviors of the correlation among different markets and the portfolio volatility behavior in the higher dimensional time scale domain. The heterogeneous data and noise behavior are addressed in the proposed multi-scale denoising based PVaR estimation algorithm, which also incorporates mainstream time series models to address other well known data features such as autocorrelation and volatility clustering. Empirical studies suggest that the proposed algorithm outperforms the benchmark Exponential Weighted Moving Average (EWMA) and DCC-GARCH models in terms of conventional performance evaluation criteria for model reliability.

  11. Characterization of a sequential pipeline approach to automatic tissue segmentation from brain MR Images

    International Nuclear Information System (INIS)

    Hou, Zujun; Huang, Su

    2008-01-01

    Quantitative analysis of gray matter and white matter in brain magnetic resonance imaging (MRI) is valuable for neuroradiology and clinical practice. Submission of large collections of MRI scans to pipeline processing is increasingly important. We characterized this process and suggest several improvements. To investigate tissue segmentation from brain MR images through a sequential approach, a pipeline that consecutively executes denoising, skull/scalp removal, intensity inhomogeneity correction and intensity-based classification was developed. The denoising phase employs a 3D extension of the Bayes-Shrink method. The inhomogeneity is corrected by an improved version of Dawant et al.'s method with automatic generation of reference points. The N3 method has also been evaluated. Subsequently the brain tissue is segmented into cerebrospinal fluid, gray matter and white matter by a generalized Otsu thresholding technique. Intensive comparisons with other sequential or iterative methods have been carried out using simulated and real images. The sequential approach, with judicious algorithm selection in each stage, is not only advantageous in speed, but can also attain at least as accurate a segmentation as iterative methods under a variety of noise or inhomogeneity levels. A sequential approach to tissue segmentation, which consecutively executes wavelet shrinkage denoising, scalp/skull removal, inhomogeneity correction and intensity-based classification, was developed to automatically segment the brain tissue into CSF, GM and WM from brain MR images. This approach is advantageous in several common applications, compared with other pipeline methods. (orig.)
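
    The denoising and final classification stages of such a pipeline can be approximated in 2D with scikit-image, using BayesShrink wavelet denoising followed by a three-class Otsu threshold as a stand-in for the generalized Otsu technique; skull/scalp removal and inhomogeneity correction are not shown, and the synthetic phantom is illustrative.

```python
import numpy as np
from skimage.restoration import denoise_wavelet
from skimage.filters import threshold_multiotsu

def denoise_and_segment(slice2d):
    """BayesShrink wavelet denoising, then 3-class Otsu labelling
    (a 2D stand-in for the CSF / GM / WM classification stage)."""
    den = denoise_wavelet(slice2d, method='BayesShrink', mode='soft',
                          rescale_sigma=True)
    thresholds = threshold_multiotsu(den, classes=3)
    labels = np.digitize(den, bins=thresholds)   # 0, 1, 2 by increasing intensity
    return den, labels

# Three-intensity phantom with additive Gaussian noise
rng = np.random.default_rng(3)
phantom = np.full((128, 128), 0.2)
phantom[32:96, 32:96] = 0.5
phantom[56:72, 56:72] = 0.8
noisy = phantom + rng.normal(0.0, 0.05, phantom.shape)
den, labels = denoise_and_segment(noisy)
```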

  12. A general framework for regularized, similarity-based image restoration.

    Science.gov (United States)

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desired spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. The specific form of the cost function also allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.

  13. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    Science.gov (United States)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.

  14. A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models

    International Nuclear Information System (INIS)

    Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A

    2012-01-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)
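
    The building block behind the fixed-point characterization is the proximity operator; for the L1 norm it has the closed form of elementwise soft-thresholding, and the prox of the L1 data-fidelity term follows by a shift. The sketch below shows only these operators, not the full L1/TV solver or its Gauss-Seidel acceleration.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1: elementwise soft-thresholding,
    argmin_x 0.5 * ||x - v||^2 + lam * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l1_fidelity(v, y, lam):
    """Proximity operator of the L1 data-fidelity term lam * ||x - y||_1."""
    return y + prox_l1(v - y, lam)

v = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(prox_l1(v, lam=0.5))   # [-1.5  0.   0.   0.   1. ]
```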

  15. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Chien-ming Chou

    2014-01-01

    Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.

  16. A novel fractal image compression scheme with block classification and sorting based on Pearson's correlation coefficient.

    Science.gov (United States)

    Wang, Jianji; Zheng, Nanning

    2013-09-01

    Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
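
    A minimal illustration of the APCC quantity that the classification and sorting rely on: the absolute Pearson correlation between a range block and candidate domain blocks. The full encoder (block classes, preset blocks, isometries, quantization) is not shown, and the helper names are illustrative.

```python
import numpy as np

def apcc(block_a, block_b):
    """Absolute value of Pearson's correlation coefficient between two blocks."""
    a, b = block_a.ravel(), block_b.ravel()
    if a.std() == 0 or b.std() == 0:
        return 0.0                       # constant blocks carry no correlation
    return abs(np.corrcoef(a, b)[0, 1])

def best_domain_match(range_block, domain_blocks):
    """Pick the domain block most affine-similar to the range block (max APCC)."""
    scores = [apcc(range_block, d) for d in domain_blocks]
    return int(np.argmax(scores)), max(scores)

rng = np.random.default_rng(4)
range_blk = rng.random((8, 8))
domains = [rng.random((8, 8)) for _ in range(50)] + [2.0 * range_blk + 0.3]
idx, score = best_domain_match(range_blk, domains)   # the affine copy wins, APCC ~ 1
```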

  17. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi; Idoughi, Ramzi; Choudhury, Biswarup; Heidrich, Wolfgang

    2017-01-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic

  18. Denoising solar radiation data using coiflet wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia); Hasan, Mohammad Khatim, E-mail: khatim@ftsm.ukm.my [Jabatan Komputeran Industri, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Sulaiman, Jumat, E-mail: jumat@ums.edu.my [Program Matematik dengan Ekonomi, Universiti Malaysia Sabah, Beg Berkunci 2073, 88999 Kota Kinabalu, Sabah (Malaysia); Ismail, Mohd Tahir [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM Minden, Penang (Malaysia)

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true signal and some error or noise. This noise may come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate with time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for this purpose. The numerical results clearly show that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.

  19. Segmentation-based retrospective shading correction in fluorescence microscopy E. coli images for quantitative analysis

    Science.gov (United States)

    Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.

    2009-10-01

    Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, which is usually referred to as intensity inhomogeneity, intensity non uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. Coli) images is proposed based on segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. Coli image can be segmented (based on its intensity value) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristics to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E.coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis especially for protein expression value comparison.

  20. A REVIEW WAVELET TRANSFORM AND FUZZY K-MEANS BASED IMAGE DE-NOISING METHOD

    OpenAIRE

    Nidhi Patel*, Asst. Prof. Pratik Kumar Soni

    2017-01-01

    This paper reviews image de-noising methods in the research area of image processing using fuzzy k-means clustering and the wavelet transform. The enormous amount of data necessary for images is a main reason for the growth of many areas within the research field of computer imaging, such as image processing and compression. To meet the requirements of the concerned research work, wavelet transforms and k-means clustering are applied. This can be done in order to discover more possible combinations that may lead to the finest de-noisin...

  1. A new technique for noise reduction at coronary CT angiography with multi-phase data-averaging and non-rigid image registration

    Energy Technology Data Exchange (ETDEWEB)

    Tatsugami, Fuminari; Higaki, Toru; Nakamura, Yuko; Yamagami, Takuji; Date, Shuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Minami-ku, Hiroshima (Japan); Fujioka, Chikako; Kiguchi, Masao [Hiroshima University, Department of Radiology, Minami-ku, Hiroshima (Japan); Kihara, Yasuki [Hiroshima University, Department of Cardiovascular Medicine, Minami-ku, Hiroshima (Japan)

    2015-01-15

    To investigate the feasibility of a newly developed noise reduction technique at coronary CT angiography (CTA) that uses multi-phase data-averaging and non-rigid image registration. Sixty-five patients underwent coronary CTA with prospective ECG-triggering. The range of the phase window was set at 70-80 % of the R-R interval. First, three sets of consecutive volume data at 70 %, 75 % and 80 % of the R-R interval were prepared. Second, we applied non-rigid registration to align the 70 % and 80 % images to the 75 % image. Finally, we performed weighted averaging of the three images and generated a de-noised image. The image noise and contrast-to-noise ratio (CNR) in the proximal coronary arteries between the conventional 75 % and the de-noised images were compared. Two radiologists evaluated the image quality using a 5-point scale (1, poor; 5, excellent). On de-noised images, mean image noise was significantly lower than on conventional 75 % images (18.3 HU ± 2.6 vs. 23.0 HU ± 3.3, P < 0.01) and the CNR was significantly higher (P < 0.01). The mean image quality score for conventional 75 % and de-noised images was 3.9 and 4.4, respectively (P < 0.01). Our method reduces image noise and improves image quality at coronary CTA. (orig.)
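
    Assuming the 70 % and 80 % phases have already been non-rigidly registered to the 75 % phase, the final step reduces to a weighted voxel-wise average; the registration itself is the hard part and is not shown here, and the weights below are illustrative rather than the authors' values.

```python
import numpy as np

def weighted_phase_average(vol_70_reg, vol_75, vol_80_reg, weights=(0.25, 0.5, 0.25)):
    """Weighted voxel-wise average of three registered cardiac phases.
    Averaging n roughly independent phases lowers noise by about sqrt(n)."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    stack = np.stack([vol_70_reg, vol_75, vol_80_reg], axis=0)
    return np.tensordot(w, stack, axes=1)

# Toy "registered" volumes: same anatomy, independent noise realizations
rng = np.random.default_rng(5)
base = rng.normal(100.0, 1.0, (4, 64, 64))
v70, v75, v80 = (base + rng.normal(0.0, 20.0, base.shape) for _ in range(3))
denoised = weighted_phase_average(v70, v75, v80)
```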

  2. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Duan, Jinming; Bai, Li; Tench, Christopher; Gottlob, Irene; Proudlock, Frank

    2015-01-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation. (paper)

  3. Simultaneous multi-component seismic denoising and reconstruction via K-SVD

    Science.gov (United States)

    Hou, Sian; Zhang, Feng; Li, Xiangyang; Zhao, Qiang; Dai, Hengchang

    2018-06-01

    Data denoising and reconstruction play an increasingly significant role in seismic prospecting for their value in enhancing effective signals, dealing with surface obstacles and reducing acquisition costs. In this paper, we propose a novel method to denoise and reconstruct multicomponent seismic data simultaneously. This method lies within the framework of machine learning, and the key points are defining a suitable weight function and a modified inner product operator. The purposes of these two processes are to perform missing-data machine learning when the random noise deviation is unknown, and to build a mathematical relationship for each component so as to incorporate all the information of the multi-component data. Two examples, using synthetic and real multicomponent data, demonstrate that the new method is a feasible alternative for multi-component seismic data processing.

  4. Imaging with Kantorovich--Rubinstein Discrepancy

    KAUST Repository

    Lellmann, Jan

    2014-01-01

    © 2014 Society for Industrial and Applied Mathematics. We propose the use of the Kantorovich-Rubinstein norm from optimal transport in imaging problems. In particular, we discuss a variational regularization model endowed with a Kantorovich- Rubinstein discrepancy term and total variation regularization in the context of image denoising and cartoon-texture decomposition. We point out connections of this approach to several other recently proposed methods such as total generalized variation and norms capturing oscillating patterns. We also show that the respective optimization problem can be turned into a convex-concave saddle point problem with simple constraints and hence can be solved by standard tools. Numerical examples exhibit interesting features and favorable performance for denoising and cartoon-texture decomposition.

  5. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on the feature analysis of the vibration signals measured for the condition diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise. The useful fault features are hidden in the heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method of adaptively selecting an appropriate threshold for the multiwavelet, based on the energy ratio of the multiwavelet coefficients, is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) using statistical theory has also been defined to evaluate the sensitiveness of an SP for condition diagnosis. An MPSO algorithm with adaptive inertia weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm, and it can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  6. Correction of defective pixels for medical and space imagers based on Ising Theory

    Science.gov (United States)

    Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer

    2014-09-01

    We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models we apply them to two important problems: (i) Digital Cameras in space damaged from cosmic radiation. (ii) Ultrasonic medical devices damaged from speckle noise. The results, as well as benchmark and comparisons, suggest in most of the cases a significant gain in PSNR and SSIM in comparison to other filters.

  7. Segmentation of breast ultrasound images based on active contours using neutrosophic theory.

    Science.gov (United States)

    Lotfollahi, Mahsa; Gity, Masoumeh; Ye, Jing Yong; Mahlooji Far, A

    2018-04-01

    Ultrasound imaging is an effective approach for diagnosing breast cancer, but it is highly operator-dependent. Recent advances in computer-aided diagnosis have suggested that it can assist physicians in diagnosis. Definition of the region of interest before computer analysis is still needed. Since manual outlining of the tumor contour is tedious and time-consuming for a physician, developing an automatic segmentation method is important for clinical application. The present paper presents a novel method to segment breast ultrasound images. It utilizes a combination of a region-based active contour and neutrosophic theory to overcome the natural properties of ultrasound images, including speckle noise and tissue-related textures. First, due to the inherent speckle noise and low contrast of these images, we utilized a non-local means filter and a fuzzy logic method for denoising and image enhancement, respectively. This paper presents an improved weighted region-scalable active contour to segment breast ultrasound images using a new feature derived from neutrosophic theory. This method has been applied to 36 breast ultrasound images. It achieves a true-positive rate, false-positive rate, and similarity of 95%, 6%, and 90%, respectively. The proposed method shows clear advantages over other conventional methods of active contour segmentation, i.e., region-scalable fitting energy and weighted region-scalable fitting energy.

  8. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  9. Seismic data interpolation and denoising by learning a tensor tight frame

    International Nuclear Information System (INIS)

    Liu, Lina; Ma, Jianwei; Plonka, Gerlind

    2017-01-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient. (paper)

  10. Seismic data interpolation and denoising by learning a tensor tight frame

    Science.gov (United States)

    Liu, Lina; Plonka, Gerlind; Ma, Jianwei

    2017-10-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.

  11. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established by the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model, and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
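
    As a quick numerical check of the identifiability bound quoted above, the following sketch (illustrative only, not taken from the paper) evaluates the largest admissible number of sources K for a given number of sensors M.

      def max_identifiable_sources(M: int) -> int:
          # Largest integer K satisfying K <= (M**4 - 2*M**3 + 7*M**2 - 6*M) / 8.
          return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

      for M in (4, 6, 8):
          print(M, max_identifiable_sources(M))   # e.g. M = 4 sensors admit up to 27 sources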

  12. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    Science.gov (United States)

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs, where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed

  13. Image Denoising and Segmentation Approach to Detect Tumor from Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

    Full Text Available The detection of brain tumors is a challenging problem due to the structure of the tumor cells in the brain. This work presents a systematic method that enhances the detection of brain tumor cells and analyzes functional structures by training and classifying samples with an SVM and segmenting tumor cells with a DWT-based algorithm. From the collected input MRI images, noise is first removed by applying a Wiener filtering technique. In the image enhancement phase, all color components of the MRI images are converted to a gray-scale image and the edges in the image are made clear to obtain better identification and improved image quality. In the segmentation phase, the DWT is applied to the MRI image to segment the gray-scale image. During post-processing, classification of the tumor is performed using the SVM classifier. The Wiener filter, DWT, and SVM segmentation strategies are used to locate and group the tumor position in the filtered MRI picture. An essential observation in this work is that the multi-stage approach uses a hierarchical classification strategy, which improves performance considerably and reduces computational cost in both time and memory. This classification strategy works accurately on all images and achieves an accuracy of 93%.

  14. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  15. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining the anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point cloud for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.

  16. Application of NASVD method in the denoising of airborne gamma-ray data

    International Nuclear Information System (INIS)

    Yang Jia; Ge Liangquan; Zhang Qingxian; Gu Yi

    2010-01-01

    A noise-reducing method for gamma-ray spectra based on multivariate statistical analysis, the NASVD method (Noise Adjusted Singular Value Decomposition), is introduced in the paper, together with its main idea and the algorithm for realizing it. The NASVD method is applied to an airborne gamma-ray data set; the results show that the method can dramatically remove statistical noise from raw gamma-ray spectra, and the quality of the processed data is much better than that of conventional spectral denoising methods. (authors)
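
    The following sketch illustrates the NASVD idea in its simplest form: spectra are rescaled so the Poisson counting noise becomes roughly uniform, a truncated SVD keeps only the leading components, and the scaling is undone. It is a schematic reconstruction from the description above, not the published algorithm, and the choice of eight components is an arbitrary placeholder.

      import numpy as np

      def nasvd_denoise(spectra, n_components=8):
          # spectra: (n_records, n_channels) matrix of airborne gamma-ray counts.
          mean_spec = spectra.mean(axis=0)
          scale = np.sqrt(np.maximum(mean_spec, 1e-12))
          scaled = spectra / scale                        # noise adjustment
          U, s, Vt = np.linalg.svd(scaled, full_matrices=False)
          k = n_components
          smoothed = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k reconstruction
          return smoothed * scale                         # undo the scaling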

  17. A data-driven approach for denoising GNSS position time series

    Science.gov (United States)

    Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin

    2017-12-01

    Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to - 3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.

  18. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce the diagnostic accuracy and hinder the physician's correct decision on patients. The dual tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning on this method for noise removal from the ECG signal has not been investigated yet. In this work, we provide a comprehensive study on the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm response. The evaluation results for the synthetic ECG signal corrupted by various types of noise show that the modified unified threshold and wavelet hyperbolic threshold de-noising method performs better for realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust for all kinds of noises with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
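
    For orientation only, the sketch below shows ordinary discrete-wavelet soft-threshold de-noising of a 1-D ECG trace with PyWavelets and a universal threshold; the paper itself tunes thresholds on the dual-tree wavelet transform, which requires a dedicated DT-WT implementation not reproduced here, and the wavelet and level choices are placeholders.

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=5):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # Robust noise estimate from the finest detail band (median absolute deviation).
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold
          shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(shrunk, wavelet)[: len(signal)]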

  19. Multiscale image restoration in nuclear medicine

    International Nuclear Information System (INIS)

    Jammal, G.

    2001-01-01

    This work develops, analyzes and validates a new multiscale restoration framework for denoising and deconvolution in photon limited imagery. Denoising means the estimation of the intensity of a Poisson process from a single observation of the counts, whereas deconvolution refers to the recovery of an object related through a linear system of equations to the intensity function of the Poisson data. The developed framework has been named DeQuant in analogy to Denoising when the noise is of Quantum nature. DeQuant works according to the following scheme. (1) It starts by testing the statistical significance of the wavelet coefficients of the Poisson process, based on the knowledge of their probability density function. (2) A regularization constraint assigns a new value to the non significant coefficients, enabling therewith to reduce artifacts and incorporate realistic prior information into the estimation process. Finally, (3) the application of the inverse wavelet transform yields the restored object. The whole procedure is iterated before obtaining the final estimate. The validation of DeQuant on nuclear medicine images showed excellent results. The obtained estimates enable a greater diagnostic confidence in clinical nuclear medicine since they give the physician access to the diagnosis-relevant information with a measure of the significance of the detected structures.

  20. Subspace based adaptive denoising of surface EMG from neurological injury patients

    Science.gov (United States)

    Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping

    2014-10-01

    Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noises produced by physiological and extrinsic/accidental origins, imposing difficulties for signal processing. Such interferences are difficult to mitigate using conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of the noise power, an adaptive method was presented to sequentially track the variation of the interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method can provide a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.

  1. Spectrum image analysis tool - A flexible MATLAB solution to analyze EEL and CL spectrum images.

    Science.gov (United States)

    Schmidt, Franz-Philipp; Hofer, Ferdinand; Krenn, Joachim R

    2017-02-01

    Spectrum imaging techniques, gaining simultaneously structural (image) and spectroscopic data, require appropriate and careful processing to extract information of the dataset. In this article we introduce a MATLAB based software that uses three dimensional data (EEL/CL spectrum image in dm3 format (Gatan Inc.'s DigitalMicrograph ® )) as input. A graphical user interface enables a fast and easy mapping of spectral dependent images and position dependent spectra. First, data processing such as background subtraction, deconvolution and denoising, second, multiple display options including an EEL/CL moviemaker and, third, the applicability on a large amount of data sets with a small work load makes this program an interesting tool to visualize otherwise hidden details. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are often polluted by noise, which makes data postprocessing difficult. To remove noise while preserving as much image detail as possible, this paper proposes an image filter algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. Firstly, the classical Shearlet transform is used to decompose the noisy image into many subwavelets at multiple scales and orientations. Secondly, weighting factors are assigned to the obtained subwavelets. Then, using the classical inverse Shearlet transform, a composite image composed of the weighted subwavelets is obtained. After that, a fast, rough evaluation method is designed to estimate the noise level of the new image; using this evaluation as the fitness function, PSO is adopted to find the optimal weighting factors, and after many iterations the optimal factors and the inverse Shearlet transform yield the final denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).

  3. Smartphones as image processing systems for prosthetic vision.

    Science.gov (United States)

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research.

  4. A Physics-Based Deep Learning Approach to Shadow Invariant Representations of Hyperspectral Images.

    Science.gov (United States)

    Windrim, Lloyd; Ramakrishnan, Rishi; Melkumyan, Arman; Murphy, Richard J

    2018-02-01

    This paper proposes the Relit Spectral Angle-Stacked Autoencoder, a novel unsupervised feature learning approach for mapping pixel reflectances to illumination invariant encodings. This work extends the Spectral Angle-Stacked Autoencoder so that it can learn a shadow-invariant mapping. The method is inspired by a deep learning technique, Denoising Autoencoders, with the incorporation of a physics-based model for illumination such that the algorithm learns a shadow invariant mapping without the need for any labelled training data, additional sensors, a priori knowledge of the scene or the assumption of Planckian illumination. The method is evaluated using datasets captured from several different cameras, with experiments to demonstrate the illumination invariance of the features and how they can be used practically to improve the performance of high-level perception algorithms that operate on images acquired outdoors.

  5. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework. So, it is difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, especially based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions for EMD appear poorly performing and are very time consuming. So in this paper, an extension to the 2-D space of the PDE-based approach is extensively described. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  6. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    Science.gov (United States)

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  7. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  8. Edge Detection from RGB-D Image Based on Structured Forests

    Directory of Open Access Journals (Sweden)

    Heng Zhang

    2016-01-01

    Full Text Available This paper looks into the fundamental problem in computer vision: edge detection. We propose a new edge detector using structured random forests as the classifier, which can make full use of RGB-D image information from Kinect. Before classification, the adaptive bilateral filter is used for the denoising processing of the depth image. As data sources, information of 13 channels from RGB-D image is computed. In order to train the random forest classifier, the approximation measurement of the information gain is used. All the structured labels at a given node are mapped to a discrete set of labels using the Principal Component Analysis (PCA method. NYUD2 dataset is used to train our structured random forests. The random forest algorithm is used to classify the RGB-D image information for extracting the edge of the image. In addition to the proposed methodology, the quantitative comparisons of different algorithms are presented. The results of the experiments demonstrate the significant improvements of our algorithm over the state of the art.

  9. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE, and the results are compared and discussed. Then, on the basis of these analytic results, a method for use in choosing the decomposition level (DL in wavelet threshold de-noising (WTD is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals and noise in noisy series, therefore the chosen DL is reliable, and the WTD results of time series can be improved.
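
    A plain reading of the wavelet energy entropy used above is sketched below: the energy of the detail coefficients at each decomposition level is normalized into a distribution and fed into a Shannon entropy. The exact definition in the paper may differ; the wavelet and maximum level are placeholders.

      import numpy as np
      import pywt

      def wavelet_energy_entropy(series, wavelet="db4", max_level=6):
          coeffs = pywt.wavedec(series, wavelet, level=max_level)
          energies = np.array([np.sum(c ** 2) for c in coeffs[1:]])   # detail bands only
          p = energies / energies.sum()
          return -np.sum(p * np.log(p + 1e-12))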

  10. Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes.

    Science.gov (United States)

    Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong

    2008-10-01

    We derive the second-order oriented partial-differential equations (PDEs) for denoising in electronic-speckle-pattern interferometry fringe patterns from two points of view. The first is based on variational methods, and the second is based on controlling diffusion direction. Our oriented PDE models make the diffusion along only the fringe orientation. The main advantage of our filtering method, based on oriented PDE models, is that it is very easy to implement compared with the published filtering methods along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to two computer-simulated and experimentally obtained speckle fringes and compare with related PDE models.

  11. Comparative analysis of different methods for image enhancement

    Institute of Scientific and Technical Information of China (English)

    吴笑峰; 胡仕刚; 赵瑾; 李志明; 李劲; 唐志军; 席在芳

    2014-01-01

    Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to implement image enhancement on gray-scale images using different techniques. After the fundamental methods of image enhancement processing are demonstrated, image enhancement algorithms based on the space and frequency domains are systematically investigated and compared. The advantages and defects of the above-mentioned algorithms are analyzed. The algorithms of wavelet-based image enhancement are also deduced and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is widely used for image enhancement. The image techniques are compared using the mean (μ), standard deviation (s), mean square error (MSE) and PSNR (peak signal-to-noise ratio). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement. The wavelet transform modulus maxima method is one of the best methods for image enhancement.
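
    The comparison metrics named above are standard; a compact way to compute them for an 8-bit gray-scale reference/processed image pair is sketched below (illustrative only, not the authors' code).

      import numpy as np

      def quality_metrics(reference, processed):
          ref = reference.astype(np.float64)
          proc = processed.astype(np.float64)
          mse = np.mean((ref - proc) ** 2)
          psnr = 10.0 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
          return {"mean": proc.mean(), "std": proc.std(), "MSE": mse, "PSNR": psnr}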

  12. Comparative analysis of chosen transforms in the context of de-noising harmonic signals

    Directory of Open Access Journals (Sweden)

    Artur Zacniewski

    2015-09-01

    Full Text Available In the article, a comparison of popular transforms used, among others, in denoising harmonic signals is presented. The division of signals submitted to mathematical analysis is shown, and chosen transforms such as the Short Time Fourier Transform, Wigner-Ville Distribution, Wavelet Transform and Discrete Cosine Transform are presented. A harmonic signal with added white noise was used in the study. During the study, the noise parameters were changed to analyze the effect of using a particular transform on the noisy signal. The importance of the right choice of transform and its parameters (different for each kind of transform) was shown. Small changes in parameters or different functions used in a transform can lead to considerably different results. Keywords: denoising of harmonic signals, wavelet transform, discrete cosine transform, DCT

  13. Spectral data de-noising using semi-classical signal analysis: application to localized MRS

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2016-09-05

    In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to Fourier transformation, SCSA decomposes the input real positive MR spectrum into a set of linear combinations of squared eigenfunctions equivalently represented by localized functions with shape derived from the potential function of the Schrodinger operator. In this manner, the MRS spectral peaks represented as a sum of these 'shaped like' functions are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for an accurate data quantification.

  14. Spectral data de-noising using semi-classical signal analysis: application to localized MRS

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Zhang, Jiayu; Achten, Eric; Serrai, Hacene

    2016-01-01

    In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to Fourier transformation, SCSA decomposes the input real positive MR spectrum into a set of linear combinations of squared eigenfunctions equivalently represented by localized functions with shape derived from the potential function of the Schrodinger operator. In this manner, the MRS spectral peaks represented as a sum of these 'shaped like' functions are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for an accurate data quantification.

  15. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy for a single noise source is 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, with an improvement of more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
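
    As a rough illustration of the auto regressive modelling step, the sketch below fits a plain AR(3) model to a gyro drift record by least squares; the paper's improved AR model and the modified Sage-Husa adaptive Kalman filter built on top of it are not reproduced here.

      import numpy as np

      def fit_ar(x, order=3):
          # Least-squares fit of x[n] ~ a1*x[n-1] + ... + a_order*x[n-order].
          x = np.asarray(x, dtype=np.float64)
          X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
          y = x[order:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs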

  16. Multiscale vision model for event detection and reconstruction in two-photon imaging data

    DEFF Research Database (Denmark)

    Brazhe, Alexey; Mathiesen, Claus; Lind, Barbara Lykke

    2014-01-01

    on a modified multiscale vision model, an object detection framework based on the thresholding of wavelet coefficients and hierarchical trees of significant coefficients followed by nonlinear iterative partial object reconstruction, for the analysis of two-photon calcium imaging data. The framework is discussed...... of the multiscale vision model is similar in the denoising, but provides a better segmentation of the image into meaningful objects, whereas other methods need to be combined with dedicated thresholding and segmentation utilities....

  17. Conservative image transformations with restoration and scale-space properties

    NARCIS (Netherlands)

    Weickert, J.A.; Haar Romenij, ter B.M.; Viergever, M.A.; Delogne, P.

    1996-01-01

    Many image processing applications require to solve problems such as denoising with edge enhancement, preprocessing for segmentation, or the completion of interrupted lines. This may be accomplished by applying a suitable nonlinear anisotropic diffusion process to the image. Its diffusion tensor is

  18. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  19. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    Science.gov (United States)

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Apoptosis protein subcellular localization information is very important for understanding the mechanism of programmed cell death and for drug development. The prediction of the subcellular localization of an apoptosis protein is still a challenging task, and such predictions help to understand protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); then the extracted feature information is denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to the SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, which will be a supplementary tool for future proteomics research. PMID:29296195

  20. 3D seismic data de-noising and reconstruction using Multichannel Time Slice Singular Spectrum Analysis

    Science.gov (United States)

    Rekapalli, Rajesh; Tiwari, R. K.; Sen, Mrinal K.; Vedanti, Nimisha

    2017-05-01

    Noises and data gaps complicate seismic data processing and subsequently cause difficulties in the geological interpretation. We discuss a recent development and application of the Multi-channel Time Slice Singular Spectrum Analysis (MTSSSA) for 3D seismic data de-noising in the time domain. In addition, L1-norm-based simultaneous data gap filling of 3D seismic data using MTSSSA is also discussed. We discriminated the noises from individual time slices of 3D volumes by analyzing the eigen triplets of the trajectory matrix. We first tested the efficacy of the method on 3D synthetic seismic data contaminated with noise and then applied it to the post stack seismic reflection data acquired from the Sleipner CO2 storage site (pre and post CO2 injection) in Norway. Our analysis suggests that the MTSSSA algorithm is effective in enhancing the S/N for better identification of amplitude anomalies along with simultaneous data gap filling. The bright spots identified in the de-noised data indicate upward migration of CO2 towards the top of the Utsira formation. The reflections identified by applying MTSSSA to pre and post injection data correlate well with the geology of the Southern Viking Graben (SVG).

  1. Dictionary-enhanced imaging cytometry

    Science.gov (United States)

    Orth, Antony; Schaak, Diane; Schonbrun, Ethan

    2017-02-01

    State-of-the-art high-throughput microscopes are now capable of recording image data at a phenomenal rate, imaging entire microscope slides in minutes. In this paper we investigate how a large image set can be used to perform automated cell classification and denoising. To this end, we acquire an image library consisting of over one quarter-million white blood cell (WBC) nuclei together with CD15/CD16 protein expression for each cell. We show that the WBC nucleus images alone can be used to replicate CD expression-based gating, even in the presence of significant imaging noise. We also demonstrate that accurate estimates of white blood cell images can be recovered from extremely noisy images by comparing with a reference dictionary. This has implications for dose-limited imaging when samples belong to a highly restricted class such as a well-studied cell type. Furthermore, large image libraries may endow microscopes with capabilities beyond their hardware specifications in terms of sensitivity and resolution. We call for researchers to crowd source large image libraries of common cell lines to explore this possibility.
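
    A bare-bones version of the dictionary lookup described above might look like the sketch below: a noisy cell image is estimated as the average of its nearest neighbours in a reference library. The authors' actual estimator is likely more elaborate, and the neighbour count is a placeholder.

      import numpy as np

      def dictionary_denoise(noisy, library, k=10):
          # noisy: flattened image vector; library: (n_refs, n_pixels) reference matrix.
          dists = np.sum((library - noisy) ** 2, axis=1)
          nearest = np.argsort(dists)[:k]
          return library[nearest].mean(axis=0)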

  2. An energy kurtosis demodulation technique for signal denoising and bearing fault detection

    International Nuclear Information System (INIS)

    Wang, Wilson; Lee, Hewen

    2013-01-01

    Rolling element bearings are commonly used in rotary machinery. Reliable bearing fault detection techniques are very useful in industries for predictive maintenance operations. Bearing fault detection still remains a very challenging task especially when defects occur on rotating bearing components because the fault-related features are non-stationary in nature. In this work, an energy kurtosis demodulation (EKD) technique is proposed for bearing fault detection especially for non-stationary signature analysis. The proposed EKD technique firstly denoises the signal by using a maximum kurtosis deconvolution filter to counteract the effect of signal transmission path so as to highlight defect-associated impulses. Next, the denoised signal is modulated over several frequency bands; a novel signature integration strategy is proposed to enhance feature characteristics. The effectiveness of the proposed EKD fault detection technique is verified by a series of experimental tests corresponding to different bearing conditions. (paper)

  3. Denoising traffic collision data using ensemble empirical mode decomposition (EEMD) and its application for constructing continuous risk profile (CRP).

    Science.gov (United States)

    Kim, Nam-Seog; Chung, Koohong; Ahn, Seongchae; Yu, Jeong Whon; Choi, Keechoo

    2014-10-01

    Filtering out the noise in traffic collision data is essential in reducing false positive rates (i.e., requiring safety investigation of sites where it is not needed) and can assist government agencies in better allocating limited resources. Previous studies have demonstrated that denoising traffic collision data is possible when there exists a true known high collision concentration location (HCCL) list to calibrate the parameters of a denoising method. However, such a list is often not readily available in practice. To this end, the present study introduces an innovative approach for denoising traffic collision data using the Ensemble Empirical Mode Decomposition (EEMD) method which is widely used for analyzing nonlinear and nonstationary data. The present study describes how to transform the traffic collision data before the data can be decomposed using the EEMD method to obtain set of Intrinsic Mode Functions (IMFs) and residue. The attributes of the IMFs were then carefully examined to denoise the data and to construct Continuous Risk Profiles (CRPs). The findings from comparing the resulting CRP profiles with CRPs in which the noise was filtered out with two different empirically calibrated weighted moving window lengths are also documented, and the results and recommendations for future research are discussed. Published by Elsevier Ltd.

  4. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    Science.gov (United States)

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores the differences between two methods for blind source separation within the frame of ECG de-noising. The first method is joint approximate diagonalization of eigenmatrices (JADE), which is based on estimation of the fourth-order cross-cumulant tensor and its diagonalization. The second one is the statistical method known as canonical correlation analysis (CCA), which is based on estimation of correlation matrices between two multidimensional variables. Both methods were used within a method that combines the blind source separation algorithm with a decision tree. The evaluation was made on a large database of 382 long-term ECG signals and the results were examined. The biggest difference was found in the results for 50 Hz power line interference, where the CCA algorithm completely failed. Thus, the main strength of CCA lies in the estimation of unstructured noise within the ECG. The JADE algorithm has a larger computational complexity, so CCA performed faster when estimating the components.

  5. Multi-Channel Electroencephalogram (EEG) Signal Acquisition and its Effective Channel selection with De-noising Using AWICA for Biometric System

    OpenAIRE

    B.Sabarigiri; D.Suganyadevi

    2014-01-01

    The embedding of low-cost electroencephalogram (EEG) sensors in wireless headsets has made improved authentication based on brain wave signals a practical opportunity. In this paper, signal acquisition along with effective multi-channel selection from a specific area of the brain and denoising using AWICA methods are proposed for EEG-based personal identification. To develop the identification system, the steps are as follows: (i) the high-quality device with the least ...

  6. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2014-01-01

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.

  7. The possibilities of compressed-sensing-based Kirchhoff prestack migration

    KAUST Repository

    Aldawood, Ali

    2014-05-08

    An approximate subsurface reflectivity distribution of the earth is usually obtained through the migration process. However, conventional migration algorithms, including those based on the least-squares approach, yield structure descriptions that are slightly smeared and of low resolution caused by the common migration artifacts due to limited aperture, coarse sampling, band-limited source, and low subsurface illumination. To alleviate this problem, we use the fact that minimizing the L1-norm of a signal promotes its sparsity. Thus, we formulated the Kirchhoff migration problem as a compressed-sensing (CS) basis pursuit denoise problem to solve for highly focused migrated images compared with those obtained by standard and least-squares migration algorithms. The results of various subsurface reflectivity models revealed that solutions computed using the CS based migration provide a more accurate subsurface reflectivity location and amplitude. We applied the CS algorithm to image synthetic data from a fault model using dense and sparse acquisition geometries. Our results suggest that the proposed approach may still provide highly resolved images with a relatively small number of measurements. We also evaluated the robustness of the basis pursuit denoise algorithm in the presence of Gaussian random observational noise and in the case of imaging the recorded data with inaccurate migration velocities.
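
    The sparsity-promoting idea behind the basis pursuit denoise formulation can be sketched with an ISTA-style proximal gradient loop for min_x 0.5*||Ax - b||^2 + lam*||x||_1, as below; the dense matrix A here is only a placeholder and is not a Kirchhoff migration operator, and the regularization weight and iteration count are arbitrary.

      import numpy as np

      def ista(A, b, lam=0.1, n_iter=200):
          L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b)
              z = x - grad / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x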

  8. A Method for Denoising Image Contours

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2017-12-01

    Full Text Available The edge detection techniques have to compromise between sensitivity and noise. In order for the main contours to be uninterrupted, the level of sensitivity has to be raised, which however has the negative effect of producing a multitude of insignificant contours (noise. This article proposes a method of removing this noise, which acts directly on the binary representation of the image contours.

  9. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    International Nuclear Information System (INIS)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah; Manurung, Yupiter HP

    2014-01-01

    In image processing, it is important to remove noise without affecting the image structure as well as preserving all the edges. Perona Malik Anisotropic Diffusion (PMAD) is a PDE-based model which is suitable for image denoising and edge detection problems. In this paper, the Peaceman Rachford scheme is applied on PMAD to remove unwanted noise as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results obtained show that the Peaceman Rachford scheme improves the image quality substantially well based on the Peak Signal to Noise Ratio (PSNR). The Peaceman Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic image

  10. Perona Malik anisotropic diffusion model using Peaceman Rachford scheme on digital radiographic image

    Energy Technology Data Exchange (ETDEWEB)

    Halim, Suhaila Abd; Razak, Rohayu Abd; Ibrahim, Arsmah [Center of Mathematics Studies, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia); Manurung, Yupiter HP [Advanced Manufacturing Technology Excellence Center (AMTEx), Faculty of Mechanical Engineering, Universiti Teknologi MARA, 40450 Shah Alam. Selangor DE (Malaysia)

    2014-06-19

    In image processing, it is important to remove noise without affecting the image structure as well as preserving all the edges. Perona Malik Anisotropic Diffusion (PMAD) is a PDE-based model which is suitable for image denoising and edge detection problems. In this paper, the Peaceman Rachford scheme is applied on PMAD to remove unwanted noise as the scheme is efficient and unconditionally stable. The capability of the scheme to remove noise is evaluated on several digital radiography weld defect images computed using MATLAB R2009a. Experimental results obtained show that the Peaceman Rachford scheme improves the image quality substantially well based on the Peak Signal to Noise Ratio (PSNR). The Peaceman Rachford scheme used in solving the PMAD model successfully removes unwanted noise in digital radiographic image.
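
    For reference, the underlying Perona Malik diffusion can be advanced with a simple explicit update as sketched below; the papers above instead use the unconditionally stable Peaceman Rachford scheme, which is not reproduced here, and the parameter values are placeholders.

      import numpy as np

      def perona_malik(img, n_iter=50, kappa=20.0, dt=0.2):
          u = np.asarray(img, dtype=np.float64)

          def c(g):                                  # edge-stopping function
              return np.exp(-(g / kappa) ** 2)

          for _ in range(n_iter):
              # Differences to the four neighbours (periodic borders via np.roll,
              # used here purely for brevity of the sketch).
              dn = np.roll(u, 1, axis=0) - u
              ds = np.roll(u, -1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u = u + dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
          return u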

  11. Adaptive wiener filter based on Gaussian mixture distribution model for denoising chest X-ray CT image

    International Nuclear Information System (INIS)

    Tabuchi, Motohiro; Yamane, Nobumoto; Morikawa, Yoshitaka

    2008-01-01

    In recent decades, X-ray CT imaging has become more important as a result of its high-resolution performance. However, it is well known that the X-ray dose is insufficient in the techniques that use low-dose imaging in health screening or thin-slice imaging in work-up. Therefore, the degradation of CT images caused by the streak artifact frequently becomes problematic. In this study, we applied a Wiener filter (WF) using the universal Gaussian mixture distribution model (UNI-GMM) as a statistical model to remove streak artifact. In designing the WF, it is necessary to estimate the statistical model and the precise co-variances of the original image. In the proposed method, we obtained a variety of chest X-ray CT images using a phantom simulating a chest organ, and we estimated the statistical information using the images for training. The results of simulation showed that it is possible to fit the UNI-GMM to the chest X-ray CT images and reduce the specific noise. (author)
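
    For comparison only, a classical local adaptive Wiener filter from SciPy applied to a CT slice is shown below; it does not implement the UNI-GMM model described above, the input array is a random placeholder, and the window size is arbitrary.

      import numpy as np
      from scipy.signal import wiener

      slice_img = np.random.rand(512, 512)            # placeholder CT slice
      denoised = wiener(slice_img, mysize=(5, 5))     # 5x5 local Wiener estimate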

  12. Sadhana | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    Sadhana, Volume 39, Issue 4, August 2014, pp 879-900. An adaptive image denoising method based on local parameters optimization. Hari Om, Mantosh Biswas. In image denoising algorithms, the noise is handled by either ...

  13. LCD denoise and the vector mutual information method in the application of the gear fault diagnosis under different working conditions

    Science.gov (United States)

    Xiangfeng, Zhang; Hong, Jiang

    2018-03-01

    In this paper, the full vector LCD method is proposed to solve the misjudgment problem caused by changes in working conditions. First, the signals from different working conditions are decomposed by LCD to obtain Intrinsic Scale Components (ISCs) whose instantaneous frequencies have physical significance. Then, the cross-correlation coefficient between each ISC and the original signal is calculated, and the signal is denoised based on the principle of minimum mutual information. Finally, the sum of the absolute vector mutual information between the samples under different working conditions and the denoised ISCs is calculated and used as the feature for classification with a support vector machine (SVM). An experiment on a wind turbine gearbox vibration platform shows that this method can identify fault characteristics under different working conditions. The advantage of this method is that it reduces the dependence on subjective experience and identifies faults directly from the original vibration signal data, which gives it high engineering value.

  14. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
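    The wavelet shrinkage step can be illustrated with a standard soft-thresholding sketch using PyWavelets. Note that the paper combines wavelet-packet shrinkage with a Douglas-Rachford iteration; the plain discrete wavelet transform, wavelet name and universal threshold below are simplifying assumptions.

```python
import numpy as np
import pywt

def wavelet_shrink(img, wavelet='db4', level=3, sigma=None):
    """Soft-threshold wavelet shrinkage of a 2D image (sketch)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal subband
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    new_coeffs = [coeffs[0]]
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thr, mode='soft') for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)
```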

  15. Denoising multicriterion iterative reconstruction in emission spectral tomography

    Science.gov (United States)

    Wan, Xiong; Yin, Aihan

    2007-03-01

    In the study of optical testing, the computed tomography technique has been widely adopted to reconstruct three-dimensional distributions of physical parameters of various kinds of fluid fields, such as flames, plasmas, etc. In most cases, projection data are contaminated by noise due to environmental disturbance, instrumental inaccuracy, and other random interruptions. To improve the reconstruction performance in noisy cases, an algorithm that combines a self-adaptive prefiltering denoising approach (SPDA) with a multicriterion iterative reconstruction (MCIR) is proposed and studied. First, the level of noise is approximately estimated with a frequency-domain statistical method. Then the cutoff frequency of a Butterworth low-pass filter is established based on the evaluated noise energy. After the SPDA processing, the MCIR algorithm is adopted for limited-view optical computed tomography reconstruction. Simulated reconstruction of two test phantoms and a flame emission spectral tomography experiment were employed to evaluate the performance of SPDA-MCIR in noisy cases. Comparison with some traditional methods and the experimental results showed that the SPDA-MCIR combination gives an obvious improvement in the case of noisy data reconstructions.
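    Once the cutoff has been chosen, the prefiltering stage reduces to a low-pass operation; a minimal Butterworth sketch with SciPy is shown below. The cutoff value would come from the SPDA noise estimate, which is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_projection(projection, cutoff, order=4):
    """Zero-phase Butterworth low-pass filtering of one projection row.

    `cutoff` is the normalised cutoff frequency (fraction of Nyquist);
    in the paper it would be derived from the estimated noise energy,
    here it is simply assumed to be known.
    """
    b, a = butter(order, cutoff, btype='low')
    return filtfilt(b, a, projection)
```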

  16. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: One is the total variation and wavelet based denoising that can be quickly solved by several recent numerical methods, whereas the other one involves a linear inversion which is solved by the optimal first order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.

  17. Image understanding using sparse representations

    CERN Document Server

    Thiagarajan, Jayaraman J; Turaga, Pavan; Spanias, Andreas

    2014-01-01

    Image understanding has been playing an increasingly crucial role in several inverse problems and computer vision. Sparse models form an important component in image understanding, since they emulate the activity of neural receptors in the primary visual cortex of the human brain. Sparse methods have been utilized in several learning problems because of their ability to provide parsimonious, interpretable, and efficient models. Exploiting the sparsity of natural signals has led to advances in several application areas including image compression, denoising, inpainting, compressed sensing, blin

  18. An Interactive Procedure to Preserve the Desired Edges during the Image Processing of Noise Reduction

    Directory of Open Access Journals (Sweden)

    Lin-Tsang Lee

    2010-01-01

    Full Text Available The paper proposes a new four-stage procedure to preserve the desired edges during the image processing of noise reduction. A denoised image is obtained from the noisy image at the first stage of the procedure. At the second stage, an edge map is obtained with the Canny edge detector to find the edges of the object contours. Manual modification of the edge map at the third stage is optional, to capture all the desired edges of the object contours. At the final stage, a new method called the Edge Preserved Inhomogeneous Diffusion Equation (EPIDE) is used to smooth the noisy image, or the image already denoised at the first stage, while preserving its edges. The Optical Character Recognition (OCR) results in the experiments show that the proposed procedure gives the best recognition result because of its edge-preservation capability.

  19. Fault diagnosis of rolling bearing based on second generation wavelet denoising and morphological filter

    International Nuclear Information System (INIS)

    Meng, Lingjie; Xiang, Jiawei; Zhong, Yongteng; Song, Wenlei

    2015-01-01

    The response of a defective rolling bearing is often characterized by the presence of periodic impulses. However, the in-situ sampled vibration signal is ordinarily mixed with ambient noise, easily interfered with, and sometimes even submerged by it. A hybrid approach combining second-generation wavelet denoising with a morphological filter is presented. The raw signal is first purified using the second-generation wavelet. The difference between the closing and opening operators is then employed as the morphological filter to extract the periodic impulsive features from the purified signal, and the defect information is easily extracted from the corresponding frequency spectrum. The proposed approach is evaluated on simulations and on vibration signals from defective bearings with an inner race fault, an outer race fault, a rolling element fault and compound faults, respectively. The results show that the ambient noise can be fully suppressed and the defect information of these bearings is well extracted, which demonstrates that the approach is feasible and effective for rolling bearing fault detection.
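    The closing-minus-opening morphological filter mentioned above has a compact expression with SciPy's grey-scale morphology; the structuring-element size below is a placeholder that would normally be tuned to the fault period.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morph_impulse_filter(signal, size=5):
    """Closing-minus-opening morphological filter for a 1D vibration signal.

    The difference of the two operators emphasises the periodic impulses
    produced by a bearing defect; `size` is the length of the flat
    structuring element.
    """
    return grey_closing(signal, size=size) - grey_opening(signal, size=size)
```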

  20. Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment.

    Science.gov (United States)

    Ithapu, Vamsi K; Singh, Vikas; Okonkwo, Ozioma C; Chappell, Richard J; Dowling, N Maritza; Johnson, Sterling C

    2015-12-01

    The mild cognitive impairment (MCI) stage of Alzheimer's disease (AD) may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial and some may not have underlying AD pathology. Identifying those MCIs who are most likely to decline during a trial and thus most likely to benefit from treatment will improve trial efficiency and power to detect treatment effects. To this end, using multimodal, imaging-derived, inclusion criteria may be especially beneficial. Here, we present a novel multimodal imaging marker that predicts future cognitive and neural decline from [F-18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging, based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that using rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared with the no-enrichment regime and leads to smaller trials with high statistical power, compared with existing methods. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  1. Edge-preserving color image denoising through tensor voting

    OpenAIRE

    Moreno, Rodrigo; García, Miguel Ángel; Puig, Domenec; Juli, Carme

    2011-01-01

    This is the author’s version of a work that was accepted for publication in Computer Vision and Image Understanding. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Vision and Image Understanding, 115, 11 (2011) DOI:...

  2. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have been witnessing a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Achieving a further increase in resolution raises numerous challenges. Due to the reduced pixel size, the amount of light captured also decreases, leading to an increased noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations. Even when high-quality lenses are used, some chromatic aberration artefacts will remain. The noise level increases further due to the higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors, relying on Color Filter Arrays. In order to obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and avoid introducing new artefacts. In order to reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  3. Approaches for improving image quality in magnetic induction tomography

    International Nuclear Information System (INIS)

    Maimaitijiang, Y; Roula, M A; Kahlert, J

    2010-01-01

    Magnetic induction tomography (MIT) is a contactless and non-invasive method for imaging the passive electrical properties of objects. Measuring the weak signal produced by eddy currents within biological soft tissues can be challenging in the presence of noise and the large signals resulting from the direct excitation–detection coil coupling. To detect haemorrhagic stroke in the brain, for instance, high measurement accuracy is required to enable images with enough contrast to differentiate between normal and haemorrhaged brain tissues. The reconstructed images are often very sensitive to inevitable measurement noise from the environment, system instabilities and patient-related artefacts such as movement and sweating. We propose methods for mitigating signal noise and improving image reconstruction. We evaluated and compared the use of a range of wavelet transforms for signal denoising. Adaptive regularization methods including the L-curve, generalized cross validation (GCV) and noise estimation were also compared. We evaluated all these methods with measurements of in vitro tissues resembling a peripheral haemorrhagic cerebral stroke, created by placing a bio-membrane package filled with 10 ml of blood in a swine brain of 100 ml. We show that wavelet packet denoising combined with adaptive regularization can improve the quality of reconstructed images

  4. Alexander fractional differential window filter for ECG denoising.

    Science.gov (United States)

    Verma, Atul Kumar; Saini, Indu; Saini, Barjinder Singh

    2018-06-01

    The electrocardiogram (ECG) non-invasively monitors the electrical activity of the heart. During recording and transmission, ECG signals are often corrupted by various types of noise. Minimizing these noises facilitates accurate detection of various anomalies. In the present paper, an Alexander fractional differential window (AFDW) filter is proposed for ECG signal denoising. The designed filter is based on the concept of the generalized Alexander polynomial and the R-L differential equation of fractional calculus. This concept is utilized to formulate a window that acts as a forward filter. Thereafter, the backward filter is constructed by reversing the coefficients of the forward filter. The proposed AFDW filter is then obtained by averaging the forward and backward filter coefficients. The performance of the designed AFDW filter is validated by adding various types of noise to the original ECG signals obtained from the MIT-BIH arrhythmia database. Two non-diagnostic measures, i.e., SNR and MSE, and one diagnostic measure, i.e., wavelet energy based diagnostic distortion (WEDD), have been employed for the quantitative evaluation of the designed filter. Extensive experiments on all 48 records of the MIT-BIH arrhythmia database resulted in an average SNR of 22.014 ± 3.806365, 14.703 ± 3.790275, 13.3183 ± 3.748230; an average MSE of 0.001458 ± 0.00028, 0.0078 ± 0.000319, 0.01061 ± 0.000472; and an average WEDD value of 0.020169 ± 0.01306, 0.1207 ± 0.061272, 0.1432 ± 0.073588, for ECG signals contaminated by power-line, random, and white Gaussian noise, respectively. A new metric named the morphological power preservation measure (MPPM) is also proposed that accounts for the power preservation (as indicated by PSD plots) and the QRS morphology. The proposed AFDW filter retained much of the original (clean) signal power without any significant morphological distortion, as validated by the MPPM measure that were 0
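    For reference, the two non-diagnostic measures quoted above can be computed as follows, assuming the common definitions of SNR (in dB) and MSE; the paper's exact normalisation may differ.

```python
import numpy as np

def snr_db(clean, denoised):
    """Output SNR (dB) of a denoised ECG relative to the clean record."""
    residual = clean - denoised
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))

def mse(clean, denoised):
    """Mean squared error between the clean and denoised signals."""
    return np.mean((clean - denoised) ** 2)
```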

  5. Curvature correction of retinal OCTs using graph-based geometry detection

    Science.gov (United States)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second part can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a full automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. The error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were detected for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability in detection of curvature in strongly pathological images that surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.

  6. Quantitative assessment of pain-related thermal dysfunction through clinical digital infrared thermal imaging

    Directory of Open Access Journals (Sweden)

    Frize Monique

    2004-06-01

    Full Text Available Abstract Background The skin temperature distribution of a healthy human body exhibits a contralateral symmetry. Some nociceptive and most neuropathic pain pathologies are associated with an alteration of the thermal distribution of the human body. Since the dissipation of heat through the skin occurs for the most part in the form of infrared radiation, infrared thermography is the method of choice to study the physiology of thermoregulation and the thermal dysfunction associated with pain. Assessing thermograms is a complex and subjective task that can be greatly facilitated by computerised techniques. Methods This paper presents techniques for automated computerised assessment of thermal images of pain, in order to facilitate the physician's decision making. First, the thermal images are pre-processed to reduce the noise introduced during the initial acquisition and to extract the irrelevant background. Then, potential regions of interest are identified using fixed dermatomal subdivisions of the body, isothermal analysis and segmentation techniques. Finally, we assess the degree of asymmetry between contralateral regions of interest using statistical computations and distance measures between comparable regions. Results The wavelet domain-based Poisson noise removal techniques compared favourably against Wiener and other wavelet-based denoising methods, when qualitative criteria were used. They were shown to slightly improve the subsequent analysis. The automated background removal technique based on thresholding and morphological operations was successful for both noisy and denoised images, with a correct removal rate of 85% of the images in the database. The automation of the regions of interest (ROIs) delimitation process was achieved successfully for images with a good contralateral symmetry. Isothermal division complemented well the fixed ROIs division based on dermatomes, giving a more accurate map of potentially abnormal regions. The measure

  7. Combination of oriented partial differential equation and shearlet transform for denoising in electronic speckle pattern interferometry fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia

    2017-04-01

    It is a key step to remove the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns. In the spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool. In the transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe patterns denoising, which is a combination of second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into the ESPI fringe patterns denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe patterns denoising in transform domain and spatial domain, respectively. The experimental results have demonstrated the good performance of the proposed SOOPDE-Shearlet.

  8. Infrared image enhancement with learned features

    Science.gov (United States)

    Fan, Zunlin; Bi, Duyan; Ding, Wenshan

    2017-11-01

    Due to variations in the imaging environment and the limitations of infrared imaging sensors, infrared images usually have several drawbacks: low contrast, few details and indistinct edges. Hence, to promote the applications of infrared imaging technology, it is essential to improve the quality of infrared images. To enhance image details and edges adaptively, we propose an infrared image enhancement method under a new image enhancement scheme. On the one hand, on the assumption that high-quality images exhibit more evident structure singularities than low-quality images, we propose an image enhancement scheme that depends on the extraction of structure features. On the other hand, unlike current image enhancement algorithms based on deep learning networks that try to train end-to-end mappings for improving image quality, we analyze the significance of the first layer in a Stacked Sparse Denoising Auto-encoder and propose a novel feature extraction for the proposed image enhancement scheme. Experimental results show that the novel feature extraction is free from edge artifacts such as blocking artifacts, "gradient reversal", and pseudo contours. Compared with other enhancement methods, the proposed method achieves the best performance in infrared image enhancement.

  9. An approach of point cloud denoising based on improved bilateral filtering

    Science.gov (United States)

    Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin

    2018-04-01

    An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is used to process the depth images. First, the mobile platform can move flexibly and its control interface is convenient to operate. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to the depth images obtained by the Kinect sensor, and the results show that noise removal is improved compared with bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth-image processing time and improves the quality of the resulting point cloud.
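    A plain bilateral filter, as implemented in OpenCV, is the natural baseline for the LBF variant described above; the depth array and the filter parameters below are placeholders.

```python
import cv2
import numpy as np

# depth: single-channel depth image from an RGB-D sensor (placeholder data)
depth = np.random.rand(480, 640).astype(np.float32)

# Standard bilateral filter: a 9-pixel neighbourhood smooths depth noise
# while the range term preserves depth discontinuities. The paper's LBF
# restricts the computation to a local window to reduce the running time.
smoothed = cv2.bilateralFilter(depth, 9, 0.05, 5.0)
```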

  10. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    Science.gov (United States)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

    The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research inquiries. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach. Thus, improvements in bonding quality are relatively important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. In particular, the traditional testing methods mainly include infrared testing, tensile testing, and strength testing, despite the fact that using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result, can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments that consist of three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.

  11. Impulsive noise suppression in color images based on the geodesic digital paths

    Science.gov (United States)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.

  12. Integration of speckle de-noising and image segmentation using ...

    Indian Academy of Sciences (India)

    2Department of Electronics and Communication Engineering, National Institute of Technology Karnataka,. Surathkal, Mangalore 575 025, India. ... cal images obtained from the satellites are often prone to bad climatic conditions and hence ... (2009) for satellite image segmentation. Mean shift segmentation (MSS) is a non-.

  13. Signal de-noising methods for fault diagnosis and troubleshooting at CANDU{sup ®} stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, Elnara; Gabbar, Hossam A., E-mail: hossam.gabbar@uoit.ca

    2014-12-15

    Highlights: • Fault modelling using a Fault Semantic Network (FSN). • Intelligent filtering techniques for signal de-noising in NPPs. • Signal feature extraction applied as integrated with the FSN. • Increased signal-to-noise ratio (SNR). - Abstract: Over the past several years a number of domestic CANDU{sup ®} stations have experienced issues with neutron detection systems that challenged safety and operation. An intelligent troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities, which can aid current stations and be used for the future generation of CANDU{sup ®} designs. A fault modelling approach using a Fault Semantic Network (FSN) with risk estimation is proposed for this purpose. One major challenge in troubleshooting is obtaining accurate data. It is typical to have missing, incomplete or corrupted data points in large process data sets from dynamically changing systems. Therefore, the quality of the obtained data is expected to have a direct impact on the system's ability to recognize developing trends in process upset situations. In order to enable the fault detection process, intelligent filtering techniques are required to de-noise the process data and extract valuable signal features in the presence of background noise. In this study, the impact of applying optimized and intelligent filtering of process signals prior to data analysis is discussed. This is particularly important for neutronic signals, in order to increase the signal-to-noise ratio (SNR), which suffers the most during start-ups and low-power operation. This work is complementary to previously published studies on FSN-based fault modelling in CANDU stations. The main objective of this work is to explore the potential research methods using a specific case study and, based on the results and outcomes from this work, to note possible future improvements and innovation areas.

  14. Improvement of nonlinear diffusion equation using relaxed geometric mean filter for low PSNR images

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    2013-01-01

    A new method to improve the performance of low PSNR image denoising is presented. The proposed scheme estimates edge gradient from an image that is regularised with a relaxed geometric mean filter. The proposed method consists of two stages; the first stage consists of a second order nonlinear an...

  15. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics (AO) images, we put forward a deconvolution algorithm, based on expectation-maximization theory, that jointly deconvolves multiframe adaptive optics images. First, a mathematical model is built for the degraded multiframe adaptive optics images, and the time-varying point spread function is derived from the phase error. The AO images are denoised using the image power spectral density and a support constraint. Second, the EM algorithm is improved by incorporating the AO imaging system parameters and a regularization technique; a cost function for the joint deconvolution of the multiframe AO images is given, and the optimization model for the parameter estimates is built. Lastly, image-restoration experiments on both simulated images and real AO images are performed to verify the recovery performance of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, the number of iterations decreases by 14.3% and the estimation accuracy improves markedly. The model identifies the PSF of the AO images and recovers the observed target images clearly.
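    The EM-type update that such joint deconvolution builds on can be illustrated with a plain single-frame Richardson-Lucy loop; this is only a sketch of the basic iteration, not the paper's regularised multiframe algorithm, and the PSF is assumed known and normalised.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Plain single-frame Richardson-Lucy deconvolution (sketch).

    `psf` is assumed to be normalised so that it sums to one; the joint
    multiframe, regularised EM update of the paper is not reproduced.
    """
    estimate = np.full_like(blurred, blurred.mean(), dtype=np.float64)
    psf_flip = psf[::-1, ::-1]
    eps = 1e-12
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode='same')
    return estimate
```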

  16. Complex diffusion process for noise reduction

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Barari, A.

    2014-01-01

    equations (PDEs) in image restoration and de-noising prompted many researchers to search for an improvement in the technique. In this paper, a new method is presented for signal de-noising, based on PDEs and Schrodinger equations, named as complex diffusion process (CDP). This method assumes that variations...... for signal de-noising. To evaluate the performance of the proposed method, a number of experiments have been performed using Sinusoid, multi-component and FM signals cluttered with noise. The results indicate that the proposed method outperforms the approaches for signal de-noising known in prior art....

  17. Heuristic Enhancement of Magneto-Optical Images for NDE

    Science.gov (United States)

    Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, FrancescoCarlo

    2010-12-01

    The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each different technique, like the magneto-optic imaging here treated, is affected by some special types of noise which are related to the specific device used for their acquisition. Therefore, the design of even more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with some other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show advantages and weakness of ICA in improving the accuracy and performance of the rivets/flaw detection.

  18. A wavelet phase filter for emission tomography

    International Nuclear Information System (INIS)

    Olsen, E.T.; Lin, B.

    1995-01-01

    The presence of a high level of noise is a characteristic in some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of wavelet phase, with a focus on reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet based denoising methods

  19. l0 Sparsity for Image Denoising with Local and Global Priors

    Directory of Open Access Journals (Sweden)

    Xiaoni Gao

    2015-01-01

    Full Text Available We propose an l0 sparsity based approach to remove additive white Gaussian noise from a given image. To achieve this goal, we combine a local prior and a global prior to recover the noise-free values of the pixels. The local prior depends on the neighborhood relationships within a search window to help maintain edges and smoothness. The global prior is generated from a hierarchical l0 sparse representation to help eliminate redundant information and preserve global consistency. In addition, to make the correlations between pixels more meaningful, we adopt Principal Component Analysis to measure the similarities, which both reduces the computational complexity and improves the accuracy. Experiments on the benchmark image set show that the proposed approach achieves performance superior to state-of-the-art approaches, in both accuracy and perceptual quality, in removing zero-mean additive white Gaussian noise.

  20. Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    Mao-Gui Hu

    2009-10-01

    Full Text Available Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in a lot of applications, restricting higher spatial resolution analysis (e.g., intraurban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in Nature. The self-similarity or self-affinity present in the image is useful for estimating details at larger and smaller scales than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate the parameters of the information transfer function and the noise of the low resolution image. Finally, a noise-free, spatial-resolution-enhanced image is generated by a fractal coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is not only useful for remote sensing in investigating Earth, but also for other images with multifractal characteristics.

  1. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study.

    Science.gov (United States)

    Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-09-01

    Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacking denoising auto-encoders has been proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to the classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem on 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often better than, those of state-of-the-art segmentation methods, with a considerable reduction in segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.
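    A single denoising auto-encoder layer of the kind that is stacked in such approaches can be sketched in Keras as below; the layer sizes, noise level and training data are illustrative, and the paper's full stacked architecture and classification stage are not reproduced.

```python
import numpy as np
import tensorflow as tf

def build_denoising_autoencoder(n_features, n_hidden=128):
    """One denoising auto-encoder layer (sketch)."""
    inputs = tf.keras.Input(shape=(n_features,))
    hidden = tf.keras.layers.Dense(n_hidden, activation='relu')(inputs)
    outputs = tf.keras.layers.Dense(n_features, activation='linear')(hidden)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')
    return model

# Training pattern: the network reconstructs the clean patch from its
# noisy version (placeholder data and noise level).
patches = np.random.rand(1000, 256).astype('float32')
noisy = patches + 0.1 * np.random.randn(*patches.shape).astype('float32')
dae = build_denoising_autoencoder(256)
dae.fit(noisy, patches, epochs=5, batch_size=32, verbose=0)
```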

  2. Curvature correction of retinal OCTs using graph-based geometry detection

    International Nuclear Information System (INIS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan

    2013-01-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second part can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a full automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. The error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were detected for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability in detection of curvature in strongly pathological images that surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods. (paper)

  3. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach to perform both peak-picking spectra and denoising m/z-images simultaneously, whereas the state of the art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm to solve our optimization problem which is based on proximal mappings. The paper concludes with the numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  4. STRUCTURE TENSOR IMAGE FILTERING USING RIEMANNIAN L1 AND L∞ CENTER-OF-MASS

    Directory of Open Access Journals (Sweden)

    Jesus Angulo

    2014-06-01

    Full Text Available Structure tensor images are obtained by a Gaussian smoothing of the dyadic product of the gradient image. These images give at each pixel an n×n symmetric positive definite matrix SPD(n), representing the local orientation and the edge information. Processing such images requires appropriate algorithms working on the Riemannian manifold of the SPD(n) matrices. This contribution deals with structure tensor image filtering based on Lp geometric averaging. In particular, the L1 center-of-mass (Riemannian median or Fermat-Weber point) and the L∞ center-of-mass (Riemannian circumcenter) can be obtained for structure tensors using recently proposed algorithms. Our contribution in this paper is to study the interest of the L1 and L∞ Riemannian estimators for structure tensor image processing. In particular, we compare both for two image analysis tasks: (i) structure tensor image denoising; (ii) anomaly detection in structure tensor images.
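    The structure tensor itself, i.e. the Gaussian-smoothed dyadic product of the image gradient, can be computed as follows; the Riemannian L1/L∞ averaging studied in the paper is not shown, and the smoothing scale is a placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma=1.5):
    """Per-pixel 2x2 structure tensor (sketch).

    Returns the three independent components (Jxx, Jxy, Jyy) obtained by
    Gaussian smoothing of the dyadic product of the image gradient.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy
```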

  5. Multiband multi-echo imaging of simultaneous oxygenation and flow timeseries for resting state connectivity.

    Science.gov (United States)

    Cohen, Alexander D; Nencka, Andrew S; Lebel, R Marc; Wang, Yang

    2017-01-01

    A novel sequence has been introduced that combines multiband imaging with a multi-echo acquisition for simultaneous high spatial resolution pseudo-continuous arterial spin labeling (ASL) and blood-oxygenation-level dependent (BOLD) echo-planar imaging (MBME ASL/BOLD). Resting-state connectivity in healthy adult subjects was assessed using this sequence. Four echoes were acquired with a multiband acceleration of four, in order to increase spatial resolution, shorten repetition time, and reduce slice-timing effects on the ASL signal. In addition, by acquiring four echoes, advanced multi-echo independent component analysis (ME-ICA) denoising could be employed to increase the signal-to-noise ratio (SNR) and BOLD sensitivity. Seed-based and dual-regression approaches were utilized to analyze functional connectivity. Cerebral blood flow (CBF) and BOLD coupling was also evaluated by correlating the perfusion-weighted timeseries with the BOLD timeseries. These metrics were compared between single echo (E2), multi-echo combined (MEC), multi-echo combined and denoised (MECDN), and perfusion-weighted (PW) timeseries. Temporal SNR increased for the MECDN data compared to the MEC and E2 data. Connectivity also increased, in terms of correlation strength and network size, for the MECDN compared to the MEC and E2 datasets. CBF and BOLD coupling was increased in major resting-state networks, and that correlation was strongest for the MECDN datasets. These results indicate our novel MBME ASL/BOLD sequence, which collects simultaneous high-resolution ASL/BOLD data, could be a powerful tool for detecting functional connectivity and dynamic neurovascular coupling during the resting state. The collection of more than two echoes facilitates the use of ME-ICA denoising to greatly improve the quality of resting state functional connectivity MRI.

  6. XQ-NLM: Denoising Diffusion MRI Data via x-q Space Non-Local Patch Matching.

    Science.gov (United States)

    Chen, Geng; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2016-10-01

    Noise is a major issue influencing quantitative analysis in diffusion MRI. The effects of noise can be reduced by repeated acquisitions, but this leads to long acquisition times that can be unrealistic in clinical settings. For this reason, post-acquisition denoising methods have been widely used to improve SNR. Among existing methods, non-local means (NLM) has been shown to produce good image quality with edge preservation. However, currently the application of NLM to diffusion MRI has been mostly focused on the spatial space (i.e., the x-space), despite the fact that diffusion data live in a combined space consisting of the x-space and the q-space (i.e., the space of wavevectors). In this paper, we propose to extend NLM to both x-space and q-space. We show how patch-matching, as required in NLM, can be performed concurrently in x-q space with the help of azimuthal equidistant projection and rotation invariant features. Extensive experiments on both synthetic and real data confirm that the proposed x-q space NLM (XQ-NLM) outperforms the classic NLM.
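    As a point of comparison, classic spatial (x-space) non-local means is available in scikit-image; the sketch below does not perform the x-q space patch matching proposed in the paper, and the slice and parameter values are placeholders.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# dwi_slice: one slice of a diffusion-weighted volume (placeholder data)
dwi_slice = np.random.rand(128, 128)

# Classic x-space NLM: patches are matched within the spatial domain only.
sigma = estimate_sigma(dwi_slice)
denoised = denoise_nl_means(dwi_slice, patch_size=5, patch_distance=6,
                            h=1.15 * sigma, fast_mode=True)
```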

  7. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, the iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  8. Recognition of Wheat Spike from Field Based Phenotype Platform Using Multi-Sensor Fusion and Improved Maximum Entropy Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Chengquan Zhou

    2018-02-01

    Full Text Available To obtain an accurate count of wheat spikes, which is crucial for estimating yield, this paper proposes a new algorithm that uses computer vision to achieve this goal from an image. First, a home-built semi-autonomous multi-sensor field-based phenotype platform (FPP) is used to obtain orthographic images of wheat plots at the filling stage. The data acquisition system of the FPP provides high-definition RGB images and multispectral images of the corresponding quadrats. The high-definition panchromatic images are then obtained by fusing the three RGB channels, and the Gram–Schmidt fusion algorithm is used to fuse these multispectral and panchromatic images, thereby improving the color discriminability of the targets. Next, the maximum entropy segmentation method is used for coarse segmentation. The threshold of this method is determined by a firefly algorithm based on chaos theory (FACT), and a morphological filter is then used to de-noise the coarse-segmentation results. Finally, morphological reconstruction is applied to separate the adhering regions of the de-noised image and realize the fine segmentation. The computer-generated counting results for the wheat plots, obtained with the independent regional statistical function in MATLAB R2017b, are then compared with field measurements, which indicate that the proposed method provides a more accurate count of wheat spikes than the other traditional fusion and segmentation methods discussed in this paper.
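    A Kapur-style maximum entropy threshold, found here by exhaustive search rather than by the chaotic firefly optimisation used in the paper, can be sketched as follows for an 8-bit grey-scale image.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur-style maximum entropy threshold for an 8-bit image (sketch).

    The threshold maximising the sum of the foreground and background
    entropies is found by exhaustive search over the 256 grey levels.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```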

  9. Performance evaluation of cardiac MRI image denoising techniques

    NARCIS (Netherlands)

    AlAttar, M.A.; Mohamed, A.G.A.; Osman, N.F.; Fahmy, A.S.

    2008-01-01

    Black-blood cardiac magnetic resonance imaging (MRI) plays an important role in diagnosing a number of heart diseases. The technique suffers inherently from low contrast-to-noise ratio between the myocardium and the blood. In this work, we examined the performance of different classification

  10. Seeing Beyond the Painting Surface with Terahertz Time-Domain Imaging (THz-TDI): a signal separation method for extracting images of buried individual layers in paintings

    DEFF Research Database (Denmark)

    Filtenborg, Troels Folke; Skou-Hansen, Jakob; Koch Dandolo, Corinna Ludovica

    2015-01-01

    and denoising methods used to resolve temporal features of THz reflected signals, they are impractical when applied to the analysis of large images. Therefore, we present a simple, fast and effective method to separate single THz pulses of interest from the entire signal recorded at each spatial coordinate...

  11. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    International Nuclear Information System (INIS)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-01-01

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise
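    In practice, bounded-variation (total-variation) regularised denoising of a blocky image can be tried with an off-the-shelf solver such as the Chambolle algorithm in scikit-image; the image and the regularisation weight below are placeholders, and the solver differs from the primal-dual algorithms proposed in the paper.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# blocky: a piecewise-constant ("blocky") test image with heavy noise (placeholder)
blocky = np.random.rand(128, 128)

# Total-variation regularised denoising; `weight` balances data fidelity
# against the TV (bounded-variation seminorm) penalty.
restored = denoise_tv_chambolle(blocky, weight=0.15)
```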

  12. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Becchetti, M; Tian, X; Segars, P; Samei, E [Clinical Imaging Physics Group, Department of Radiology, Duke University Me, Durham, NC (United States)

    2015-06-15

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches.

  13. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    International Nuclear Information System (INIS)

    Becchetti, M; Tian, X; Segars, P; Samei, E

    2015-01-01

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches

  14. GPR Signal Denoising and Target Extraction With the CEEMD Method

    KAUST Repository

    Li, Jing

    2015-04-17

    In this letter, we apply a time and frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) method in ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components, with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode-mixing problem in the EMD method and improve the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the difference between the basic theory of EMD, EEMD, and CEEMD. Then, we compare the time and frequency analysis with Hilbert-Huang transform to test the results of different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.

  15. Independent component analysis-based artefact reduction: application to the electrocardiogram for improved magnetic resonance imaging triggering

    International Nuclear Information System (INIS)

    Oster, Julien; Pietquin, Olivier; Felblinger, Jacques; Abächerli, Roger; Kraemer, Michel

    2009-01-01

    Electrocardiogram (ECG) is required during magnetic resonance (MR) examination for monitoring patients under anaesthesia or with heart diseases and for synchronizing image acquisition with heart activity (triggering). Accurate and fast QRS detection is therefore desirable, but this task is complicated by artefacts related to the complex MR environment (high magnetic field, radio-frequency pulses and fast switching magnetic gradients). Specific signal processing has been proposed, whether using specific MR QRS detectors or ECG denoising methods. Most state-of-the-art techniques use a connection to the MR system for achieving their task, which is a major drawback since access to the MR system is often restricted. This paper introduces a new method for on-line ECG signal enhancement, called ICARE, which takes advantage of using multi-lead ECG and does not require any connection to the MR system. It is based on independent component analysis (ICA) and applied in real time. This algorithm yields accurate QRS detection for efficient triggering
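    A minimal ICA-based cleanup of a multi-lead recording might look as follows with scikit-learn's FastICA; this is only a generic sketch, not the ICARE algorithm itself, and the data, component count and artefact-source index are all illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

# leads: multi-lead ECG, shape (n_samples, n_leads) (placeholder data)
leads = np.random.randn(5000, 4)

# Separate the recording into statistically independent sources; artefact
# components would then be identified (e.g. by their correlation with the
# gradient activity) and zeroed before mixing the sources back.
ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(leads)   # estimated independent sources
sources[:, 1] = 0.0                  # suppress one artefact source (index is illustrative)
cleaned = ica.inverse_transform(sources)
```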

  16. ProxImaL: efficient image optimization using proximal algorithms

    KAUST Repository

    Heide, Felix

    2016-07-11

    Computational photography systems are becoming increasingly diverse, while computational resources-for example on mobile platforms-are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.
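
    ProxImaL itself is a domain-specific language; the generic sketch below only illustrates the proximal-operator building block it relies on, using plain NumPy and a simple ISTA loop for an l1-regularized denoising problem (this is not ProxImaL code).

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1, i.e. soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_denoise(y, lam=0.1, step=1.0, n_iter=200):
    """Solve min_x 0.5 * ||x - y||^2 + lam * ||x||_1 by proximal gradient."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = x - y                           # gradient of the quadratic data term
        x = prox_l1(x - step * grad, step * lam)
    return x

y = np.array([0.05, 1.2, -0.8, 0.02, 3.0])
print(ista_denoise(y))   # small entries shrink to zero, large ones survive
```

    Frameworks like ProxImaL compose many such proximal operators (for TV, wavelet sparsity, Poisson likelihoods, and so on) and choose the splitting algorithm automatically.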

  17. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    Science.gov (United States)

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper we propose an efficient method for denoising and extracting fiducial point (FP) of ECG signals. The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework called EKF25, all the parameters of Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model, are considered as state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation. We compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising and nonlinear EKF25 for fiducial point extraction and ECG interval analysis are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods.
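
    The full EKF25 state model is not reproduced here; the sketch below is only a generic extended Kalman filter predict/update step of the kind such a framework iterates at every sample.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One extended Kalman filter predict/update cycle.

    x, P         : prior state estimate and covariance
    z            : new measurement (e.g. a noisy ECG sample)
    f, h         : nonlinear state-transition and observation functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```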

  18. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST on the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is then proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask. The gaps have to be interpolated: an extension to inpainting is then proposed. The method, applied on simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
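
    The spherical wavelet/curvelet machinery is beyond a short example, but the variance-stabilization idea at the core of MS-VSTS can be sketched on a flat image with the classical Anscombe transform; the Gaussian smoother below is only a stand-in for the multiscale denoiser, and the simple algebraic inverse is an approximation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> ~unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (an exact unbiased inverse is more involved)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = np.random.poisson(2.0, size=(128, 128)).astype(float)  # low-count map
stabilized = anscombe(counts)                  # noise is now roughly Gaussian
denoised = gaussian_filter(stabilized, 2.0)    # any Gaussian-noise denoiser fits here
estimate = inverse_anscombe(denoised)
```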

  19. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    Science.gov (United States)

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise.

  20. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms.

    Science.gov (United States)

    Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen

    2012-09-01

    We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.

  1. Speckle noise reduction in breast ultrasound images: SMU (SRAD median unsharp) approach

    International Nuclear Information System (INIS)

    Njeh, I.; Sassi, O. B.; Ben Hamida, A.; Chtourou, K.

    2011-01-01

    Image denoising has become essential for better information extraction from images, particularly from heavily noised ones such as ultrasound images. In certain cases, for instance in ultrasound images, the noise can obscure information which is valuable for the general practitioner. Moreover, medical images vary considerably, so it is crucial to treat them case by case. This paper presents a novel algorithm, SMU (SRAD Median Unsharp), for noise suppression in breast ultrasound images in order to support computer-aided diagnosis (CAD) of breast cancer.

  2. Channel Compensation for Speaker Recognition using MAP Adapted PLDA and Denoising DNNs

    Science.gov (United States)

    2016-06-21

    [Abstract not available; the record consists of fragments from the paper: Table 1 listing Mixer 1 and 2 microphone channels (Jabra cellphone ear-wrap mic, Motorola cellphone earbud, Olympus Pearlcorder, Radio Shack desktop computer mic), EER and min DCF versus λ for the MAP-adapted PLDA model (λ = 0.5), and an evaluation of the denoising DNN's performance impact on conversational telephone speech.]

  3. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    Energy Technology Data Exchange (ETDEWEB)

    Tsantis, Stavros [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Spiliopoulos, Stavros; Karnabatidis, Dimitrios [Department of Radiology, School of Medicine, University of Patras, Rion, GR 26504 (Greece); Skouroliakou, Aikaterini [Department of Energy Technology Engineering, Technological Education Institute of Athens, Athens 12210 (Greece); Hazle, John D. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Kagadis, George C., E-mail: gkagad@gmail.com, E-mail: George.Kagadis@med.upatras.gr, E-mail: GKagadis@mdanderson.org [Department of Medical Physics, School of Medicine, University of Patras, Rion, GR 26504, Greece and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A

  4. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    International Nuclear Information System (INIS)

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C.

    2014-01-01

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A

  5. An intelligent despeckling method for swept source optical coherence tomography images of skin

    Science.gov (United States)

    Adabi, Saba; Mohebbikarkhoran, Hamed; Mehregan, Darius; Conforto, Silvia; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical coherence tomography (OCT) is a powerful high-resolution imaging method with broad biomedical application. Nonetheless, OCT images suffer from a multiplicative artefact known as speckle, a result of the coherent imaging process. Digital filters have become a ubiquitous means of speckle reduction. Noting that there is still room for improvement in OCT despeckling, we propose an intelligent speckle reduction framework based on morphological, textural and optical tissue features of the OCT image; through a trained network it selects the winning filter, which adaptively suppresses the speckle noise while preserving the structural information of the OCT signal. These parameters are calculated at different steps of the procedure and fed to the designed artificial neural network decider that selects the best denoising technique for each segment of the image. Training results show that the dominant filter is BM3D, from the last category.

  6. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Full Text Available Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but the processing speed remains a bottleneck due to the computational load of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. This algorithm is based on the variable splitting and penalty techniques in optimization. Following our previous work on the proof of the existence and uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm, namely finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has an excellent texture-preserving property in restoring color images. Both the theoretical computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed-point algorithm.

  7. Robust Image Analysis of Faces for Genetic Applications

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2010-01-01

    Roč. 6, č. 2 (2010), s. 95-102 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : object localization * template matching * eye or mouth detection * robust correlation analysis * image denoising Subject RIV: BB - Applied Statistics, Operational Research http://www.ejbi.cz/articles/201012/47/1.html

  8. Two-Stage Approach to Image Classification by Deep Neural Networks

    Science.gov (United States)

    Ososkov, Gennady; Goncharov, Pavel

    2018-02-01

    The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the networks to be compared further as classifiers. Much of the effort in working with deep learning networks goes into the painstaking work of optimizing the structure of these networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem of protein genetics using the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.
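
    A denoising autoencoder of the kind mentioned above can be sketched in a few lines; this assumes TensorFlow/Keras and uses MNIST as a stand-in dataset, and is not the network architecture used in the paper.

```python
import numpy as np
import tensorflow as tf

# Tiny fully connected denoising autoencoder: noisy images in, clean images out.
inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(64, activation="relu")(inputs)
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x = x_train.reshape(-1, 784).astype("float32") / 255.0
x_noisy = np.clip(x + 0.3 * np.random.randn(*x.shape), 0.0, 1.0).astype("float32")

# Training the map noisy -> clean is what gives the autoencoder its denoising ability.
autoencoder.fit(x_noisy, x, epochs=5, batch_size=256)
```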

  9. Wavelet Denoising of Mobile Radiation Data

    International Nuclear Information System (INIS)

    Campbell, D.B.

    2008-01-01

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems

  10. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Science.gov (United States)

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.

  11. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Directory of Open Access Journals (Sweden)

    Faten Mina

    Full Text Available Auditory steady state responses (ASSRs in cochlear implant (CI patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.

  12. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan BahadarKhan

    Full Text Available Diabetic retinopathy (DR) harms the retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light, unsupervised, automated technique, with promising results, for detection of the retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been evaluated on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data that has been precisely marked by experts.
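
    A rough, hedged sketch of such a pipeline is shown below using scikit-image; the frangi filter stands in for the paper's modified two-scale Hessian step and a plain global Otsu threshold stands in for the region-based variant, so this is only an approximation of the described method.

```python
import numpy as np
from skimage import exposure
from skimage.filters import frangi, threshold_otsu

def segment_vessels(fundus_green):
    """Rough vessel mask from the green channel of a fundus image in [0, 1]."""
    enhanced = exposure.equalize_adapthist(fundus_green)      # CLAHE
    # Hessian-eigenvalue (vesselness) filter; vessels appear as dark ridges.
    vesselness = frangi(enhanced, sigmas=(1, 2, 3), black_ridges=True)
    return vesselness > threshold_otsu(vesselness)            # global Otsu stand-in

# Synthetic example: a dark "vessel" running across a bright background.
img = np.ones((128, 128))
img[60:63, :] = 0.2
mask = segment_vessels(img)
```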

  13. Mathematical filtering minimizes metallic halation of titanium implants in MicroCT images.

    Science.gov (United States)

    Ha, Jee; Osher, Stanley J; Nishimura, Ichiro

    2013-01-01

    Microcomputed tomography (MicroCT) images containing titanium implants suffer from x-ray scattering artifacts, and the implant surface is critically affected by metallic halation. To reduce the metallic halation artifact, a nonlinear total variation denoising algorithm, the Split Bregman algorithm, was applied to the digital data set of MicroCT images. This study demonstrated that the use of such a mathematical filter can successfully reduce metallic halation, facilitating the evaluation of osseointegration at the bone-implant interface in the reconstructed images.
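
    Total variation denoising with the Split Bregman solver is available off the shelf in scikit-image; the snippet below applies it to a synthetic slice standing in for a halation-corrupted MicroCT image, with an illustrative regularization weight.

```python
import numpy as np
from skimage.restoration import denoise_tv_bregman

# Synthetic slice: a bright square "implant" plus additive noise.
rng = np.random.default_rng(0)
slice_img = np.zeros((256, 256))
slice_img[100:156, 100:156] = 1.0
noisy = slice_img + 0.15 * rng.standard_normal(slice_img.shape)

# Smaller weight -> stronger smoothing; the value here is purely illustrative.
denoised = denoise_tv_bregman(noisy, weight=5.0)
```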

  14. A neuro-fuzzy inference system for sensor failure detection using wavelet denoising, PCA and SPRT

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    In this work, a neuro-fuzzy inference system combined with wavelet denoising, PCA (principal component analysis) and SPRT (sequential probability ratio test) methods is developed to detect failure of a given sensor using the other sensor signals. The wavelet denoising technique is applied to remove noise components from the signals fed into the neuro-fuzzy system. PCA is used to reduce the dimension of the input space without losing a significant amount of information; it simplifies the selection of input signals for the neuro-fuzzy system, and a lower-dimensional input space also reduces the time needed to train it. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The residuals between the estimated and measured signals are used to decide whether a sensor has failed, with the SPRT performing this failure detection. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level and hot-leg flowrate sensors in pressurized water reactors.
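
    The wavelet-denoising and PCA front end can be sketched as below, assuming the PyWavelets and scikit-learn packages; the neuro-fuzzy estimator, the genetic/least-squares optimization and the SPRT decision logic are not reproduced.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level estimate
    thr = sigma * np.sqrt(2.0 * np.log(signal.size))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

# Toy plant data: six correlated sensor channels with additive noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1024)
raw = np.column_stack([np.sin(t) + 0.1 * k + 0.2 * rng.standard_normal(t.size)
                       for k in range(6)])

clean = np.apply_along_axis(wavelet_denoise, 0, raw)     # denoise each channel
features = PCA(n_components=2).fit_transform(clean)      # reduced inputs for the estimator
```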

  15. Combination of canonical correlation analysis and empirical mode decomposition applied to denoising the labor electrohysterogram.

    Science.gov (United States)

    Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2011-09-01

    The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest, rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA, and then most of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared CCA_EMD with the corresponding bipolar signals to demonstrate that the new method does not degrade the signals. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.

  16. Variational approach for restoring blurred images with cauchy noise

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Zeng, Tieyong

    2015-01-01

    model, we add a quadratic penalty term, which guarantees the uniqueness of the solution. Due to the convexity of our model, the primal dual algorithm is employed to solve the minimization problem. Experimental results show the effectiveness of the proposed method for simultaneously deblurring...... and denoising images corrupted by Cauchy noise. Comparison with other existing and well-known methods is provided as well....

  17. A Variational Approach to the Denoising of Images Based on Different Variants of the TV-Regularization

    International Nuclear Information System (INIS)

    Bildhauer, Michael; Fuchs, Martin

    2012-01-01

    We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.

  18. Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion.

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, Matthew; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik; Frank, Jonathan H.

    2010-09-01

    Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.

  19. Application of wavelet domain wiener filter in denoising of airborne γ-ray data

    International Nuclear Information System (INIS)

    Luo Yaoyao; Ge Liangquan; Xiong Chao; Xu Lipeng; Hua Yongtao

    2012-01-01

    The wavelet-domain Wiener filter method, which combines the traditional wavelet method and the Wiener filter, was established at CUT to reduce noise in as-recorded airborne gamma-ray spectra. It was used to treat airborne gamma-ray data collected from an area in Inner Mongolia. The results showed that with this method statistical noise could be largely removed from the raw airborne gamma-ray spectra, and the quality of the processed data is much better than that obtained by conventional spectral denoising methods. (authors)

  20. Patch-based anisotropic diffusion scheme for fluorescence diffuse optical tomography—part 1: technical principles

    International Nuclear Information System (INIS)

    Correia, Teresa; Arridge, Simon

    2016-01-01

    Fluorescence diffuse optical tomography (fDOT) provides 3D images of fluorescence distributions in biological tissue, which represent molecular and cellular processes. The image reconstruction problem is highly ill-posed and requires regularisation techniques to stabilise and find meaningful solutions. Quadratic regularisation tends to either oversmooth or generate very noisy reconstructions, depending on the regularisation strength. Edge preserving methods, such as anisotropic diffusion regularisation (AD), can preserve important features in the fluorescence image and smooth out noise. However, AD has limited ability to distinguish an edge from noise. In this two-part paper, we propose a patch-based anisotropic diffusion regularisation (PAD), where regularisation strength is determined by a weighted average according to the similarity between patches around voxels within a search window, instead of a simple local neighbourhood strategy. However, this method has higher computational complexity and, hence, we wavelet compress the patches (PAD-WT) to speed it up, while simultaneously taking advantage of the denoising properties of wavelet thresholding. The proposed method combines the nonlocal means (NLM), AD and wavelet shrinkage methods, which are image processing methods. Therefore, in this first paper, we used a denoising test problem to analyse the performance of the new method. Our results show that the proposed PAD-WT method provides better results than the AD or NLM methods alone. The efficacy of the method for fDOT image reconstruction problem is evaluated in part 2. (paper)

  1. Mathematics behind a Class of Image Restoration Algorithms

    Directory of Open Access Journals (Sweden)

    Luminita STATE

    2012-01-01

    Full Text Available Restoration techniques are usually oriented toward modeling the type of degradation in order to infer the inverse process for recovering the given image. This approach usually involves choosing a criterion to numerically evaluate the quality of the resulting image, and consequently the restoration process can be expressed in terms of an optimization problem. Most approaches are essentially based on additional hypotheses concerning the statistical properties of images. However, in real-life applications there is often not enough information to support a particular image model, and consequently model-free developments have to be used instead. In our approach the problem of image denoising/restoration is viewed as an information transmission/processing system, where the signal representing a certain clean image is transmitted through a noisy channel and only a noise-corrupted version is available. The aim is to recover the available signal as accurately as possible by using different noise removal techniques, that is, to build an accurate approximation of the initial image. Unfortunately, a series of image qualities, for instance clarity, brightness and contrast, are affected by the noise removal techniques, and consequently there is a need to partially restore them on the basis of information extracted exclusively from the data. Following a brief description of the image restoration framework provided in the introductory part, a PCA-based methodology is presented in the second section of the paper. The basics of a new information-based development for image restoration purposes and scatter matrix-based methods are given in the next two sections. The final section contains concluding remarks and suggestions for further work.

  2. Bearing faults identification and resonant band demodulation based on wavelet de-noising methods and envelope analysis

    Science.gov (United States)

    Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali

    2017-07-01

    The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis. Assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to the discretized signal, such as very fine resolution at a lower scale but coarser resolution at a higher scale; however, the computational cost increases because the different signal resolutions must be produced. The DWT has a lower computational cost because its dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented by combining the Continuous Wavelet Transform (CWT) and Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in detecting bearing faults, identifying the exact fault location, and assessing severity, especially for inner race and outer race faults.
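
    A compact sketch of the DWT-denoising plus envelope-analysis chain is given below, assuming the PyWavelets and SciPy packages; the 110 Hz "fault frequency" of the toy signal is hypothetical and the thresholding rule is a generic choice rather than the paper's exact procedure.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

def envelope_spectrum(vib, fs, wavelet="db8", level=3):
    """Wavelet-denoise a vibration trace, then return its envelope spectrum."""
    coeffs = pywt.wavedec(vib, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(vib.size))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: vib.size]
    envelope = np.abs(hilbert(denoised))                 # demodulate the resonant band
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    return freqs, spectrum                               # peaks near the fault frequency

# Toy signal: a 3 kHz resonance amplitude-modulated at a 110 Hz fault rate.
fs, n = 12000, 16384
t = np.arange(n) / fs
vib = np.sin(2 * np.pi * 3000 * t) * (1 + np.cos(2 * np.pi * 110 * t)) \
      + 0.5 * np.random.randn(n)
freqs, spectrum = envelope_spectrum(vib, fs)
```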

  3. ASSESSMENT OF RESTORATION METHODS OF X-RAY IMAGES WITH EMPHASIS ON MEDICAL PHOTOGRAMMETRIC USAGE

    Directory of Open Access Journals (Sweden)

    S. Hosseinian

    2016-06-01

    Full Text Available Nowadays, various medical X-ray imaging methods such as digital radiography, computed tomography and fluoroscopy are used as important tools in diagnostic and operative processes, especially in computer- and robot-assisted surgeries. The procedures for extracting information from these images require appropriate deblurring and denoising of the pre- and intra-operative images in order to obtain more accurate information. This issue becomes more important when the X-ray images are to be employed in photogrammetric processes for 3D reconstruction from multi-view X-ray images, since accurate data should be extracted from the images for 3D modelling and the quality of the X-ray images directly affects the results of the algorithms. For restoration of X-ray images, it is essential to consider the nature and characteristics of this kind of image. X-ray images exhibit severe quantum noise due to the limited number of X-ray photons involved. The assumptions of Gaussian modelling are not appropriate for photon-limited images such as X-ray images because of the nature of signal-dependent quantum noise. These images are generally modelled by a Poisson distribution, which is the most common model for low-intensity imaging. In this paper, existing methods are evaluated. For this purpose, after demonstrating the properties of medical X-ray images, the more efficient and recommended methods for restoration of X-ray images are described and assessed. After explaining these approaches, they are applied to samples from different kinds of X-ray images. From the results, it is concluded that PURE-LET provides more effective and efficient denoising than the other methods examined in this research.

  4. Terahertz composite imaging method

    Institute of Scientific and Technical Information of China (English)

    QIAO Xiaoli; REN Jiaojiao; ZHANG Dandan; CAO Guohua; LI Lijuan; ZHANG Xinming

    2017-01-01

    In order to improve the imaging quality of terahertz (THz) spectroscopy, a Terahertz Composite Imaging Method (TCIM) is proposed. Traditional methods of improving THz spectroscopy image quality mainly address de-noising and image enhancement; TCIM breaks through this limitation. A set of images reconstructed from a single data collection can be utilized to construct two kinds of composite images. One algorithm, the Function Superposition Imaging Algorithm (FSIA), constructs a new gray image from multiple gray images through a chosen function. The features of the region of interest (ROI) are more pronounced after this operation, and it is capable of merging ROIs from multiple images. The other, the Multi-characteristics Pseudo-color Imaging Algorithm (McPcIA), constructs a pseudo-color image by combining multiple gray images reconstructed from a single data collection, so that ROI features are enhanced by color differences. The two algorithms not only improve the contrast of ROIs, but also increase the amount of available information, making analysis more convenient. The experimental results show that TCIM is a simple and effective tool for THz spectroscopy image analysis.

  5. Vector sparse representation of color image using quaternion matrix analysis.

    Science.gov (United States)

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, either representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in the color image analysis and processing domain.

  6. Two-Stage Approach to Image Classification by Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Ososkov Gennady

    2018-01-01

    Full Text Available The paper demonstrates the advantages of deep learning networks over ordinary neural networks through their comparative application to image classification. An autoassociative neural network is used as a standalone autoencoder for prior extraction of the most informative features of the input data for the networks to be compared further as classifiers. Much of the effort in working with deep learning networks goes into the painstaking work of optimizing the structure of these networks and their components, such as activation functions and weights, as well as the procedures for minimizing their loss function, in order to improve their performance and speed up their training. It is also shown that deep autoencoders develop a remarkable ability to denoise images after being specially trained. Convolutional neural networks are also used to solve a topical problem of protein genetics using the example of durum wheat classification. The results of our comparative study demonstrate the clear advantage of the deep networks, as well as the denoising power of the autoencoders. In our work we use both GPUs and cloud services to speed up the calculations.

  7. A machine learning approach to quantifying noise in medical images

    Science.gov (United States)

    Chowdhury, Aritra; Sevinsky, Christopher J.; Yener, Bülent; Aggour, Kareem S.; Gustafson, Steven M.

    2016-03-01

    As advances in medical imaging technology are resulting in significant growth of biomedical image data, new techniques are needed to automate the process of identifying images of low quality. Automation is needed because it is very time consuming for a domain expert such as a medical practitioner or a biologist to manually separate good images from bad ones. While there are plenty of de-noising algorithms in the literature, their focus is on designing filters which are necessary but not sufficient for determining how useful an image is to a domain expert. Thus a computational tool is needed to assign a score to each image based on its perceived quality. In this paper, we introduce a machine learning-based score and call it the Quality of Image (QoI) score. The QoI score is computed by combining the confidence values of two popular classification techniques—support vector machines (SVMs) and Naïve Bayes classifiers. We test our technique on clinical image data obtained from cancerous tissue samples. We used 747 tissue samples that are stained by four different markers (abbreviated as CK15, pck26, E_cad and Vimentin) leading to a total of 2,988 images. The results show that images can be classified as good (high QoI), bad (low QoI) or ugly (intermediate QoI) based on their QoI scores. Our automated labeling is in agreement with the domain experts with a bi-modal classification accuracy of 94%, on average. Furthermore, ugly images can be recovered and forwarded for further post-processing.
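
    One plausible way to combine the two classifiers' confidences into a single score is sketched below with scikit-learn; the averaging rule and the good/ugly/bad thresholds are assumptions for illustration, since the abstract does not specify the exact combination.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Stand-in features; in the study these would be image-quality descriptors.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
nb = GaussianNB().fit(X_tr, y_tr)

# QoI-style score: average confidence that an image belongs to the "good" class.
qoi = 0.5 * (svm.predict_proba(X_te)[:, 1] + nb.predict_proba(X_te)[:, 1])
labels = np.where(qoi > 0.7, "good", np.where(qoi < 0.3, "bad", "ugly"))
```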

  8. Ultrasound speckle reduction based on fractional order differentiation.

    Science.gov (United States)

    Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng

    2017-07-01

    Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and results in difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by effectively combining fractional calculus theory with a partial differential equation, and its numerical algorithm was implemented using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods [the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique]. It is also significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.

  9. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
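
    The variance-ranking and de-noising use of PCA can be sketched in a few lines with scikit-learn; the synthetic spectroscopic-imaging cube below is a placeholder, and keeping five components is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical spectroscopic-imaging data: a 64x64 grid, 200-point spectrum per pixel.
rng = np.random.default_rng(0)
grid, n_spec = 64, 200
x = np.linspace(-1.0, 1.0, n_spec)
clean = np.outer(rng.random(grid * grid), np.exp(-x**2 / 0.1))   # one response shape
data = clean + 0.05 * rng.standard_normal(clean.shape)

pca = PCA(n_components=5).fit(data)           # components ranked by explained variance
scores = pca.transform(data)                  # per-pixel loadings (component maps)
denoised = pca.inverse_transform(scores)      # low-rank reconstruction = de-noised data
component_maps = scores.reshape(grid, grid, -1)   # spatial maps for correlation analysis
```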

  10. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    Science.gov (United States)

    Qu, Jianfeng; Chai, Yi; Yang, Simon X.

    2009-01-01

    A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip windows average method. The optimal system noise variance of the filter is obtained by using experimental data. The Kalman filter theory for acquiring MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors. PMID:22399946
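
    A scalar sketch of the adaptive Kalman idea is given below; the sliding-window variance estimate stands in for the paper's "slip windows average", and the random-walk process model and numeric values are assumptions.

```python
import numpy as np

def kalman_denoise(z, q=1e-4, window=50):
    """Scalar Kalman filter whose measurement-noise variance R is re-estimated
    in real time from a sliding window of recent innovations."""
    x, p = float(z[0]), 1.0
    out, innovations = np.empty_like(z, dtype=float), []
    for k, zk in enumerate(z):
        p = p + q                                   # predict (random-walk state model)
        innovations.append(zk - x)
        r = np.var(innovations[-window:]) if len(innovations) > 5 else 1.0
        gain = p / (p + r)                          # update
        x = x + gain * (zk - x)
        p = (1.0 - gain) * p
        out[k] = x
    return out

raw = 2.0 + 0.3 * np.random.randn(1000)             # noisy MOS sensor reading
smoothed = kalman_denoise(raw)
```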

  11. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yi Chai

    2009-02-01

    Full Text Available A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip windows average method. The optimal system noise variance of the filter is obtained by using experimental data. The Kalman filter theory for acquiring MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors.

  12. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    Science.gov (United States)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

    In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and to use lower exposure times. Because smaller sensors inherently lead to more noise and a worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular because of price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full resolution RGB images. Recently, there has been some interest in techniques that jointly perform the demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding artifacts introduced when using demosaicing and denoising sequentially. In this paper, we will continue the research line of the wavelet-based demosaicing techniques. These approaches are computationally simple and very suited for combination with denoising. Therefore, we will derive Bayesian Minimum Squared Error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we will use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing etc) of a 12 megapixel RAW image takes 3.5 sec on a recent mid-range GPU.

  13. MO-DE-207A-02: A Feature-Preserving Image Reconstruction Method for Improved Pancreatic Lesion Classification in Diagnostic CT Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J; Tsui, B [Johns Hopkins University, Baltimore, MD (United States); Noo, F [University of Utah, Salt Lake City, UT (United States)

    2016-06-15

    Purpose: To develop a feature-preserving model based image reconstruction (MBIR) method that improves performance in pancreatic lesion classification at equal or reduced radiation dose. Methods: A set of pancreatic lesion models was created with both benign and premalignant lesion types. These two classes of lesions are distinguished by their fine internal structures; their delineation is therefore crucial to the task of pancreatic lesion classification. To reduce image noise while preserving the features of the lesions, we developed a MBIR method with curvature-based regularization. The novel regularization encourages formation of smooth surfaces that model both the exterior shape and the internal features of pancreatic lesions. Given that the curvature depends on the unknown image, image reconstruction or denoising becomes a non-convex optimization problem; to address this issue an iterative-reweighting scheme was used to calculate and update the curvature using the image from the previous iteration. Evaluation was carried out with insertion of the lesion models into the pancreas of a patient CT image. Results: Visual inspection was used to compare conventional TV regularization with our curvature-based regularization. Several penalty-strengths were considered for TV regularization, all of which resulted in erasing portions of the septation (thin partition) in a premalignant lesion. At matched noise variance (50% noise reduction in the patient stomach region), the connectivity of the septation was well preserved using the proposed curvature-based method. Conclusion: The curvature-based regularization is able to reduce image noise while simultaneously preserving the lesion features. This method could potentially improve task performance for pancreatic lesion classification at equal or reduced radiation dose. The result is of high significance for longitudinal surveillance studies of patients with pancreatic cysts, which may develop into pancreatic cancer. The

  14. An adaptive image denoising method based on local parameters

    Indian Academy of Sciences (India)

    term, i.e., individual pixels or block-by-block, i.e., group of pixels, using suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of threshold and some other characteristics of the neighbouring pixels of the ...

  15. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    noise-free that are used to obtain the variances corresponding to the noise-free .... of too many noisy coefficients completely because the threshold value is at higher side. .... The wavelet coefficients are shrinked using the following expression.

  16. Discriminative Transfer Learning for General Image Restoration

    KAUST Repository

    Xiao, Lei; Heide, Felix; Heidrich, Wolfgang; Schölkopf, Bernhard; Hirsch, Michael

    2018-01-01

    Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of input images). This makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single-pass discriminative training and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.

  17. Discriminative Transfer Learning for General Image Restoration

    KAUST Repository

    Xiao, Lei

    2018-04-30

    Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of input images). This makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single-pass discriminative training and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.

  18. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-D trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
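
    A bare-bones version of the robust leveling step described above might look as follows: a planar trend is fitted by iteratively reweighted least squares so that features and outliers are downweighted. The paper fits a smooth local-regression trend rather than a plane, so this is only a rough stand-in; all parameter choices are illustrative.

      import numpy as np

      def robust_plane_level(img, iters=5):
          """Remove a planar trend by iteratively reweighted least squares so that
          bright features and outliers are downweighted (Cauchy-type weights).
          Stand-in for a robust local-regression trend estimate."""
          h, w = img.shape
          yy, xx = np.mgrid[0:h, 0:w]
          A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
          z = img.ravel().astype(float)
          wts = np.ones_like(z)
          for _ in range(iters):
              sw = np.sqrt(wts)
              coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
              resid = z - A @ coef
              s = np.median(np.abs(resid)) + 1e-12
              wts = 1.0 / (1.0 + (resid / (3.0 * s)) ** 2)    # downweight outliers
          return (z - A @ coef).reshape(h, w)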

  19. Comparative analysis on some spatial-domain filters for fringe pattern denoising.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian

    2011-04-20

    Fringe patterns produced by various optical interferometric techniques encode information such as shape, deformation, and refractive index. Noise affects further processing of the fringe patterns. Denoising is often needed before fringe pattern demodulation. Filtering along the fringe orientation is an effective option. Such filters include coherence enhancing diffusion, spin filtering with curve windows, second-order oriented partial-differential equations, and the regularized quadratic cost function for oriented fringe pattern filtering. These filters are analyzed to establish the relationships among them. Theoretical analysis shows that the four filters are largely equivalent to each other. Quantitative results are given on simulated fringe patterns to validate the theoretical analysis and to compare the performance of these filters. © 2011 Optical Society of America

  20. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    Science.gov (United States)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We provide a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.
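
    The splitting of one strong denoising step into several weak nonlocal-means passes can be sketched as below, assuming scikit-image is available. Parameter names and values (number of passes, per-pass strength, patch sizes) are illustrative and not those of the published framework.

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      def iterative_weak_nlm(img, n_passes=4, strength=0.6):
          """Several weak nonlocal-means passes instead of one strong one; each
          pass is tied to a fresh noise-level estimate so later passes remove
          only a controlled amount of residual texture."""
          out = img.astype(float)
          for _ in range(n_passes):
              sigma = estimate_sigma(out)
              out = denoise_nl_means(out, h=strength * sigma, sigma=sigma,
                                     patch_size=5, patch_distance=6, fast_mode=True)
          return out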

  1. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    Science.gov (United States)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise. The denoising performance was then compared to the denoising capabilities of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
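
    A minimal sketch of the selection-plus-thresholding stage described above, assuming the intrinsic mode functions have already been computed by an external EMD routine. IMFs weakly correlated with the signal are soft-thresholded with a universal threshold; the relevant modes are simply kept here, whereas the paper applies a roughness-penalty smoother to them. The correlation threshold is a made-up value.

      import numpy as np

      def emd_strp_like(signal, imfs, corr_thresh=0.1):
          """Soft-threshold the IMFs that are weakly correlated with the signal and
          keep the rest.  `imfs` has shape (n_imfs, n_samples) and is assumed to
          come from an external EMD routine."""
          denoised = np.zeros_like(signal, dtype=float)
          for imf in imfs:
              rho = np.corrcoef(signal, imf)[0, 1]
              if abs(rho) < corr_thresh:                       # "irrelevant" mode
                  t = (np.median(np.abs(imf)) / 0.6745) * np.sqrt(2 * np.log(imf.size))
                  imf = np.sign(imf) * np.maximum(np.abs(imf) - t, 0.0)  # soft threshold
              denoised += imf
          return denoised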

  2. Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images

    Science.gov (United States)

    Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan

    2017-08-01

    Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney from ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle so that the pixels in the region of interest (ROI) are preserved for further extraction. A Gaussian low-pass filter is chosen as the filtering method in this work. The 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate some unwanted intensity values. Statistical texture feature methods are used, namely the Intensity Histogram (IH), the Gray-Level Co-Occurrence Matrix (GLCM) and the Gray-Level Run-Length Matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three features (Contrast, Difference Variance and Inverse Difference Moment Normalized) from the GLCM are not statistically significant; this suggests that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.
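
    A minimal example of the GLCM features mentioned above, assuming a recent scikit-image and a uint8 region of interest. Only contrast and homogeneity (inverse difference moment) are shown; the study also uses histogram, run-length and normalized IDM features.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(roi_uint8):
          """Contrast and homogeneity (inverse difference moment) of a uint8 ROI,
          averaged over two GLCM orientations."""
          glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          return {
              "contrast": graycoprops(glcm, "contrast").mean(),
              "homogeneity": graycoprops(glcm, "homogeneity").mean(),
          }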

  3. ℓ0TV: A new method for image restoration in the presence of impulse noise

    KAUST Repository

    Yuan, Ganzhao; Ghanem, Bernard

    2015-01-01

    In this paper, we propose a new method, called L0TV-PADMM, which solves the TV-based restoration problem with L0-norm data fidelity. To effectively deal with the resulting non-convex nonsmooth optimization problem, we first reformulate it as an equivalent MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our L0TV-PADMM method finds a desirable solution to the original L0-norm optimization problem and is proven to be convergent under mild conditions. We apply L0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that L0TV-PADMM outperforms state-of-the-art image restoration methods.

  4. l0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

    KAUST Repository

    Yuan, Ganzhao; Ghanem, Bernard

    2017-01-01

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP), which is based on TV with l02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called l0TV-PADMM, which solves the TV-based restoration problem with l0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our l0TV-PADMM method finds a desirable solution to the original l0-norm optimization problem and is proven to be convergent under mild conditions. We apply l0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that l0TV-PADMM outperforms state-of-the-art image restoration methods.
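
    The l0-fidelity MPEC/PADMM solver itself is involved; as a point of reference only, the snippet below runs a standard TV denoiser (quadratic fidelity) from scikit-image on a salt-and-pepper corrupted test image, which illustrates the baseline behaviour that the paper's method is designed to improve on. The test image, noise level and weight are made up.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle
      from skimage.util import random_noise

      # Baseline only: plain TV denoising (quadratic fidelity), not the paper's
      # l0-fidelity PADMM solver.  It shows how ordinary TV struggles to remove
      # salt-and-pepper (impulse) noise without eroding structure.
      clean = np.zeros((64, 64))
      clean[16:48, 16:48] = 1.0
      noisy = random_noise(clean, mode="s&p", amount=0.2)
      tv_only = denoise_tv_chambolle(noisy, weight=0.15)
      print("TV-only MSE:", np.mean((tv_only - clean) ** 2))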

  5. l0TV: A Sparse Optimization Method for Impulse Noise Image Restoration

    KAUST Repository

    Yuan, Ganzhao

    2017-12-18

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on total variation for removing impulse noise in image restoration. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP), which is based on TV with l02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new sparse optimization method, called l0TV-PADMM, which solves the TV-based restoration problem with l0-norm data fidelity. To effectively deal with the resulting non-convex non-smooth optimization problem, we first reformulate it as an equivalent biconvex Mathematical Program with Equilibrium Constraints (MPEC), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our l0TV-PADMM method finds a desirable solution to the original l0-norm optimization problem and is proven to be convergent under mild conditions. We apply l0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that l0TV-PADMM outperforms state-of-the-art image restoration methods.

  6. Wavelet-based regularization and edge preservation for submillimetre 3D list-mode reconstruction data from a high resolution small animal PET system

    Energy Technology Data Exchange (ETDEWEB)

    Jesus Ochoa Dominguez, Humberto de, E-mail: hochoa@uacj.mx [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico); Ortega Maynez, Leticia; Osiris Vergara Villegas, Osslan; Gordillo Castillo, Nelly; Guadalupe Cruz Sanchez, Vianey; Gutierrez Casas, Efren David [Departamento de Ingenieria Electrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico)

    2011-10-01

    The data obtained from a PET system tend to be noisy because of the limitations of the current instrumentation and the detector efficiency. This problem is particularly severe in images of small animals as the noise contaminates areas of interest within small organs. Therefore, denoising becomes a challenging task. In this paper, a novel wavelet-based regularization and edge preservation method is proposed to reduce such noise. To demonstrate this method, image reconstruction using a small mouse {sup 18}F NEMA phantom and a {sup 18}F mouse was performed. Investigation on the effects of the image quality was addressed for each reconstruction case. Results show that the proposed method drastically reduces the noise and preserves the image details.

  7. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix

    2014-11-19

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  8. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix; Egiazarian, Karen; Kautz, Jan; Pulli, Kari; Steinberger, Markus; Tsai, Yun-Ta; Rouf, Mushfiqur; Pająk, Dawid; Reddy, Dikpal; Gallo, Orazio; Liu, Jing; Heidrich, Wolfgang

    2014-01-01

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  9. An ART iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Zhang Li; Huang Zhifeng; Kang Kejun; Chen Zhiqiang; Fang Qiaoguang; Zhu Peiping

    2009-01-01

    X-ray diffraction enhanced imaging (DEI) has extremely high sensitivity for weakly absorbing low-Z samples in medical and biological fields. In this paper, we propose an Algebraic Reconstruction Technique (ART) iterative reconstruction algorithm for computed tomography of diffraction enhanced imaging (DEI-CT). An Ordered Subsets (OS) technique is used to accelerate the ART reconstruction. Few-view reconstruction is also studied, and a partial differential equation (PDE) type filter with edge-preserving and denoising capability is used to improve the image quality and eliminate the artifacts. The proposed algorithm is validated with both numerical simulations and an experiment at the Beijing synchrotron radiation facility (BSRF). (authors)
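
    The algebraic reconstruction step itself can be sketched as a plain Kaczmarz sweep over the rows of the system matrix, as below; ordered subsets and the PDE-type edge-preserving filter of the paper are omitted, and the relaxation factor is an arbitrary choice.

      import numpy as np

      def art_reconstruct(A, b, n_iters=10, relax=0.5):
          """Plain ART (Kaczmarz) sweeps for A x = b: project the current estimate
          onto the hyperplane of one measurement at a time.  No ordered subsets
          and no PDE-type filtering here."""
          x = np.zeros(A.shape[1])
          for _ in range(n_iters):
              for i in range(A.shape[0]):
                  ai = A[i]
                  denom = ai @ ai
                  if denom > 0:
                      x += relax * (b[i] - ai @ x) / denom * ai
          return x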

  10. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Full Text Available Introduction: Ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various human metabolic processes and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace fish oil ingredients with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and to monitor their age in order to achieve the best results. This paper describes an advanced vision-based system to automatically detect, classify, and track organic cells using a recently developed SOPAT system (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization and enhancement, as well as object recognition, localization and classification based on the analysis of particle size and texture. Results: The method allowed the cell size to be computed correctly for each particle separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method allows algae particles to be correctly identified with accuracies up to 99% and classified correctly with accuracies up to 100%.

  11. Medical image processing on the GPU - past, present and future.

    Science.gov (United States)

    Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

    2013-12-01

    Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    Science.gov (United States)

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic, with characteristics that allow separation of the low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The signal-to-noise ratio and the accuracy of local reconstruction of fiber tracks were significantly improved using our method.

  13. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Current transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of only a few seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window is given a precise phase delay at each cycle; after capturing enough points, the whole signal can be assembled. By inserting a DMD into the system, all frames of data are modulated with binary random patterns so that a super-resolution transient/3D image can be reconstructed later. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We also propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; it is not easy to reconstruct a high-resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  14. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    International Nuclear Information System (INIS)

    Na, Man Gyun; Oh, Seungrohk

    2002-01-01

    A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information from other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space of the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and simplify the selection of the input signals. Using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors
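
    The SPRT stage described above can be sketched as a sequential test on the residuals between estimated and measured signals, assuming Gaussian residuals with known standard deviation and a postulated mean shift under degradation; the thresholds follow the usual Wald approximations, and all symbols here are generic rather than the paper's.

      import numpy as np

      def sprt_drift_test(residuals, sigma, m1, alpha=0.01, beta=0.01):
          """Wald sequential probability ratio test on sensor residuals:
          H0 = zero-mean Gaussian residuals, H1 = mean shifted by m1.
          Returns 'degraded', 'healthy' or 'undecided'."""
          upper = np.log((1 - beta) / alpha)     # accept H1 (degraded)
          lower = np.log(beta / (1 - alpha))     # accept H0 (healthy)
          llr = 0.0
          for r in residuals:
              llr += (m1 * r - 0.5 * m1 ** 2) / sigma ** 2   # Gaussian log-likelihood ratio
              if llr >= upper:
                  return "degraded"
              if llr <= lower:
                  return "healthy"
          return "undecided"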

  15. Hyperspectral imaging system for disease scanning on banana plants

    Science.gov (United States)

    Ochoa, Daniel; Cevallos, Juan; Vargas, German; Criollo, Ronald; Romero, Dennis; Castro, Rodrigo; Bayona, Oswaldo

    2016-05-01

    Black Sigatoka (BS) is a banana plant disease caused by the fungus Mycosphaerella fijiensis. BS symptoms can be observed at late infection stages. By that time, BS has probably spread to other plants. In this paper, we present our current work on building a hyper-spectral (HS) imaging system aimed at in-vivo detection of BS pre-symptomatic responses in banana leaves. The proposed imaging system comprises a motorized stage, a high-sensitivity VIS-NIR camera and an optical spectrograph. To capture images of the banana leaf, the stage's speed and the camera's frame rate must be computed to reduce motion blur and to obtain the same resolution along both spatial dimensions of the resulting HS cube. Our continuous leaf scanning approach allows imaging leaves of arbitrary length with minimum frame loss. Once the images are captured, a denoising step is performed to improve HS image quality and spectral profile extraction.
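
    The relation implied by "the stage's speed and the camera's frame rate must be computed" is, for a push-broom setup, simply that the stage advances one cross-track pixel footprint per frame; the numbers below are made up to show the arithmetic.

      # To obtain square pixels with a line-scan (push-broom) camera, the stage
      # must advance exactly one cross-track pixel footprint per frame.
      pixel_footprint_mm = 0.2        # hypothetical spatial sampling across the slit
      frame_rate_hz = 50.0            # hypothetical camera frame rate
      stage_speed_mm_s = pixel_footprint_mm * frame_rate_hz
      print(stage_speed_mm_s)         # 10.0 mm/s gives equal along/across-track sampling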

  16. IMAGE DESCRIPTIONS FOR SKETCH BASED IMAGE RETRIEVAL

    OpenAIRE

    SAAVEDRA RONDO, JOSE MANUEL

    2008-01-01

    Due to the massive use of the Internet together with the proliferation of media devices, content-based image retrieval has become an active discipline in computer science. A common content-based image retrieval approach requires that the user gives a regular image (e.g., a photo) as a query. However, having a regular image as a query may be a serious problem. Indeed, people commonly use an image retrieval system precisely because they do not have the desired image at hand. An easy alternative way t...

  17. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse and Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing the corrupted pixels. The main advantage of the proposed algorithm is that an appropriate filter is used for replacing the corrupted pixel based on the estimation of the noise variance present in the filtering window. This leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering for removal of the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses Least Median Square as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  18. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    Full Text Available We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm-based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system are also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made with the more classical methods of traditional total variation (TV), Perona-Malik (PM), and the more recent D-α-PM method. A further distinction from the other methods is that the number of parameters requiring manual tuning in the proposed algorithm is reduced to essentially one.

  19. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
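
    The alternation at the heart of the Closest Point Method can be sketched as follows, assuming the closest-point map from each voxel to the surface has been precomputed; a Gaussian filter stands in for the explicit diffusion time step, and the Perona-Malik variant is omitted.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates

      def closest_point_diffuse(u, cp_coords, n_steps=10, sigma=1.0):
          """Alternate a diffusion step in the embedding 3-D volume with a
          closest-point interpolation that re-extends the data off the surface.
          `u` is the volumetric field; `cp_coords` (shape (3, nx, ny, nz)) maps
          each voxel to the voxel coordinates of its closest surface point and
          is assumed to be precomputed."""
          for _ in range(n_steps):
              u = gaussian_filter(u, sigma=sigma)           # diffusion in the volume
              u = map_coordinates(u, cp_coords, order=1)    # closest-point extension
          return u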

  20. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry; von Glehn, Ingrid; Macdonald, Colin B.; Marz, Thomas

    2013-01-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  1. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  2. Robust Short-Lag Spatial Coherence Imaging.

    Science.gov (United States)

    Nair, Arun Asokan; Tran, Trac Duy; Bell, Muyinatu A Lediju

    2018-03-01

    Short-lag spatial coherence (SLSC) imaging displays the spatial coherence between backscattered ultrasound echoes instead of their signal amplitudes and is more robust to noise and clutter artifacts when compared with traditional delay-and-sum (DAS) B-mode imaging. However, SLSC imaging does not consider the content of images formed with different lags, and thus does not exploit the differences in tissue texture at each short-lag value. Our proposed method improves SLSC imaging by weighting the addition of lag values (i.e., M-weighting) and by applying robust principal component analysis (RPCA) to search for a low-dimensional subspace for projecting coherence images created with different lag values. The RPCA-based projections are considered to be denoised versions of the originals that are then weighted and added across lags to yield a final robust SLSC (R-SLSC) image. Our approach was tested on simulation, phantom, and in vivo liver data. Relative to DAS B-mode images, the mean contrast, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) improvements with R-SLSC images are 21.22 dB, 2.54, and 2.36, respectively, when averaged over simulated, phantom, and in vivo data and over all lags considered, which corresponds to mean improvements of 96.4%, 121.2%, and 120.5%, respectively. When compared with SLSC images, the corresponding mean improvements with R-SLSC images were 7.38 dB, 1.52, and 1.30, respectively (i.e., mean improvements of 14.5%, 50.5%, and 43.2%, respectively). Results show great promise for smoothing out the tissue texture of SLSC images and enhancing anechoic or hypoechoic target visibility at higher lag values, which could be useful in clinical tasks such as breast cyst visualization, liver vessel tracking, and obese patient imaging.

  3. Denoising of Mechanical Vibration Signals Using Quantum-Inspired Adaptive Wavelet Shrinkage

    Directory of Open Access Journals (Sweden)

    Yan-long Chen

    2014-01-01

    Full Text Available The potential application of a quantum-inspired adaptive wavelet shrinkage (QAWS technique to mechanical vibration signals with a focus on noise reduction is studied in this paper. This quantum-inspired shrinkage algorithm combines three elements: an adaptive non-Gaussian statistical model of dual-tree complex wavelet transform (DTCWT coefficients proposed to improve practicability of prior information, the quantum superposition introduced to describe the interscale dependencies of DTCWT coefficients, and the quantum-inspired probability of noise defined to shrink wavelet coefficients in a Bayesian framework. By combining all these elements, this signal processing scheme incorporating the DTCWT with quantum theory can both reduce noise and preserve signal details. A practical vibration signal measured from a power-shift steering transmission is utilized to evaluate the denoising ability of QAWS. Application results demonstrate the effectiveness of the proposed method. Moreover, it achieves better performance than hard and soft thresholding.

  4. ℓ0TV: A new method for image restoration in the presence of impulse noise

    KAUST Repository

    Yuan, Ganzhao

    2015-06-02

    Total Variation (TV) is an effective and popular prior model in the field of regularization-based image processing. This paper focuses on TV for image restoration in the presence of impulse noise. This type of noise frequently arises in data acquisition and transmission due to many reasons, e.g. a faulty sensor or analog-to-digital converter errors. Removing this noise is an important task in image restoration. State-of-the-art methods such as Adaptive Outlier Pursuit (AOP), which is based on TV with L02-norm data fidelity, only give sub-optimal performance. In this paper, we propose a new method, called L0TV-PADMM, which solves the TV-based restoration problem with L0-norm data fidelity. To effectively deal with the resulting non-convex nonsmooth optimization problem, we first reformulate it as an equivalent MPEC (Mathematical Program with Equilibrium Constraints), and then solve it using a proximal Alternating Direction Method of Multipliers (PADMM). Our L0TV-PADMM method finds a desirable solution to the original L0-norm optimization problem and is proven to be convergent under mild conditions. We apply L0TV-PADMM to the problems of image denoising and deblurring in the presence of impulse noise. Our extensive experiments demonstrate that L0TV-PADMM outperforms state-of-the-art image restoration methods.

  5. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors, and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  6. Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Lokesh Selvaraj

    2014-01-01

    Full Text Available Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and IP-HMM performs the recognition. The novelty at this stage lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  7. Low dose CT image restoration using a database of image patches

    Science.gov (United States)

    Ha, Sungsoo; Mueller, Klaus

    2015-01-01

    Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate to take into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using images, however, is challenging since it requires tedious and error prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also empirically show that it is sufficient to store these patches without anatomical tags since their statistics are sufficiently strong to yield good similarity matches from the database and as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
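
    A toy version of the patch-database lookup described above might look like the following, assuming scikit-learn and a modern NumPy; patches from prior scans are stored untagged and each noisy patch is replaced by its single nearest neighbour. Real systems blend several matches and use a localized similarity scheme, so this is only the bare mechanism.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def patch_database_denoise(noisy, prior_images, patch=8, stride=4):
          """Replace each noisy patch by its nearest neighbour from a database of
          untagged patches taken from prior scans; overlapping estimates are
          averaged.  Border pixels not covered by any patch are left at zero."""
          db = np.concatenate([
              np.lib.stride_tricks.sliding_window_view(im, (patch, patch))
                .reshape(-1, patch * patch)
              for im in prior_images])
          nn = NearestNeighbors(n_neighbors=1).fit(db)
          out = np.zeros_like(noisy, dtype=float)
          cnt = np.zeros_like(out)
          for i in range(0, noisy.shape[0] - patch + 1, stride):
              for j in range(0, noisy.shape[1] - patch + 1, stride):
                  q = noisy[i:i + patch, j:j + patch].reshape(1, -1)
                  idx = nn.kneighbors(q, return_distance=False)[0, 0]
                  out[i:i + patch, j:j + patch] += db[idx].reshape(patch, patch)
                  cnt[i:i + patch, j:j + patch] += 1
          return out / np.maximum(cnt, 1)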

  8. Hemorrhage detection in MRI brain images using images features

    Science.gov (United States)

    Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela

    2013-11-01

    Abnormalities appear frequently on Magnetic Resonance Images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Due to the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes which are not necessarily perceptible by visual inspection but can be detected using texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, the de-noising operation is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) the regions of interest (ROIs) are cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the measurement of features is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test method. P-values are computed for the pairs of features in order to measure their efficacy.

  9. Convolutional auto-encoder for image denoising of ultra-low-dose CT

    Directory of Open Access Journals (Sweden)

    Mizuho Nishio

    2017-08-01

    Conclusion: A neural network with a convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal means and block-matching and 3D filtering.
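
    Since only the conclusion of this record survives, the sketch below shows what a minimal convolutional auto-encoder trained on (ultra-low-dose, standard-dose) patch pairs could look like in Keras; the layer sizes, patch size and training call are illustrative assumptions, not the architecture of the paper.

      from tensorflow import keras
      from tensorflow.keras import layers

      # Tiny convolutional auto-encoder mapping ultra-low-dose patches to
      # standard-dose patches; sizes are illustrative only.
      inp = keras.Input(shape=(64, 64, 1))
      x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
      x = layers.MaxPooling2D(2)(x)
      x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
      x = layers.UpSampling2D(2)(x)
      out = layers.Conv2D(1, 3, activation="linear", padding="same")(x)

      model = keras.Model(inp, out)
      model.compile(optimizer="adam", loss="mse")
      # model.fit(low_dose_patches, standard_dose_patches, epochs=50, batch_size=32)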

  10. Digital Path Approach Despeckle Filter for Ultrasound Imaging and Video

    Directory of Open Access Journals (Sweden)

    Marek Szczepański

    2017-01-01

    Full Text Available We propose a novel filtering technique capable of reducing the multiplicative noise in ultrasound images that is an extension of the denoising algorithms based on the concept of digital paths. In this approach, the filter weights are calculated taking into account the similarity between pixel intensities that belong to the local neighborhood of the processed pixel, which is called a path. The output of the filter is estimated as the weighted average of pixels connected by the paths. The way of creating paths is pivotal and determines the effectiveness and computational complexity of the proposed filtering design. Such a procedure can be effective for different types of noise but fails in the presence of multiplicative noise. To increase the filtering efficiency for this type of disturbance, we introduce some improvements of the basic concept and new classes of similarity functions, and finally extend our technique to the spatiotemporal domain. The experimental results prove that the proposed algorithm provides results comparable with state-of-the-art techniques for multiplicative noise removal in ultrasound images and can be applied to real-time image enhancement of video streams.

  11. Super-resolution for everybody: An image processing workflow to obtain high-resolution images with a standard confocal microscope.

    Science.gov (United States)

    Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne

    2017-02-15

    In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that allows improving the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. Firstly, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing, and the search is then performed on the Hadoop platform in a specially designed parallel way. Extensive experimental results show that the approach is able to retrieve images by content effectively on large-scale clusters and image sets.

  13. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models

    Science.gov (United States)

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today’s increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong’s Hang Seng futures, Japan’s NIKKEI 225 futures, Singapore’s MSCI futures, South Korea’s KOSPI 200 futures, and Taiwan’s TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis. PMID:27248692

  14. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Full Text Available Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  15. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
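
    Sparse coding against a fixed, pre-learned dictionary (skipping dictionary learning at run time, as advocated above) can be sketched with scikit-learn's orthogonal matching pursuit; the dictionary layout, sparsity level and function name are assumptions for illustration.

      import numpy as np
      from sklearn.linear_model import orthogonal_mp

      def denoise_with_fixed_dictionary(noisy_segments, dictionary, n_nonzero=5):
          """Sparse-code each noisy segment against a fixed, pre-learned dictionary
          (atoms are the columns of `dictionary`, shape (segment_len, n_atoms);
          `noisy_segments` has shape (segment_len, n_segments)) and return the
          sparse reconstruction as the denoised signal."""
          codes = orthogonal_mp(dictionary, noisy_segments, n_nonzero_coefs=n_nonzero)
          return dictionary @ codes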

  16. ADI splitting schemes for a fourth-order nonlinear partial differential equation from image processing

    KAUST Repository

    Calatroni, Luca

    2013-08-01

    We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H^-1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.

  17. ADI splitting schemes for a fourth-order nonlinear partial differential equation from image processing

    KAUST Repository

    Calatroni, Luca; Düring, Bertram; Schönlieb, Carola-Bibiane

    2013-01-01

    We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H^-1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.

  18. Computer processing of image captured by the passive THz imaging device as an effective tool for its de-noising

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.; Zhang, Cun-lin; Deng, Chao; Zhao, Yuan-meng; Zhang, Xin

    2012-12-01

    As is well known, passive THz imaging devices have great potential for solving the security problem. Nevertheless, one of the main obstacles to using these devices is the low image quality of the developed passive THz cameras. To change this situation, it is necessary either to improve the engineering characteristics (resolution, sensitivity and so on) of the THz camera or to use computer processing of the image. In our opinion, the latter is preferable because it is less expensive. Below we illustrate the possibility of suppressing the noise of images captured by three passive THz cameras developed at CNU (Beijing, China). After applying computer processing to the image, its quality is enhanced many times. The quality achieved in many cases becomes sufficient for the detection of objects hidden under opaque clothes. We stress that the performance of the developed computer code is high enough and does not restrict the performance of the passive THz imaging device. The obtained results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem. Nevertheless, developing new spatial filters for the treatment of THz images remains an open problem at present.

  19. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends the MS-VSTS to spherical two-plus-one dimensional (2D-1D) data, where the first two dimensions are longitude and latitude, and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma rays in a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).

  20. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  1. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide-format printing industry, this becomes an important issue: e.g. a 1 m2 input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it allows de-noising and enhancing the contours of the image.
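
    As background, ordinary screening in the spatial domain reduces to comparing each pixel against a tiled threshold mask; the sketch below shows only this baseline operation, not the paper's DCT-domain equivalent, and the 4x4 Bayer mask is just an example.

      import numpy as np

      # Example 4x4 Bayer dither mask, scaled to the 0-255 gray range.
      BAYER_4X4 = (np.array([[ 0,  8,  2, 10],
                             [12,  4, 14,  6],
                             [ 3, 11,  1,  9],
                             [15,  7, 13,  5]]) + 0.5) * (256.0 / 16.0)

      def screen_halftone(gray, mask=BAYER_4X4):
          # Binarize a grayscale image by comparing it against a tiled threshold mask.
          h, w = gray.shape
          mh, mw = mask.shape
          tiled = np.tile(mask, (h // mh + 1, w // mw + 1))[:h, :w]
          return (gray > tiled).astype(np.uint8) * 255

      # Illustrative usage on a synthetic horizontal gradient.
      gray = np.tile(np.linspace(0, 255, 256), (64, 1))
      halftoned = screen_halftone(gray)
      print(halftoned.shape, halftoned.dtype)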

  2. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    Science.gov (United States)

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both for the native PT and for the PT with a denoising pre-processing step.

  3. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  4. 2D biological representations with reduced speckle obtained from two perpendicular ultrasonic arrays.

    Science.gov (United States)

    Rodriguez-Hernandez, Miguel A; Gomez-Sacristan, Angel; Sempere-Payá, Víctor M

    2016-04-29

    Ultrasound diagnosis is a widely used medical tool. Among the various ultrasound techniques, ultrasonic imaging is particularly relevant. This paper presents an improvement to a two-dimensional (2D) ultrasonic system using measurements taken from perpendicular planes, where digital signal processing techniques are used to combine one-dimensional (1D) A-scans acquired by individual transducers in arrays located in perpendicular planes. The algorithm used to combine the measurements is improved using the wavelet transform, which adds a denoising step during the generation of the 2D representation. The inclusion of this new denoising stage generates higher-quality 2D representations with a reduced level of speckle. The paper includes different 2D representations obtained from noisy A-scans and compares the improvements obtained by including the denoising stage.

  5. Directory of Open Access Journals (Sweden)

    Khadidja Kaibiche

    2017-10-01

    Full Text Available The conservation and restoration of old stained manuscripts is an activity devoted to the preservation and protection of items of historical and personal significance made mainly from paper, parchment, and skin. We present in this paper a hybrid implementation for the de-noising and restoration of old degraded and stained manuscripts. The implementation combines an orthonormal wavelet thresholding algorithm based on Stein's Unbiased Risk Estimate with a Linear Expansion of Thresholds (OWT SURE-LET) with bilateral filtering. First, the SURE-LET estimator is applied to de-noise images corrupted by white Gaussian noise. In a second step, an improved bilateral filter is introduced to smooth the image and eliminate unnecessary details, with the advantage of preserving edges between image regions. The obtained results show the effectiveness of the proposed synergy compared to the separate approaches, both on gray-scale images and on stained old manuscripts.
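
    A rough illustration of the two-stage idea (wavelet-domain denoising followed by edge-preserving bilateral filtering) is sketched below; it substitutes scikit-image's generic wavelet and bilateral denoisers for the paper's OWT SURE-LET estimator and improved bilateral filter, and uses a standard test image, so it is only a stand-in under those assumptions.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.restoration import denoise_wavelet, denoise_bilateral
      from skimage.util import random_noise

      # Illustrative degraded-manuscript stand-in: a grayscale test image with Gaussian noise.
      clean = img_as_float(data.camera())
      noisy = random_noise(clean, mode='gaussian', var=0.01)

      # Stage 1: wavelet-domain shrinkage (generic shrinkage in place of SURE-LET).
      stage1 = denoise_wavelet(noisy, method='BayesShrink', mode='soft', wavelet='db4')

      # Stage 2: bilateral filtering to remove residual noise while preserving edges.
      stage2 = denoise_bilateral(stage1, sigma_color=0.05, sigma_spatial=3)

      print(float(np.mean((noisy - clean) ** 2)), float(np.mean((stage2 - clean) ** 2)))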

  6. Blind source separation analysis of PET dynamic data: a simple method with exciting MR-PET applications

    Energy Technology Data Exchange (ETDEWEB)

    Oros-Peusquens, Ana-Maria; Silva, Nuno da [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Weiss, Carolin [Department of Neurosurgery, University Hospital Cologne, 50924 Cologne (Germany); Stoffels, Gabrielle; Herzog, Hans; Langen, Karl J [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Shah, N Jon [Institute of Neuroscience and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich (Germany); Jülich-Aachen Research Alliance (JARA) - Section JARA-Brain RWTH Aachen University, 52074 Aachen (Germany)

    2014-07-29

    Denoising of dynamic PET data improves parameter imaging by PET and is gaining momentum. This contribution describes an analysis of dynamic PET data by blind source separation methods and comparison of the results with MR-based brain properties.

  7. Variational contrast enhancement guided by global and local contrast measurements for single-image defogging

    Science.gov (United States)

    Zhou, Li; Bi, Du-Yan; He, Lin-Yuan

    2015-01-01

    The visibility of images captured in foggy conditions is impaired severely by a decrease in the contrast of objects and by veiling with a characteristic gray hue, which may limit the performance of outdoor visual applications. Contrast enhancement together with color restoration is a challenging task for conventional fog-removal methods, as the degrading effect of fog depends largely on scene depth. More recent work instead establishes a variational framework for contrast enhancement based on a physically based analytical model, but this can result in color distortion, dark-patch distortion, or fuzzy features in local regions. Unlike previous work, our method treats the atmospheric veil as a scattering disturbance and formulates defogging as an energy-functional minimization that estimates the direct attenuation, drawing on ideas from image denoising. In addition to a global contrast measurement based on a total variation norm, an additional local measurement is incorporated into the optimization problem in order to recover more local detail and to suppress dark-patch distortion. Moreover, we estimate the airlight precisely by maximization with a geometric constraint and a natural image prior in order to preserve the faithfulness of the scene color. With the estimated direct attenuation and airlight, the fog-free image can be restored. Finally, our method is tested on several benchmark and realistic images evaluated by two assessment approaches. The experimental results show that our proposed method performs well compared with state-of-the-art defogging methods.

  8. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated with the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)

  9. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoW) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies as well as a multiple kernel learning algorithm at the classification stage. Based on the internal evaluation, the codebook size of the BoW representation and the number of mSDA layers may not significantly affect recognition performance. According to the results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and achieves record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at a frame rate ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.
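
    For readers who want a concrete picture of the marginalised denoising step, a minimal single-layer sketch following the closed-form solution published for mDA/mSDA (Chen et al., 2012) is given below; the corruption probability, layer count and random bag-of-words-like data are placeholders, and this is only an illustrative reading of that closed form, not the authors' implementation.

      import numpy as np

      def mda_layer(X, p, reg=1e-5):
          # One marginalised denoising layer.
          # X : (d, n) data matrix, one column per example.
          # p : feature corruption probability, marginalised out in closed form.
          d, n = X.shape
          Xb = np.vstack([X, np.ones((1, n))])                 # bias row, never corrupted
          q = np.concatenate([np.full(d, 1.0 - p), [1.0]])
          S = Xb @ Xb.T                                        # scatter of the clean data
          Q = S * np.outer(q, q)
          np.fill_diagonal(Q, q * np.diag(S))                  # diagonal uses q, not q**2
          P = S[:d, :] * q[None, :]
          W = np.linalg.solve(Q + reg * np.eye(d + 1), P.T).T  # W = P Q^{-1}
          return W, np.tanh(W @ Xb)

      def msda(X, p, n_layers=3):
          # Stack several marginalised layers; each layer feeds on the previous tanh output.
          hiddens, H = [], X
          for _ in range(n_layers):
              _, H = mda_layer(H, p)
              hiddens.append(H)
          return hiddens

      # Placeholder bag-of-words-like data: 300 features, 200 examples.
      rng = np.random.default_rng(0)
      X = rng.poisson(0.3, size=(300, 200)).astype(float)
      print([h.shape for h in msda(X, p=0.5)])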

  10. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic aperture radar (SAR) plays a vital role in acquiring extremely high-resolution radar images and is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate studies and ship navigation. Classification of ice-infested areas provides important features that are further useful for various monitoring processes around ice regions. The main objective of this paper is to classify SAR ice images so as to identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. The first is preprocessing, in which the speckled SAR ice images are denoised using various speckle-removal filters; a comparison is made among these filters to find the best one for speckle removal. The second stage is segmentation, in which different regions are segmented using the K-means and watershed algorithms; a comparison is made between these two algorithms to find the better one for segmenting SAR ice images. The last stage is pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include back-propagation neural networks (BPN), a fuzzy classifier, an adaptive neuro-fuzzy inference system (ANFIS) classifier, and the proposed ANFIS with particle swarm optimization (PSO) classifier; a comparison is made among all these classifiers to determine which is best suited for classifying SAR ice images. Various evaluation metrics are computed separately at each of these three stages.
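
    As an illustration of the speckle-removal stage, a minimal Lee filter (one of the classical speckle filters such a comparison typically includes) is sketched below; the window size, synthetic scene and noise statistics are assumptions for the example.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(img, win=7):
          # Classic Lee speckle filter: blend each pixel with its local mean according to
          # the ratio of local signal variance to total (signal + noise) variance.
          img = img.astype(float)
          mean = uniform_filter(img, win)
          sq_mean = uniform_filter(img ** 2, win)
          var = sq_mean - mean ** 2
          noise_var = var.mean()                 # crude global noise-variance estimate
          weight = var / (var + noise_var + 1e-12)
          return mean + weight * (img - mean)

      # Illustrative usage on synthetic multiplicative speckle.
      rng = np.random.default_rng(1)
      clean = np.full((128, 128), 100.0)
      clean[32:96, 32:96] = 180.0                # a bright "ice" patch
      speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
      filtered = lee_filter(speckled, win=7)
      print(float(speckled.std()), float(filtered.std()))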

  11. Evidence-based cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Shinagare, Atul B.; Khorasani, Ramin [Dept. of Radiology, Brigham and Women's Hospital, Boston (United States)

    2017-01-15

    With the advances in the field of oncology, imaging is increasingly used in the follow-up of cancer patients, leading to concerns about over-utilization. Therefore, it has become imperative to make imaging more evidence-based, efficient, cost-effective and equitable. This review explores the strategies and tools to make diagnostic imaging more evidence-based, mainly in the context of follow-up of cancer patients.

  12. Automatic metal parts inspection: Use of thermographic images and anomaly detection algorithms

    Science.gov (United States)

    Benmoussat, M. S.; Guillaume, M.; Caulier, Y.; Spinnler, K.

    2013-11-01

    A fully automatic approach based on induction thermography and detection algorithms is proposed to inspect industrial metallic parts containing different surface and sub-surface anomalies such as open cracks and open and closed notches of different sizes and depths. A practical experimental setup is developed, in which lock-in and pulsed thermography (LT and PT, respectively) are used to establish a dataset of thermal images for three different mockups. Data cubes are constructed by stacking up the temporal sequence of thermogram images. After reduction of the data space dimension by means of denoising and dimensionality reduction methods, anomaly detection algorithms are applied to the reduced data cubes. The dimensions of the reduced data spaces are calculated automatically using an arbitrary criterion. The results show that, when reduced data cubes are used, anomaly detection algorithms originally developed for hyperspectral data, namely the well-known Reed-Xiaoli (RX) detector and the regularized adaptive RX (RARX), give good detection performance for both surface and sub-surface defects in a non-supervised way.
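
    A minimal global RX detector on a reduced data cube is sketched below to show the core computation, the Mahalanobis distance of each pixel spectrum from the background statistics; the cube shape, synthetic defect and detection threshold are illustrative assumptions.

      import numpy as np

      def rx_detector(cube):
          # Global RX anomaly detector.
          # cube : (rows, cols, bands) data cube, e.g. after dimensionality reduction.
          # Returns a (rows, cols) map of Mahalanobis distances to the background mean.
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands)
          mu = X.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for safety
          centered = X - mu
          scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
          return scores.reshape(rows, cols)

      # Illustrative usage: a synthetic cube with one anomalous "defect" block.
      rng = np.random.default_rng(0)
      cube = rng.normal(0.0, 1.0, size=(64, 64, 10))
      cube[30:33, 30:33, :] += 5.0
      scores = rx_detector(cube)
      detections = scores > np.percentile(scores, 99.5)        # illustrative threshold
      print(int(detections.sum()))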

  13. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM]

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
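
    The sketch below shows the edge-filter-then-phase-correlate idea for recovering a translational offset between two bands; the Sobel edge filter, synthetic band and circular shift are assumptions for the example.

      import numpy as np
      from scipy.ndimage import sobel

      def edge_phase_correlation(img_a, img_b):
          # Estimate the integer (row, col) shift between two images by edge filtering
          # followed by phase correlation of the edge maps.
          def edges(img):
              return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

          A = np.fft.fft2(edges(img_a))
          B = np.fft.fft2(edges(img_b))
          cross_power = np.conj(A) * B
          cross_power /= np.abs(cross_power) + 1e-12
          corr = np.fft.ifft2(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Map peak indices to signed shifts (peaks past the midpoint are negative shifts).
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

      # Illustrative usage: circularly shift a synthetic "band" and recover the offset.
      rng = np.random.default_rng(0)
      band1 = rng.normal(size=(128, 128))
      band2 = np.roll(band1, shift=(5, -8), axis=(0, 1))
      print(edge_phase_correlation(band1, band2))   # expected approximately (5, -8)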

  14. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans.

    Science.gov (United States)

    Cheng, Jie-Zhi; Ni, Dong; Chou, Yi-Hong; Qin, Jing; Tiu, Chui-Mei; Chang, Yeun-Chung; Huang, Chiun-Sheng; Shen, Dinggang; Chen, Chung-Ming

    2016-04-15

    This paper presents a comprehensive study of deep-learning-based computer-aided diagnosis (CADx) for the differential diagnosis of benign and malignant nodules/lesions, which avoids the potential errors caused by inaccurate image processing results (e.g., boundary segmentation) as well as the classification bias resulting from a less robust feature set, as involved in most conventional CADx algorithms. Specifically, the stacked denoising auto-encoder (SDAE) is exploited in two CADx applications: the differentiation of breast ultrasound lesions and of lung CT nodules. The SDAE architecture is equipped with an automatic feature exploration mechanism and noise tolerance, and hence may be suitable for dealing with the intrinsically noisy nature of medical image data from various imaging modalities. To show the advantage of SDAE-based CADx over conventional schemes, two recent conventional CADx algorithms are implemented for comparison. Ten runs of 10-fold cross-validation are conducted to illustrate the efficacy of the SDAE-based CADx algorithm. The experimental results show a significant performance boost by the SDAE-based CADx algorithm over the two conventional methods, suggesting that deep learning techniques can potentially change the design paradigm of CADx systems without the need for explicit design and selection of problem-oriented features.

  15. Utilisation of spatial and temporal correlations in positron emission tomography

    International Nuclear Information System (INIS)

    Sureau, F.

    2008-06-01

    In this thesis we propose, implement, and evaluate algorithms that improve spatial resolution in reconstructed images and reduce data noise in positron emission tomography imaging. These algorithms have been developed for a high-resolution tomograph (HRRT) and applied to brain imaging, but can be used for other tomographs or studies. We first developed an iterative reconstruction algorithm including a stationary and isotropic model of resolution in image space, measured experimentally. We evaluated the impact of such a resolution model in Monte Carlo simulations, physical phantom experiments and two clinical studies by comparing our algorithm with a reference reconstruction algorithm. This study suggests that biases due to partial volume effects are reduced, in particular in the clinical studies. Better spatial and temporal correlations are also found at the voxel level. However, other methods should be developed to further reduce data noise. We then proposed a maximum a posteriori de-noising algorithm that can be applied to dynamic data to temporally de-noise either raw data (sinograms) or reconstructed images. The prior modeled the coefficients, in a wavelet basis, of all the noiseless signals (in an image or sinogram). We compared this technique with a reference de-noising method on replicated simulations. This illustrates the potential benefits of our approach to sinogram de-noising. (author)

  16. Application of multi-scale interval interpolation wavelet in beef image of marbling segmentation

    Institute of Scientific and Technical Information of China (English)

    张彦娥; 魏颖慧; 梅树立; 朱梦婷

    2016-01-01

    The richness of marbling in beef, an important index of beef quality, can be used to characterize the beef fat content. In particular, the area ratio of marbling, the density of large fat particles, and the density of small fat particles are the main indicators in most existing beef grading schemes. Previous research has shown that computer vision and image processing are applicable to the automatic grading of beef marbling and thus play a great role in promoting the development of the beef industry. However, images may be polluted by noise during acquisition, transmission and other processing, which reduces their quality and introduces more uncertainty. In particular, the texture of the beef marbling image becomes blurred and the texture contours are no longer clear, which affects the subsequent texture segmentation and extraction. It is therefore necessary to use a de-noising method with good edge-preserving properties to retain the edge and texture information of the image. In this study, we used a multi-scale interval interpolation wavelet method to de-noise the images and smooth the gray values in order to segment and extract the regions of beef muscle and of large and small fat particles from the beef marbling image. The multi-scale interval interpolation wavelet method is used to solve the underlying partial differential equation and thereby de-noise the images. Specifically, this method achieves edge-preserving smoothing in different object areas, so that the texture and edges of the beef marbling are made clearer. In addition, the external collocation points are chosen adaptively, which greatly improves computational efficiency. An extension method based on a center similarity transformation can be used to effectively mitigate boundary effects. Firstly, on the basis of the objective evaluation index of the image, the PSNR (Peak Signal to Noise Ratio) mean value of the image de-noised

  17. An effective approach to attenuate random noise based on compressive sensing and curvelet transform

    International Nuclear Information System (INIS)

    Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang

    2016-01-01

    Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate random noise attenuation as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise, and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem with an easy implementation and fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edges of seismic events during noise attenuation and has high computational efficiency compared with traditional curvelet thresholding and iterative soft thresholding based denoising methods. In addition, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signal. (paper)
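
    To make the L1-regularized formulation concrete, the sketch below solves the simplest member of this family, a denoising problem with an orthonormal sparsifying transform, for which soft-thresholding of the transform coefficients is the exact minimizer; the DCT is used as a stand-in for the curvelet transform and GPSR solver of the paper, and the synthetic section and regularization weight are assumptions.

      import numpy as np
      from scipy.fft import dctn, idctn

      def soft_threshold(x, lam):
          # Soft-thresholding operator, the proximal map of the L1 norm.
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      def l1_transform_denoise(noisy, lam):
          # Exact minimizer of 0.5*||x - noisy||^2 + lam*||T x||_1 for an orthonormal T.
          coeffs = dctn(noisy, norm='ortho')
          return idctn(soft_threshold(coeffs, lam), norm='ortho')

      # Illustrative usage: a synthetic section of dipping events plus random noise.
      rng = np.random.default_rng(0)
      t = np.arange(256)[:, None]
      x = np.arange(64)[None, :]
      clean = np.sin(2 * np.pi * (t + 2 * x) / 40.0) * np.exp(-((t - 128) / 60.0) ** 2)
      noisy = clean + 0.5 * rng.normal(size=clean.shape)
      denoised = l1_transform_denoise(noisy, lam=0.3)
      print(float(np.mean((noisy - clean) ** 2)), float(np.mean((denoised - clean) ** 2)))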

  18. Content Based Medical Image Retrieval for Histopathological, CT and MRI Images

    Directory of Open Access Journals (Sweden)

    Swarnambiga AYYACHAMY

    2013-09-01

    Full Text Available A content-based approach is followed for medical images. The purpose of this study is to assess the stability of these methods for medical image retrieval. The methods used in color-based retrieval for histopathological images are the color co-occurrence matrix (CCM) and a histogram with meta features. For texture-based retrieval, the gray-level co-occurrence matrix (GLCM) and local binary patterns (LBP) were used. For shape-based retrieval, Canny edge detection and Otsu's method with a multivariable threshold were used. Texture- and shape-based retrieval were implemented using MRI (magnetic resonance) images. The most remarkable characteristic of the article is its content-based approach for each medical imaging modality. Our efforts were focused on the initial visual search. In our experiments, the histogram with meta features in color-based retrieval for histopathological images shows a precision of 60% and a recall of 30%, whereas GLCM in texture-based retrieval for MRI images shows a precision of 70% and a recall of 20%, and shape-based retrieval for MRI images shows a precision of 50% and a recall of 25%. The retrieval results show that this simple approach is successful.

  19. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images

    OpenAIRE

    Boix García, Macarena; Cantó Colomina, Begoña

    2013-01-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet...

  20. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped into 8-bit gray values. This paper presents a new detail-enhancement method for infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer by using modified histogram projection for compression, while adaptive weights derived from the layer decomposition are used as a strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
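
    The layer-decomposition idea can be illustrated with a short sketch: smooth the input to obtain a base layer, compress the base, amplify the detail layer, and recombine. A plain Gaussian filter stands in for the propagated image filter, and the gain and sigma values are placeholders, so this is only a simplified illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def enhance_ir(raw14, sigma=4.0, detail_gain=3.0):
          # Map 14-bit infrared data (values in [0, 16383]) to 8 bits via a
          # base/detail decomposition. A Gaussian filter stands in for the
          # edge-preserving propagated image filter.
          base = gaussian_filter(raw14, sigma=sigma)        # smoothed base layer
          detail = raw14 - base                              # high-frequency detail layer
          b_min, b_max = base.min(), base.max()
          base8 = (base - b_min) / (b_max - b_min + 1e-12) * 255.0
          out = base8 + detail_gain * detail / 64.0          # 64 = 16384/256 range ratio
          return np.clip(out, 0, 255).astype(np.uint8)

      # Illustrative usage on synthetic 14-bit data with a small hot target.
      rng = np.random.default_rng(0)
      scene = 8000.0 + 300.0 * rng.normal(size=(240, 320))
      scene[100:110, 150:160] += 600.0
      out = enhance_ir(scene)
      print(out.dtype, int(out.min()), int(out.max()))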

  1. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    Science.gov (United States)

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value thresholding (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The MRIs restored with SVT denoising show reduced sampling errors compared with direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs is exploited to reconstruct images from significantly undersampled k-space. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the image with sparse representation, and the recovery algorithms in the literature are not capable of fully removing these artifacts. It is therefore necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value thresholding algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed
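
    To show what singular value thresholding does in this denoising role, a minimal sketch is given below: the noisy matrix is decomposed by SVD, the singular values are soft-thresholded (so that only the dominant components survive), and the matrix is rebuilt; the low-rank test matrix, noise level and threshold are illustrative assumptions.

      import numpy as np

      def svt_denoise(M, tau):
          # Singular value thresholding: soft-threshold the singular values of M,
          # which keeps a lower-rank approximation and suppresses noise-like components.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          s_thr = np.maximum(s - tau, 0.0)
          rank = int(np.count_nonzero(s_thr))
          return (U[:, :rank] * s_thr[:rank]) @ Vt[:rank, :], rank

      # Illustrative usage: a low-rank "image" matrix plus Gaussian noise.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(128, 5)) @ rng.normal(size=(5, 128))   # rank-5 ground truth
      noisy = A + 0.5 * rng.normal(size=A.shape)
      denoised, rank = svt_denoise(noisy, tau=12.0)
      print(rank, float(np.linalg.norm(noisy - A)), float(np.linalg.norm(denoised - A)))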

  2. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^{m-1}(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.

  3. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts

    Science.gov (United States)

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-01

    Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.

  4. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  5. Automated intraretinal layer segmentation of optical coherence tomography images using graph-theoretical methods

    Science.gov (United States)

    Roy, Priyanka; Gholami, Peyman; Kuppuswamy Parthasarathy, Mohana; Zelek, John; Lakshminarayanan, Vasudevan

    2018-02-01

    Segmentation of spectral-domain optical coherence tomography (SD-OCT) images facilitates visualization and quantification of sub-retinal layers for diagnosis of retinal pathologies. However, manual segmentation is subjective, expertise dependent, and time-consuming, which limits the applicability of SD-OCT. Efforts are therefore being made to implement active contours, artificial intelligence, and graph search to automatically segment retinal layers with accuracy comparable to that of manual segmentation, to ease clinical decision-making. However, low optical contrast, heavy speckle noise, and pathologies pose challenges to automated segmentation. The graph-based segmentation approach stands out from the rest because of its ability to minimize the cost function while maximising the flow. This study has developed and implemented a shortest-path-based graph-search algorithm for automated intraretinal layer segmentation of SD-OCT images. The algorithm estimates the minimal-weight path between two graph nodes based on their gradients. Boundary position indices (BPI) are computed from the transitions between pixel intensities. The mean difference between the BPIs of two consecutive layers quantifies individual layer thickness, which shows statistically insignificant differences when compared to a previous study [for the overall retina: p = 0.17; for individual layers: p > 0.05 (except one layer: p = 0.04)]. These results substantiate the accurate delineation of seven intraretinal boundaries in SD-OCT images by this algorithm, with a mean computation time of 0.93 seconds (64-bit Windows 10, Core i5, 8 GB RAM). Besides being self-reliant for denoising, the algorithm is further computationally optimized to restrict segmentation within a user-defined region of interest. The efficiency and reliability of this algorithm, even in noisy image conditions, make it clinically applicable.

  6. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency reflects the way humans see an image, and saliency-based segmentation can be helpful in psychovisual image interpretation. Keeping this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments of the image are extracted. The work is carried out for terrestrial images as well as for satellite images. The methodology extracts those segments of the segmented image whose saliency value is greater than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background, respectively, and thus the image is separated. For carrying out this work, a dataset of terrestrial images and WorldView-2 satellite images (sample data) are used. Results show that the saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground-background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.

  7. A novel biomedical image indexing and retrieval system via deep preference learning.

    Science.gov (United States)

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    Traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either consider only pixel-level and low-level features to describe an image or use deep features, but still leave a lot of room for improving both accuracy and efficiency. In this work, we propose a new approach that exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. We exploit current popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, in order to index all the images for finding similarly referenced images, we also introduce preference learning technology to train a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology for the first time into biomedical image retrieval. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with those of popular biomedical image indexing approaches and existing regular image retrieval methods, with detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, experimental results demonstrate that our proposed algorithms outperform the state

  8. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  9. Image inpainting based on stacked autoencoders

    International Nuclear Information System (INIS)

    Shcherbakov, O; Batishcheva, V

    2014-01-01

    Recently we have proposed an algorithm for the problem of image inpainting (filling in occluded or damaged parts of images). This algorithm was based on a spectrum entropy criterion and showed promising results despite using a hand-crafted representation of images. In this paper, we present a method for solving the image inpainting task based on learning an image representation. Some results are shown to illustrate the quality of image reconstruction.

  10. Evidence based medical imaging (EBMI)

    International Nuclear Information System (INIS)

    Smith, Tony

    2008-01-01

    Background: The evidence based paradigm was first described about a decade ago. Previous authors have described a framework for the application of evidence based medicine which can be readily adapted to medical imaging practice. Purpose: This paper promotes the application of the evidence based framework in both the justification of the choice of examination type and the optimisation of the imaging technique used. Methods: The framework includes five integrated steps: framing a concise clinical question; searching for evidence to answer that question; critically appraising the evidence; applying the evidence in clinical practice; and, evaluating the use of revised practices. Results: This paper illustrates the use of the evidence based framework in medical imaging (that is, evidence based medical imaging) using the examples of two clinically relevant case studies. In doing so, a range of information technology and other resources available to medical imaging practitioners are identified with the intention of encouraging the application of the evidence based paradigm in radiography and radiology. Conclusion: There is a perceived need for radiographers and radiologists to make greater use of valid research evidence from the literature to inform their clinical practice and thus provide better quality services

  11. The reduction of image noise and streak artifact in the thoracic inlet during low dose and ultra-low dose thoracic CT

    International Nuclear Information System (INIS)

    Paul, N S; Prezelj, E; Burey, P; Menezes, R J; Blobel, J; Ursani, A; Kashani, H; Siewerdsen, J H

    2010-01-01

    Increased pixel noise and streak artifact reduce CT image quality and limit the potential for radiation dose reduction during CT of the thoracic inlet. We propose to quantify the pixel noise of mediastinal structures in the thoracic inlet during low-dose (LDCT) and ultra-low-dose (uLDCT) thoracic CT and to assess the utility of new software (the quantum denoising system and BOOST3D) in addressing these limitations. Twelve patients had LDCT (120 kV, 25 mAs) and uLDCT (120 kV, 10 mAs) images reconstructed initially using standard mediastinal and lung filters, followed by the quantum denoising system (QDS) to reduce pixel noise and BOOST3D (B3D) software to correct photon starvation noise, as follows: group 1, no QDS, no B3D; group 2, B3D alone; group 3, QDS alone; and group 4, both QDS and B3D. Nine regions of interest (ROIs) were replicated on mediastinal anatomy in the thoracic inlet for each patient, resulting in 3456 data points for calculating pixel noise and attenuation. QDS reduced pixel noise by 18.4% (lung images) and 15.8% (mediastinal images) at 25 mAs. B3D reduced pixel noise by about 8% in the posterior thorax, and in combination there was a 35.5% reduction in effective radiation dose (E) for LDCT (1.63 to 1.05 mSv) in lung images and 32.2% (1.55 to 1.05 mSv) in mediastinal images. The same combination produced a 20.7% reduction in E for uLDCT (0.53 to 0.42 mSv) for lung images and 17.3% (0.51 to 0.42 mSv) for mediastinal images. This quantitative analysis of image quality confirms the utility of dedicated processing software in targeting image noise and streak artifact in thoracic LDCT and uLDCT images of the thoracic inlet. This processing software enables substantial reductions in radiation dose during thoracic LDCT and uLDCT.

  12. Wavelet analysis deformation monitoring data of high-speed railway bridge

    Science.gov (United States)

    Tang, ShiHua; Huang, Qing; Zhou, Conglin; Xu, HongWei; Liu, YinTao; Li, FeiDa

    2015-12-01

    Deformation monitoring data of high-speed railway bridges are inevitably affected by noise pollution. A deformation monitoring point on a high-speed railway bridge was measured over a long period using a Sokkia SDL30 electronic level, yielding a large number of deformation monitoring data that contain considerable noise. Based on the characteristics of these data and using the MATLAB software platform, 120 groups of deformation monitoring data were analysed by wavelet denoising, with the sym6 and db6 wavelet basis functions selected to remove the noise. The original signal was decomposed into three wavelet layers containing high-frequency and low-frequency coefficients; the high-frequency coefficients carry most of the noise. Adaptive soft and hard threshold methods were applied to the high-frequency coefficients, which were then recombined with the low-frequency coefficients to reconstruct the denoised signal. Root Mean Square Error (RMSE) and Signal-to-Noise Ratio (SNR) were used as evaluation indices of denoising: a smaller RMSE and a larger SNR indicate better denoising. The experimental analysis leads to the following conclusions: the db6 wavelet basis function combined with an adaptive soft threshold gives the best denoising performance, with the minimum RMSE and the maximum SNR, and the reconstructed signal is smoother than the original, with the noise removed and the useful signal retained. Compared with the other three methods, this approach not only retains the useful signal in the original data but also achieves the goal of removing noise, so it has strong practical value in actual deformation monitoring.
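
    A minimal sketch of the described procedure, using PyWavelets for a three-level db6 decomposition with soft thresholding of the high-frequency coefficients and reporting RMSE and SNR, is given below; the synthetic settlement signal, noise level and universal threshold are assumptions for illustration.

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet='db6', level=3):
          # Decompose, soft-threshold the detail (high-frequency) coefficients, reconstruct.
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest scale
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
          coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(signal)]

      def rmse(a, b):
          return float(np.sqrt(np.mean((a - b) ** 2)))

      def snr_db(clean, estimate):
          return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2)))

      # Illustrative usage: a slow deformation trend plus measurement noise, 120 samples.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 120)
      trend = 2.0 * np.sin(2 * np.pi * t) + 0.5 * t
      noisy = trend + 0.3 * rng.normal(size=t.size)
      denoised = wavelet_denoise(noisy)
      print(rmse(trend, noisy), rmse(trend, denoised), snr_db(trend, denoised))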

  13. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    Science.gov (United States)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence-median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, inspired by the bilateral filtering used in image denoising, is proposed and combined within the geometric detection framework. As the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.

  14. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded Up Robust Features (SURF) ...; the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background “noise” is minimized. The application is then able to classify these images, helping the user to better...

  15. Content-Based Image Retrieval Based on Electromagnetism-Like Mechanism

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2013-01-01

    Full Text Available Recently, many researchers in the field of automatic content-based image retrieval have devoted a remarkable amount of research to methods for retrieving the images most relevant to a query image. This paper presents a novel algorithm for increasing the precision of content-based image retrieval based on an electromagnetism-like optimization technique. Electromagnetism optimization is a nature-inspired technique that follows a collective attraction-repulsion mechanism, considering each image as an electrical charge. The algorithm is composed of two phases: fitness function measurement and electromagnetism optimization. It is implemented on a database of 8,000 images spread across 80 classes with 100 images in each class. Eight thousand queries are fired on the database, and the overall average precision is computed. Experimental results of the proposed approach show a significant improvement in retrieval performance in terms of precision.

  16. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often have faded colors and reduced contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method assumes that the level of degradation caused by haze is the same within each region, which is similar to Retinex theory, and uses a simple Gaussian filter to obtain a coarse medium transmission. Then, pixel-level fusion is performed between the initial medium transmission and the coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method allows a very fast implementation and achieves better restoration of visibility and color fidelity compared to some state-of-the-art methods.
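
    The dark-channel step can be illustrated with a short sketch that computes the dark channel and the initial medium transmission; the patch size, omega and the crude airlight estimate below are common illustrative choices, not necessarily the paper's exact settings.

      import numpy as np
      from scipy.ndimage import minimum_filter

      def dark_channel(img, patch=15):
          # Dark channel prior: per-pixel minimum over color channels, then a local minimum filter.
          return minimum_filter(img.min(axis=2), size=patch)

      def initial_transmission(img, airlight, patch=15, omega=0.95):
          # Initial medium transmission t(x) = 1 - omega * dark_channel(I / A).
          normalized = img / (airlight[None, None, :] + 1e-12)
          return 1.0 - omega * dark_channel(normalized, patch)

      # Illustrative usage on a synthetic hazy image with values in [0, 1].
      rng = np.random.default_rng(0)
      hazy = np.clip(0.6 + 0.2 * rng.random((120, 160, 3)), 0.0, 1.0)
      dark = dark_channel(hazy)
      # Crude airlight estimate: mean color of the brightest 0.1% dark-channel pixels.
      idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
      airlight = hazy.reshape(-1, 3)[idx].mean(axis=0)
      t0 = initial_transmission(hazy, airlight)
      print(airlight, float(t0.min()), float(t0.max()))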

  17. Color-Based Image Retrieval from High-Similarity Image Databases

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Carstensen, Jens Michael

    2003-01-01

    Many image classification problems can fruitfully be thought of as image retrieval in a "high similarity image database" (HSID) characterized by being tuned towards a specific application and having a high degree of visual similarity between entries that should be distinguished. We introduce a method for HSID retrieval using a similarity measure based on a linear combination of Jeffreys-Matusita (JM) distances between distributions of color (and color derivatives) estimated from a set of automatically extracted image regions. The weight coefficients are estimated based on optimal retrieval performance. Experimental results on the difficult task of visually identifying clones of fungal colonies grown in a Petri dish and categorization of pelts show a high retrieval accuracy of the method when combined with standardized sample preparation and image acquisition.

  18. J-substitution algorithm in magnetic resonance electrical impedance tomography (MREIT): phantom experiments for static resistivity images.

    Science.gov (United States)

    Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun

    2002-06-01

    Recently, a new static resistivity image reconstruction algorithm has been proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have previously been reported, via computer simulation, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT, namely the low sensitivity of boundary measurements to changes in internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.

  19. Characterization of statistical prior image constrained compressed sensing (PICCS): II. Application to dose reduction

    International Nuclear Information System (INIS)

    Lauzier, Pascal Thériault; Chen Guanghong

    2013-01-01

    Purpose: The ionizing radiation imparted to patients during computed tomography exams is raising concerns. This paper studies the performance of a scheme called dose reduction using prior image constrained compressed sensing (DR-PICCS). The purpose of this study is to characterize the effects of a statistical model of x-ray detection in the DR-PICCS framework and its impact on spatial resolution. Methods: Both numerical simulations with known ground truth and an in vivo animal dataset were used in this study. In the numerical simulations, a phantom was simulated with Poisson noise and with varying levels of eccentricity. Both the conventional filtered backprojection (FBP) and the PICCS algorithms were used to reconstruct images. In the PICCS reconstructions, the prior image was generated using two different denoising methods: a simple Gaussian blur and a more advanced diffusion filter. Due to the lack of shift-invariance in nonlinear image reconstruction such as that studied in this paper, the concept of local spatial resolution was used to study the sharpness of a reconstructed image. Specifically, a directional metric of image sharpness, the so-called pseudo point spread function (pseudo-PSF), was employed to investigate local spatial resolution. Results: In the numerical studies, the pseudo-PSF was reduced from twice the voxel width in the prior image down to less than 1.1 times the voxel width in DR-PICCS reconstructions when the statistical model was not included. At the same noise level, when statistical weighting was used, the pseudo-PSF width in DR-PICCS reconstructed images varied between 1.5 and 0.75 times the voxel width depending on the direction along which it was measured. However, this anisotropy was largely eliminated when the prior image was generated using diffusion filtering; the pseudo-PSF width was reduced to below one voxel width in that case. In the in vivo study, a fourfold improvement in CNR was achieved while qualitatively maintaining sharpness

  20. Mammographic Image Analysis of Breast Using Neural Network

    Directory of Open Access Journals (Sweden)

    Lesa MAMBWE

    2015-07-01

    Full Text Available This paper discusses the various stages of detecting tumours in breast mammogram images. A neural network algorithm is applied to obtain a complete classification of the tumour as normal or abnormal. The most important step in obtaining the classification is feature extraction, in which a small number of discriminative features, such as first-order statistical intensities and gradients, are extracted. Image pre-processing is essential prior to image segmentation in order to obtain accurate segmentation, so that mass detection can be carried out. The processes involved in these stages include global equalization transformation, denoising, binarization, breast orientation determination and pectoral muscle suppression. Feature difference matrices are created from five features extracted from a suspicious region of interest (ROI). The grey-level co-occurrence matrix (GLCM) aids in obtaining statistical features such as correlation, energy, entropy and homogeneity; other statistical features obtained are area, moment, variance, entropy and standard deviation. The neural network technique then yields the classification of the mammograms as normal or abnormal.
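
    A minimal sketch of the GLCM feature step using scikit-image is given below (the functions are named graycomatrix/graycoprops in recent scikit-image releases and greycomatrix/greycoprops in older ones); the synthetic ROI and the GLCM distances, angles and quantization level are placeholders.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

      def glcm_features(roi, levels=32):
          # Texture features (correlation, energy, homogeneity, contrast) from a grayscale ROI.
          q = (roi.astype(float) / roi.max() * (levels - 1)).astype(np.uint8)  # quantize
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          return {prop: float(graycoprops(glcm, prop).mean())
                  for prop in ('correlation', 'energy', 'homogeneity', 'contrast')}

      def first_order_features(roi):
          # Simple first-order statistics from the same ROI.
          roi = roi.astype(float)
          return {'mean': float(roi.mean()), 'variance': float(roi.var()),
                  'std': float(roi.std())}

      # Illustrative usage on a synthetic suspicious ROI.
      rng = np.random.default_rng(0)
      roi = rng.normal(120, 25, size=(64, 64)).clip(0, 255).astype(np.uint8)
      print(glcm_features(roi))
      print(first_order_features(roi))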