WorldWideScience

Sample records for based image denoising

  1. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because images contain noise points, any denoising operation risks altering the original information of non-noise pixels. In this paper, a noise detection algorithm based on fractional calculus is proposed for denoising. First, the image is convolved with directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps. Next, a logical product is taken to acquire a noise-position image. Comparisons of visual effect and evaluation parameters after processing show that the proposed noise-detection-based denoising algorithm outperforms traditional methods in both subjective and objective respects.
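
    The detect-then-denoise idea reads directly as code. Below is a minimal sketch in which ordinary integer-order 3x3 direction masks stand in for the paper's fractional-calculus masks; the mask values and the threshold factor k are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def detect_noise(img, k=2.0):
    """Flag pixels whose gradient response is large in every direction.

    Sketch: convolve with directional gradient masks, threshold each
    response map against a multiple of its mean gray level, then take
    the logical product (AND) across directions as the noise-position map.
    """
    masks = [
        np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float),   # horizontal
        np.array([[0, -1, 0], [0, 2, 0], [0, -1, 0]], float),   # vertical
        np.array([[-1, 0, 0], [0, 2, 0], [0, 0, -1]], float),   # diagonal
        np.array([[0, 0, -1], [0, 2, 0], [-1, 0, 0]], float),   # anti-diagonal
    ]
    noise_map = np.ones(img.shape, bool)
    for m in masks:
        response = np.abs(convolve(img.astype(float), m))
        noise_map &= response > k * response.mean()  # per-direction detection
    return noise_map
```

    Denoising would then touch only the flagged pixels (for example, replacing them with a local median), leaving non-noise pixels unchanged, which is the point of detection-based filtering.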

  2. MMSE-based MAP estimation for image denoising

    Science.gov (United States)

    Om, Hari; Biswas, Mantosh

    2014-04-01

    Denoising of a natural image corrupted by additive white Gaussian noise (AWGN) is a classical problem in image processing. The NeighShrink [17,18], LAWML [19], BiShrink [20,21], IIDMWT [23], IAWDMNC [25], and GIDMNWC [24] denoising algorithms remove noise from the noisy wavelet coefficients by thresholding, retaining only the large coefficients and setting the rest to zero. The threshold generally depends on the variance, image size, and number of image decomposition levels. These methods are not very effective because they are not spatially adaptive, i.e., their parameters do not vary smoothly within the neighborhood window. Our proposed method overcomes this weakness by using minimum mean square error (MMSE) based maximum a posteriori (MAP) estimation. In this paper, we adapt parameters such as the variance of the classical MMSE estimator within the neighborhood window of the noisy wavelet coefficients to remove the noise effectively. We demonstrate experimentally that our method outperforms the NeighShrink, LAWML, BiShrink, IIDMWT, IAWDMNC, and GIDMNWC methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and that it is particularly effective for highly corrupted natural images.
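
    The locally adaptive MMSE shrinkage the abstract describes has a classical Wiener-style closed form. A minimal sketch, assuming the noise standard deviation sigma_n is known and using a square averaging window whose size is illustrative rather than the paper's choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mmse_shrink(coeffs, sigma_n, window=7):
    """Locally adaptive MMSE shrinkage of a noisy wavelet subband.

    The local signal variance is estimated inside a sliding window and
    the classical MMSE (Wiener) gain sx^2 / (sx^2 + sn^2) is applied.
    """
    local_power = uniform_filter(coeffs ** 2, size=window)   # E[y^2] locally
    sig_var = np.maximum(local_power - sigma_n ** 2, 0.0)    # sx^2 estimate
    return coeffs * sig_var / (sig_var + sigma_n ** 2 + 1e-12)
```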

  3. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    Science.gov (United States)

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2017-09-01

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from camera raw images. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be applied to the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed that finds similar blocks using a type-2 fuzzy logic system (FLS). These similar blocks are then averaged with weights determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. Experimental results show that the proposed algorithm effectively improves image denoising performance; on average it outperforms two state-of-the-art image denoising algorithms in both subjective and objective measures.
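
    For reference, the variance stabilization step mentioned above is commonly performed with the Anscombe transform, which maps Poisson-like counts to approximately unit-variance Gaussian data. A sketch follows; real camera raw noise is Poisson-Gaussian, so a generalized Anscombe transform and an unbiased inverse would be used in practice.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson data -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (practical pipelines use an unbiased inverse)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Typical pipeline: stabilize, apply any Gaussian denoiser, invert.
# denoised = inverse_anscombe(gaussian_denoiser(anscombe(raw)))
```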

  4. Image denoising based on wavelet cone of influence analysis

    Science.gov (United States)

    Pang, Wei; Li, Yufeng

    2009-11-01

    Donoho et al. proposed a wavelet-transform-based thresholding method for denoising, and its application to image denoising has indeed been extremely successful. However, the method assumes the noise is additive white Gaussian noise, and it is not effective against impulse noise. In this paper, a new image denoising algorithm based on wavelet cone of influence (COI) analysis is proposed, which can effectively remove impulse noise and preserve image edges via the undecimated discrete wavelet transform (UDWT). Furthermore, combined with traditional wavelet thresholding denoising, it can also suppress a wider range of noise types, such as Gaussian, impulse, Poisson, and other mixed noise. Experimental results illustrate the advantages of this method.
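
    The Donoho-style baseline the abstract starts from can be summarized in a few lines: estimate the noise level from the finest diagonal subband, apply the universal threshold, and reconstruct. A sketch using PyWavelets, with an illustrative wavelet family and decomposition level:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(img, wavelet="db4", level=3):
    """Classical wavelet soft thresholding for additive Gaussian noise."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband: sigma = MAD / 0.6745
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, t, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)[: img.shape[0], : img.shape[1]]
```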

  5. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc, are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods exist for efficient minimization of regularized energy functions; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
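
    A minimal sketch of the spectral gradient idea on one such energy: gradient descent on a Huber-regularized objective with a Barzilai-Borwein (spectral) step length. Boundary handling and all parameter values are simplifying assumptions, not the authors' algorithms.

```python
import numpy as np

def huber_grad(t, delta=0.05):
    """Derivative of the Huber potential: quadratic near zero, linear tails."""
    return np.clip(t, -delta, delta) / delta

def denoise_spectral_gradient(f, lam=0.1, iters=100):
    """Minimize 0.5*||u - f||^2 + lam * sum(huber(grad u)) with BB steps."""
    u, u_old, g_old, alpha = f.copy(), None, None, 1e-3
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])  # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        px, py = huber_grad(ux), huber_grad(uy)
        # Divergence of the flux (periodic wrap kept for brevity).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        g = (u - f) - lam * div                    # gradient of the energy
        if g_old is not None:
            s, y = (u - u_old).ravel(), (g - g_old).ravel()
            alpha = (s @ s) / max(s @ y, 1e-12)    # BB1 spectral step length
        u_old, g_old = u.copy(), g.copy()
        u = u - alpha * g
    return u
```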

  6. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    An adaptive image denoising method based on local parameters optimization. ... the computations, and the directional decomposition is done using directional filter banks (DFB). Then, Donoho and Johnstone's threshold is used to modify the coefficients, which in turn provides the noise-free image on applying the ...

  7. Deep Marginalized Sparse Denoising Auto-Encoder for Image Denoising

    Science.gov (United States)

    Ma, Hongqiang; Ma, Shiping; Xu, Yuelei; Zhu, Mingming

    2018-01-01

    The Stacked Sparse Denoising Auto-Encoder (SSDA) has been successfully applied to image denoising. As a deep network with powerful feature learning ability, SSDA is superior to traditional image denoising algorithms. However, the algorithm has high computational complexity and a slow convergence rate in training. To address this limitation, we present an image denoising method based on a Deep Marginalized Sparse Denoising Auto-Encoder (DMSDA). The loss function of the Sparse Denoising Auto-Encoder is marginalized so that it satisfies both sparseness and marginality. Experimental results show that the proposed algorithm not only outperforms SSDA in convergence speed and training time, but also achieves better denoising performance than current leading denoising algorithms, in both subjective and objective evaluations.
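
    For orientation, one layer of a plain sparse denoising auto-encoder (the building block whose loss the paper marginalizes) can be sketched in PyTorch as below; the layer sizes, corruption level, and L1 sparsity penalty are illustrative, and the marginalization itself is not reproduced.

```python
import torch
import torch.nn as nn

class SparseDAE(nn.Module):
    """A single sparse denoising auto-encoder layer (illustrative sizes)."""
    def __init__(self, n_in=64, n_hidden=256):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return self.dec(h), h

def train_step(model, opt, clean, noise_std=0.1, sparsity_weight=1e-3):
    noisy = clean + noise_std * torch.randn_like(clean)   # corrupt the input
    recon, h = model(noisy)
    # Reconstruct the *clean* patches; the L1 term encourages sparse codes.
    loss = nn.functional.mse_loss(recon, clean) + sparsity_weight * h.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```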

  8. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    ... ML); peak signal-to-noise ratio (PSNR). Image denoising is one of the major research topics in image processing. An efficient image denoising method is one in which a compromise has to be found between noise reduction ...

  9. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Full Text Available Focused on the issue that conventional remote sensing image classification methods have run into accuracy bottlenecks, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of Denoising Autoencoders. Then, with noised input, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn to obtain more robust representations; the features are then refined by supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation; the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, which are higher than those of the Support Vector Machine and the BP neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.

  10. Image denoising method based on FPGA in digital video transmission

    Science.gov (United States)

    Xiahou, Yaotao; Wang, Wanping; Huang, Tao

    2017-11-01

    During image acquisition and transmission, the acquisition equipment and methods subject the image to varying degrees of interference, which reduces image quality and affects subsequent processing. Image filtering and enhancement are therefore particularly important. Traditional image denoising algorithms smooth the image while removing noise, so that image details are lost. To improve image quality while preserving detail, this paper proposes an improved filtering algorithm based on edge detection, a Gaussian filter, and a median filter. The method not only reduces noise effectively but also preserves image details relatively well; an FPGA implementation scheme of the filter algorithm is also given.
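
    A minimal sketch of the edge-guided combination, assuming a Sobel edge map and an illustrative blending rule; the abstract does not specify the actual combination logic or the FPGA data path.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, sobel

def edge_guided_denoise(img, edge_thresh=0.2):
    """Keep detail on strong edges; smooth flat regions more aggressively."""
    img = img.astype(float)
    edges = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    edges /= edges.max() + 1e-12
    smooth = gaussian_filter(img, sigma=1.0)   # good on Gaussian-like noise
    med = median_filter(img, size=3)           # good on impulse noise
    # Pixels on strong edges keep their original value; flat areas are filtered.
    return np.where(edges > edge_thresh, img, 0.5 * (smooth + med))
```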

  11. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Full Text Available Interest in using fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of the fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR on different types of images. Experiments based on visual perception and peak signal-to-noise ratio values show that the denoising improvements are competitive with the standard Gaussian and Wiener filters.

  12. An Edge-Preserved Image Denoising Algorithm Based on Local Adaptive Regularization

    Directory of Open Access Journals (Sweden)

    Li Guo

    2016-01-01

    Full Text Available Image denoising methods are often based on the minimization of an appropriately defined energy function. Many gradient-dependent energy functions, such as the Potts model and total variation denoising, regard the image as a piecewise constant function. In these methods, some important information, such as edge sharpness and location, is well preserved, but detailed image features like texture are often compromised in the process of denoising. For this reason, an image denoising method based on local adaptive regularization is proposed in this paper, which adaptively adjusts the degree of denoising by adding a spatially variable fidelity term, so as to better preserve fine-scale image features. Experimental results show that the proposed denoising method achieves state-of-the-art subjective visual quality, and the signal-to-noise ratio (SNR) is objectively improved by 0.3–0.6 dB.
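
    The spatially variable fidelity term can be sketched as a per-pixel weight on the data term of a TV-style gradient flow; the rule the paper uses for choosing the weights is not reproduced here.

```python
import numpy as np

def tv_denoise_adaptive(f, lam_map, iters=200, dt=0.1, eps=1e-6):
    """Gradient descent on TV energy with a spatially variable fidelity weight.

    lam_map: per-pixel weight; large where detail must be kept (less
    denoising), small in flat regions (more denoising).
    """
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        curv = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (curv - lam_map * (u - f))  # TV flow plus data fidelity
    return u
```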

  13. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Full Text Available Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called the convolutional denoising sparse autoencoder (CDSAE) is proposed, based on the theory of the visual attention mechanism and deep learning methods. First, a saliency detection method is used to obtain training samples for unsupervised feature learning. Next, these samples are fed to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Since prior knowledge of a specific task is generally helpful for solving it, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective for image classification. They also show that none of the three components (local contrast normalization, SPP fused with center-bias prior, and l2 vector normalization) can be excluded from the proposed approach: they jointly improve image representation and classification performance.

  14. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    Science.gov (United States)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads out and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits the interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple closed-form solution, which we use for numerical result generation, and a second integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over performance obtained with the baseline SSM model.
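
    The bivariate shrinkage rule whose deadzone the paper optimizes has a well-known closed form: a child coefficient is shrunk jointly with its parent at the next coarser scale, and pairs whose joint magnitude falls below sqrt(3)*sigma_n^2/sigma are zeroed, which is exactly the deadzone. A sketch of the baseline rule, not the paper's optimized variant:

```python
import numpy as np

def bishrink(child, parent, sigma_n, sigma_signal):
    """Sendur-Selesnick bivariate shrinkage of a wavelet coefficient.

    w_hat = y * max(0, r - sqrt(3)*sigma_n^2/sigma) / r,
    where r = sqrt(y_child^2 + y_parent^2) couples the two scales.
    """
    r = np.sqrt(child ** 2 + parent ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma_signal, 0.0)
    return child * gain / (r + 1e-12)
```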

  15. Markov chain Monte Carlo sampling based terahertz holography image denoising.

    Science.gov (United States)

    Chen, Guanghao; Li, Qi

    2015-05-10

    Terahertz digital holography has attracted much attention in recent years. This technology combines the strong transmittance of terahertz radiation with the unique features of digital holography. Nonetheless, the low clarity of the captured images has hampered the popularization of this imaging technique. In this paper, we apply a digital image denoising technique to our multiframe superposed images. The noise suppression model is formulated as Bayesian least-squares estimation and is solved with Markov chain Monte Carlo (MCMC) sampling. In this algorithm, a weighted mean filter with a Gaussian kernel is first applied to the noisy image, and the contrast of the image is then restored to its former level by a nonlinear contrast transform. By randomly walking on the preprocessed image, the MCMC-based filter keeps collecting samples, assigning them weights by similarity assessment, and constructs multiple sample sequences. Finally, these sequences are used to estimate the value of each pixel. Our algorithm shares some good qualities with nonlocal means filtering and the conditional-sampling algorithm proposed by Wong et al. [Opt. Express 18, 8338 (2010), doi:10.1364/OE.18.008338], such as good uniformity, and moreover shows better performance in structure preservation, as demonstrated in numerical comparisons using the structural similarity index measure and the peak signal-to-noise ratio.

  16. Image denoising using cloud images

    Science.gov (United States)

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2013-09-01

    Image denoising aims to recover a digital image from its noisy version by exploring the statistical features inside the given noisy image. Most denoising methods perform well at low noise levels but lose efficiency at higher ones. In this paper, we propose a novel image denoising method which restores an image by exploiting the correlations between the noisy image and images retrieved from the cloud. Given a noisy image, we first retrieve relevant images based on feature-level similarity. These images are then geometrically aligned to the noisy image to increase global statistical correlation. Using the aligned images as references, we recover the image with patch-level noise removal. For each noisy patch, we first retrieve similar patches from the references and stack these patches (including the noisy one) into a three-dimensional (3D) group. We then obtain noise-free (NF) patches by collaborative filtering over the 3D groups. These recovered NF patches are aggregated together, producing the desired NF image. Experimental results demonstrate that our scheme achieves significantly better results than state-of-the-art methods in terms of both objective and subjective quality.

  17. Image denoising using ridgelet shrinkage

    Science.gov (United States)

    Kumar, Pawan; Bhurchandi, Kishore

    2015-03-01

    Protecting fine details and edges while denoising digital images is a challenging area of research due to the changing characteristics of both noise and signal. Denoising removes noise from corrupted images, but in the process fine details like weak edges and textures are harmed. In this paper we propose an algorithm based on the Ridgelet transform to denoise images while protecting fine details. We apply cycle spinning to the Ridgelet coefficients together with soft thresholding, and name the algorithm Ridgelet Shrinkage, in order to suppress noise and preserve details. The Ridgelet projections filter out noise while protecting details, and the shrinkage suppresses noise further. The proposed algorithm outperforms the Wavelet Shrinkage and Non-local (NL) means denoising algorithms in terms of Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM), both numerically and visually.
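
    Cycle spinning, as used above, averages a shift-variant denoiser over circular shifts of the image. A generic sketch; since standard libraries do not provide a ridgelet transform, any transform-domain denoiser (for example, the wavelet soft-thresholding sketch earlier in this listing) can stand in for the ridgelet shrinkage.

```python
import numpy as np

def cycle_spin(img, denoise_fn, max_shift=4):
    """Average a shift-variant denoiser over circular shifts."""
    acc = np.zeros_like(img, dtype=float)
    for dy in range(max_shift):
        for dx in range(max_shift):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = denoise_fn(shifted)  # any transform-domain denoiser
            acc += np.roll(np.roll(out, -dy, axis=0), -dx, axis=1)
    return acc / (max_shift * max_shift)
```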

  18. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    Full Text Available In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Because the choice of wavelet threshold function limits the effectiveness of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function. We then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and wavelet soft threshold de-noising methods. The new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method, and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carried out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation, and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, laying the foundation for apple harvesting robots working at night.

  19. Image de-noising based on mathematical morphology and multi-objective particle swarm optimization

    Science.gov (United States)

    Dou, Liyun; Xu, Dan; Chen, Hao; Liu, Yicheng

    2017-07-01

    To address the problem of image de-noising, an efficient approach based on mathematical morphology and multi-objective particle swarm optimization (MOPSO) is proposed in this paper. First, a series-parallel compound morphological filter is constructed based on open-close (OC) operations, with structuring elements of different sizes selected to eliminate as much noise as possible in the series link. Then, MOPSO is applied to solve the parameter settings of the multiple structuring elements. Simulation results show that our algorithm achieves superior performance compared with some traditional de-noising algorithms.
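
    The compound open-close filtering can be sketched with standard grayscale morphology; the structuring-element sizes and combination weights below are uniform placeholders for the values the paper selects with MOPSO.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def oc_co_filter(img, sizes=(3, 5, 7), weights=None):
    """Series-parallel open-close / close-open filtering at several scales."""
    outputs = []
    for s in sizes:
        oc = grey_closing(grey_opening(img, size=s), size=s)  # open-close
        co = grey_opening(grey_closing(img, size=s), size=s)  # close-open
        outputs.append(0.5 * (oc + co))  # average the two branches
    weights = weights or [1.0 / len(sizes)] * len(sizes)
    return sum(w * o for w, o in zip(weights, outputs))
```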

  20. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising step. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm which works directly on the CFA data, using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations present in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
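
    The core PCA shrinkage step can be sketched as follows for a stack of vectorized patches drawn from a local supporting window; this generic local-PCA sketch ignores the CFA mosaic structure that the paper's method additionally handles.

```python
import numpy as np

def pca_denoise_patches(patches, sigma_n):
    """Wiener-like shrinkage of patch coefficients in a local PCA basis.

    patches: (N, d) array of vectorized patches from one supporting window.
    """
    mean = patches.mean(axis=0)
    X = patches - mean
    _, eigvecs = np.linalg.eigh(X.T @ X / len(X))    # local PCA basis
    Y = X @ eigvecs                                  # transform coefficients
    sig_var = np.maximum(Y.var(axis=0) - sigma_n ** 2, 0.0)
    Y *= sig_var / (sig_var + sigma_n ** 2 + 1e-12)  # per-component shrinkage
    return Y @ eigvecs.T + mean
```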

  1. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected, and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.

  2. A New Low-Rank Representation Based Hyperspectral Image Denoising Method for Mineral Mapping

    Directory of Open Access Journals (Sweden)

    Lianru Gao

    2017-11-01

    Full Text Available Hyperspectral imaging technology has been used for geological analysis for many years, with mineral mapping the dominant application for hyperspectral images (HSIs). The very high spectral resolution of HSIs enables the identification and diagnosis of different minerals with detection accuracy far beyond that offered by multispectral images. However, HSIs are inevitably corrupted by noise during the acquisition and transmission processes. The presence of noise may significantly degrade the quality of the extracted mineral information. In order to improve the accuracy of mineral mapping, denoising is a crucial pre-processing task. Leveraging the low-rank and self-similarity properties of HSIs, this paper proposes a state-of-the-art HSI denoising algorithm that implements two main steps: (1) signal subspace learning via fine-tuned Robust Principal Component Analysis (RPCA); and (2) denoising the images associated with the representation coefficients, with respect to an orthogonal subspace basis, using BM3D, a self-similarity-based state-of-the-art denoising algorithm. Accordingly, the proposed algorithm is named Hyperspectral Denoising via Robust principal component analysis and Self-similarity (HyDRoS), which can be considered a supervised version of FastHyDe. The effectiveness of HyDRoS is evaluated in a series of mineral mapping experiments using noise-reduced AVIRIS and Hyperion HSIs. In these experiments, the proposed denoiser yielded systematically state-of-the-art performance.

  3. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    Science.gov (United States)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed over the past few decades. Recently, image denoising using deep learning has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate clinical applications by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiographs as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms, including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate its performance on chest radiographs acquired from real patients. The results demonstrate that the proposed algorithm achieves a superior noise-reduction effect in chest radiographs compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction; for example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using a CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiographs. We expect the proposed algorithm to be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.

  4. Spatiospectral denoising framework for multispectral optoacoustic imaging based on sparse signal representation.

    Science.gov (United States)

    Tzoumas, Stratis; Rosenthal, Amir; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2014-11-01

    One of the major challenges in dynamic multispectral optoacoustic imaging is its relatively low signal-to-noise ratio, which often requires repetitive signal acquisition and averaging, thus limiting the imaging rate. The development of denoising methods that avoid the need for signal averaging over time is an important goal for advancing the dynamic capabilities of the technology. In this paper, a denoising method is developed for multispectral optoacoustic imaging which exploits the implicit sparsity of multispectral optoacoustic signals both in space and in spectrum. Noise suppression is achieved by applying thresholding on a combined wavelet-Karhunen-Loève representation in which multispectral optoacoustic signals appear particularly sparse. The method is based on inherent characteristics of multispectral optoacoustic signals of tissues, offering promise for general application in different incarnations of multispectral optoacoustic systems. The performance of the proposed method is demonstrated on mouse images acquired in vivo for two common additive noise sources: time-varying parasitic signals and white noise. In both cases, the proposed method shows considerable improvement in image quality in comparison to previously published denoising strategies that do not consider multispectral information. The suggested methodology achieves noise suppression with minimal signal loss and considerably outperforms previous denoising strategies, holding promise for advancing the dynamic capabilities of multispectral optoacoustic imaging while retaining image quality.

  5. Low-light level image de-noising algorithm based on PCA

    Science.gov (United States)

    Miao, Zhuang; Wang, Xiuqin; Yin, Panqiang; Lu, Dongming

    2014-11-01

    A de-noising method based on PCA (Principal Component Analysis) is proposed to suppress the noise of LLL (Low-Light Level) images. First, the feasibility of de-noising with PCA is analyzed in detail: since the image data are correlated in time and space, they are retained in the principal components, while the noise, being uncorrelated in both time and space, is removed with the minor components. Then some LLL images are used in experiments to confirm the proposed method. The number of LLL image samples that leads to the best de-noising effect is given. Several performance parameters are calculated and the results are analyzed in detail. For comparison with the proposed method, some traditional de-noising algorithms are used to suppress the noise of the LLL images. Judging from the results, the proposed method de-noises more effectively than the traditional algorithms. Theoretical analysis and experimental results show that the proposed method is reasonable and efficient.

  6. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two main contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which better preserves the local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-learned dictionary and propose an iterative Split Bregman method to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.

  7. Multicomponent MR Image Denoising

    Directory of Open Access Journals (Sweden)

    José V. Manjón

    2009-01-01

    Full Text Available Magnetic resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local principal component analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, achieving higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods over synthetic and real clinical multicomponent MR images, showing improved performance in all cases analyzed.

  8. Patch Similarity Modulus and Difference Curvature Based Fourth-Order Partial Differential Equation for Image Denoising

    Directory of Open Access Journals (Sweden)

    Yunjiao Bai

    2015-01-01

    Full Text Available The traditional fourth-order nonlinear diffusion denoising model suffers from isolated speckles and loss of fine details in the processed image. For this reason, a new fourth-order partial differential equation based on the patch similarity modulus and the difference curvature is proposed for image denoising. First, based on the intensity similarity of neighboring pixels, this paper presents a new edge indicator called the patch similarity modulus, which is strongly robust to noise. Furthermore, the difference curvature, which can effectively distinguish between edges and noise, is incorporated into the denoising algorithm to steer the diffusion process by adaptively adjusting the size of the diffusion coefficient. The experimental results show that the proposed algorithm not only preserves edges and texture details but also avoids isolated speckles and the staircase effect while filtering out noise, and it performs particularly well on images with abundant details. Additionally, the subjective visual quality and objective evaluation indices of the denoised images obtained by the proposed algorithm are higher than those of related methods.

  9. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis, such as tissue classification, segmentation, and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes are extracted from four categories: 1) basic image statistics; 2) the gray-level co-occurrence matrix (GLCM); 3) the gray-level run-length matrix (GLRLM); and 4) Tamura texture features. To rank the discriminative power of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to this ranking. The selected optimal features were then incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images covering various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to a manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  10. Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction.

    Science.gov (United States)

    Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu

    2015-01-01

    Reducing the X-ray tube current is one of the widely used methods for decreasing radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades as a result. To improve the quality of reconstructed images, a dictionary learning based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the SNR of the projection data declines sharply.

  11. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), using a quadratic gray-level function. A quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, the enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using linear gray-level functions, the enhanced fractal denoising algorithm improves the quality of the restored image efficiently.

  12. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. The results of comparative experiments indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values comparable to those of the block-matching 3D transformation (BM3D) method.

  13. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification handles a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels in the image has proven beneficial for the interpretation of image content, increasing classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase classification accuracy when a classification method is applied, computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by thresholding. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free component. The computational cost of the classifiers, as well as of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, by computing on NVIDIA multi-GPU platforms.

  14. Hyperspectral light field image denoising

    Science.gov (United States)

    Liu, Yun; Qi, Na; Cheng, Zhen; Liu, Dong; Xiong, Zhiwei

    2018-01-01

    Hyperspectral light field (HSLF) images with enriched spectral and angular information provide a better representation of real scenes than conventional 2D images. In this paper, we propose a novel denoising method for HSLF images. The proposed method consists of two main steps. First, we generalize the intrinsic tensor sparsity (ITS) measure previously used for 3D hyperspectral image denoising to the 5D HSLF, by using the global correlation along the spectral dimension and the nonlocal similarity across the spatial and angular dimensions. Second, we further exploit the spatial-angular correlation by integrating light field super-resolution (SR) into the denoising process. In this way, the 5D HSLF can be better recovered. Experimental results validate the superior performance of the proposed method in terms of both objective and subjective quality on a self-collected HSLF dataset, in comparison with directly applying state-of-the-art denoising methods.

  15. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Full Text Available Bilateral filtering has been widely applied in digital image processing, but in high-gradient regions of the image it may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering. Based on this analysis, a mixed image de-noising algorithm is proposed that combines Gaussian filtering and bilateral filtering. First, a Gaussian filter is applied to the noisy image to obtain a reference image; then both the reference image and the noisy image are taken as input to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides the high-frequency information. In a comparative experiment between this method and traditional bilateral filtering, the mixed de-noising algorithm effectively overcame the staircase effect: the filtered image was smoother, its textural features were closer to the original image, and it achieved a higher PSNR value, while the computational cost of the two algorithms was essentially the same.
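
    The two-input range kernel described above is essentially a cross (joint) bilateral filter: the Gaussian-prefiltered reference drives the range weights while the noisy image supplies the data. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cross_bilateral(noisy, reference, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Bilateral filter whose range kernel is evaluated on a reference image."""
    out = np.zeros_like(noisy, dtype=float)
    norm = np.zeros_like(noisy, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(noisy, dy, axis=0), dx, axis=1)
            ref_shift = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            rng = np.exp(-(ref_shift - reference) ** 2 / (2 * sigma_r ** 2))
            out += spatial * rng * shifted
            norm += spatial * rng
    return out / norm

# e.g. denoised = cross_bilateral(noisy, gaussian_filter(noisy, sigma=1.0))
```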

  16. Geodesic denoising for optical coherence tomography images

    Science.gov (United States)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and the boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best-matching candidates for every noisy sample, and the denoised value is computed from a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving critical, clinically relevant structures.

  17. Fractional Partial Differential Equation: Fractional Total Variation and Fractional Steepest Descent Approach-Based Multiscale Denoising Model for Texture Image

    Directory of Open Access Journals (Sweden)

    Yi-Fei Pu

    2013-01-01

    Full Text Available Traditional integer-order partial differential equation-based image denoising approaches often blur edges and complex texture detail, so their denoising effect on texture images is not very good. To solve this problem, a fractional partial differential equation-based denoising model for texture images is proposed, which applies fractional calculus to image processing from the viewpoint of system evolution. Previous studies have shown that fractional-order calculus has unique properties compared with integer-order calculus: it can nonlinearly enhance complex texture detail during digital image processing. The goal of the proposed model is to exploit these properties. The model extends the traditional integer-order equation to fractional order, proposing the fractional Green's formula and the fractional Euler-Lagrange formula for two-dimensional image processing, from which a fractional partial differential equation-based denoising model is derived. The experimental results show that the ability of the proposed model to preserve high-frequency edges and complex texture information is clearly superior to that of traditional integer-order algorithms, especially for images rich in texture detail.

  18. Medical Image Denoising Using Mixed Transforms

    Directory of Open Access Journals (Sweden)

    Jaleel Sadoon Jameel

    2018-02-01

    Full Text Available In this paper, a mixed transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method uses WT and MWT in cascade to enhance denoising performance. In practice, the first step is to add noise to Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) images for testing. The noisy image is processed by the WT to obtain four sub-bands, and each sub-band is treated individually using the MWT before the soft/hard denoising stage. Simulation results show that the peak signal-to-noise ratio (PSNR) is improved significantly and characteristic features are well preserved by employing the mixed transform of WT and MWT, owing to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) decreases accordingly compared to other available methods.

  19. Denoising functional MR images : A comparison of wavelet denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, Alle Meije; Roerdink, Jos B.T.M.

    We present a general wavelet-based denoising scheme for functional magnetic resonance imaging (fMRI) data and compare it to Gaussian smoothing, the traditional denoising method used in fMRI analysis. One-dimensional WaveLab thresholding routines were adapted to two-dimensional images, and applied to

  20. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem in 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which reliably captures the image statistics of log-compressed ultrasound images, is used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
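
    For orientation, the block-wise weight computation that the paper parallelizes on the GPU looks as follows on the CPU; this sketch uses the standard Gaussian-weight form of NLM rather than the paper's Gamma-model Bayesian weights.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlm_denoise(img, search=7, patch=3, h=0.1):
    """Plain non-local means for a 2D image (the 3D extension is analogous)."""
    pad = search + patch
    padded = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    base = padded[pad:pad + H, pad:pad + W]
    out = np.zeros((H, W))
    norm = np.zeros((H, W))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = padded[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            # Patch distance approximated by a box filter of squared differences.
            d2 = uniform_filter((shifted - base) ** 2, size=patch)
            w = np.exp(-d2 / (h * h))
            out += w * shifted
            norm += w
    return out / norm
```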

  1. Multispectral Photoacoustic Imaging Artifact Removal and Denoising Using Time Series Model-Based Spectral Noise Estimation.

    Science.gov (United States)

    Kazakeviciute, Agne; Ho, Chris Jun Hui; Olivo, Malini

    2016-09-01

    The aim of this study is to solve the problem of denoising and artifact removal in in vivo multispectral photoacoustic imaging when the noise level is not known a priori. The study analyzes Wiener filtering in the Fourier domain with a family of anisotropic-shape filters. The unknown noise and signal power spectral densities are estimated using the spectral information of the images and a first-order autoregressive (AR(1)) model. Edge preservation is achieved by detecting image edges in the original and denoised images and superimposing a weighted contribution of the two edge images onto the resulting denoised image. The method is tested on multispectral photoacoustic images from simulations, a tissue-mimicking phantom, and in vivo mouse imaging, with its performance compared against standard Wiener filtering in the Fourier domain. The results reveal the better denoising and fine-detail preservation capabilities of the proposed method compared to standard Fourier-domain Wiener filtering, suggesting that it could be a useful denoising technique for other multispectral photoacoustic studies.
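
    The frequency-domain Wiener gain underlying the method is G = Ps/(Ps + Pn). A sketch, assuming the noise power spectral density has already been estimated (the paper estimates it from the image spectrum with an AR(1) model and uses anisotropic filter shapes not reproduced here):

```python
import numpy as np

def wiener_fourier(img, noise_psd):
    """Frequency-domain Wiener filtering with a given noise PSD estimate."""
    F = np.fft.fft2(img)
    # Signal PSD estimated by spectral subtraction, floored at a tiny value.
    signal_psd = np.maximum(np.abs(F) ** 2 / img.size - noise_psd, 1e-12)
    gain = signal_psd / (signal_psd + noise_psd)     # Wiener gain G
    return np.real(np.fft.ifft2(gain * F))
```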

  2. Fringe pattern denoising via image decomposition.

    Science.gov (United States)

    Fu, Shujun; Zhang, Caiming

    2012-02-01

    Filtering off noise from a fringe pattern is one of the key tasks in optical interferometry. In this Letter, using suitable function spaces to model the different components of a fringe pattern, we propose a new fringe pattern denoising method based on image decomposition. In our method, a fringe image is divided into three parts: low-frequency fringe, high-frequency fringe, and noise, which are processed in different spaces. An adaptive threshold in the wavelet shrinkage step of this algorithm improves its denoising performance. Simulation and experimental results show that our algorithm obtains smooth and clean fringes of different frequencies while preserving fringe features effectively.

  3. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  4. Image denoising using non linear diffusion tensors

    International Nuclear Information System (INIS)

    Benzarti, F.; Amiri, H.

    2011-01-01

    Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering useful structures in the image, such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two nonlinear diffusion tensors: one allows diffusion along the orientation of greatest coherence, while the other allows diffusion along orthogonal directions. The idea is to track the local geometry of the degraded image closely and apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effective performance of our model, we present experimental results on test and real photographic color images.

  5. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  6. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam, where the specimen is exposed to radiation for a high exposure time. This sensitivity to the electron beam led specialists to acquire the specimen projection images at very low exposure time, which resulted in a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomography reconstructions. We ran multiple tests on TEM images acquired at different exposure times, 0.5 s, 0.2 s, 0.1 s and 1 s (i.e., with different SNR values), equipped with gold beads to help in the assessment step. We herein propose a structure to combine multiple noisy copies of the TEM images. The structure is based on four different denoising methods to combine the multiple noisy TEM image copies, namely soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to maintain edges neatly, and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we made sure to use the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filter, many tests were done in order to determine the proper filter parameters represented by the size of the filter, the range parameter and the
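
    A minimal sketch of the wavelet-thresholding component, using the "sym8" wavelet at level 3 as the abstract specifies; the universal threshold and the median-based noise estimate are standard choices assumed here, not necessarily those of the paper.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet='sym8', level=3, mode='soft'):
        """Shrink the detail coefficients of a 2-D wavelet decomposition."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Robust noise estimate from the finest diagonal band (MAD / 0.6745)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode=mode) for d in detail)
            for detail in coeffs[1:]
        ]
        # Crop in case the reconstruction is padded to an even size
        return pywt.waverec2(shrunk, wavelet)[:img.shape[0], :img.shape[1]]
    ```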

  7. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods, where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index measure, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.
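
    The restricted-neighborhood ML estimator itself is not reproduced here; the sketch below shows only the standard even-moment Rician bias correction (E[M^2] = A^2 + 2*sigma^2) that simpler approaches rely on, as a hedged illustration of why the Rice distribution must be accounted for in magnitude MR data.

    ```python
    import numpy as np

    def rician_bias_correct(mag, sigma):
        """Moments-based bias correction for Rician magnitude data.

        Uses E[M^2] = A^2 + 2*sigma^2, so A ~ sqrt(max(M^2 - 2*sigma^2, 0)).
        A crude stand-in for a full ML estimate, assuming sigma is known.
        """
        return np.sqrt(np.maximum(mag.astype(float) ** 2 - 2.0 * sigma ** 2, 0.0))
    ```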

  8. SPATIAL-VARIANT MORPHOLOGICAL FILTERS WITH NONLOCAL-PATCH-DISTANCE-BASED AMOEBA KERNEL FOR IMAGE DENOISING

    Directory of Open Access Journals (Sweden)

    Shuo Yang

    2015-01-01

    Full Text Available Filters based on spatially-variant amoeba morphology can preserve edges well, but leave too much noise behind. For better denoising, this paper presents a new method to generate structuring elements for spatially-variant amoeba morphology. The amoeba kernel in the proposed strategy is divided into two parts: a patch-distance-based amoeba center and a geodesic-distance-based amoeba boundary, so that both the nonlocal patch distance and the local geodesic distance are taken into consideration. Compared to the traditional amoeba kernel, the new one has a more stable center and its shape is less influenced by noise in the pilot image. More importantly, the nonlocal processing approach can induce a pair of adjoint dilation and erosion operators, and combinations of them can construct adaptive opening, closing, alternating sequential filters, etc. By designing the new amoeba kernel, a family of morphological filters is therefore derived. Finally, this paper presents a series of results on both synthetic and real images along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing and noisy SAR image restoration.

  9. High-SNR multiple T2(*)-contrast magnetic resonance imaging using a robust denoising method based on tissue characteristics.

    Science.gov (United States)

    Eo, Taejoon; Kim, Taeseong; Jun, Yohan; Lee, Hongpyo; Ahn, Sung Soo; Kim, Dong-Hyun; Hwang, Dosik

    2017-06-01

    To develop an effective method that can suppress noise in successive multiecho T2(*)-weighted magnetic resonance (MR) brain images while preventing filtering artifacts. For the simulation experiments, we used multiple T2-weighted images of an anatomical brain phantom. For in vivo experiments, successive multiecho MR brain images were acquired from five healthy subjects using a multiecho gradient-recalled-echo (MGRE) sequence with a 3T MRI system. Our denoising method is a nonlinear filter whose filtering weights are determined by tissue characteristics among pixels. The similarity of the tissue characteristics is measured based on the l2-difference between two temporal decay signals. Both numerical and subjective evaluations were performed in order to compare the effectiveness of our denoising method with those of conventional filters, including the Gaussian low-pass filter (LPF), anisotropic diffusion filter (ADF), and bilateral filter. Root-mean-square error (RMSE), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were used in the numerical evaluation. Five observers, including one radiologist, assessed the image quality and rated subjective scores in the subjective evaluation. Our denoising method significantly improves the RMSE, SNR, and CNR of numerical phantom images, and the CNR of in vivo brain images, in comparison with conventional filters (P < 0.05). It also achieved the highest scores for structure conspicuity (8.2 to 9.4 out of 10) and naturalness (9.2 to 9.8 out of 10) among the compared filters in the subjective evaluation. This study demonstrates that high-SNR multiple T2(*)-contrast MR images can be obtained using our denoising method based on tissue characteristics without noticeable artifacts. Evidence level: 2. J. MAGN. RESON. IMAGING 2017;45:1835-1845. © 2016 International Society for Magnetic Resonance in Medicine.

  10. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    Science.gov (United States)

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects their quality and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques

  11. Fringe pattern denoising by image dimensionality reduction

    Science.gov (United States)

    Vargas, J.; Sorzano, C. O. S.; Antonio Quiroga, J.; Estrada, J. C.; Carazo, J. M.

    2013-07-01

    Noise is a key problem in fringe pattern processing, especially in single frame demodulation of interferograms. In this work, we propose to filter the pattern noise using a straightforward, fast and easy to implement denoising method, which is based on a dimensionality reduction approach, in the sense of image rank reduction. The proposed technique has been applied to simulated and experimental ESPI interferograms obtaining satisfactory results.
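
    A minimal sketch of image rank reduction via the SVD, which is the core of the dimensionality-reduction idea described above; the choice of the retained rank k is an assumption left to the user.

    ```python
    import numpy as np

    def rank_reduce(img, k):
        """Keep only the k largest singular values of the image matrix."""
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        s[k:] = 0.0              # discard small singular values carrying mostly noise
        return (U * s) @ Vt      # low-rank reconstruction
    ```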

  12. Adaptive image denoising based on support vector machine and wavelet description

    Science.gov (United States)

    An, Feng-Ping; Zhou, Xian-Wei

    2017-12-01

    The adaptive image denoising method decomposes the original image into a series of basic pattern feature images on the basis of a wavelet description, and constructs a support vector machine (SVM) regression function to realize the wavelet description of the original image. The support vector machine method allows the linear expansion of the signal to be expressed as a nonlinear function of the parameters associated with the SVM. Using the radial basis kernel function of the SVM, the original image can be expanded into a Mexican hat function and a residual trend. This Mexican hat component represents a basic image feature pattern. If the residual does not fluctuate, it can also be represented as a characteristic pattern. If the residual fluctuates significantly, it is treated as a new image and the same decomposition process is repeated until the residual obtained by the decomposition no longer fluctuates significantly. Experimental results show that the method proposed in this paper performs well; in particular, it satisfactorily solves the problem of image noise removal. It may provide a new tool and method for image denoising.
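
    As a hedged illustration of RBF-kernel support vector regression used as a smoother (a 1-D analogue of the expansion described above, not the paper's algorithm), consider the following sketch; the kernel and regularization parameters are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    t = np.linspace(0.0, 1.0, 200)
    rng = np.random.default_rng(1)
    noisy = np.sin(8 * t) + 0.2 * rng.standard_normal(t.size)

    # The fitted SVR function is a smooth expansion of radial basis
    # functions centred on the support vectors; the epsilon-insensitive
    # loss ignores small noise fluctuations.
    svr = SVR(kernel='rbf', C=10.0, gamma=50.0, epsilon=0.1)
    smoothed = svr.fit(t.reshape(-1, 1), noisy).predict(t.reshape(-1, 1))
    ```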

  13. Nonlinear Image Denoising Methodologies

    Science.gov (United States)

    2002-05-01

    [Only fragments of this record's abstract were extracted. The recoverable portions discuss scale-space filtering, introduced by Witkin [95], an isotropic method that smooths signals with a Gaussian kernel of increasing scale, and note that the equivalence between Gaussian filtering and heat equation-based evolution motivated PDE-based denoising formulations.]

  14. A Faster Patch Ordering Method for Image Denoising

    OpenAIRE

    Munir, Badre

    2017-01-01

    Among patch-based image denoising methods, smooth ordering of local patches (patch ordering) has been shown to give state-of-the-art results. For image denoising, the patch ordering method forms two large TSPs (Traveling Salesman Problems) comprised of nodes in N-dimensional space. Ten approximate solutions of the two large TSPs are then used in a filtering process to form the reconstructed image. The use of large TSPs makes patch ordering a computationally intensive method. A modified p...

  15. Non-local means denoising of dynamic PET images.

    Directory of Open Access Journals (Sweden)

    Joyita Dutta

    Full Text Available Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities, assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high
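
    The spatiotemporal variant above is specific to the paper; the sketch below shows only the basic spatial NLM weighted average it builds on, for a single pixel, with illustrative patch, search-window and smoothing parameters.

    ```python
    import numpy as np

    def nlm_pixel(img, i, j, patch=3, search=7, h=0.1):
        """Non-local means estimate for one pixel: a similarity-weighted average."""
        p = patch // 2
        pad = np.pad(img.astype(float), p + search, mode='reflect')
        ci, cj = i + p + search, j + p + search          # pixel position in padded image
        ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]  # reference patch
        num = den = 0.0
        for di in range(-search, search + 1):
            for dj in range(-search, search + 1):
                cand = pad[ci + di - p:ci + di + p + 1, cj + dj - p:cj + dj + p + 1]
                # Weight decays with the squared patch distance
                w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                num += w * pad[ci + di, cj + dj]
                den += w
        return num / den
    ```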

  16. A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis

    DEFF Research Database (Denmark)

    Galletti, Ardelio; Marcellino, Livia; Montella, Raffaele

    2017-01-01

    ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem, which uses the NVIDIA cuFFT library. As case study we consider a simple denoising algorithm, implementing a virtualized GPU-parallel software based on the convolution theorem...... in order to perform the noise removal procedure in the frequency domain. We report some preliminary tests in both physical and virtualized environments to study and analyze the potential scalability of such an algorithm....
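
    A minimal CPU sketch of the convolution-theorem denoising idea that the cuFFT-based software implements on the GPU: filtering becomes a pointwise product in the frequency domain. NumPy's FFT stands in for cuFFT here, and the Gaussian low-pass kernel is an illustrative choice.

    ```python
    import numpy as np

    def fft_gaussian_denoise(img, sigma=2.0):
        """Low-pass filtering via the convolution theorem: multiply spectra."""
        ny, nx = img.shape
        fy = np.fft.fftfreq(ny)[:, None]   # frequencies in cycles/pixel
        fx = np.fft.fftfreq(nx)[None, :]
        # The Fourier transform of a Gaussian kernel is itself Gaussian:
        # G(f) = exp(-2 * pi^2 * sigma^2 * |f|^2)
        kernel_ft = np.exp(-2.0 * np.pi**2 * sigma**2 * (fx**2 + fy**2))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))
    ```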

  17. Image Denoising Using Weighted Local Regression

    OpenAIRE

    Šťasta, Jakub

    2017-01-01

    The problem of accurately simulating light transport using Monte Carlo integration can be very difficult. In particular, scenes with complex illumination effects or complex materials can converge very slowly and demand a lot of computational time. To overcome this problem, image denoising algorithms have become popular in recent years. In this work we first review known approaches to denoising and adaptive rendering. We implement one of the promising algorithms, by Moon et al. ...

  18. An image denoising application using shearlets

    Science.gov (United States)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics and medicine. There has been a dramatic increase in the variety, availability and resolution of medical imaging devices over the last half century. For proper medical imaging, highly trained technicians and clinicians are needed to correctly extract clinically pertinent information from medical data. Artificial systems must be designed to analyze medical data sets either in a partially or even a fully automatic manner to fulfil this need. For this purpose, there have been numerous ongoing research efforts aimed at finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts, and it is crucial to remove these artefacts to obtain reliable results. Out of the many methods for denoising images, in this paper two denoising methods, wavelets and shearlets, have been applied to mammography images. Comparing these two methods, shearlets give better results for denoising such data.

  19. Spatio-temporal TGV denoising for ASL perfusion imaging.

    Science.gov (United States)

    Spann, Stefan M; Kazimierski, Kamil S; Aigner, Christoph S; Kraiger, Markus; Bredies, Kristian; Stollberger, Rudolf

    2017-08-15

    In arterial spin labeling (ASL), a perfusion-weighted image is obtained by subtracting a label image from a control image. This perfusion-weighted image has an intrinsically low signal-to-noise ratio, and numerous measurements are required to achieve reliable image quality, especially at higher spatial resolutions. To overcome this limitation, various denoising approaches have been published using the perfusion-weighted image as input for denoising. In this study we propose a new spatio-temporal filtering approach based on total generalized variation (TGV) regularization, which exploits the inherent information of control and label pairs simultaneously. In this way, the temporal and spatial similarities of all images are used to jointly denoise the control and label images. To assess the effect of denoising, virtual ground truth data were produced at different SNR levels. Furthermore, high-resolution in-vivo pulsed ASL data sets were acquired and processed. The results show improved image quality, quantitative accuracy and robustness against outliers compared to seven state-of-the-art denoising approaches. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.

  1. Affine Non-Local Means Image Denoising.

    Science.gov (United States)

    Fedorov, Vadim; Ballester, Coloma

    2017-05-01

    This paper presents an extension of the Non-Local Means denoising method, that effectively exploits the affine invariant self-similarities present in the images of real scenes. Our method provides a better image denoising result by grounding on the fact that in many occasions similar patches exist in the image but have undergone a transformation. The proposal uses an affine invariant patch similarity measure that performs an appropriate patch comparison by automatically and intrinsically adapting the size and shape of the patches. As a result, more similar patches are found and appropriately used. We show that this image denoising method achieves top-tier performance in terms of PSNR, outperforming consistently the results of the regular Non-Local Means, and that it provides state-of-the-art qualitative results.

  2. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.

  3. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    Science.gov (United States)

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. We demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values, reflecting perceptual visual quality, of denoised images in experimental evaluations on benchmark images and the Berkeley segmentation data sets. Moreover, the DPLG also produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  4. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    Science.gov (United States)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising has always been an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the wide use of nonlocal similar patches. However, nonlocal self-similar (NSS) patch-based methods can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of the NSS patches to obtain high denoising performance and high-quality images. In order to represent the sparse NSS patches at every location of the image well and to solve the image denoising model more efficiently, we obtain dictionaries as a global image prior via the K-SVD algorithm over the image being processed; then the simple and effective alternating direction method of multipliers (ADMM) is used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the sparsely augmented Lagrangian image denoising (SALID) model performs effectively and obtains state-of-the-art denoising performance and high-quality images. Moreover, we also give some denoising results for clinical X-ray angiogram images.
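
    A hedged sketch of patch-based dictionary-learning denoising in the spirit of the pipeline above, using scikit-learn's built-in OMP sparse coder rather than the paper's K-SVD/ADMM solver; the patch size, number of atoms and sparsity level are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(noisy, patch_size=(7, 7), n_atoms=64, n_nonzero=2):
        """Learn a patch dictionary, sparse-code each patch with OMP, reconstruct."""
        patches = extract_patches_2d(noisy, patch_size)
        shape = patches.shape
        data = patches.reshape(shape[0], -1)
        mean = data.mean(axis=1, keepdims=True)
        data = data - mean                      # code the texture, keep the DC offset
        dico = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm='omp',
            transform_n_nonzero_coefs=n_nonzero,
        ).fit(data[::5])                        # fit on a subset of patches for speed
        code = dico.transform(data)
        recon = (code @ dico.components_ + mean).reshape(shape)
        # Overlapping patches are averaged back into a full image
        return reconstruct_from_patches_2d(recon, noisy.shape)
    ```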

  5. Weighted thin-plate spline image denoising

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Roman; Zitová, Barbara

    2003-01-01

    Roč. 36, č. 12 (2003), s. 3027-3030 ISSN 0031-3203 R&D Projects: GA ČR GP102/01/P065 Institutional research plan: CEZ:AV0Z1075907 Keywords : image denoising * thin-plate splines Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.611, year: 2003

  6. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

    Full Text Available This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and scans of old photographs.

  7. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features can provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoising autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.

  8. Scalar Parameters Optimization in PDE Based Medical Image Denoising by using Cellular Wave Computing

    Directory of Open Access Journals (Sweden)

    GACSÁDI Alexandru

    2016-10-01

    Full Text Available A set of complex and effective mathematical models based on partial differential equations (PDEs) is available for processing biomedical images. Effective implementation of these methods is difficult, on the one hand because of the difficulty of determining the scalar parameter values on which the image processing efficiency depends, and on the other because of the considerable computing power needed to perform in real time. Currently there are no analytical and/or experimental methods in the literature for determining the exact values of the scalar parameters that provide the best results for a specific image processing task. This paper proposes a method for optimizing the values of a set of scalar parameters to ensure effective noise reduction in medical images using cellular wave computing. To assess the overall performance of noise extraction, an error function (quantitative component) and direct visualization (qualitative component) are used together. Moreover, this analysis also yields the degree to which the CNN templates are robust against the range of values of the scalar parameters.

  9. Robust Image Denoising using a Virtual Flash Image for Monte Carlo Ray Tracing

    DEFF Research Database (Denmark)

    Moon, Bochang; Jun, Jong Yun; Lee, JongHyeob

    2013-01-01

    We propose an efficient and robust image-space denoising method for noisy images generated by Monte Carlo ray tracing methods. Our method is based on two new concepts: virtual flash images and homogeneous pixels. Inspired by recent developments in flash photography, virtual flash images emulate...

  10. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.

  11. Electrocardiogram de-noising based on forward wavelet transform ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we propose a new technique of Electrocardiogram (ECG) signal de-noising based on thresholding of the coefficients obtained from the application of the Forward Wavelet Transform Translation Invariant (FWT_TI) to each Bionic Wavelet coefficient. The de-noised ECG is obtained from the ...

  12. Image denoising via collaborative support-agnostic recovery

    KAUST Repository

    Behzad, Muzammil

    2017-06-20

    In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.

  13. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  14. Study on torpedo fuze signal denoising method based on WPT

    Science.gov (United States)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Denoising of the torpedo fuze signal is an important measure to ensure reliable operation of the fuze. Based on the good characteristics of the wavelet packet transform (WPT) in signal denoising, the paper uses the wavelet packet transform to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. Simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal, with higher precision and less distortion, thereby improving the reliability of torpedo fuze operation.
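
    A minimal 1-D sketch of wavelet packet denoising with PyWavelets; the wavelet, decomposition level and universal-threshold rule are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np
    import pywt

    def wpt_denoise(signal, wavelet='db4', level=4):
        """Threshold wavelet packet node coefficients, then reconstruct."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                mode='symmetric', maxlevel=level)
        nodes = wp.get_level(level, order='natural')
        # Robust noise estimate from the highest-frequency node (MAD / 0.6745)
        sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
        for node in nodes:
            node.data = pywt.threshold(node.data, thresh, mode='soft')
        # Reconstruction may be padded; crop to the original length
        return wp.reconstruct(update=False)[:len(signal)]
    ```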

  15. A Total Variation Model Based on the Strictly Convex Modification for Image Denoising

    Directory of Open Access Journals (Sweden)

    Boying Wu

    2014-01-01

    Full Text Available We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system is also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made in relation to the more classical methods of the traditional total variation (TV, the Perona-Malik (PM, and the more recent D-α-PM method. Additional distinction from the other methods is that the parameters, for manual manipulation, in the proposed algorithm are reduced to basically only one.
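
    For contrast with the strictly convex modification proposed above, here is a minimal gradient-descent sketch of the classical (smoothed) total variation / ROF model that the paper compares against; the step size, smoothing epsilon and fidelity weight are illustrative.

    ```python
    import numpy as np

    def tv_denoise(noisy, lam=0.1, n_iter=200, dt=0.1, eps=1e-6):
        """Gradient descent on the smoothed ROF energy:
        E(u) = integral |grad u| + (lam / 2) * ||u - f||^2."""
        u = noisy.astype(float).copy()
        for _ in range(n_iter):
            ux = np.roll(u, -1, axis=1) - u
            uy = np.roll(u, -1, axis=0) - u
            norm = np.sqrt(ux**2 + uy**2 + eps)      # eps smooths |grad u| at zero
            px, py = ux / norm, uy / norm
            # Divergence of the normalized gradient field (curvature term)
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u += dt * (div - lam * (u - noisy))      # descend the energy gradient
        return u
    ```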

  16. A Non-Reference Image Denoising Method for Infrared Thermal Image Based on Enhanced Dual-Tree Complex Wavelet Optimized by Fruit Fly Algorithm and Bilateral Filter

    Directory of Open Access Journals (Sweden)

    Yiwen Liu

    2017-11-01

    Full Text Available To eliminate the noise of infrared thermal images without a reference or noise model, an improved dual-tree complex wavelet transform (DTCWT), optimized by an improved fruit-fly optimization algorithm (IFOA) and a bilateral filter (BF), is proposed in this paper. Firstly, the noisy image is transformed by the DTCWT, and the noise variance threshold is optimized by the IFOA, which is enhanced through a fly step range with inertia weight. Then, the denoised image is re-processed using the bilateral filter to improve the denoising performance and enhance the edge information. In the experiments, the proposed method is applied to eliminate both additive noise and multiplicative noise, and the denoising results are compared with other representative methods, such as DTCWT, block-matching and 3D filtering (BM3D), the median filter, the Wiener filter, the wavelet decomposition filter (WDF) and the bilateral filter. Moreover, the proposed method is applied as a pre-processing step for infrared thermal images of a coal mining working face.

  17. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    Science.gov (United States)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcoming of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. The experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method can filter the noise of a chaotic signal well, and the intrinsic chaotic characteristics of the original signal can be recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.

  18. Medical image denoising using dual tree complex thresholding wavelet transform and Wiener filter

    Directory of Open Access Journals (Sweden)

    Hilal Naimi

    2015-01-01

    Full Text Available Image denoising is the process of removing noise from an image naturally corrupted by noise. The wavelet method is one among various methods for recovering infinite-dimensional objects like curves, densities, images, etc. Wavelet techniques are very effective at removing noise because of their ability to capture the energy of a signal in a few energy transform values. Wavelet methods are based on shrinking the wavelet coefficients in the wavelet domain. In this paper, we propose a denoising approach based on the dual tree complex wavelet transform and shrinkage with the Wiener filter technique (where either hard or soft thresholding operators of the dual tree complex wavelet transform are used) for the denoising of medical images. The results show that the images denoised using the DTCWT (Dual Tree Complex Wavelet Transform) with the Wiener filter have a better balance between smoothness and accuracy than the DWT, and are less redundant than the SWT (Stationary Wavelet Transform). We used the SSIM (Structural Similarity Index Measure) along with the PSNR (Peak Signal to Noise Ratio) and the SSIM map to assess the quality of the denoised images.

  19. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the used eigenvectors in the traditional EGL method, in our method, the eigenvectors are adaptively selected in the whole denoising procedure. In detail, a rough image is first built with the eigenvectors from the noisy image, where the eigenvectors are selected by using the deviation estimation of the clean image. Subsequently, a guided image is effectively restored with a weighted average of the noisy and rough images. In this operation, the average coefficient is adaptively obtained to set the deviation of the guided image to approximately that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen in the error control of the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group sparse model. The experiments show that our method not only improves the practicality of the EGL methods with the dependence reduction of the parameter setting, but also can outperform some well-developed denoising methods, especially for noise with large deviations.

  20. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Full Text Available Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel method for low-light image denoising of low-frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results; low and very low frequency noise dominate in low-light conditions. DeLFN is a three-level image denoising method. The first level removes mixed noise by histogram equalization (HE), improving overall contrast. The second level removes low-frequency noise by logarithmic transformation (LOG), enhancing image detail. The third level removes residual very low frequency noise by high-pass filtering, recovering more features of the true image. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient for real-time applications.
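
    A hedged, illustrative reading of the three-level pipeline described above (histogram equalization, then a log transform, then high-pass recovery of residual features); the specific filters and constants below are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage import exposure

    def delfn_like(img):
        """Three-level pipeline sketch: HE, log transform, residual high-pass."""
        # Level 1: histogram equalization for overall contrast (output in [0, 1])
        step1 = exposure.equalize_hist(img)
        # Level 2: logarithmic transform to enhance detail in dark regions;
        # log1p(x) / log(2) maps [0, 1] back onto [0, 1]
        step2 = np.log1p(step1) / np.log(2.0)
        # Level 3: unsharp-style high-pass to recover remaining features
        low = ndimage.gaussian_filter(step2, sigma=5)
        return np.clip(step2 + (step2 - low), 0.0, 1.0)
    ```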

  1. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.

  2. Improved extreme value weighted sparse representational image denoising with random perturbation

    Science.gov (United States)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with a sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  3. Application of morphological filtration to fast neutron image denoising processing

    International Nuclear Information System (INIS)

    Zhang Faqiang; China Academy of Engineering Physics, Mianyang; Yang Jianlun; Li Zhenghong

    2006-01-01

    The fast neutron radiography system is mainly composed of a scintillation fiber array and a scientific-grade optical CCD. Fast neutron images obtained by the system always suffer from serious noise disturbance. In order to suppress salt-and-pepper noise and Poisson noise, morphological filtration is applied to fast neutron image denoising. The results indicate that, for fast neutron images, morphological filtration operations with two-dimensional multi-directional structuring elements are effective at filtering the noise while retaining image details. (authors)
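
    A minimal sketch of grayscale morphological filtering with multi-directional (line-shaped) structuring elements, in the spirit of the record above; the element length, the open-close composition and the averaging across directions are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def directional_morph_filter(img, length=5):
        """Open-close with line-shaped structuring elements in four directions."""
        footprints = [
            np.ones((1, length), bool),                 # horizontal
            np.ones((length, 1), bool),                 # vertical
            np.eye(length, dtype=bool),                 # 45 degrees
            np.fliplr(np.eye(length, dtype=bool)),      # 135 degrees
        ]
        results = []
        for fp in footprints:
            opened = ndimage.grey_opening(img, footprint=fp)           # removes bright specks
            results.append(ndimage.grey_closing(opened, footprint=fp)) # removes dark specks
        # Average across directions to preserve structures of any orientation
        return np.mean(results, axis=0)
    ```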

  4. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We base these illustrations on two fMRI BOLD data sets - one from a simple finger tapping experiment and the other from an experiment on object recognition in the ventral temporal lobe.

  5. Sparsity-based Poisson denoising with dictionary learning.

    Science.gov (United States)

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results in cases of low SNR.
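
    For context, here is a minimal sketch of the high-SNR transformation route the abstract mentions: the Anscombe variance-stabilizing transform, paired with a plain Gaussian blur as a stand-in Gaussian denoiser. The simple algebraic inverse used here is the biased approximation whose inaccuracy at low SNR motivates the paper's direct approach.

    ```python
    import numpy as np
    from scipy import ndimage

    def anscombe(x):
        """Variance-stabilizing transform: Poisson -> approx. unit-variance Gaussian."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        """Simple algebraic inverse (a biased, high-SNR approximation)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0

    def poisson_denoise_vst(counts, sigma=1.5):
        """Stabilize, apply any Gaussian denoiser (here a Gaussian blur), invert."""
        stabilized = anscombe(counts.astype(float))
        denoised = ndimage.gaussian_filter(stabilized, sigma=sigma)
        return inverse_anscombe(denoised)
    ```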

  6. Computed tomography perfusion imaging denoising using Gaussian process regression

    International Nuclear Information System (INIS)

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna

    2012-01-01

    Brain perfusion-weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. Consequently, the development of methods for improving the CNR is valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data is 4D, as it also contains temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study. (note)
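
    A minimal sketch of GPR applied to a single voxel's time series, using scikit-learn; the RBF-plus-white-noise kernel and its hyperparameters are illustrative assumptions, not the study's configuration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def gpr_denoise_timeseries(t, values):
        """Fit a GP to one voxel's time-concentration curve.

        The posterior mean serves as the denoised curve; the WhiteKernel
        absorbs the observation noise, the RBF term models smooth kinetics.
        """
        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(t.reshape(-1, 1), values)
        return gpr.predict(t.reshape(-1, 1))
    ```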

  7. Enhancement and denoising of mammographic images for breast disease detection

    International Nuclear Information System (INIS)

    Yazdani, S.; Yusof, R.; Karimian, A.; Hematian, A.; Yousefi, M.

    2012-01-01

    Over the past two decades, breast cancer has been one of the leading causes of death among women. In breast cancer research, mammographic imaging is being assessed as a potential tool for detecting breast disease and investigating response to chemotherapy. In the first stage of breast disease discovery, the density measurement of the breast in mammographic images provides very useful information. Because of the important role of mammographic images, the need for accurate and robust automated image enhancement techniques is becoming clear. Mammographic images have some disadvantages, such as the high dependence of contrast upon the way the image is acquired, weak distinction between cysts and tumors, intensity non-uniformity, the existence of noise, etc. These limitations make it difficult to detect typical signs such as masses and microcalcifications. For this reason, denoising and enhancing the quality of mammographic images is very important. The method used in this paper operates in the spatial domain; its input includes high-, intermediate- and even very low-contrast mammographic images, as judged by a specialist physician, while its output is processed images that show the input images with higher quality, more contrast and more detail. In this research, 38 mammographic images have been used. The results of the proposed method show details of abnormal zones and areas with defects, so that a specialist can explore these zones more accurately, and this can serve as an index for cancer diagnosis. In this study, mammographic images are initially converted into digital images; then, to increase spatial resolution power, their noise is reduced and consequently their contrast is improved. The results demonstrate the effectiveness and efficiency of the proposed methods. (authors)

  8. An efficient algorithm for denoising MR and CT images using digital curvelet transform.

    Science.gov (United States)

    Hyder, S Ali; Sukanesh, R

    2011-01-01

    This chapter presents a curvelet-based approach for the denoising of magnetic resonance (MR) and computed tomography (CT) images. The curvelet transform is a new multiscale representation suited for objects which are smooth away from discontinuities across curves, developed by Candès and Donoho (Proceedings of Curves and Surfaces IV, France:105-121, 1999). We apply these digital transforms to the denoising of some standard MR and CT images embedded in white noise, random noise, and Poisson noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state-of-the-art" techniques based on wavelet transform methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher-quality recovery of edges and of faint linear and curvilinear features. Since medical images contain several objects and curved shapes, it is expected that the curvelet transform would be better suited to their denoising. The simulation results show that the curvelet transform outperforms the wavelet transform in the denoising of both MR and CT images, from both the visual quality and peak signal-to-noise ratio (PSNR) points of view.

  9. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    Science.gov (United States)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. An approach has been made in this work to show the effectiveness of Principal Component Analysis (PCA) in denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four denoising schemes mentioned above. The fault identification capability as well as the SNR, kurtosis and RMSE of the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models, and the best denoising scheme has been identified based on the classification accuracy results. PCA is effective in all these regards and emerges as the best denoising scheme.

  10. Image Denoising Using Singular Value Difference in the Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Min Wang

    2018-01-01

    Full Text Available Singular value (SV) difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform, and the noise variance of a new image is calculated using the diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.

  11. Image fusion and denoising using fractional-order gradient information

    DEFF Research Database (Denmark)

    Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu

    Image fusion and denoising are significant in image processing because of the availability of multi-sensor and the presence of the noise. The first-order and second-order gradient information have been effectively applied to deal with fusing the noiseless source images. In this paper, due....... By adding the data fitting term between the fused image and a preprocessed image, a new convex variational model is proposed for fusing the noisy source images. Furthermore, an alternating direction method of multiplier (ADMM) is developed for solving the proposed variational model. Numerical experiments...

  12. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.

  13. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights.

    Science.gov (United States)

    Deledalle, Charles-Alban; Denis, Loïc; Tupin, Florence

    2009-12-01

    Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on the two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in the latter case.
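
    A minimal sketch of the weighted-average core in the additive Gaussian case. The record's contribution is to replace the Euclidean patch distance below with a likelihood-based criterion matched to the noise model and to refine the weights iteratively; neither refinement is reproduced here.

    ```python
    import numpy as np

    def nlm_weights(patch_ref, patches, h):
        """Classical NL-means weights: a decreasing function of the Euclidean
        patch distance. patches has shape (n, p, p); h controls the decay."""
        d = np.sum((patches - patch_ref) ** 2, axis=(1, 2))
        w = np.exp(-d / (h * h))
        return w / w.sum()

    def weighted_ml_estimate(centers, weights):
        """For Gaussian noise the weighted maximum-likelihood estimate of the
        pixel reduces to the weighted average of the candidate centre pixels."""
        return float(np.sum(weights * centers))
    ```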

  14. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise from the image is conducted by applying the interquartile range (IQR), which is one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimation of the noisy pixels was obtained by local averaging. The essential...
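
    A minimal sketch under a natural reading of the record; whether the fences are the plain quartiles or the usual Q1 - 1.5*IQR / Q3 + 1.5*IQR rule is an assumption (the latter is used here).

    ```python
    import numpy as np

    def iqr_denoise(img, k=3):
        """Flag pixels outside the local IQR fences as noisy and replace them
        by the average of the inlier pixels in the same k x k window."""
        out = img.astype(float).copy()
        r = k // 2
        H, W = img.shape
        for i in range(r, H - r):
            for j in range(r, W - r):
                win = img[i - r:i + r + 1, j - r:j + r + 1].astype(float)
                q1, q3 = np.percentile(win, [25, 75])
                lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
                if not (lo <= img[i, j] <= hi):           # outlier -> noisy pixel
                    inliers = win[(win >= lo) & (win <= hi)]
                    if inliers.size:
                        out[i, j] = inliers.mean()        # local averaging
        return out
    ```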

  15. Hyperspectral anomaly detection based on stacked denoising autoencoders

    Science.gov (United States)

    Zhao, Chunhui; Li, Xueyuan; Zhu, Haifeng

    2017-10-01

    Hyperspectral anomaly detection (AD) is an important technique of unsupervised target detection and has significance in real situations. Due to the high dimensionality of hyperspectral data, AD is influenced by noise, nonlinear correlation between bands, and other factors that lead to a decline in detection accuracy. To overcome this problem, a method of hyperspectral AD based on stacked denoising autoencoders (HADSDA) is proposed. Simultaneously, two different feature detection models, spectral feature (SF) and fused feature by clustering (FFC), are constructed to verify the effectiveness of the proposed algorithm. The SF detection model uses the SF of each pixel. The FFC detection model uses a similar set of pixels constructed by clustering and then fuses the set of pixels by the stacked denoising autoencoder algorithm (SDA). The SDA is an algorithm that can automatically learn nonlinear deep features of the image. Compared with other linear or nonlinear feature extraction methods, the detection result of the proposed algorithm is greatly improved. Experimental results show that the proposed algorithm is an excellent feature learning method and can achieve higher detection performance.
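
    A minimal sketch of one denoising-autoencoder layer of the kind stacked in such methods, written in PyTorch. The layer sizes, Gaussian corruption, and training hyperparameters are illustrative assumptions, not the paper's settings.

    ```python
    import torch
    import torch.nn as nn

    class DenoisingAutoencoder(nn.Module):
        """Corrupt the input, encode, decode, and train to reconstruct the
        clean input; stacking several such layers yields an SDA."""
        def __init__(self, n_in, n_hidden, noise_std=0.1):
            super().__init__()
            self.noise_std = noise_std
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decoder = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            x_noisy = x + self.noise_std * torch.randn_like(x)  # corrupt input
            return self.decoder(self.encoder(x_noisy))

    def train_dae(dae, spectra, epochs=50, lr=1e-3):
        """spectra: (n_pixels, n_bands) tensor of hyperspectral pixel vectors."""
        opt = torch.optim.Adam(dae.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(dae(spectra), spectra)  # reconstruct the clean input
            loss.backward()
            opt.step()
    ```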

  16. The Two-Dimensional Wavelet Transform De-noising and Combining with Side Scan Sonar Image

    Directory of Open Access Journals (Sweden)

    Muhammad Zainuddin Lubis

    2017-05-01

    Full Text Available This paper puts forward an image de-noising method based on the 2D wavelet transform, with application to a seabed identification data collection system. Two-dimensional Haar wavelets in image processing present a unified framework for wavelet image compression, here combined with side scan sonar images. Seabed identification yielded 7 detected targets in the side scan sonar imagery. The vibration signals were analyzed to perform fault diagnosis; the obtained signal was a time-domain signal, from which features were extracted to form three vectors v1, v2, and v3. The experimental results show that the 2D wavelet transform de-noising algorithm achieves good subjective and objective image quality and helps to collect high-quality data and analyze the images for the data center with optimum effect. The retained energy with the Haar wavelet is 93.8%, so it has been concluded that the Haar wavelet transform shows the best results in terms of energy for de-noised image processing with side scan sonar imagery.

  17. A multichannel block-matching denoising algorithm for spectral photon-counting CT images.

    Science.gov (United States)

    Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J

    2017-06-01

    We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly-effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much tighter to their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth). We outline a multi-channel denoising algorithm tailored for spectral PCCT images.

  18. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    Science.gov (United States)

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-tempo-spectral correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.

  19. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain means increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserving relevant information and saliency, as well as rapidity, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its favorable trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operation and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.

  1. Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2018-01-01

    Full Text Available The SGK (sequential generalization of K-means) dictionary learning denoising algorithm has the characteristics of fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm combining SGK dictionary learning with principal component analysis (PCA) noise estimation. First, the noise standard deviation of the image is estimated using the PCA noise estimation algorithm; this estimate is then fed to the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm has higher PSNR and better denoising performance.
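
    A minimal sketch of the PCA noise-estimation idea, assuming zero-mean additive Gaussian noise: the smallest eigenvalue of the patch covariance is noise-dominated. Refinements such as restricting to low-texture patches, on which practical estimators rely, are omitted.

    ```python
    import numpy as np

    def pca_noise_std(img, patch=7, n_patches=5000, seed=0):
        """Estimate the noise standard deviation from the smallest eigenvalue
        of the covariance of randomly sampled image patches."""
        rng = np.random.default_rng(seed)
        H, W = img.shape
        ys = rng.integers(0, H - patch, n_patches)
        xs = rng.integers(0, W - patch, n_patches)
        X = np.stack([img[y:y + patch, x:x + patch].ravel()
                      for y, x in zip(ys, xs)]).astype(float)
        eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
        return float(np.sqrt(max(eigvals[0], 0.0)))   # eigvalsh sorts ascending
    ```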

  2. Hand Depth Image Denoising and Superresolution via Noise-Aware Dictionaries

    Directory of Open Access Journals (Sweden)

    Huayang Li

    2016-01-01

    Full Text Available This paper proposes a two-stage method for hand depth image denoising and superresolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP)-based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data using neighborhood pixels on which both depth difference and RGB similarity restrictions are imposed. The dictionary learning phase uses NAOMP to train dictionaries which separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual reduction step which effectively weakens the noise term within the residual during the residual decomposition in terms of atoms. Experimental results demonstrate that the bilateral phase and the NAOMP-based dictionary learning phase cooperatively denoise both virtual and real depth images effectively.

  3. Wavelet packet denoising of magnetic resonance images: importance of Rician noise at low SNR.

    Science.gov (United States)

    Wood, J C; Johnson, K M

    1999-03-01

    Wavelet packet analysis is a mathematical transformation that can be used to post-process images, for example, to remove image noise ("denoising"). At a very low signal-to-noise ratio (SNR), magnitude images have skewed Rician noise statistics that degrade denoising performance. Since the quadrature images have approximately Gaussian noise, it was postulated that denoising would produce better contrast and sharper edges if performed before magnitude image formation. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and edge blurring effects of these two approaches were examined in synthetic, phantom, and human MR images. While magnitude and complex denoising both significantly improved SNR and CNR, complex denoising yielded sharper edges and better low-intensity feature contrast.

  4. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...

  5. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    Science.gov (United States)

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications.

  6. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    Full Text Available K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
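
    The pipeline this record analyzes (learn a patch dictionary, sparse-code with OMP, average overlapping patches) can be sketched with scikit-learn. Note that `MiniBatchDictionaryLearning` updates atoms differently from K-SVD proper, and the patch size, atom count, and sparsity below are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(noisy, patch=8, n_atoms=128, n_nonzero=4):
        """Learn a dictionary on patches of the noisy image, sparse-code each
        patch with OMP, and rebuild the image by averaging overlapping patches."""
        patches = extract_patches_2d(noisy, (patch, patch))
        X = patches.reshape(len(patches), -1)
        mean = X.mean(axis=1, keepdims=True)       # code zero-mean patches
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=n_nonzero)
        code = dico.fit(X - mean).transform(X - mean)
        X_hat = code @ dico.components_ + mean
        return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), noisy.shape)
    ```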

  7. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool to discard noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods, such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals. It perfectly overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising proves more effective than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the waves of the ECG signals after denoising, including the P, Q, R, and S waves, coincide with the original ECG signals when the proposed method is employed.
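
    One plausible form of a sigmoid-based compromise threshold (the paper's exact function may differ): a smooth gate that suppresses coefficients well below T, keeps coefficients well above T nearly unchanged, and is continuous at |w| = T.

    ```python
    import numpy as np

    def sigmoid_threshold(w, T, k=10.0):
        """Smoothly gate wavelet coefficients w around the threshold T; the
        slope k controls how sharply the gate switches. Large |w| keeps w
        almost unchanged, avoiding the fixed bias of soft thresholding."""
        gate = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - T)))
        return w * gate
    ```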

  8. An Analysis and Implementation of the BM3D Image Denoising Method

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-08-01

    Full Text Available BM3D is a recent denoising method based on the fact that an image has a locally sparse representation in the transform domain. This sparsity is enhanced by grouping similar 2D image patches into 3D groups. In this paper we propose an open-source implementation of the method. We discuss the choice of all the method's parameters and confirm their actual optimality. The description of the method is rewritten with a more transparent notation than in the original paper. A final index nonetheless gives the correspondence between the new notation and the original notation.

  9. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
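
    A minimal sketch of bootstrap selection of the thin-plate-spline smoothing parameter for a height field z = f(x, y), using SciPy's `Rbf`. The candidate grid and the squared-error measure are assumptions; duplicate bootstrap indices are dropped so the RBF system stays well-posed.

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    def bootstrap_smooth(x, y, z, candidates, n_boot=20, seed=0):
        """For each candidate smoothing value, fit thin-plate splines on
        bootstrap resamples and score them on the out-of-bag points; return
        the candidate with the smallest estimated test error."""
        rng = np.random.default_rng(seed)
        n = len(x)
        scores = []
        for s in candidates:
            errs = []
            for _ in range(n_boot):
                idx = np.unique(rng.integers(0, n, n))   # unique bootstrap points
                oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag test points
                if oob.size == 0:
                    continue
                f = Rbf(x[idx], y[idx], z[idx], function="thin_plate", smooth=s)
                errs.append(np.mean((f(x[oob], y[oob]) - z[oob]) ** 2))
            scores.append(np.mean(errs))
        return candidates[int(np.argmin(scores))]
    ```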

  10. An unified framework for Bayesian denoising for several medical and biological imaging modalities.

    Science.gov (United States)

    Sanches, João M; Nascimento, Jacinto C; Marques, Jorge S

    2007-01-01

    Multiplicative noise is often present in several medical and biological imaging modalities, such as MRI, Ultrasound, PET/SPECT and Fluorescence Microscopy. Removing the noise while preserving the details is not a trivial task. Bayesian algorithms have been used to tackle this problem. They succeed in accomplishing this task, but they lead to a computational burden as the image dimensionality increases. Therefore, a significant effort has been made to address this tradeoff, i.e., to develop fast and reliable algorithms that remove noise without distorting relevant clinical information. This paper provides a new unified framework for Bayesian denoising of images corrupted with additive and multiplicative noise. This makes it possible to deal with additive white Gaussian noise and with multiplicative noise described by Poisson and Rayleigh distributions, respectively. The proposed algorithm is based on the maximum a posteriori (MAP) criterion, and edge-preserving priors are used to avoid distortion of the relevant image details. The denoising task is performed by an iterative scheme based on a Sylvester/Lyapunov equation. This approach allows the use of fast and efficient algorithms, developed in the context of control theory, for solving the Sylvester/Lyapunov equation. Experimental results with synthetic and real data attest to the performance of the proposed technique, and competitive results are achieved when comparing to state-of-the-art methods.
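
    For context, SciPy exposes the kind of control-theoretic Sylvester solver the record refers to; the matrices below are toy stand-ins, not the paper's actual data-fidelity and prior terms.

    ```python
    import numpy as np
    from scipy.linalg import solve_sylvester

    # Each iteration of such a MAP scheme reduces to a Sylvester equation
    # A X + X B = Q, solved here with a standard control-theory routine.
    A = np.diag([2.0, 3.0])
    B = np.diag([1.0, 4.0])
    Q = np.ones((2, 2))
    X = solve_sylvester(A, B, Q)
    assert np.allclose(A @ X + X @ B, Q)
    ```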

  11. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-01-01

    Full Text Available An Intensified Charge-Coupled Device (ICCD) image is captured by an ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics: (a) different from the independent identically distributed (i.i.d.) noise in natural images, the noise in an ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) the pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in ICCD sensing images. First, we decompose the image into non-overlapping patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in the pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserving sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process with changed parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the final image by merging all the generated images into one. Experiments conducted on an ICCD sensing image dataset verify the scheme's subjective performance in removing the randomly clustered noise and preserving the real structure information.

  12. Edge-preserving image denoising via group coordinate descent on the GPU.

    Science.gov (United States)

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  13. A phase field method for joint denoising, edge detection, and motion estimation in image sequence processing

    NARCIS (Netherlands)

    Preusser, T.; Droske, M.; Garbe, C. S.; Telea, A.; Rumpf, M.

    2007-01-01

    The estimation of optical flow fields from image sequences is incorporated in a Mumford-Shah approach for image denoising and edge detection. Possibly noisy image sequences are considered as input and a piecewise smooth image intensity, a piecewise smooth motion field, and a joint discontinuity set

  14. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD; in that case, however, the series shows purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533

  15. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Full Text Available Abstract Background Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally-invasive kidney surgeries, such as Percutaneous Nephrolithotomy (PCNL) and Radio-Frequency Ablation (RFA) of kidney tumors, where access to a target inside the kidney is gained by a needle puncture of the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods In this paper, we proposed a novel framework for kidney segmentation of ultrasound (US) images combining nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. Firstly, a denoised US image was obtained by NLTV image denoising. Secondly, DRLSE was applied in the kidney segmentation to get a binary image, in which the black and white regions represented the kidney and the background, respectively. In the last stage, the shape prior was applied to get a shape with a smooth boundary from the kidney shape space, which was used to optimize the segmentation result of the second step. The alignment model was used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results We applied our segmentation framework on synthetic and real US images to demonstrate the better segmentation results of our method. From the qualitative results, the experiment results show that the segmentation results are much closer to the manual segmentations. The sensitivity (SN), specificity (SP) and positive predictive value

  16. BL_Wiener Denoising Method for Removal of Speckle Noise in Ultrasound Image

    OpenAIRE

    Suhaila Sari; Zuliana Azreen Zulkifeli; Hazli Roslan; Nabilah Ibrahim

    2015-01-01

    Medical imaging techniques are extremely important tools in medical diagnosis. One of these important imaging techniques is ultrasound imaging. However, during the ultrasound image acquisition process, the image quality can be degraded by corruption with speckle noise. Enhancing the quality of ultrasound images from 2D ultrasound imaging machines is expected to provide medical practitioners more reliable medical images for their patients' diagnosis. However, developing a denoising techn...

  17. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de-noising applications we propose input space distance regularization as a stabilizer for pre-image estimation. We perform extensive experiments on the USPS digit modeling problem to evaluate the stability of three widely used pre-image estimators. We show that the previous methods lack stability when the feature mapping is non-linear; however, by applying a simple input space distance regularizer we can reduce variability with very limited sacrifice in terms of de-noising efficiency.

  18. Selecting wavelet basis function for denoising of planar nuclear images using mutual information metric

    International Nuclear Information System (INIS)

    Matsuyama, Eri; Tsai, Du-Yih; Lee, Yongbum; Fuse, Masashi; Kojima, Katsuyuki

    2010-01-01

    Noise reduction in nuclear medicine images can be achieved by increasing the counts or by filtering the images. In this paper, we employed an image filtering technique, a wavelet-based method, for reducing image noise. We selected eight different wavelet basis functions for our study. Wavelet transforms were applied to planar images using the universal soft-thresholding method. We used mutual information (MI) as an image-quality metric to conduct quantitative image analysis and comparison of the processed images obtained from the eight selected wavelet basis functions. To validate the usefulness of the proposed metric, the standard deviation rate and edge slope ratio of the processed images were calculated and compared. In this study, a computer-generated 2-D grid-pattern image and phantom images acquired with a standard inkjet printer served as the original images. Simulation experiments and phantom experiments demonstrate that the MI value can be used as a criterion to select an appropriate wavelet basis for image denoising. (author)
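
    A minimal sketch of the metric itself: mutual information between two images computed from their joint gray-level histogram (the bin count is an assumption).

    ```python
    import numpy as np

    def mutual_information(a, b, bins=64):
        """MI between images a and b from the joint histogram; higher values
        indicate that the processed image retains more of the original."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                  # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```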

  19. A spectral CT denoising algorithm based on weighted block matching 3D filtering

    Science.gov (United States)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2017-09-01

    In spectral CT, an energy-resolving detector is capable of counting the number of received photons in different energy channels with appropriate post-processing steps. Because the received photon number in each energy channel is low in practice, the generated projections suffer from low signal-to-noise ratio. This poses a challenge to perform image reconstruction of spectral CT. Because the reconstructed multi-channel images are for the same object but in different energies, there is a high correlation among these images and one can make full use of this redundant information. In this work, we propose a weighted block-matching and three-dimensional (3-D) filtering (BM3D) based method for spectral CT denoising. It is based on denoising of small 3-D data arrays formed by grouping similar 2-D blocks from the whole 3-D data image. This method consists of the following two steps. First, a 2-D image is obtained using the filtered back-projection (FBP) in each energy channel. Second, the proposed weighted BM3D filtering is performed. It not only uses the spatial correlation within each channel image but also exploits the spectral correlation among the channel images. The proposed method is evaluated on both numerical simulation and realistic preclinical datasets, and its merits are demonstrated by the promising results.

  20. Image Denoising and Segmentation Approach to Detect Tumors from Brain MRI Images

    Directory of Open Access Journals (Sweden)

    Shanta Rangaswamy

    2018-04-01

    Full Text Available The detection of brain tumors is a challenging problem due to the structure of the tumor cells in the brain. This project presents a systematic method that enhances the detection of brain tumor cells by training and classifying the samples with an SVM and segmenting tumor cells using a DWT algorithm. First, noise is removed from the input MRI images by applying a Wiener filter. In the image enhancement phase, the color components of the MRI images are converted to grayscale and the edges in the image are sharpened for better identification and improved image quality. In the segmentation phase, the DWT is applied to segment the grayscale image. During post-processing, tumor classification is performed using the SVM classifier; the Wiener filter, DWT, and SVM strategies respectively locate and group the tumor position in the filtered MRI image. An essential observation in this work is that the multi-stage approach uses a hierarchical classification strategy, which improves performance considerably and reduces computational complexity in both time and memory. This classification strategy works accurately on all images and achieved an accuracy of 93%.

  1. A review of wavelet denoising in MRI and ultrasound brain imaging

    NARCIS (Netherlands)

    Pižurica, Aleksandra; Wink, Alle Meije; Vansteenkiste, Ewout; Philips, Wilfried; Roerdink, Jos B.T.M.

    There is a growing interest in using multiresolution noise filters in a variety of medical imaging applications. We review recent wavelet denoising techniques for medical ultrasound and for magnetic resonance images and discuss some of their potential applications in the clinical investigations of

  2. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    Science.gov (United States)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications.
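
    A minimal sketch of steps (i) and (ii) for two of the three inputs, using SciPy's adaptive Wiener filter and PyWavelets' stationary (shift-invariant) wavelet transform. The max-absolute fusion rule for the detail coefficients is a common choice assumed here, not necessarily the paper's; in the paper's order one would call `fuse_two(AC, DPC)` and then fuse the result with the DFC image.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import wiener

    def fuse_two(a, b, wavelet="haar", level=1):
        """Wiener-denoise both inputs, then fuse them in the stationary wavelet
        domain: average the approximations, keep the larger-magnitude detail
        coefficients. Image sides must be divisible by 2**level."""
        a, b = wiener(a), wiener(b)                 # adaptive Wiener denoising
        ca = pywt.swt2(a, wavelet, level=level)
        cb = pywt.swt2(b, wavelet, level=level)
        fused = []
        for (aA, aD), (bA, bD) in zip(ca, cb):
            A = 0.5 * (aA + bA)                     # average approximations
            D = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                      for d1, d2 in zip(aD, bD))    # max-abs rule for details
            fused.append((A, D))
        return pywt.iswt2(fused, wavelet)
    ```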

  3. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    Science.gov (United States)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise which interferes in the analysis and interpretation of images. Therefore, it is extremely important to extend the capacity of filters to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches based on non-local means (NLM), originally devised for AWGN, extending its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution, without transforming the data to the logarithmic domain as in a homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method uses a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters of the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.

  4. Edge-preserving image denoising via group coordinate descent on the GPU

    OpenAIRE

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This pape...

  5. Radar Target Recognition Based on Stacked Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Zhao Feixiang

    2017-04-01

    Full Text Available Feature extraction is a key step in radar target recognition. The quality of the extracted features determines the performance of target recognition. However, obtaining the deep nature of the data is difficult using traditional methods. An autoencoder can learn features from the data itself and can obtain feature representations at different levels. To eliminate the influence of noise, a radar target recognition method based on a stacked denoising sparse autoencoder is proposed in this paper. This method can extract features directly and efficiently by setting different hidden layers and numbers of iterations. Experimental results show that the proposed method is superior to the K-nearest neighbor method and the traditional stacked autoencoder.

  6. Fractional fourier-based filter for denoising elastograms.

    Science.gov (United States)

    Subramaniam, Suba R; Hon, Tsz K; Georgakis, Apostolos; Papadakis, George

    2010-01-01

    In ultrasound elastography, tissue axial strains are obtained through differentiation of axial displacements. However, the application of the gradient operator amplifies the noise present in the displacement, rendering the axial strains unreadable. In this paper a novel denoising scheme based on repeated filtering in consecutive fractional Fourier transform domains is proposed for the accurate estimation of axial strains. The presented method generates a time-varying cutoff threshold that can accommodate the discrete non-stationarities present in the displacement signal. This is achieved by means of a filter circuit which is composed of a small number of ordinary linear low-pass filters and appropriate fractional Fourier transforms. We show that the proposed method can improve the contrast-to-noise ratio (CNRe) of the elastogram, outperforming conventional low-pass filters.

  7. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    Science.gov (United States)

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method to address this. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, on different 2D and 3D images. PMID:26089959

  8. Denoising of two-photon fluorescence images with block-matching 3D filtering.

    Science.gov (United States)

    Danielyan, Aram; Wu, Yu-Wei; Shih, Pei-Yu; Dembitskaya, Yulia; Semyanov, Alexey

    2014-07-01

    Two-photon fluorescence imaging is widely used to perform morphological analysis of subcellular structures such as neuronal dendrites and spines, astrocytic processes, etc. This method is also indispensable for functional analysis of cellular activity such as Ca2+ dynamics. Although the spatial resolution of a laser scanning two-photon system is greater than that of a confocal or wide-field microscope, it is still diffraction limited. In practice, the resolution of the system is more affected by its signal-to-noise ratio (SNR) than by the diffraction limit. Thus, approaches aiming to increase the SNR in two-photon imaging are desirable and can potentially save on building a costly super-resolution imaging system. Here we analyze the statistics of noise in two-photon fluorescence images of hippocampal astrocytes expressing the genetically encoded Ca2+ sensor GCaMP2 and show that it can be reasonably well approximated using the same models used to describe noise in images acquired with digital cameras. This allows denoising methods developed for wide-field imaging to be used on two-photon images. In particular, we demonstrate that the Block-Matching 3D (BM3D) filter can significantly improve the quality of two-photon fluorescence images, so that small details such as astrocytic processes can be identified more easily. Moreover, denoising the images with BM3D yields less noisy Ca2+ signals in astrocytes than denoising the images with a Gaussian filter.

  9. [Research on electrocardiogram de-noising algorithm based on wavelet neural networks].

    Science.gov (United States)

    Wan, Xiangkui; Zhang, Jun

    2010-12-01

    In this paper, an ECG de-noising technology based on wavelet neural networks (WNN) is used to deal with the noise in electrocardiogram (ECG) signals. The WNN, which has an outstanding nonlinear mapping capability, is designed as a nonlinear filter for ECG to cancel baseline wander, electromyographical interference, and powerline interference. The network training algorithm and de-noising experiment results are presented, and some key points of using the WNN filter for ECG de-noising are discussed.

  10. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.

  11. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of this noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  12. Non-local neighbor embedding image denoising algorithm in sparse domain

    Science.gov (United States)

    Shi, Guo-chuan; Xia, Liang; Liu, Shuang-qing; Xu, Guo-ming

    2013-12-01

    To get better denoising results, prior knowledge of natural images should be taken into account to regularize the ill-posed inverse problem. In this paper, we propose an image denoising algorithm via non-local similar neighbor embedding in the sparse domain. Firstly, a local statistical feature, namely histograms of oriented gradients of image patches, is used to perform clustering, by which the whole training data set is partitioned into a set of subsets with similar local geometric structures, and the centroid of each subset is obtained. Secondly, we apply principal component analysis (PCA) to learn a compact sub-dictionary for each cluster. Next, through sparse coding over the sub-dictionary and neighborhood selection, the image patch to be synthesized can be approximated by its top k neighbors. The extensive experimental results validate the effectiveness of the proposed method in terms of both PSNR and visual perception.

  13. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Othman, Ahmed E. [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Brockmann, Carolin; Afat, Saif; Pjontek, Rastislav; Nikoubashman, Omid; Brockmann, Marc A.; Wiesmann, Martin [RWTH Aachen University, Department of Diagnostic and Interventional Neuroradiology, Aachen (Germany); Yang, Zepa; Kim, Changwon [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Nikolaou, Konstantin [Eberhard Karls University Tuebingen, University Hospital Tuebingen, Department for Diagnostic and Interventional Radiology, Tuebingen (Germany); Kim, Jong Hyo [Seoul National University, Department of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Suwon (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Advanced Institute of Convergence Technology, Center for Medical-IT Convergence Technology Research, Suwon (Korea, Republic of); Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2016-01-15

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  14. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  15. Optimal inversion of the Anscombe transformation in low-count Poisson image denoising.

    Science.gov (United States)

    Mäkitalo, Markku; Foi, Alessandro

    2011-01-01

    The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
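
    The three-step procedure can be sketched directly. The closed-form expression below is the published asymptotic approximation of the exact unbiased inverse, not the tabulated exact inverse itself, and `gaussian_denoise` stands in for any off-the-shelf AWGN denoiser.

    ```python
    import numpy as np

    def anscombe(x):
        """Forward Anscombe root transform: Poisson data -> approximately
        unit-variance Gaussian."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_algebraic(d):
        """Direct algebraic inverse; biased at low counts."""
        return (d / 2.0) ** 2 - 3.0 / 8.0

    def inverse_unbiased(d):
        """Closed-form asymptotic approximation of the exact unbiased inverse."""
        return ((d / 2.0) ** 2 - 1.0 / 8.0
                + np.sqrt(3.0 / 2.0) / (4.0 * d)
                - 11.0 / (8.0 * d ** 2)
                + 5.0 * np.sqrt(3.0 / 2.0) / (8.0 * d ** 3))

    def poisson_denoise(x, gaussian_denoise):
        """Stabilize, denoise as additive white Gaussian noise, then invert."""
        return inverse_unbiased(gaussian_denoise(anscombe(x)))
    ```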

  16. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    International Nuclear Information System (INIS)

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-01-01

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images by comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled the authors to obtain better-quality EPT images of the phantoms and the human brain at 3 T

  17. Denoising method of heart sound signals based on self-construct heart sound wavelet

    Directory of Open Access Journals (Sweden)

    Xiefeng Cheng

    2014-08-01

    In the field of heart sound signal denoising, the wavelet transform has become one of the most effective tools. The wavelet basis is conventionally selected from the well-known orthogonal db series or biorthogonal bior series. In this paper we present a self-constructed wavelet basis suited to heart sound denoising and analyze its construction method and features in detail according to the characteristics of heart sounds and the evaluation criteria of signal denoising. The experimental results show that the heart sound wavelet can effectively filter out the noise of heart sound signals while preserving the main characteristics of the signal. Compared with the traditional wavelets, it achieves a higher signal-to-noise ratio, lower mean square error and better denoising effect.

  18. Denoising time-resolved microscopy image sequences with singular value thresholding

    Energy Technology Data Exchange (ETDEWEB)

    Furnival, Tom, E-mail: tjof2@cam.ac.uk; Leary, Rowan K., E-mail: rkl26@cam.ac.uk; Midgley, Paul A., E-mail: pam33@cam.ac.uk

    2017-07-15

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. - Highlights: • Correlations in space and time are harnessed to denoise microscopy image sequences. • A robust estimator provides automated selection of the denoising parameter. • Motion tracking and automated noise estimation provides a versatile algorithm. • Application to time-resolved STEM enables study of atomic and nanoparticle dynamics.
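
    The low-rank recovery step itself is only a few lines: stack the frames as columns of a Casorati (pixels x time) matrix, soft-threshold its singular values, and reshape back. In the paper the threshold is selected automatically by an unbiased risk estimator; here it is a fixed input (names are ours):

      import numpy as np

      def svt_denoise(frames, tau):
          # frames: (T, H, W) image sequence
          t, h, w = frames.shape
          X = frames.reshape(t, h * w).T            # Casorati matrix
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s = np.maximum(s - tau, 0.0)              # soft-threshold singular values
          return ((U * s) @ Vt).T.reshape(t, h, w)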

  19. Research on De-Noising of Power Quality Disturbance Detection Based on EEMD Threshold

    Directory of Open Access Journals (Sweden)

    Liping Chen

    2013-11-01

    Actual power quality signals are often affected by noise pollution, which impacts the analysis of the disturbance signal. In this paper, an EEMD (Ensemble Empirical Mode Decomposition) based threshold de-noising method is proposed for power quality signals with different SNR (signal-to-noise ratio). As a comparison, four other thresholds, namely the heuristic threshold, the self-adaptive threshold, the fixed threshold and the minimax threshold, are used to filter the noise from the power quality signal. Three characteristics of the signal before and after de-noising are analyzed and compared: the waveform, the SNR and the MSE (mean square error), together with the instantaneous attributes at corresponding times obtained by the HHT (Hilbert-Huang transform). Simulation results show that the EEMD threshold de-noising method brings the waveform close to the actual value; the SNR is higher and the MSE smaller compared with the other four thresholds, and the instantaneous attributes reflect the actual disturbance signal more exactly. The optimal-threshold EEMD-based algorithm is therefore proposed for power quality disturbance signal de-noising. Moreover, its adaptivity makes the EEMD threshold de-noising method suitable for composite disturbance signals.
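
    A minimal sketch of the decompose-threshold-reconstruct idea, assuming the PyEMD package for the EEMD step; the robust noise-scale estimate and the hard threshold k*sigma below are placeholders for the thresholds compared in the paper:

      import numpy as np
      from PyEMD import EEMD  # assumes the PyEMD (EMD-signal) package

      def eemd_threshold_denoise(x, n_noisy=3, k=3.0):
          imfs = EEMD().eemd(x)            # rows are IMFs, fine to coarse
          for i in range(min(n_noisy, len(imfs))):
              sigma = np.median(np.abs(imfs[i])) / 0.6745   # robust scale
              imfs[i] = np.where(np.abs(imfs[i]) > k * sigma, imfs[i], 0.0)
          return imfs.sum(axis=0)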

  20. Integration of speckle de-noising and image segmentation using ...

    Indian Academy of Sciences (India)

    Flood is one of the detrimental hydro-meteorological threats to mankind. This compels very efficient flood assessment models. In this paper, we propose remote sensing based flood assessment using Synthetic Aperture Radar (SAR) image because of its imperviousness to unfavourable weather conditions. However, they ...

  1. Static Myocardial Perfusion Imaging using denoised dynamic Rb-82 PET/CT scans

    DEFF Research Database (Denmark)

    Petersen, Maiken N.M.; Hoff, Camilla; Harms, Hans

    Introduction: Relative and absolute measures of myocardial perfusion are derived from a single 82Rb PET/CT scan. However, images are inherently noisy due to the short half-life of 82Rb. We have previously shown that denoising techniques can be applied to dynamic 82Rb series with excellent... quantitative accuracy. In this study, we examine static images created by summing late frames of denoised dynamic series. Method: 47 random clinical 82Rb stress and rest scans (27 male, age 68 +/- 12 y., BMI 27.9 +/- 5.5 kg/m2) performed on a GE Discovery 690 PET/CT scanner were included in the study... Administered 82Rb dose was 1110 MBq. Denoising using the HYPR-LR or Hotelling 3D algorithms was performed as post-processing on the dynamic image series. Static series were created by summing frames from 2.5-5 min. The image data were analysed in QPET (Cedars-Sinai). Relative segmental perfusion (normalized...

  2. QuaSI: Quantile Sparse Image Prior for Spatio-Temporal Denoising of Retinal OCT Data

    OpenAIRE

    Schirrmacher, Franziska; Köhler, Thomas; Husvogt, Lennart; Fujimoto, James G.; Hornegger, Joachim; Maier, Andreas K.

    2017-01-01

    Optical coherence tomography (OCT) enables high-resolution and non-invasive 3D imaging of the human retina but is inherently impaired by speckle noise. This paper introduces a spatio-temporal denoising algorithm for OCT data on a B-scan level using a novel quantile sparse image (QuaSI) prior. To remove speckle noise while preserving image structures of diagnostic relevance, we implement our QuaSI prior via median filter regularization coupled with a Huber data fidelity model in a variational ...

  3. Signal denoising using stochastic resonance and bistable circuit for acoustic emission-based structural health monitoring

    Science.gov (United States)

    Kim, Jinki; Harne, Ryan L.; Wang, K. W.

    2017-04-01

    Noise is unavoidable and ever-present in measurements. As a result, signal denoising is a necessity for many scientific and engineering disciplines. In particular, structural health monitoring applications aim to detect often weak anomaly responses generated by incipient damage (such as acoustic emission signals) against the background noise that contaminates the signals. Among various approaches, stochastic resonance has been widely studied and adopted for denoising and weak signal detection to enhance the reliability of structural health monitoring. On the other hand, many of the advancements have focused on extracting useful information from the frequency domain, generally in a postprocessing environment (for example, identifying damage-induced frequency changes made more prominent by stochastic resonance in bistable systems), rather than recovering the original time domain responses. In this study, a new adaptive signal conditioning strategy is presented for on-line signal denoising and recovery that utilizes stochastic resonance in a bistable circuit sensor. The input amplitude to the bistable system is adaptively adjusted to favorably activate stochastic resonance based on the noise level of the given signal, which is one of the few quantities that can be readily assessed from noise-contaminated signals in practical situations. Numerical investigations employing a theoretical model of a double-well Duffing analog circuit demonstrate the operational principle and confirm the denoising performance of the new method. This study exemplifies the promising potential of the new denoising strategy for enhancing on-line acoustic emission-based structural health monitoring.
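
    The bistable element at the heart of this strategy can be illustrated by forward-Euler integration of an overdamped double-well system; when the suitably scaled noisy input drives the state across the potential barrier, weak coherent content is amplified, which is the stochastic resonance effect the circuit exploits (a toy model under our own naming, not the authors' circuit):

      import numpy as np

      def bistable_response(u, dt=1e-3, a=1.0, b=1.0):
          # Overdamped double-well dynamics: x' = a*x - b*x^3 + u(t)
          x = np.zeros(len(u))
          for i in range(1, len(u)):
              x[i] = x[i - 1] + dt * (a * x[i - 1] - b * x[i - 1] ** 3 + u[i - 1])
          return x

    In the paper the input amplitude is rescaled adaptively from the estimated noise level before being fed to the circuit; in this toy model that corresponds to multiplying u by a gain chosen so that well-to-well hopping is favorably activated.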

  4. Subsignal-based denoising from piecewise linear or constant signal

    Science.gov (United States)

    Jalil, Bushra; Beya, Ouadi; Fauvet, Eric; Laligant, Olivier

    2011-11-01

    In the present work, a novel signal denoising technique for piecewise constant or linear signals is presented, termed "signal split." The proposed method separates the sharp edges or transitions from the noise elements by splitting the signal into different parts. Unlike many noise removal techniques, the method works only in the nonorthogonal domain. The new method utilizes the Stein unbiased risk estimate (SURE) to split the signal, Lipschitz exponents to identify noise elements, and a polynomial fitting approach for the subsignal reconstruction. At the final stage, merging all parts yields the fully denoised signal at a very low computational cost. Statistical results are quite promising, and the method performs better than conventional shrinkage methods for different types of noise, i.e., speckle, Poisson, and white Gaussian noise. The method has been compared with the state-of-the-art SURE-LET (linear expansion of thresholds) denoising technique as well and performs equally well. The method has been extended to a multisplitting approach to identify small edges which are difficult to identify due to the mutual influence of their adjacent strong edges.

  5. Integration of speckle de-noising and image segmentation using ...

    Indian Academy of Sciences (India)

    Aperture Radar (SAR) image because of its imperviousness to unfavourable weather conditions. However, they suffer from the speckle noise. Hence, the processing of SAR image is applied in two stages: speckle removal filters and image segmentation methods for flood mapping. The speckle noise has been reduced.

  6. Machinery vibration signal denoising based on learned dictionary and sparse representation

    International Nuclear Information System (INIS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-01-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods; however, they rely on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal, and an online dictionary learning algorithm is adopted to satisfy the requirements of real-time signal processing. Orthogonal matching pursuit is applied to select the most relevant atoms from the dictionary. Finally, the denoised signal is computed from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several denoising algorithms, and its computational efficiency is demonstrated by an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both denoising quality and computational efficiency. (paper)
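
    The learn-code-reconstruct pipeline maps naturally onto scikit-learn primitives; the sketch below learns atoms from overlapping patches of the raw 1D signal, sparse-codes them with OMP, and overlap-adds the reconstructions (parameter values and names are ours, and the paper uses an online learner for real-time operation):

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      def dictionary_denoise(x, patch=64, n_atoms=32, k=5):
          # x: 1D float signal; slice into overlapping patches
          idx = np.arange(0, len(x) - patch, patch // 4)
          P = np.stack([x[i:i + patch] for i in idx])
          dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm='omp',
                                             transform_n_nonzero_coefs=k)
          code = dico.fit(P).transform(P)       # sparse coefficients
          P_hat = code @ dico.components_       # patch reconstructions
          out = np.zeros_like(x)
          cnt = np.zeros_like(x)
          for r, i in zip(P_hat, idx):          # overlap-add averaging
              out[i:i + patch] += r
              cnt[i:i + patch] += 1.0
          return out / np.maximum(cnt, 1.0)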

  7. Denoising GPS-Based Structure Monitoring Data Using Hybrid EMD and Wavelet Packet

    Directory of Open Access Journals (Sweden)

    Lu Ke

    2017-01-01

    High-frequency components are often discarded for data denoising when applying pure wavelet multiscale or empirical mode decomposition (EMD) based approaches, which may cause energy leakage in vibration signals. A hybrid EMD and wavelet packet method (EMD-WP) is proposed to denoise Global Positioning System (GPS) based structure monitoring data. First, field observables are decomposed into a collection of intrinsic mode functions (IMFs) with different characteristics. Second, the high-frequency IMFs are denoised using the wavelet packet; the monitoring data are then reconstructed using the denoised IMFs together with the remaining low-frequency IMFs. The algorithm is demonstrated on the synthetic displacement response of a 3-story frame excited by the El Centro earthquake, with Gaussian white noise of different levels added. We find that the hybrid method can effectively weaken the low-frequency multipath effect and can potentially extract vibration features. However, false modes may still be introduced by residual noise in the high-frequency IMFs and when the noise frequency lies in the same band as the effective vibration. Finally, real GPS observables are used to evaluate the efficiency of the EMD-WP method in mitigating low-frequency multipath.

  8. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time-frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time-frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation, and experiments including a rolling

  9. Temporal and Volumetric Denoising via Quantile Sparse Image (QuaSI) Prior in Optical Coherence Tomography and Beyond

    OpenAIRE

    Schirrmacher, Franziska; Köhler, Thomas; Lindenberger, Tobias; Husvogt, Lennart; Endres, Jürgen; Fujimoto, James G.; Hornegger, Joachim; Dörfler, Arnd; Hoelter, Philip; Maier, Andreas K.

    2018-01-01

    This paper introduces a universal and structure-preserving regularization term, called the quantile sparse image (QuaSI) prior. The prior is suitable for denoising images from various medical imaging modalities. We demonstrate its effectiveness on volumetric optical coherence tomography (OCT) and computed tomography (CT) data, which show different noise and image characteristics. OCT offers high-resolution scans of the human retina but is inherently impaired by speckle noise. CT on the other hand ha...

  10. Cubesat-Derived Detection of Seagrasses Using Planet Imagery Following Unmixing-Based Denoising: is Small the Next Big?

    Science.gov (United States)

    Traganos, D.; Cerra, D.; Reinartz, P.

    2017-05-01

    Seagrasses are one of the most productive and widespread yet threatened coastal ecosystems on Earth. Despite their importance, they are declining due to various threats, mainly anthropogenic. Lack of data on their distribution hinders any effort to rectify this decline through effective detection, mapping and monitoring. Remote sensing can mitigate this data gap by allowing retrospective quantitative assessment of seagrass beds over large and remote areas. In this paper, we evaluate the quantitative application of Planet high resolution imagery for the detection of seagrasses in the Thermaikos Gulf, NW Aegean Sea, Greece. The low signal-to-noise ratio (SNR) that characterizes spectral bands at shorter wavelengths prompts the application of unmixing-based denoising (UBD) as a pre-processing step for seagrass detection. A total of 15 spectral-temporal patterns are extracted from a Planet image time series to restore the corrupted blue and green bands in the processed Planet image. Subsequently, we implement Lyzenga's empirical water column correction and Support Vector Machines (SVM) to evaluate the quantitative benefits of denoising. Denoising aids detection of the seagrass species Posidonia oceanica by increasing its producer and user accuracy by 31.7 % and 10.4 %, respectively, with a corresponding increase in its Kappa value from 0.3 to 0.48. In the near future, our objective is to improve seagrass detection accuracy by applying more sophisticated, analytical water column correction algorithms to Planet imagery, developing time- and cost-effective monitoring of seagrass distribution that will in turn enable the effective management and conservation of these highly valuable and productive ecosystems.

  12. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    Science.gov (United States)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially those associated with the study of climate change, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based (WWB) filtering technique, which uses the entropy-based wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from (i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), (ii) the Advanced SCATterometer (ASCAT), and (iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB (i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and (ii) effectively separates random noise from the deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time...

  13. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of the channel estimator incorporating MMSE based techniques is better than that obtained with LS based techniques. To enhance the MSE performance of LS based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold based LS techniques is that they do not require KCS but still render near-optimal MMSE performance. In this paper, a detailed survey of various existing denoising strategies is presented, with a comparative discussion of these strategies.
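
    The denoising-threshold idea applied to the LS-estimated CIR can be sketched as follows, assuming for simplicity that pilots occupy all subcarriers and that the delay spread is confined to the first half of the CIR; the threshold rule is illustrative, standing in for the surveyed strategies:

      import numpy as np

      def ls_cir_denoise(Y, X, n_fft, scale=3.0):
          H_ls = Y / X                         # LS estimate at the pilots
          h = np.fft.ifft(H_ls, n_fft)         # channel impulse response
          # Noise floor estimated from the tail taps beyond the delay spread
          noise = np.mean(np.abs(h[n_fft // 2:]) ** 2)
          h[np.abs(h) ** 2 < scale * noise] = 0.0   # discard noise-only taps
          return np.fft.fft(h, n_fft)          # denoised channel estimate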

  14. Enhancing seismic P phase arrival picking based on wavelet denoising and kurtosis picker

    Science.gov (United States)

    Shang, Xueyi; Li, Xibing; Weng, Lei

    2018-01-01

    P phase arrival picking of weak signals is still challenging in seismology. A wavelet denoising is proposed to enhance seismic P phase arrival picking, and a kurtosis picker is applied to the wavelet-denoised signal to identify the P phase arrival; the combination is called the WD-K picker. The WD-K picker, unlike traditional wavelet-based pickers built on a single wavelet component or certain main wavelet components, takes full advantage of the reconstruction of the main detail wavelet components and the approximate wavelet component; it thus considers more wavelet components and presents a better P phase arrival feature. The WD-K picker has been evaluated on 500 micro-seismic signals recorded in the Chinese Yongshaba mine. The comparison between the WD-K pickings and manual pickings shows the good picking accuracy of the WD-K picker. Furthermore, the WD-K picking performance has been compared with the main detail wavelet component combining-based kurtosis (WDC-K) picker, the single wavelet component-based kurtosis (SW-K) picker, and the certain main wavelet component-based maximum kurtosis (MMW-K) picker. The comparison demonstrates that the WD-K picker has better picking accuracy than the other three wavelet- and kurtosis-based pickers, thus showing the enhanced ability of wavelet denoising.
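
    Given a wavelet-denoised trace, the kurtosis-picking stage reduces to scanning a sliding window and locating the sharp rise that the impulsive P phase produces in the kurtosis curve; a crude sketch (the window length and the onset rule are simplifications of the published picker):

      import numpy as np
      from scipy.stats import kurtosis

      def kurtosis_pick(xd, win=100):
          # xd: wavelet-denoised trace
          k = np.array([kurtosis(xd[i:i + win])
                        for i in range(len(xd) - win)])
          return int(np.argmax(np.diff(k))) + win  # steepest rise ~ onset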

  15. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.
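
    For orientation, TV denoising with a single hand-picked fidelity weight looks as below (scikit-image); the point of the paper is precisely that such weights, one per candidate noise model, can instead be determined by the bilevel, PDE-constrained optimization:

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      rng = np.random.default_rng(0)
      clean = np.zeros((64, 64))
      clean[16:48, 16:48] = 1.0
      noisy = clean + 0.2 * rng.standard_normal(clean.shape)
      denoised = denoise_tv_chambolle(noisy, weight=0.15)  # weight fixed by hand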

  16. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    Science.gov (United States)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear and non-stationary weak signal that reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital step in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising but not yet perfect method for processing nonlinear and non-stationary signals like the ECG, and combining EMD with other algorithms is a good way to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  17. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    Science.gov (United States)

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output; however, the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.

  18. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, sound collected directly in the industrial field is always polluted. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho are discontinuous, many modified functions with continuous first and second order derivatives have been presented to realize adaptive denoising. However, the function-based denoising process is time-consuming, and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) is introduced into the process. Moreover, to avoid falling into local extrema, an improved fly distance range obeying a normal distribution is proposed on the basis of the original FOA. Then, the sound signal of a motor recorded in a soundproof laboratory, with Gauss white noise added, is used in simulations whose results illustrate the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face demonstrates the practical effect.
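
    The wavelet threshold denoising core can be sketched with PyWavelets using Donoho's universal threshold; in the paper, the threshold-function parameters are instead tuned by the improved FOA (names are ours):

      import numpy as np
      import pywt  # assumes the PyWavelets package

      def wavelet_soft_denoise(x, wavelet='db5', level=4):
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # robust noise scale
          thr = sigma * np.sqrt(2.0 * np.log(len(x)))       # universal threshold
          coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[:len(x)]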

  19. Research on Mechanical Fault Diagnosis Scheme Based on Improved Wavelet Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Wentao He

    2016-01-01

    Wavelet analysis is a powerful tool for signal processing and mechanical equipment fault diagnosis due to the advantages of multiresolution analysis and excellent local characteristics in the time-frequency domain. Wavelet total variation (WATV) was recently developed from traditional wavelet analysis; it combines the advantages of wavelet-domain sparsity and total variation (TV) regularization. To guarantee the sparsity of the solution and the convexity of the total objective function, a nonconvex penalty function is chosen as the wavelet penalty function in WATV. The actual noise reduction effect of the WATV method largely depends on the estimation of the noise variance. In this paper, an improved wavelet total variation (IWATV) denoising method is introduced: local variance analysis of the wavelet coefficients obtained from the wavelet decomposition of noisy signals is employed to estimate the noise variance, providing a scientific evaluation index. Analysis of a numerical simulation signal and real-world failure data demonstrates that the IWATV method has obvious advantages over traditional wavelet threshold denoising and total variation denoising in mechanical fault diagnosis.

  20. [Denoising of Fetal Heart Sound Based on Empirical Mode Decomposition Method].

    Science.gov (United States)

    Liu, Qiaoqiao; Tan, Zhixiang; Zhang, Yi; Wang, Hua

    2015-08-01

    The fetal heart sound is nonlinear and non-stationary and contains a lot of noise when collected, so the denoising method is important. We propose a new denoising method in this study. First, we preprocess the signal with a low-pass filter with a cutoff frequency of 200 Hz and resampling. Second, we decompose the signal with the empirical mode decomposition (EMD) method of the Hilbert-Huang transform and denoise selected target components with a wavelet soft-threshold adaptive noise cancellation algorithm. Finally, we obtain the clean fetal heart sound by combining the target components. In the EMD, we use a mask signal to eliminate the mode mixing problem, use the mirroring extension method to eliminate the end effect, and adopt the stopping rule from the research of Rilling. This method eliminates the baseline drift and noise at once. Compared with the wavelet transform (WT), mathematical morphology (MM) and the Fourier transform (FT), the SNR is improved obviously and the RMSE is the smallest, which satisfies the needs of practical application.

  1. Shape-adaptive DCT for denoising of 3D scalar and tensor valued images.

    Science.gov (United States)

    Bergmann, Ørjan; Christiansen, Oddvar; Lie, Johan; Lundervold, Arvid

    2009-06-01

    During the last ten years or so, diffusion tensor imaging has been used in both research and clinical medical applications. To construct the diffusion tensor images, a large set of direction-sensitive magnetic resonance image (MRI) acquisitions is required. These acquisitions in general have a lower signal-to-noise ratio than conventional MRI acquisitions. In this paper, we discuss computationally effective algorithms for noise removal in diffusion tensor magnetic resonance imaging (DTI) using the framework of the 3-dimensional shape-adaptive discrete cosine transform. We use local polynomial approximations for the selection of homogeneous regions in the DTI data. These regions are transformed to the frequency domain by a modified discrete cosine transform, where the noise is removed by thresholding. We perform numerical experiments on synthetic 3D MRI and DTI data and real 3D DTI brain data from a healthy volunteer. The experiments indicate good performance compared to current state-of-the-art methods. The proposed method is well suited for parallelization and could thus dramatically improve the computation speed of denoising schemes for large-scale 3D MRI and DTI.
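
    Stripped of the shape adaptivity, the transform-threshold-invert core looks as below with a fixed-shape 3D DCT (SciPy); the contribution of the paper is to adapt the transform support to the locally homogeneous regions selected by local polynomial approximation:

      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_denoise_block(vol, thr):
          # vol: 3D block assumed homogeneous; hard-threshold DCT coefficients
          c = dctn(vol, norm='ortho')
          c[np.abs(c) < thr] = 0.0
          return idctn(c, norm='ortho')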

  2. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. Based on the probability density function (PDF), an adaptive de-noising algorithm built on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieves better performance than support vector machine (SVM) and back propagation neural network (BP) methods.

  4. Image matching in Bayer raw domain to de-noise low-light still images, optimized for real-time implementation

    Science.gov (United States)

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-03-01

    Temporal accumulation of images is a well-known approach to improving the signal-to-noise ratio of still images taken in low-light conditions. However, the complexity of known algorithms often leads to high hardware resource usage, increased memory bandwidth and computational complexity, making their practical use impossible. In our research we attempt to solve this problem with a practical implementation of a spatial-temporal de-noising algorithm based on image accumulation. Image matching and spatial-temporal filtering are performed in the Bayer RAW data space, which allows us to benefit from predictable sensor noise characteristics and thus to use a range of algorithmic optimizations. The proposed algorithm accurately compensates for global and local motion and efficiently removes different kinds of noise in noisy images taken in low-light conditions. We were able to perform global and local motion compensation in the Bayer RAW data space while preserving the resolution and effectively improving the signal-to-noise ratio of moving objects as well as the non-stationary background. The proposed algorithm is suitable for implementation in commercial-grade FPGAs and is capable of processing 16 MP images at the capture rate (10 frames per second). The main challenge for matching between still images is the compromise between the quality of the motion prediction on one hand and the complexity of the algorithm and the required memory bandwidth on the other. Still images taken in a burst sequence must be aligned to compensate for background motion and the movement of foreground objects in the scene. High-resolution still images coupled with significant time between successive frames can produce large displacements between images, which creates additional difficulty for image matching algorithms. In photo applications it is very important that the noise is efficiently removed in both static and non-static backgrounds as well as in moving objects, while maintaining the resolution of the image. In our proposed

  5. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering, which relies on spectral domain-only information, with the main drawback being high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of the spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of the spatial segmentation and allows us to extract the underlying structural and compositional information contained in the Raman microspectra.

  6. Optimization of Wavelet-Based De-noising in MRI

    Czech Academy of Sciences Publication Activity Database

    Bartušek, Karel; Přinosil, J.; Smékal, Z.

    2011-01-01

    Roč. 20, č. 1 (2011), s. 85-93 ISSN 1210-2512 R&D Projects: GA ČR GA102/09/0314 Institutional research plan: CEZ:AV0Z20650511 Keywords : wavelet transformation * filtering technique * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.739, year: 2011

  7. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
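
    The diffusion component of the algorithm can be sketched in flat 2D; on a surface, the Closest Point Method alternates such a step computed in the embedding volume with an interpolation step back onto the surface (the discretization below is the standard explicit Perona-Malik scheme, under our own naming):

      import numpy as np

      def perona_malik_step(u, dt=0.1, kappa=0.1):
          # One explicit step of edge-preserving Perona-Malik diffusion
          gn = np.roll(u, -1, 0) - u   # neighbor differences (N, S, E, W)
          gs = np.roll(u, 1, 0) - u
          ge = np.roll(u, -1, 1) - u
          gw = np.roll(u, 1, 1) - u
          c = lambda g: np.exp(-(g / kappa) ** 2)   # edge-stopping function
          return u + dt * (c(gn) * gn + c(gs) * gs + c(ge) * ge + c(gw) * gw)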

  8. Wavelet-based de-noising techniques in MRI

    Czech Academy of Sciences Publication Activity Database

    Bartušek, Karel; Přinosil, J.; Smékal, Z.

    2011-01-01

    Roč. 104, č. 3 (2011), s. 480-488 ISSN 0169-2607 R&D Projects: GA MŠk ED0017/01/01; GA ČR GAP102/11/0318 Institutional research plan: CEZ:AV0Z20650511 Keywords : wavelet transformation * filtering technique * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.516, year: 2011

  9. A Denoising Based Autoassociative Model for Robust Sensor Monitoring in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Ahmad Shaheryar

    2016-01-01

    Sensor health monitoring is essential for the reliable functioning of safety-critical chemical and nuclear power plants. Autoassociative neural network (AANN) based empirical sensor models have been widely reported for sensor calibration monitoring. However, such ill-posed data-driven models may suffer from poor generalization and robustness. To address these issues, several regularization heuristics such as training with jitter, weight decay, and cross-validation are suggested in the literature. Apart from these heuristics, traditional error-gradient based supervised learning algorithms for multilayered AANN models are also highly susceptible to being trapped in local optima. In order to address the poor regularization and robust learning issues, we propose a denoised autoassociative sensor model (DAASM) based on a deep learning framework. The proposed DAASM comprises multiple hidden layers which are pretrained greedily in an unsupervised fashion under a denoising autoencoder architecture. To improve robustness, a dropout heuristic and domain-specific data corruption processes are exercised during the unsupervised pretraining phase. The proposed sensor model is trained and tested on sensor data from a PWR-type nuclear power plant. Accuracy, autosensitivity, spillover, and sequential probability ratio test (SPRT) based fault detectability metrics are used for performance assessment and comparison with the extensively reported five-layer AANN model by Kramer.
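
    A single denoising-autoencoder layer of the kind stacked in such a model can be sketched in PyTorch; the Gaussian corruption below is a stand-in for the domain-specific corruption processes described in the paper, and the dropout and greedy stacking are omitted (all names are ours):

      import torch
      import torch.nn as nn

      class DenoisingAE(nn.Module):
          def __init__(self, n_sensors, n_hidden):
              super().__init__()
              self.enc = nn.Sequential(nn.Linear(n_sensors, n_hidden), nn.Sigmoid())
              self.dec = nn.Linear(n_hidden, n_sensors)

          def forward(self, x):
              return self.dec(self.enc(x))

      def train_dae(model, data, noise_std=0.1, epochs=100, lr=1e-3):
          opt = torch.optim.Adam(model.parameters(), lr=lr)
          loss_fn = nn.MSELoss()
          for _ in range(epochs):
              corrupted = data + noise_std * torch.randn_like(data)
              loss = loss_fn(model(corrupted), data)  # reconstruct clean input
              opt.zero_grad()
              loss.backward()
              opt.step()
          return model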

  10. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    Science.gov (United States)

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, diabetic retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation within an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the contrast between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula, optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels of diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a vessel location map (VLM) is extracted by using raster-to-vector transformation, and postprocessing is employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
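
    The vesselness-enhancement and thresholding stages map directly onto scikit-image primitives; the sketch below uses the plain Otsu threshold where the paper uses an improved variant, and omits the parallel high-boost branch and the VLM postprocessing (names are ours):

      import numpy as np
      from skimage.filters import frangi, threshold_otsu

      def extract_vessels(green_channel):
          # Multi-scale Frangi vesselness on the contrast-rich green channel
          v = frangi(green_channel)
          mask = v > threshold_otsu(v)      # binary vessel segmentation
          return v, mask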

  12. Wavelet based denoising of power quality events for characterization

    African Journals Online (AJOL)

    distribution of pure sine voltage wave, voltage sag, swell, transients, harmonics, impulse, notching, fluctuation and flicker are obtained using wavelet transform. The presence of noise degrades the detection capability of wavelet based method and therefore effect of noise on different signal is analyzed. The noise corrupted ...

  13. Wavelet based denoising of power quality events for characterization

    African Journals Online (AJOL)

    and feature extraction. Practically, electromagnetic noise is generated in every device that generates, consumes, or transmits power. Besides degrading the detection capability of wavelet and other higher time resolution based PQ monitoring systems it also hinders the recovery of important information from the captured ...

  14. Denoising and artefact reduction in dynamic flat detector CT perfusion imaging using high speed acquisition: first experimental and clinical results

    Science.gov (United States)

    Manhart, Michael T.; Aichert, André; Struffert, Tobias; Deuerling-Zheng, Yu; Kowarschik, Markus; Maier, Andreas K.; Hornegger, Joachim; Doerfler, Arnd

    2014-08-01

    Flat detector CT perfusion (FD-CTP) is a novel technique using C-arm angiography systems for interventional dynamic tissue perfusion measurement with high potential benefits for catheter-guided treatment of stroke. However, FD-CTP is challenging since C-arms rotate slower than conventional CT systems. Furthermore, noise and artefacts affect the measurement of contrast agent flow in tissue. Recent robotic C-arms are able to use high speed protocols (HSP), which allow sampling of the contrast agent flow with improved temporal resolution. However, low angular sampling of projection images leads to streak artefacts, which are translated to the perfusion maps. We recently introduced the FDK-JBF denoising technique based on Feldkamp (FDK) reconstruction followed by joint bilateral filtering (JBF). As this edge-preserving noise reduction preserves streak artefacts, an empirical streak reduction (SR) technique is presented in this work. The SR method exploits spatial and temporal information in the form of total variation and time-curve analysis to detect and remove streaks. The novel approach is evaluated in a numerical brain phantom and a patient study. An improved noise and artefact reduction compared to existing post-processing methods and faster computation speed compared to an algebraic reconstruction method are achieved.

  15. Nonlocal denoising using anisotropic structure tensor for 3D MRI.

    Science.gov (United States)

    Wu, Xi; Liu, Shujuan; Wu, Min; Sun, Huaiqiang; Zhou, Jiliu; Gong, Qiyong; Ding, Zhaohua

    2013-10-01

    Noise in magnetic resonance imaging (MRI) data is widely recognized to be harmful to image processing and subsequent quantitative analysis. To ameliorate the effects of image noise, the authors present a structure-tensor based nonlocal means (NLM) denoising technique that can effectively reduce noise in MRI data and improve tissue characterization. The proposed 3D NLM algorithm uses a structure tensor to characterize information around tissue boundaries. The similarity weight of a pixel (or patch), which determines its contribution to the denoising process, is determined by the intensity and the structure tensor simultaneously. The similarity of structure tensors is computed using an affine-invariant Riemannian metric, which compares tensor properties more comprehensively and avoids orientation inaccuracy. The proposed method is further extended to denoising high-dimensional MRI data such as diffusion-weighted MRI, and to handling Rician noise corruption, so that the denoising effect is further enhanced. The proposed method was evaluated on both simulated datasets and multiple modalities of real 3D MRI datasets. Comparisons with related state-of-the-art algorithms demonstrated that the method improves denoising performance qualitatively and quantitatively. In this paper, high-order structure information of 3D MRI was characterized by a 3D structure tensor and compared for NLM denoising in a Riemannian space. Experiments with simulated and real human MRI data demonstrate the great potential of the proposed technique for routine clinical use.
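
    For comparison, intensity-only nonlocal means is available off the shelf in scikit-image; the method in the paper additionally weights patch similarity by an affine-invariant distance between structure tensors, which the stock call below does not do:

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      def nlm_denoise(vol):
          sigma = float(estimate_sigma(vol))   # wavelet-based noise estimate
          return denoise_nl_means(vol, h=1.15 * sigma, sigma=sigma,
                                  patch_size=5, patch_distance=6)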

  16. A new and general approach to signal denoising and eye movement classification based on segmented linear regression.

    Science.gov (United States)

    Pekkanen, Jami; Lappi, Otto

    2017-12-18

    We introduce a conceptually novel method for eye-movement signal analysis. The method is general in that it does not place severe restrictions on sampling frequency, measurement noise or subject behavior. Event identification is based on segmentation that simultaneously denoises the signal and determines event boundaries. The full gaze position time series is segmented into an approximately optimal piecewise linear function in O(n) time. Gaze feature parameters for classification into fixations, saccades, smooth pursuits and post-saccadic oscillations are derived from human labeling in a data-driven manner. The range of oculomotor events identified and the powerful denoising performance make the method usable for both low-noise controlled laboratory settings and high-noise complex field experiments. This is desirable for harmonizing the gaze behavior (in the wild) and oculomotor event identification (in the laboratory) approaches to studying eye movement behavior. Denoising and classification performance are assessed using multiple datasets. A full open source implementation is included.

  17. An Imbalanced Data Classification Algorithm of De-noising Auto-Encoder Neural Network Based on SMOTE

    Directory of Open Access Journals (Sweden)

    Zhang Chenggang

    2016-01-01

    Imbalanced data classification has always been one of the hot issues in the field of machine learning. The synthetic minority over-sampling technique (SMOTE) is a classical approach to balancing datasets, but it may give rise to problems such as noise. A stacked de-noising auto-encoder neural network (SDAE) can effectively reduce data redundancy and noise through unsupervised layer-wise greedy learning. Aiming at the shortcomings of the SMOTE algorithm when synthesizing new minority class samples, this paper proposes a stacked de-noising auto-encoder neural network algorithm based on SMOTE, SMOTE-SDAE, to deal with imbalanced data classification. The proposed algorithm not only synthesizes new minority class samples but also de-noises and classifies the sampled data. Experimental results show that, compared with traditional algorithms, SMOTE-SDAE significantly improves the minority class classification accuracy on imbalanced datasets.

  18. Denoising of genetic switches based on Parrondo's paradox

    Science.gov (United States)

    Fotoohinasab, Atiyeh; Fatemizadeh, Emad; Pezeshk, Hamid; Sadeghi, Mehdi

    2018-03-01

    Random decision making in genetic switches can be modeled as tossing a biased coin. In other words, each genetic switch can be considered as a game in which the reactive elements compete with each other to increase their molecular concentrations. Because only very small numbers of reactive element molecules are present, the effects of noise cannot be neglected, and noise can lead to undesirable cell fates in cellular differentiation processes. In this paper, we study the robustness to noise of genetic switches by coupling a second switch to form a new gene regulatory network (GRN) in which both switches are affected by the same noise, and for this purpose we use Parrondo's paradox. We introduce two networks of games based on possible regulatory relations between genes. Our results show that the robustness to noise can be increased by combining these noisy switches. We also describe how one of the switches in network II can model the lysis/lysogeny decision making of bacteriophage lambda in Escherichia coli, and how its fate can be changed by the other switch.

  19. Accelerometer North Finding System Based on the Wavelet Packet De-noising Algorithm and Filtering Circuit

    Directory of Open Access Journals (Sweden)

    LU Yongle

    2014-07-01

    This paper demonstrates a method and system for north finding with a low-cost piezoelectric accelerometer based on the Coriolis acceleration principle. The proposed setup is based on the choice of an accelerometer with a residual noise of 35 ng•Hz-1/2. The plane of the north finding system is aligned parallel to the local level, which helps to eliminate the effect of plane error. The Coriolis acceleration caused by the earth's rotation and the instantaneous velocity of the accelerometer is much weaker than the g-sensitivity acceleration. To achieve high accuracy and a short measurement time, a filtering circuit and a wavelet packet de-noising algorithm are used as follows. First, the hardware filtering circuit passes the alternating-current component while isolating the DC component, so that the weak AC signal can be amplified; the DC component is an interfering signal generated by the earth's gravity. Then, a wavelet packet is used to filter the signal that has passed through the filtering circuit. Finally, the north finding results measured with wavelet packet filtering are compared with those measured with a low-pass filter. The de-noised data show that both wavelet packet filtering and wavelet filtering yield high accuracy, but wavelet packet filtering has a stronger ability to remove burst noise and a higher adaptability to engineering environments. Experimental results prove the effectiveness and practicability of the accelerometer north finding method based on the wavelet packet de-noising algorithm.

  20. A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.

    Science.gov (United States)

    Patel, Ameera X; Bullmore, Edward T

    2016-11-15

    Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in the analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is thus both an algorithm for fMRI time series denoising and an estimator of the (effective) df of the denoised process.
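
    To make the role of the effective df concrete, the sketch below computes a two-sided p-value for a Pearson correlation using a supplied effective df in place of the nominal N. It only illustrates how an effective-df estimate would enter the inference; it is not the authors' wavelet despiking estimator, and the df value used is an assumption.

    ```python
    import numpy as np
    from scipy import stats

    def corr_pvalue(x, y, df_eff):
        """Two-sided p-value for a Pearson correlation given an effective df.

        df_eff replaces the nominal N; for denoised fMRI series it is
        typically much smaller than the number of time points.
        """
        r, _ = stats.pearsonr(x, y)
        t = r * np.sqrt((df_eff - 2) / (1.0 - r**2))
        p = 2.0 * stats.t.sf(abs(t), df_eff - 2)
        return r, p

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal(200), rng.standard_normal(200)
    print(corr_pvalue(x, y, df_eff=80))   # assumed effective df < N = 200
    ```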

  1. Image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in ESPI fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Su, Yonggang; Li, Biyuan; Lei, Zhenkun

    2018-02-01

    In this paper, we propose an image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in electronic speckle pattern interferometry (ESPI) fringe patterns. In our model, the low-density fringes, high-density fringes, and noise are, respectively, described by shearlet smoothness spaces, an adaptive Hilbert space, and the L2 space, and are processed individually. Because the shearlet transform has superior directional sensitivity, our proposed Shearlet-Hilbert-L2 model achieves commendable filtering results for various types of ESPI fringe patterns, including uniform density fringe patterns, moderately variable density fringe patterns, and greatly variable density fringe patterns. We evaluate the performance of our proposed Shearlet-Hilbert-L2 model via application to two computer-simulated and nine experimentally obtained ESPI fringe patterns with various densities and poor quality. Furthermore, we compare our proposed model with windowed Fourier filtering and coherence-enhancing diffusion, which are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. We also compare our proposed model with the previous image decomposition model BL-Hilbert-L2.

  2. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    International Nuclear Information System (INIS)

    Karimi, Davood; Ward, Rabab K

    2016-01-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. (paper)

  3. Fractional Diffusion, Low Exponent Lévy Stable Laws, and 'Slow Motion' Denoising of Helium Ion Microscope Nanoscale Imagery.

    Science.gov (United States)

    Carasso, Alfred S; Vladár, András E

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
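
    Since the abstract states that the method solves a linear fractional diffusion equation forward in time using the FFT, the following Python sketch implements that core idea: the image spectrum is damped by exp(-t |k|^alpha), with alpha = 2 recovering ordinary Gaussian smoothing. The alpha and t values and the grid scaling are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def fractional_diffusion_denoise(img, alpha=1.4, t=0.5):
        """Damp the spectrum by exp(-t * |k|^alpha), i.e. run the linear
        fractional diffusion equation u_t = -(-Laplacian)^(alpha/2) u
        forward to time t. alpha = 2 recovers ordinary Gaussian smoothing."""
        ny, nx = img.shape
        ky = 2 * np.pi * np.fft.fftfreq(ny)
        kx = 2 * np.pi * np.fft.fftfreq(nx)
        k2 = ky[:, None] ** 2 + kx[None, :] ** 2
        decay = np.exp(-t * k2 ** (alpha / 2.0))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))

    # "Slow motion": inspect intermediate times rather than one big step.
    rng = np.random.default_rng(0)
    noisy = rng.standard_normal((256, 256))
    for t in (0.1, 0.3, 1.0):
        u = fractional_diffusion_denoise(noisy, alpha=1.4, t=t)
    ```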

  4. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita

    2017-09-11

    A variational model for segmentation and denoising of color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies is established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  5. Denoising of Ictal EEG Data Using Semi-Blind Source Separation Methods Based on Time-Frequency Priors.

    Science.gov (United States)

    Hajipour Sardouie, Sepideh; Bagher Shamsollahi, Mohammad; Albera, Laurent; Merlet, Isabelle

    2015-05-01

    Removing muscle activity from ictal ElectroEncephaloGram (EEG) data is an essential preprocessing step in diagnosis and study of epileptic disorders. Indeed, at the very beginning of seizures, ictal EEG has a low amplitude and its morphology in the time domain is quite similar to muscular activity. Contrary to the time domain, ictal signals have specific characteristics in the time-frequency domain. In this paper, we use the time-frequency signature of ictal discharges as a priori information on the sources of interest. To extract the time-frequency signature of ictal sources, we use the Canonical Correlation Analysis (CCA) method. Then, we propose two time-frequency based semi-blind source separation approaches, namely the Time-Frequency-Generalized EigenValue Decomposition (TF-GEVD) and the Time-Frequency-Denoising Source Separation (TF-DSS), for the denoising of ictal signals based on these time-frequency signatures. The performance of the proposed methods is compared with that of CCA and Independent Component Analysis (ICA) approaches for the denoising of simulated ictal EEGs and of real ictal data. The results show the superiority of the proposed methods in comparison with CCA and ICA.
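
    A minimal sketch of the generalized eigenvalue decomposition (GEVD) step, assuming the time-frequency prior has already been reduced to a boolean sample mask flagging when the ictal signature is active: the covariance under the prior is jointly diagonalized with the overall covariance, and only the leading generalized components are kept. The channel count, mask, and number of retained components are assumptions, and the full TF-GEVD/TF-DSS pipeline is not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def gevd_denoise(X, mask, n_keep=2):
        """Semi-blind GEVD denoising sketch.

        X      : channels x samples EEG array
        mask   : boolean sample mask flagging epochs where the ictal
                 time-frequency signature is active (the prior)
        n_keep : number of generalized components kept as the source space
        """
        Cs = np.cov(X[:, mask])            # covariance under the prior
        Cx = np.cov(X)                     # overall covariance
        w, V = eigh(Cs, Cx)                # generalized eigendecomposition
        V = V[:, np.argsort(w)[::-1]]      # sort filters by eigenvalue, descending
        W = V[:, :n_keep]                  # spatial filters for the sources
        A = np.linalg.pinv(V.T)[:, :n_keep]  # corresponding mixing patterns
        return A @ (W.T @ X)               # project out everything else

    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 5000))
    mask = np.zeros(5000, dtype=bool)
    mask[1000:2000] = True                 # assumed "ictal signature" interval
    X_clean = gevd_denoise(X, mask)
    ```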

  6. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on feature analysis of the vibration signals measured for diagnosis. However, the signals measured from rotating machinery are usually nonstationary and nonlinear and contain noise; the useful fault features are hidden in heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method for adaptively selecting an appropriate multiwavelet threshold based on the energy ratio of the multiwavelet coefficients is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitivity of each SP for condition diagnosis. The MPSO algorithm, with adaptive inertia weight adjustment and particle mutation, is proposed for condition identification; it effectively overcomes the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm and can provide a more accurate estimate for fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  7. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    Science.gov (United States)

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to a correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves a higher SNR and faster convergence than the normal LMD method. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  8. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image with a Laplacian pyramid, then progressively recover the degraded image in scale space from coarse to fine so that sharp edges and texture can eventually be recovered. On the one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges that could not be recovered at previous scales. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  9. A Novel Partial Discharge Ultra-High Frequency Signal De-Noising Method Based on a Single-Channel Blind Source Separation Algorithm

    Directory of Open Access Journals (Sweden)

    Liangliang Wei

    2018-02-01

    Full Text Available To effectively de-noise the Gaussian white noise and periodic narrow-band interference present in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method can effectively remove the noise interference, and the distortion of the de-noised PD signal is smaller. Firstly, the PD UHF signal is time-frequency analyzed by the S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and the background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. Finally, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to simulated signals and to signals detected in field tests, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.
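
    The sketch below covers only the single-channel-to-multi-channel conversion and SVD stage described in the abstract, in the common Hankel (trajectory matrix) form with diagonal averaging; the S-transform analysis, joint approximate diagonalization, and l1-norm recovery stages are not reproduced, and the window length, rank, and toy UHF-like burst are assumptions.

    ```python
    import numpy as np

    def hankel_svd_denoise(x, window=128, rank=8):
        """Single channel to multi channel via a Hankel (trajectory) matrix,
        rank truncation by SVD, then diagonal averaging back to one channel."""
        n = len(x)
        k = n - window + 1
        H = np.lib.stride_tricks.sliding_window_view(x, window).T  # window x k
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]               # rank-r Hankel
        # Diagonal averaging (Hankelization) restores a 1-D signal.
        out = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            out[i : i + k] += H_low[i]
            counts[i : i + k] += 1
        return out / counts

    rng = np.random.default_rng(0)
    t = np.arange(4096) / 4096
    pd_like = np.exp(-200 * t) * np.sin(2 * np.pi * 300 * t)       # toy UHF burst
    denoised = hankel_svd_denoise(pd_like + 0.2 * rng.standard_normal(t.size))
    ```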

  10. Wavelet denoising for quantum noise removal in chest digital tomosynthesis.

    Science.gov (United States)

    Gomi, Tsutomu; Nakajima, Masahiro; Umeda, Tokuo

    2015-01-01

    Quantum noise impairs image quality in chest digital tomosynthesis (DT). A wavelet denoising processing algorithm for selectively removing quantum noise was developed and tested. The wavelet denoising technique was implemented on a DT system and experimentally evaluated using chest phantom measurements, including spatial resolution. Comparison was made with an existing post-reconstruction wavelet denoising processing algorithm reported by Badea et al. (Comput Med Imaging Graph 22:309-315, 1998). The potential decrease in DT quantum noise was evaluated at different exposures with our technique (pre-reconstruction and post-reconstruction wavelet denoising processing via the balance sparsity-norm method) and with the existing wavelet denoising algorithm. Image quality metrics such as the contrast-to-noise ratio (CNR) and root mean square error (RMSE) were compared with and without wavelet denoising processing, and modulation transfer functions (MTF) were evaluated for the in-focus plane. We performed a statistical analysis (multi-way analysis of variance) using the CNR and RMSE values. Our wavelet denoising processing algorithm significantly decreased the quantum noise and improved the contrast resolution in the reconstructed images (CNR and RMSE: pre- and post-reconstruction balance sparsity-norm wavelet denoising versus existing wavelet denoising, P < …), whereas the existing wavelet denoising processing algorithm caused MTF deterioration. A balance sparsity-norm wavelet denoising processing algorithm for removing quantum noise in DT was demonstrated to be effective for certain classes of structures with high-frequency features. This denoising approach may be useful for a variety of clinical applications of chest digital tomosynthesis when quantum noise is present.

  11. Subspace based adaptive denoising of surface EMG from neurological injury patients

    Science.gov (United States)

    Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping

    2014-10-01

    Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noise of physiological and extrinsic/accidental origin, imposing difficulties for signal processing; such interferences are difficult to mitigate with conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method provides a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.

  12. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, the process of acquiring it can be interfered with by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. It is therefore essential to remove the noise mixed with the heart sound. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB is presented. The noisy heart sound signal is first transformed into the wavelet domain and decomposed at multiple levels. Soft thresholding is then applied to the detail coefficients to eliminate noise, which significantly improves the denoising result, and the signal is reconstructed stepwise from the processed coefficients. Finally, 50 Hz power frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
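
    A minimal Python sketch of the described pipeline, assuming PyWavelets and SciPy in place of MATLAB: multi-level wavelet decomposition with soft thresholding of the detail coefficients, followed by notch filters at 50 Hz and 35 Hz. The sampling rate, wavelet, level, notch Q factor, and toy signal are assumptions.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import iirnotch, filtfilt

    def denoise_heart_sound(x, fs=2000, wavelet="db6", level=5):
        # 1) Multi-level wavelet decomposition, soft-thresholding of details.
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(x)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        y = pywt.waverec(coeffs, wavelet)[: len(x)]
        # 2) Notch filters at 50 Hz mains and 35 Hz mechanical interference.
        for f0 in (50.0, 35.0):
            b, a = iirnotch(f0, Q=30.0, fs=fs)
            y = filtfilt(b, a, y)
        return y

    # Toy S1-like bursts plus 50 Hz hum and white noise.
    fs = 2000
    t = np.arange(2 * fs) / fs
    s1 = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95)
    noisy = s1 + 0.1 * np.sin(2 * np.pi * 50 * t) \
               + 0.05 * np.random.default_rng(0).standard_normal(t.size)
    clean = denoise_heart_sound(noisy, fs=fs)
    ```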

  13. Data-adaptive image-denoising for detecting and quantifying nanoparticle entry in mucosal tissues through intravital 2-photon microscopy

    Directory of Open Access Journals (Sweden)

    Torsten Bölke

    2014-11-01

    Full Text Available Intravital 2-photon microscopy of mucosal membranes, across which nanoparticles enter the organism, typically generates noisy images. Because the noise results from the random statistics of only very few photons detected per pixel, it cannot be avoided by technical means. Fluorescent nanoparticles contained in the tissue may be represented by a few bright pixels which closely resemble the noise structure. We here present a data-adaptive method for digital denoising of datasets obtained by 2-photon microscopy. The algorithm exploits both local and non-local redundancy of the underlying ground-truth signal to reduce noise. Our approach automatically adapts the strength of noise suppression in a data-adaptive way by using a Bayesian network. The results show that the specific adaptation to both signal and noise characteristics improves the preservation of fine structures such as nanoparticles while producing fewer artefacts than reference algorithms. Our method is applicable to other imaging modalities as well, provided the specific noise characteristics are known and taken into account.

  14. A Robust Text Classifier Based on Denoising Deep Neural Network in the Analysis of Big Data

    Directory of Open Access Journals (Sweden)

    Wulamu Aziguli

    2017-01-01

    Full Text Available Text classification has always been an interesting issue in the research area of natural language processing (NLP). While entering the era of big data, a good text classifier is critical to achieving NLP for scientific big data analytics. The ever-increasing size of text data poses important challenges in developing effective algorithms for text classification. Given the success of deep neural networks (DNNs) in analyzing big data, this article proposes a novel text classifier using a DNN, in an effort to improve the computational performance of addressing big text data with hybrid outliers. Specifically, through the use of a denoising autoencoder (DAE) and a restricted Boltzmann machine (RBM), our proposed method, named denoising deep neural network (DDNN), achieves significant improvement, with better noise robustness and feature extraction, compared to traditional text classification algorithms. Simulations on benchmark datasets verify the effectiveness and robustness of our proposed text classifier.

  15. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    …term-by-term, i.e., on individual pixels, or block-by-block, i.e., on groups of pixels, using a suitable shrinkage factor and threshold function. The shrinkage factor is generally a function of the threshold and some other characteristics of the neighbouring pixels of the ...

  17. A Fast Alternating Minimization Algorithm for Nonlocal Vectorial Total Variational Multichannel Image Denoising

    Directory of Open Access Journals (Sweden)

    Rubing Xi

    2014-01-01

    Full Text Available Variational models with nonlocal regularization offer superior image restoration quality over traditional methods, but processing speed remains a bottleneck due to the computational burden of recent iterative algorithms. In this paper, a fast algorithm is proposed to restore multichannel images in the presence of additive Gaussian noise by minimizing an energy function consisting of an l2-norm fidelity term and a nonlocal vectorial total variation regularization term. The algorithm is based on the variable splitting and penalty techniques of optimization. Following our previous work on the proof of the existence and uniqueness of the solution of the model, we establish and prove the convergence properties of this algorithm: finite convergence for some variables and q-linear convergence for the rest. Experiments show that this model has an excellent texture-preserving property in restoring color images. Both the computational complexity analysis and the experimental results show that the proposed algorithm performs favorably in comparison to the widely used fixed point algorithm.

  18. Pixel-wise decay parameter adaption for nonlocal means image denoising

    Science.gov (United States)

    Zhan, Yi; Ding, Mingyue; Zhang, Xuming

    2013-10-01

    The globally fixed decay parameter is generally adopted in the traditional nonlocal means method for similarity computation, which has a negative influence on its restoration performance. To address this problem, we propose to adaptively tune the decay parameter for each image pixel using the golden section search method based on the pixel-wise minimum mean square error, which can be estimated using the prefiltered result and the estimated noise component. The quantitative and subjective comparisons of restoration performance among the proposed method and several state-of-the-art methods indicate that it can achieve a better performance in noise reduction, artifact avoidance, and detail preservation.

  19. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    Science.gov (United States)

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate speckle noise removal, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and residual noise, and did not give the most significant noise reduction. Conversely, an image fusion method using the SRAD-original input conditions preserved the key information in the original image while removing the speckle noise; these input conditions therefore gave the best denoising performance on the ultrasound images. Based on these results, the proposed denoising technique was confirmed to have high potential for clinical application.
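
    As a hedged sketch of generic DWT-based image fusion (the paper's SRAD prefiltering and input-condition comparison are not reproduced), the fragment below averages the approximation bands of two registered, equal-sized images and keeps the larger-magnitude detail coefficients, assuming PyWavelets; the wavelet and level are assumptions.

    ```python
    import numpy as np
    import pywt

    def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
        """Fuse two registered images: average the approximation band,
        keep the larger-magnitude detail coefficient, band by band."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: mean
        for da, db in zip(ca[1:], cb[1:]):                   # details: max-abs rule
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)

    rng = np.random.default_rng(0)
    a, b = rng.random((256, 256)), rng.random((256, 256))    # stand-in inputs
    fused = dwt_fuse(a, b)
    ```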

  20. Image decomposition fusion method based on sparse representation and neural network.

    Science.gov (United States)

    Chang, Lihong; Feng, Xiangchu; Zhang, Rui; Huang, Hua; Wang, Weiwei; Xu, Chen

    2017-10-01

    For noisy images, most existing sparse representation-based models perform fusion and denoising simultaneously using the coefficients of a universal dictionary. This paper proposes an image fusion method based on a cartoon + texture dictionary pair combined with a deep neural network combination (DNNC). In our model, denoising and fusion are carried out alternately. The proposed method is divided into three main steps: denoising + fusion + network denoising. More specifically, (1) denoise the source images using external/internal methods separately; (2) fuse these preliminary denoised results with the external/internal cartoon and texture dictionary pairs to obtain the external cartoon + texture sparse representation result (E-CTSR) and the internal cartoon + texture sparse representation result (I-CTSR); and (3) combine E-CTSR and I-CTSR using DNNC (EI-CTSR) to obtain the final result. Experimental results demonstrate that EI-CTSR outperforms not only the stand-alone E-CTSR and I-CTSR methods but also state-of-the-art methods such as sparse representation (SR) and adaptive sparse representation (ASR) for isomorphic images, and that E-CTSR outperforms SR and ASR for heterogeneous multi-mode images.

  1. Comment on ‘A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry’

    International Nuclear Information System (INIS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Kottler, Christian

    2015-01-01

    In a recent paper (Scholkmann et al 2014 Phys. Med. Biol. 59 1425–40) we presented a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast, differential phase contrast and dark-field contrast images retrieved from x-ray Talbot–Lau grating interferometry. In this comment we give additional information and report on the application of our framework to breast cancer tissue, which we presented in our paper as an example. The applied procedure is suitable for a qualitative comparison of different algorithms; for a quantitative comparison, however, the original data would be needed as input. (comment and reply)

  2. When Low Rank Representation Based Hyperspectral Imagery Classification Meets Segmented Stacked Denoising Auto-Encoder Based Spatial-Spectral Feature

    Directory of Open Access Journals (Sweden)

    Cong Wang

    2018-02-01

    Full Text Available When confronted with limited labelled samples, most studies adopt an unsupervised feature learning scheme and incorporate the extracted features into a traditional classifier (e.g., a support vector machine, SVM) to deal with hyperspectral imagery classification. However, these methods have limitations in generalizing well in challenging cases, due to the limited representative capacity of the shallow feature learning model as well as the insufficient robustness of a classifier which depends only on the supervision of labelled samples. To address these two problems simultaneously, we present an effective low-rank representation-based classification framework for hyperspectral imagery. In particular, a novel unsupervised segmented stacked denoising auto-encoder-based feature learning model is proposed to depict the spatial-spectral characteristics of each pixel in the imagery with a deep hierarchical structure. With the extracted features, a low-rank representation based robust classifier is then developed which takes advantage of both the supervision provided by labelled samples and the unsupervised correlation (e.g., intra-class similarity and inter-class dissimilarity) among the unlabelled samples. Both the deep unsupervised feature learning and the robust classifier benefit, improving the classification accuracy with limited labelled samples. Extensive experiments on hyperspectral imagery classification demonstrate the effectiveness of the proposed framework.

  3. ECG Signal Denoising and QRS Complex Detection by Wavelet Transform Based Thresholding

    Directory of Open Access Journals (Sweden)

    Swati BANERJEE

    2010-08-01

    Full Text Available Biomedical signals such as heart waveforms commonly change their statistical properties over time and are highly non-stationary. The wavelet transform is a powerful tool for the analysis of this kind of signal. The electrocardiogram (ECG) is one of the most widely used diagnostic tools for heart disease, and automatic detection of the R peaks of QRS complexes is a fundamental requirement for automatic disease identification. This paper presents a novel algorithm, and its implementation details, for denoising an ECG signal along with accurate detection of the R peaks, and hence the QRS complexes, using the Discrete Wavelet Transform (DWT), where db6 is selected as the mother wavelet because it is found to be the most similar to the morphology of QRS complexes. Decomposition and selective reconstruction, eliminating the higher-scale details from the signal, denoises it considerably. Thresholding along with a slope inversion method is used for detection of the QRS complex. The performance of the system is validated using the 12-lead ECG recordings collected from the PhysioNet PTB diagnostic database, giving a sensitivity of 99.4%.
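
    A minimal sketch of the selective-reconstruction idea with a db6 mother wavelet, assuming PyWavelets and SciPy: the finest details and the coarsest approximation are discarded before reconstruction, and R peaks are then located with a simple amplitude threshold and refractory period rather than the paper's slope inversion method. The sampling rate, level, peak parameters, and toy ECG are assumptions.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def detect_r_peaks(ecg, fs=1000, wavelet="db6", level=6):
        # Denoise: drop the two finest detail levels (high-frequency noise)
        # and the coarsest approximation (baseline wander), then reconstruct.
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        coeffs[0][:] = 0           # approximation: baseline wander
        coeffs[-1][:] = 0          # finest details: noise
        coeffs[-2][:] = 0
        clean = pywt.waverec(coeffs, wavelet)[: len(ecg)]
        # Threshold at a fraction of the maximum; enforce a refractory period.
        peaks, _ = find_peaks(clean, height=0.5 * np.max(clean),
                              distance=int(0.25 * fs))
        return clean, peaks

    fs = 1000
    t = np.arange(10 * fs) / fs
    ecg = np.zeros_like(t)
    ecg[(t % 1.0) < 0.02] = 1.0                       # crude 60 bpm R spikes
    ecg += 0.2 * np.sin(2 * np.pi * 0.3 * t)          # baseline wander
    ecg += 0.05 * np.random.default_rng(0).standard_normal(t.size)
    clean, peaks = detect_r_peaks(ecg, fs=fs)
    ```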

  4. Performances of a specific denoising wavelet process for high-resolution gamma imaging

    Science.gov (United States)

    Pousse, Annie; Dornier, Christophe; Parmentier, Michel; Kastler, Bruno; Chavanelle, Jerome

    2004-02-01

    Due to its functional capabilities, gamma imaging is an interesting tool for medical diagnosis, and recent developments have improved its intrinsic resolution. However, this gain is impaired by the low detected activity and the Poissonian nature of gamma ray emission, so high resolution gamma images are grainy. This is a real nuisance for detecting cold nodules in an emitting organ. A specific translation wavelet filter, which takes into account the Poissonian nature of the noise, has been developed in order to improve the diagnostic capabilities of radioisotopic high resolution images. Monte Carlo simulations of a hot thyroid phantom, in which cold spheres 3-7 mm in diameter could be included, were performed. The loss of activity induced by cold nodules was determined on filtered images by using catchment basin determination. On the original images, only the 5-7 mm cold spheres were clearly visible; on the filtered images, the 3 and 4 mm spheres were also made prominent. The limit of the developed filter is approximately the detection of a 3 mm spherical cold nodule under acquisition and activity conditions which mimic a thyroid examination. Furthermore, no disturbing artifacts are generated. It is therefore a powerful tool for detecting small cold nodules in a gamma emitting medium.
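
    The paper's specific translation wavelet filter is not reproduced here; the sketch below instead shows a generic way to respect the Poissonian noise model, assuming PyWavelets: an Anscombe transform stabilizes the variance to roughly one, a standard Gaussian wavelet shrinkage is applied, and a simple algebraic inverse (slightly biased) maps back to count space. The wavelet, level, and phantom values are assumptions.

    ```python
    import numpy as np
    import pywt

    def anscombe_wavelet_denoise(counts, wavelet="sym8", level=4):
        """Generic Poisson denoising: Anscombe transform, Gaussian wavelet
        shrinkage at unit noise level, then the simple algebraic inverse."""
        a = 2.0 * np.sqrt(counts + 3.0 / 8.0)          # Anscombe transform
        coeffs = pywt.wavedec2(a, wavelet, level=level)
        thr = np.sqrt(2.0 * np.log(a.size))            # sigma ~ 1 after Anscombe
        coeffs[1:] = [tuple(pywt.threshold(c, thr, mode="soft") for c in d)
                      for d in coeffs[1:]]
        a_hat = pywt.waverec2(coeffs, wavelet)
        return np.maximum((a_hat / 2.0) ** 2 - 3.0 / 8.0, 0.0)

    # Toy "hot phantom with a cold spot" at assumed count levels.
    rng = np.random.default_rng(0)
    phantom = 50.0 * np.ones((128, 128))
    phantom[60:68, 60:68] = 5.0                        # cold nodule
    noisy = rng.poisson(phantom).astype(float)
    denoised = anscombe_wavelet_denoise(noisy)
    ```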

  5. Structure-adaptive sparse denoising for diffusion-tensor MRI.

    Science.gov (United States)

    Bao, Lijun; Robini, Marc; Liu, Wanyu; Zhu, Yuemin

    2013-05-01

    Diffusion tensor magnetic resonance imaging (DT-MRI) is becoming a prospective imaging technique in clinical applications because of its potential for in vivo and non-invasive characterization of tissue organization. However, the acquisition of diffusion-weighted images (DWIs) is often corrupted by noise and artifacts, and the intensity of diffusion-weighted signals is weaker than that of classical magnetic resonance signals. In this paper, we propose a new denoising method for DT-MRI, called structure-adaptive sparse denoising (SASD), which exploits self-similarity in DWIs. We define a similarity measure based on the local mean and on a modified structure-similarity index to find sets of similar patches that are arranged into three-dimensional arrays, and we propose a simple and efficient structure-adaptive window pursuit method to achieve sparse representation of these arrays. The noise component of the resulting structure-adaptive arrays is attenuated by Wiener shrinkage in a transform domain defined by two-dimensional principal component decomposition and Haar transformation. Experiments on both synthetic and real cardiac DT-MRI data show that the proposed SASD algorithm outperforms state-of-the-art methods for denoising images with structural redundancy. Moreover, SASD achieves a good trade-off between image contrast and image smoothness, and our experiments on synthetic data demonstrate that it produces more accurate tensor fields from which biologically relevant metrics can then be computed. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Joint denoising and distortion correction of atomic scale scanning transmission electron microscopy images

    Science.gov (United States)

    Berkels, Benjamin; Wirth, Benedikt

    2017-09-01

    Nowadays, modern electron microscopes deliver images at atomic scale. The precise atomic structure encodes information about material properties. Thus, an important ingredient in the image analysis is to locate the centers of the atoms shown in micrographs as precisely as possible. Here, we consider scanning transmission electron microscopy (STEM), which acquires data in a rastering pattern, pixel by pixel. Due to this rastering combined with the magnification to atomic scale, movements of the specimen even at the nanometer scale lead to random image distortions that make precise atom localization difficult. Given a series of STEM images, we derive a Bayesian method that jointly estimates the distortion in each image and reconstructs the underlying atomic grid of the material by fitting the atom bumps with suitable bump functions. The resulting highly non-convex minimization problems are solved numerically with a trust region approach. Existence of minimizers and the model behavior for faster and faster rastering are investigated using variational techniques. The performance of the method is finally evaluated on both synthetic and real experimental data.

  7. Nonlocal Means Denoising of Self-Gated and k-Space Sorted 4-Dimensional Magnetic Resonance Imaging Using Block-Matching and 3-Dimensional Filtering: Implications for Pancreatic Tumor Registration and Segmentation.

    Science.gov (United States)

    Jin, Jun; McKenzie, Elizabeth; Fan, Zhaoyang; Tuli, Richard; Deng, Zixin; Pang, Jianing; Fraass, Benedick; Li, Debiao; Sandler, Howard; Yang, Guang; Sheng, Ke; Gou, Shuiping; Yang, Wensha

    2016-07-01

    To denoise self-gated k-space sorted 4-dimensional magnetic resonance imaging (SG-KS-4D-MRI) by applying a nonlocal means denoising filter, block-matching and 3-dimensional filtering (BM3D), and to test its impact on the accuracy of 4D image deformable registration and automated tumor segmentation for pancreatic cancer patients. Nine patients with pancreatic cancer and abdominal SG-KS-4D-MRI were included in the study. Block-matching and 3D filtering was adapted to search the axial slices/frames adjacent to the reference image patch in the spatial and temporal domains. The patches with high similarity to the reference patch were used to collectively denoise the 4D-MRI image. The pancreas tumor was manually contoured on the first end-of-exhalation phase for both the raw and the denoised 4D-MRI. B-spline deformable registration was applied to the subsequent phases for contour propagation. The consistency of tumor volume defined by the standard deviation of gross tumor volumes from 10 breathing phases (σ_GTV), tumor motion trajectories in 3 cardinal motion planes, 4D-MRI imaging noise, and image contrast-to-noise ratio were compared between the raw and denoised groups. Block-matching and 3D filtering visually and quantitatively reduced image noise by 52% and improved the image contrast-to-noise ratio by 56%, without compromising soft tissue edge definitions. Automatic tumor segmentation is statistically more consistent on the denoised 4D-MRI (σ_GTV = 0.6 cm(3)) than on the raw 4D-MRI (σ_GTV = 0.8 cm(3)). Tumor end-of-exhalation location is also more reproducible on the denoised 4D-MRI than on the raw 4D-MRI in all 3 cardinal motion planes. Block-matching and 3D filtering can significantly reduce random image noise while maintaining structural features in SG-KS-4D-MRI datasets. In this study of pancreatic tumor segmentation, automatic segmentation of the GTV in the registered image sets is shown to be more consistent on the denoised 4D-MRI than on the raw 4D-MRI.

  8. Fault diagnosis of rolling bearing based on second generation wavelet denoising and morphological filter

    International Nuclear Information System (INIS)

    Meng, Lingjie; Xiang, Jiawei; Zhong, Yongteng; Song, Wenlei

    2015-01-01

    The response of a defective rolling bearing is often characterized by the presence of periodic impulses. However, the in-situ sampled vibration signal is ordinarily mixed with ambient noise and is easily interfered with, or even submerged. A hybrid approach combining second generation wavelet de-noising with a morphological filter is presented. The raw signal is purified using the second generation wavelet. The difference between the closing and opening operators is employed as the morphological filter to extract the periodic impulsive features from the purified signal, and the defect information is then easily extracted from the corresponding frequency spectrum. The proposed approach is evaluated on simulations and on vibration signals from defective bearings with an inner race fault, an outer race fault, a rolling element fault and compound faults, respectively. Results show that the ambient noise can be fully restrained and the defect information of the above defective bearings is well extracted, which demonstrates that the approach is feasible and effective for the fault detection of rolling bearings.
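
    A small sketch of the morphological stage described above: the difference between grey-scale closing and opening acts as a peak-and-valley detector that suppresses the smooth carrier and retains the periodic impulses. The second generation wavelet pretreatment is not reproduced; the structuring element size, sampling rate, and assumed fault frequency are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening

    def morph_impulse_enhance(x, size=9):
        """Closing-minus-opening: flattens the smooth carrier and keeps
        the short periodic impulses (a flat structuring element of
        `size` samples is assumed)."""
        return grey_closing(x, size=size) - grey_opening(x, size=size)

    # Toy bearing signal: periodic impulses riding on a strong tone + noise.
    fs, n = 12000, 12000
    t = np.arange(n) / fs
    x = 0.5 * np.sin(2 * np.pi * 50 * t)
    x[::fs // 97] += 3.0                      # ~97 Hz fault impulses (assumed)
    x += 0.3 * np.random.default_rng(0).standard_normal(n)

    envelope = morph_impulse_enhance(x)
    spectrum = np.abs(np.fft.rfft(envelope))  # fault frequency shows as a peak
    ```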

  9. Convolutional auto-encoder for image denoising of ultra-low-dose CT

    Directory of Open Access Journals (Sweden)

    Mizuho Nishio

    2017-08-01

    Conclusion: A neural network with a convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal means and of block-matching and 3D filtering.

  10. A Novel Hybrid Model Based on Extreme Learning Machine, k-Nearest Neighbor Regression and Wavelet Denoising Applied to Short-Term Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Weide Li

    2017-05-01

    Full Text Available Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve a satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) methods based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a main signal associated with low frequencies and some detail signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, the ELM is used to learn the non-linear relationship between these variables and obtain the final prediction of the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM) and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the model proposed in this paper improves the accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region, where accurate prediction has significant practical value.

  11. Echocardiogram enhancement using supervised manifold denoising.

    Science.gov (United States)

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Fingerprint Image Segmentation Algorithm Based on Contourlet Transform Technology

    Directory of Open Access Journals (Sweden)

    Guanghua Zhang

    2016-09-01

    Full Text Available This paper briefly introduces two classic algorithms for fingerprint image processing: the wavelet-domain soft-threshold denoising algorithm and the fingerprint image enhancement algorithm based on the Gabor function. The Contourlet transform has good texture sensitivity and can be used to enhance fingerprint image segmentation. The method proposed in this paper attains the final fingerprint segmentation image by applying a modified denoising to the high-frequency coefficients after Contourlet decomposition, highlighting the fingerprint ridge lines through modulus maxima detection, and finally connecting broken ridge lines using a directional value filter. It attains richer directional information than methods based on the wavelet transform and Gabor function, makes the positioning of detailed features more accurate, and yields more coherent ridges. Experiments show that this algorithm is clearly superior in fingerprint feature detection.

  13. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Suman, V.; Sarkar, P.K.; Ranade, A.K.; Pulhani, V.; Dafauti, S.; Datta, D.

    2013-01-01

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in 137Cs and 90Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside restricted access zones. The conventional non-parametric methods, viz., Mann–Kendall and Spearman rho, along with linear regression, when applied for detecting a linear trend in the time series data, do not yield conclusive trend detection at 95% confidence for most of the samples. The stationary wavelet based hard thresholding data pruning method, with Haar as the analyzing wavelet, was applied to remove the noise present in the same data. Results indicate that the confidence interval of the established trend improves significantly after pre-processing, to more than 98%, compared with the conventional non-parametric methods applied to the direct measurements. -- Highlights: ► Environmental trend analysis with wavelet pre-processing was carried out. ► Removal of local fluctuations to obtain the trend in a time series with various mother wavelets. ► Theoretical validation of the methodology with model outputs. ► Efficient detection of trends for 137Cs and 90Sr in bore-hole water samples improves the associated confidence interval to more than 98%. ► Wavelet based pre-processing reduces the indecisive nature of the detected trend

  14. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of a zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are performed to verify the effectiveness, and the filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving on them by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
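
    A scalar sketch of a Sage-Husa-style adaptive Kalman filter, in which the measurement noise variance is re-estimated online from the innovations with a fading factor. This is a generic textbook form, not the paper's modified SHAKF, and it uses an assumed AR(1)-like state model rather than the improved AR(3) model; all numerical constants are assumptions.

    ```python
    import numpy as np

    def sage_husa_filter(z, a=0.95, q0=1e-4, r0=1e-2, b=0.96):
        """Scalar adaptive Kalman filter for x_k = a*x_{k-1} + w_k,
        z_k = x_k + v_k, with the measurement noise variance R updated
        online from the innovations using a fading factor b."""
        x, p, q, r = z[0], 1.0, q0, r0
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            d = (1.0 - b) / (1.0 - b ** (k + 1))        # fading-factor weight
            x_pred, p_pred = a * x, a * p * a + q       # predict
            innov = zk - x_pred
            # Sage-Husa style R estimate, clamped to stay positive.
            r = max((1.0 - d) * r + d * (innov**2 - p_pred), 1e-12)
            gain = p_pred / (p_pred + r)                # update
            x = x_pred + gain * innov
            p = (1.0 - gain) * p_pred
            out[k] = x
        return out

    rng = np.random.default_rng(0)
    drift = np.cumsum(0.001 * rng.standard_normal(2000))   # toy FOG drift
    denoised = sage_husa_filter(drift + 0.05 * rng.standard_normal(2000))
    ```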

  15. VIDEO DENOISING USING SWITCHING ADAPTIVE DECISION BASED ALGORITHM WITH ROBUST MOTION ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V. Jayaraj

    2010-08-01

    Full Text Available A non-linear adaptive decision based algorithm with a robust motion estimation technique is proposed for the removal of impulse noise, Gaussian noise and mixed noise (impulse plus Gaussian) with edge and fine detail preservation in images and videos. The algorithm includes detection of corrupted pixels and the estimation of values for replacing them. Its main advantage is that an appropriate filter is chosen for replacing each corrupted pixel based on an estimate of the noise variance in the filtering window, which leads to reduced blurring and better fine detail preservation even at high mixed noise density. It performs both spatial and temporal filtering to remove the noise in the filter window of the videos. The Improved Cross Diamond Search motion estimation technique uses the Least Median of Squares as a cost function, which shows improved performance over other motion estimation techniques with existing cost functions. The results show that the proposed algorithm outperforms the other algorithms both visually and in terms of Peak Signal to Noise Ratio, Mean Square Error and Image Enhancement Factor.

  16. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach in the wavelet domain for image denoising. Recent research has used the wavelet transform as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation subimage to obtain a better denoised estimate, since this naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain. One hundred images from the LIVE dataset were tested, comparing signal-to-noise ratios (SNR): soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC alone.

  17. Computer processing of image captured by the passive THz imaging device as an effective tool for its de-noising

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.; Zhang, Cun-lin; Deng, Chao; Zhao, Yuan-meng; Zhang, Xin

    2012-12-01

    As is well known, passive THz imaging devices have great potential for solving the security problem. Nevertheless, one of the main obstacles to using these devices is the low image quality of the passive THz cameras developed to date. To change this situation, it is necessary either to improve the engineering characteristics (resolution, sensitivity and so on) of the THz camera or to apply computer processing to the image. In our opinion, the latter is preferable because it is less expensive. Below we illustrate the possibility of suppressing the noise of images captured by three passive THz cameras developed at CNU (Beijing, China). After applying the computer processing, the image quality is enhanced many times over, and in many cases becomes sufficient for the detection of objects hidden under opaque clothes. We stress that the performance of the developed computer code is high enough not to restrict the performance of the passive THz imaging device. The obtained results demonstrate the high efficiency of our approach for the detection of hidden objects and are a very promising solution for the security problem. Nevertheless, developing new spatial filters for the treatment of THz images remains an open problem.

  18. A SAR Image Registration Method Based on the SIFT Algorithm

    Science.gov (United States)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and rapidity of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing based on Wallis filtering is employed for image denoising, to avoid amplifying noise in the subsequent steps. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, this approach not only maintains the richness of the features but also reduces the noise of the image. The simulation results show that the proposed algorithm achieves a better matching effect.
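
    A baseline sketch of SIFT-based registration with OpenCV (assuming opencv-python >= 4.4, where SIFT is in the main module): keypoint matching with Lowe's ratio test followed by RANSAC homography estimation and warping. The paper's Wallis-based adaptive smoothing prefilter and simplified SIFT variant are not reproduced; inputs are assumed to be 8-bit grayscale arrays with at least four good matches.

    ```python
    import cv2
    import numpy as np

    def sift_register(img_ref, img_mov):
        """Warp img_mov onto img_ref using SIFT + ratio test + RANSAC."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img_ref, None)
        k2, d2 = sift.detectAndCompute(img_mov, None)

        good = []
        for pair in cv2.BFMatcher().knnMatch(d1, d2, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])                  # Lowe's ratio test

        src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return cv2.warpPerspective(img_mov, H, img_ref.shape[::-1])
    ```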

  19. Enhancing micro-seismic P-phase arrival picking: EMD-cosine function-based denoising with an application to the AIC picker

    Science.gov (United States)

    Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun

    2018-03-01

    Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high frequency noise and power frequency noise (50 Hz), which can considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low pass filters, which are ineffective when the seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results show that the EMD-cosine function denoising can effectively estimate high frequency and power frequency noise and can easily adapt to signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the plain AIC picker, and also yields more reliable source localization results, demonstrating the potential of this combined P-phase picking technique.
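
    The AIC picker itself is compact enough to sketch: the Maeda-style formulation below scores each candidate split point by the log-variances of the segments before and after it, and the global minimum marks the noise-to-signal transition (the P-phase onset). The EMD-cosine denoising stage is not reproduced (a package such as PyEMD could supply the IMFs); the toy trace and onset sample are assumptions.

    ```python
    import numpy as np

    def aic_pick(x):
        """Maeda-style AIC picker:
        AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
        the global minimum marks the P-phase onset."""
        n = len(x)
        aic = np.full(n, np.inf)
        for k in range(2, n - 2):
            v1, v2 = np.var(x[:k]), np.var(x[k:])
            if v1 > 0 and v2 > 0:
                aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
        return int(np.argmin(aic))

    # Toy trace: noise followed by a stronger arrival at sample 600.
    rng = np.random.default_rng(0)
    trace = 0.2 * rng.standard_normal(1500)
    trace[600:] += np.sin(0.3 * np.arange(900)) * np.exp(-np.arange(900) / 400)
    print(aic_pick(trace))   # approximately 600
    ```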

  20. Random Correlation Matrix and De-Noising

    OpenAIRE

    Ken-ichi Mitsui; Yoshio Tabata

    2006-01-01

    In finance, the modeling of a correlation matrix is one of the important problems. In particular, the correlation matrix obtained from market data contains noise. Here we apply de-noising based on wavelet analysis to a noisy correlation matrix generated by a parametric function with random parameters. First of all, we show that two properties of the correlation matrix, i.e. symmetry and unit diagonal elements, are preserved by the de-noising processing and the...

  1. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding

    Science.gov (United States)

    BahadarKhan, Khan; A Khaliq, Amir; Shahid, Muhammad

    2016-01-01

    Diabetic Retinopathy (DR) harms the retinal blood vessels in the eye, causing visual impairment. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We proposed a computationally light unsupervised automated technique, with promising results, for the detection of retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and for removing low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels in both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data precisely marked by experts. PMID:27441646

  2. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding.

    Directory of Open Access Journals (Sweden)

    Khan BahadarKhan

    Full Text Available Diabetic Retinopathy (DR) harms the retinal blood vessels in the eye, causing visual impairment. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We proposed a computationally light unsupervised automated technique, with promising results, for the detection of retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and for removing low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide- and thin-vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels in both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases, along with ground truth data precisely marked by experts.

  3. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we proposed an EMD selective thresholding method based on multiple iterations, which is essentially a development of EMD interval thresholding (EMD-IT): the samples in the noisy parts of all the corrupted intrinsic mode functions are randomly altered so that the iterations have a stronger denoising effect. Simulations on both synthetic signals and real-world LIDAR signals support this method.

  4. Statistical Multirate High-Resolution Signal Reconstruction Using the EMD-IT Based Denoising Approach

    Directory of Open Access Journals (Sweden)

    A. Kizilkaya

    2015-04-01

    Full Text Available The reconstruction problem of a high-resolution (HR) signal from a set of its noise-corrupted low-resolution (LR) versions is considered. As a part of this problem, a hybrid method consisting of four operation units is proposed. The first unit applies noise reduction based on empirical mode decomposition interval-thresholding to the noisy LR observations. In the second unit, estimates of zero-interpolated HR signals are obtained by up-sampling and then time-shifting each noise-reduced LR signal. The third unit combines the zero-interpolated HR signals into one HR signal. Finally, to eliminate the ripple effect, median filtering is applied to the resulting reconstructed signal. Compared to work that employs linear periodically time-varying Wiener filters, the proposed method does not require any correlation information about the desired signal and the LR observations. The validity of the proposed method is demonstrated by several simulation examples.
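
    The combination stage of such a scheme is easy to picture in code. Below is a minimal sketch under simplifying assumptions (the EMD interval-thresholding denoising is omitted and the LR signals are taken as already noise-reduced): each LR observation is zero-interpolated, shifted to its sampling offset, the estimates are combined, and a median filter removes the ripple.

        import numpy as np
        from scipy.signal import medfilt

        L = 4                                  # decimation factor
        t = np.arange(4096)
        hr_true = np.sin(2 * np.pi * t / 256)  # HR signal to recover

        # L low-resolution, offset versions (noise assumed already reduced).
        lr = [hr_true[k::L] for k in range(L)]

        est = np.zeros_like(hr_true)
        for k, x in enumerate(lr):
            up = np.zeros_like(hr_true)
            up[k::L] = x                       # zero-interpolate and time-shift
            est += up                          # combine zero-filled estimates

        hr_est = medfilt(est, kernel_size=5)   # suppress ripple
        print(np.max(np.abs(hr_est - hr_true)))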

  5. Similarity and denoising.

    Science.gov (United States)

    Vitányi, Paul M B

    2013-02-13

    We can discover the effective similarity among pairs of finite objects and denoise a finite object using the Kolmogorov complexity of these objects. The drawback is that the Kolmogorov complexity is not computable. If we approximate it, using a good real-world compressor, then it turns out that on natural data the processes give adequate results in practice. The methodology is parameter-free, alignment-free and works on individual data. We illustrate both methods with examples.
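
    The compression-based similarity idea is easy to try: the Normalized Compression Distance (NCD) approximates the (uncomputable) Kolmogorov quantities with a real compressor. The sketch below is a minimal illustration using zlib; smaller values indicate more similar objects.

        import zlib

        def ncd(x: bytes, y: bytes) -> float:
            """Normalized Compression Distance with zlib as the compressor."""
            cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
            cxy = len(zlib.compress(x + y))
            return (cxy - min(cx, cy)) / max(cx, cy)

        s1 = b"the quick brown fox jumps over the lazy dog " * 20
        s2 = s1[:-1] + b"!"
        s3 = b"lorem ipsum dolor sit amet consectetur " * 20
        print(ncd(s1, s2))  # near 0: nearly identical
        print(ncd(s1, s3))  # closer to 1: unrelated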

  6. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typi...

  7. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning of this method for noise removal from the ECG signal has not been investigated yet. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation results for synthetic ECG signals corrupted by various types of noise have shown that the modified unified threshold and wavelet hyperbolic threshold de-noising method performs better for realistic and colored noise. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results have shown that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
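
    To make the notion of threshold tuning concrete, the sketch below sweeps threshold scales and decomposition levels on a synthetic signal and reports the combination that maximizes output SNR. It is a minimal sketch using the ordinary discrete wavelet transform from PyWavelets as a stand-in for the dual-tree transform (which requires a dedicated package), and the test signal is an assumption for illustration.

        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
        noisy = clean + 0.3 * rng.standard_normal(t.size)

        def snr(ref, x):
            return 10 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2))

        best = (-np.inf, None)
        for level in (2, 3, 4, 5):
            coeffs = pywt.wavedec(noisy, "sym8", level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # MAD noise estimate
            for scale in (0.5, 1.0, 1.5, 2.0):
                thr = scale * sigma * np.sqrt(2 * np.log(noisy.size))
                den = pywt.waverec(
                    [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                   for c in coeffs[1:]], "sym8")
                best = max(best, (snr(clean, den), (level, scale)))

        print(f"best SNR {best[0]:.1f} dB at (level, scale) = {best[1]}")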

  8. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Science.gov (United States)

    Jiang, M.; Cui, B.-Y.; Schmid, N. A.; McLaughlin, M. A.; Cao, Z.-C.

    2017-09-01

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise contributed at other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses is improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.

  9. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    Science.gov (United States)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
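
    The variance-stabilization step at the heart of such pipelines can be illustrated in a few lines. The sketch below uses the classical Anscombe transform for pure Poisson noise (the paper's setting needs the generalized Poisson-Gaussian form, and exact unbiased inverses exist but are longer); after the forward transform the noise is approximately Gaussian with unit variance, so a Gaussian denoiser can be applied before inverting.

        import numpy as np

        def anscombe(x):
            """Forward Anscombe VST for Poisson data."""
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse (biased; exact inverses exist)."""
            return (y / 2.0) ** 2 - 3.0 / 8.0

        rng = np.random.default_rng(1)
        flat = rng.poisson(np.full(10 ** 6, 20.0)).astype(float)
        print(np.std(flat))            # ~ sqrt(20): signal-dependent noise
        print(np.std(anscombe(flat)))  # ~ 1: stabilized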

  10. Airborne Gravity Data Denoising Based on Empirical Mode Decomposition: A Case Study for SGA-WZ Greenland Test Data

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2015-10-01

    Full Text Available Surveying the Earth's gravity field is an important domain of geodesy, with deep connections to the Earth sciences and geo-information. Airborne gravimetry is an effective tool for collecting gravity data with mGal accuracy and a spatial resolution of several kilometers. The main obstacle in airborne gravimetry is extracting the gravity disturbance from measurement data with an extremely low signal-to-noise ratio. In general, the power of the noise is concentrated in the higher frequencies of the measured data, and a low-pass filter can be used to eliminate it. However, the noise can also be distributed over a broad frequency range, where a low-pass filter cannot deal with it within the pass band. In order to improve the accuracy of airborne gravimetry, Empirical Mode Decomposition (EMD) is employed to denoise the measurement data of two primary repeated flights of the strapdown airborne gravimetry system SGA-WZ carried out in Greenland. Compared to the solutions obtained with a finite impulse response (FIR) filter, the new results are improved by 40% and 10% in the root mean square (RMS) of internal consistency and external accuracy, respectively.
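
    A minimal sketch of this style of EMD denoising, using the third-party PyEMD package (pip install EMD-signal): decompose the signal, discard the first, noise-dominated IMFs, and reconstruct from the rest. How many IMFs to discard is an assumption made here for illustration; the paper selects the relevant content more carefully.

        import numpy as np
        from PyEMD import EMD

        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 2048)
        clean = np.sin(2 * np.pi * 5 * t)
        noisy = clean + 0.3 * rng.standard_normal(t.size)

        imfs = EMD().emd(noisy, t)
        denoised = imfs[2:].sum(axis=0)  # drop the two noisiest IMFs

        rmse = np.sqrt(np.mean((denoised - clean) ** 2))
        print(f"RMSE after EMD denoising: {rmse:.3f}")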

  11. A New Wavelet Denoising Method for Selecting Decomposition Levels and Noise Thresholds.

    Science.gov (United States)

    Srivastava, Madhur; Anderson, C Lindsay; Freed, Jack H

    2016-01-01

    A new method is presented to denoise 1-D experimental signals using wavelet transforms. Although the state-of-the-art wavelet denoising methods perform better than other denoising methods, they are not very effective for experimental signals. Unlike images and other signals, experimental signals in chemical and biophysical applications, for example, are less tolerant to signal distortion and the under-denoising caused by standard wavelet denoising methods. The new method 1) provides a way to select the number of decomposition levels to denoise, 2) uses a new formula to calculate noise thresholds that does not require noise estimation, 3) uses separate noise thresholds for positive and negative wavelet coefficients, 4) applies denoising to the approximation component, and 5) allows the flexibility to adjust the noise thresholds. The new method is applied to continuous wave electron spin resonance (cw-ESR) spectra, and it is found to increase the signal-to-noise ratio (SNR) by more than 32 dB without distorting the signal, whereas standard denoising methods improve the SNR by less than 10 dB and with some distortion. Its computation time is also more than 6 times faster.

  12. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  13. Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm

    OpenAIRE

    Dali Chen; YangQuan Chen; Dingyu Xue

    2013-01-01

    This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating the problems of algorithm implementation, the ill-posed inverse, the convergence rate, and the blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulations are constructed in theory. In order to guarantee $O(1/N^2)$ conv...
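
    As a point of reference for readers, the classical first-order TV model that this work generalizes is available in standard libraries. The sketch below is a minimal baseline using scikit-image's Chambolle solver; the paper's fractional-order primal-dual scheme is not implemented here.

        import numpy as np
        from skimage import data, util
        from skimage.restoration import denoise_tv_chambolle

        noisy = util.random_noise(data.camera(), var=0.01)
        denoised = denoise_tv_chambolle(noisy, weight=0.1)

        # TV denoising removes noise while keeping sharp edges.
        print(f"std before: {noisy.std():.3f}, after: {denoised.std():.3f}")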

  14. Dictionary learning based noisy image super-resolution via distance penalty weight model.

    Science.gov (United States)

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. A noisy low-resolution (LR) image is what is usually available in applications, yet most existing algorithms assume that the LR image is noise-free. For this situation, we present an algorithm for noisy image super-resolution that achieves super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and the dictionary pair does not need to be retrained for different input LR images, even if the noise variance varies. For an input LR image patch, the corresponding high-resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce the computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which can simultaneously perform a second selection on similar atoms. Moreover, LR example patches with the mean pixel value removed are also used to learn the dictionary, rather than just their gradient features. Based on this, we reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method exhibits better noise robustness.

  15. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...

  16. FlowClus: efficiently filtering and denoising pyrosequenced amplicons.

    Science.gov (United States)

    Gaspar, John M; Thomas, W Kelley

    2015-03-27

    Reducing the effects of sequencing errors and PCR artifacts has emerged as an essential component in amplicon-based metagenomic studies. Denoising algorithms have been designed that can reduce error rates in mock community data, but they change the sequence data in a manner that can be inconsistent with the process of removing errors in studies of real communities. In addition, they are limited by the size of the dataset and the sequencing technology used. FlowClus uses a systematic approach to filter and denoise reads efficiently. When denoising real datasets, FlowClus provides feedback about the process that can be used as the basis to adjust the parameters of the algorithm to suit the particular dataset. When used to analyze a mock community dataset, FlowClus produced a lower error rate compared to other denoising algorithms, while retaining significantly more sequence information. Among its other attributes, FlowClus can analyze longer reads being generated from all stages of 454 sequencing technology, as well as from Ion Torrent. It has processed a large dataset of 2.2 million GS-FLX Titanium reads in twelve hours; using its more efficient (but less precise) trie analysis option, this time was further reduced, to seven minutes. Many of the amplicon-based metagenomics datasets generated over the last several years have been processed through a denoising pipeline that likely caused deleterious effects on the raw data. By using FlowClus, one can avoid such negative outcomes while maintaining control over the filtering and denoising processes. Because of its efficiency, FlowClus can be used to re-analyze multiple large datasets together, thereby leading to more standardized conclusions. FlowClus is freely available on GitHub (jsh58/FlowClus); it is written in C and supported on Linux.

  17. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    Science.gov (United States)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are always corrupted by noise, causes over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress the noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, by signal, or by both. Better denoising is realized by selectively thresholding the DWCs. In the 'local quantization' stage, different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows that noise is suppressed and flaw characteristics are preserved.

  18. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    Science.gov (United States)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, color images need to be captured at a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image with the help of an invisible near-infrared flash image. In one novel method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging; that method needs to generate training data pairs, and its processes and algorithms are complex, which makes practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
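
    A minimal sketch of the weighted-luminance idea: fuse the NIR luminance with the denoised color image's luminance using weights derived from each image's mean and standard deviation. The particular weighting rule below (contrast-proportional weights with mean matching) is an illustrative assumption, not the authors' exact formula.

        import numpy as np

        def fuse_luminance(nir, y_denoised):
            """Weight two luminance sources by their standard deviations."""
            s_nir, s_y = np.std(nir), np.std(y_denoised)
            w = s_nir / (s_nir + s_y + 1e-8)
            # Match mean brightness before blending.
            nir_adj = nir - nir.mean() + y_denoised.mean()
            return w * nir_adj + (1.0 - w) * y_denoised

        rng = np.random.default_rng(3)
        nir = rng.uniform(0, 1, (128, 128))  # stand-in NIR luminance
        y = rng.uniform(0, 1, (128, 128))    # stand-in denoised luminance
        print(fuse_luminance(nir, y).shape)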

  19. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    Science.gov (United States)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  20. Vibration sensor data denoising using a time-frequency manifold for machinery fault diagnosis.

    Science.gov (United States)

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2013-12-27

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods.

  1. Optimal wavelet denoising for smart biomonitor systems

    Science.gov (United States)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
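
    The core decompose-threshold-reconstruct loop described above fits in a few lines. Below is a minimal sketch with PyWavelets on a synthetic signal, using soft thresholding with the universal (VisuShrink) threshold and the usual MAD noise estimate; the wavelet family and level are assumptions of the kind this study compares.

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)  # PCG-like burst
        noisy = clean + 0.2 * rng.standard_normal(t.size)

        coeffs = pywt.wavedec(noisy, "db8", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
        thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode="soft")
                      for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db8")

        print(np.sqrt(np.mean((denoised - clean) ** 2)))    # RMSE vs clean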

  2. Denoising, deblurring, and superresolution in mobile phones

    Science.gov (United States)

    Šroubek, Filip; Kamenický, Jan; Flusser, Jan

    2011-03-01

    Current mobile phones and web cameras are equipped with low-budget digital cameras and very poor optics. Consequently, images acquired by such cameras are deteriorated by noise and blur, and have an effective resolution lower than the number of pixels. Recovering a noise-free, sharp and high-resolution image from a single input image is a heavily ill-posed problem. We propose a novel algorithm that takes a set of images acquired with low-budget cameras and performs four tasks simultaneously: registration, denoising, deblurring and resolution enhancement. The amount of each depends on the characteristics of the input set. In order to achieve all tasks in one framework, we formulate the image restoration as an energy minimization problem. Special attention is paid to the implementation, so that a fast algorithm is achieved. We demonstrate the performance of the proposed algorithm on a system comprising a camera in a mobile phone (or a web camera) and a PC. The mobile phone acquires the images, connects to the PC via a wireless network, sends the images, and shows the output after it is calculated on the PC.

  3. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, plus one self-recorded database (KHUSC-EmoDB), to evaluate the cross-corpora performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification power for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond what the pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.

  4. Speckle reduction in optical coherence tomography images based on wave atoms

    Science.gov (United States)

    Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping

    2014-01-01

    Optical coherence tomography (OCT) is an emerging noninvasive imaging technique based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on the wave atoms transform is proposed for speckle reduction in OCT images. The wave atoms transform is a new multiscale geometric analysis tool that offers a sparser expansion and better representation for images containing oscillatory patterns and textures than other traditional transforms, such as the wavelet and curvelet transforms. Cycle-spinning technology is introduced to avoid visual artifacts, such as the Gibbs-like phenomenon, and to develop a translation-invariant wave atoms denoising scheme. The degree of speckle suppression in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove speckle noise and improve OCT image quality. The signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are also compared with the wavelet and curvelet thresholding techniques. PMID:24825507

  5. TERRESTRIAL LASER SCANNER DATA DENOISING BY DICTIONARY LEARNING OF SPARSE CODING

    Directory of Open Access Journals (Sweden)

    E. Smigiel

    2013-07-01

    Full Text Available Point cloud processing is basically a signal processing issue. The huge amount of data collected with terrestrial laser scanners or photogrammetry techniques raises the classical questions of signal and image processing. Among others, denoising and compression have to be addressed in this context. That is why one has to turn to signal theory, which can guide good practice and inspire new ideas drawn from the latest developments in the field. The literature has shown for decades how strong and dynamic the theoretical field is and how efficient the derived algorithms have become. For about ten years, a new technique has appeared: known as compressive sensing or compressive sampling, it is based first on sparsity, which is an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have shown their efficiency. Sparsity can also be seen as redundancy removal from natural signals. Taken along with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very earliest stage of sampling. Hence, instead of sampling the signal at a high rate and removing redundancy in a second stage, the acquisition stage itself may be run with redundancy removal. This paper gives some theoretical aspects of these ideas, first with simple mathematics. Then, the idea of compressive sensing for a terrestrial laser scanner is examined as a potential research question and, finally, a denoising scheme based on dictionary learning of sparse coding is tested. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to benefit from its latest developments.
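
    The dictionary-learning denoising idea tested here follows a well-known patch-based recipe. The sketch below is a minimal 2-D image version of it using scikit-learn (assuming version >= 1.1 for the max_iter argument); the paper applies the same principle to point cloud data.

        import numpy as np
        from skimage import data, util
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (
            extract_patches_2d, reconstruct_from_patches_2d)

        img = util.img_as_float(data.camera())[::2, ::2]  # downsample for speed
        noisy = img + 0.1 * np.random.default_rng(5).standard_normal(img.shape)

        patches = extract_patches_2d(noisy, (7, 7))
        X = patches.reshape(len(patches), -1)
        means = X.mean(axis=1, keepdims=True)
        X = X - means                          # code only the local texture

        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                           max_iter=10, random_state=0).fit(X)
        codes = dico.transform(X)              # sparse codes (OMP by default)
        recon = codes @ dico.components_ + means
        denoised = reconstruct_from_patches_2d(
            recon.reshape(patches.shape), noisy.shape)
        print(abs(denoised - img).mean())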

  6. WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis

    Directory of Open Access Journals (Sweden)

    Li Song

    2010-04-01

    Full Text Available Abstract Background Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one such quantification technology. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification. Results We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program into the Trans-Proteomic Pipeline (TPP), a commonly used open-source proteomics analysis pipeline. Conclusions We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.

  7. Real-Time Wavelet-Based Coordinated Control of Hybrid Energy Storage Systems for Denoising and Flattening Wind Power Output

    Directory of Open Access Journals (Sweden)

    Tran Thai Trung

    2014-10-01

    Full Text Available Since the penetration level of wind energy is continuously increasing, the negative impact caused by the fluctuation of wind power output needs to be carefully managed. This paper proposes a novel real-time coordinated control algorithm based on a wavelet transform to mitigate both short-term and long-term fluctuations by using a hybrid energy storage system (HESS). The short-term fluctuation is eliminated by using an electric double-layer capacitor (EDLC), while the wind-HESS system output is kept constant during each 10-min period by a Ni-MH battery (NB). State-of-charge (SOC) control strategies for both the EDLC and the NB are proposed to maintain the SOC level of the storage within safe operating limits. A ramp rate limitation (RRL) requirement is also considered in the proposed algorithm. The effectiveness of the proposed algorithm has been tested using real-time simulation. The simulation model of the wind-HESS system is developed in the real-time digital simulator (RTDS/RSCAD) environment. The proposed algorithm is also implemented as a user-defined model in RSCAD. The simulation results demonstrate that the HESS with the proposed control algorithm can indeed assist in dealing with the variation of wind power generation. Moreover, the proposed method shows better performance in smoothing out the fluctuation and managing the SOC of the battery and EDLC than the simple moving average (SMA) based method.

  8. Principal Component Analysis Based Two-Dimensional (PCA-2D) Correlation Spectroscopy: PCA Denoising for 2D Correlation Spectroscopy

    International Nuclear Information System (INIS)

    Jung, Young Mee

    2003-01-01

    Principal component analysis based two-dimensional (PCA-2D) correlation analysis is applied to FTIR spectra of a polystyrene/methyl ethyl ketone/toluene solution mixture during solvent evaporation. A substantial amount of artificial noise was added to the experimental data to demonstrate the practical noise-suppressing benefit of the PCA-2D technique. 2D correlation analysis of the data matrix reconstructed from the PCA loading vectors and scores successfully extracted only the most important features of synchronicity and asynchronicity, without interference from noise or insignificant minor components. 2D correlation spectra constructed with only one principal component yield a strictly synchronous response with no discernible asynchronous features, while those involving at least two or more principal components generate meaningful asynchronous 2D correlation spectra. Deliberate manipulation of the rank of the reconstructed data matrix, by choosing the appropriate number and type of PCs, yields potentially more refined 2D correlation spectra.
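
    The PCA denoising step is simply a truncated reconstruction of the data matrix. A minimal sketch on synthetic two-component spectral data follows; the rank kept (here 2) is the kind of deliberate choice discussed above.

        import numpy as np

        rng = np.random.default_rng(6)
        times, wavenumbers = 50, 400
        # Two latent spectral components evolving over time, plus noise.
        spectra = rng.random((2, wavenumbers))
        profiles = rng.random((times, 2))
        data = profiles @ spectra + 0.05 * rng.standard_normal(
            (times, wavenumbers))

        mean = data.mean(axis=0)
        u, s, vt = np.linalg.svd(data - mean, full_matrices=False)
        k = 2                                    # number of PCs kept
        denoised = mean + (u[:, :k] * s[:k]) @ vt[:k]

        residual = np.linalg.norm(data - denoised) / np.linalg.norm(data)
        print(f"relative residual (mostly noise): {residual:.3f}")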

  9. Sparsity based noise removal from low dose scanning electron microscopy images

    Science.gov (United States)

    Lazar, A.; Fodor, P. S.

    2015-03-01

    Scanning electron microscopes are some of the most versatile tools for imaging materials with nanometer resolution. However, images collected at high scan rates to increase throughput and avoid sample damage suffer from a low signal-to-noise ratio (SNR) as a result of the Poisson-distributed shot noise associated with electron production and interaction with the imaged surface. The signal is further degraded by additive white Gaussian noise (AWGN) from the detection electronics. In this work, denoising frameworks that take advantage of the sparse character of these images are applied to them, along with a methodology for determining the AWGN. A variance stabilization technique is applied to the raw data, followed by a patch-based denoising algorithm. Results are presented both for images with known levels of mixed Poisson-Gaussian noise and for raw images. The quality of the image reconstruction is assessed based both on the PSNR and on measures specific to the application of the data collected. These include accurate identification of objects of interest and structural similarity. High-quality results are recovered from noisy observations collected at short dwell times that avoid sample damage.

  10. New methods to De-noise and Invert Lidar Observations Applied to HSRL and CATS Observations

    Science.gov (United States)

    Marais, W.; Holz, R.

    2017-12-01

    Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Our ultimate goal is to develop inversion algorithms for space-based lidar systems. Standard methods for solving the atmospheric lidar inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. We began our research by developing inversion algorithms for the UW-Madison High Spectral Resolution Lidar (HSRL) system, and we have since made progress in developing a denoising algorithm for Cloud-Aerosol Transport System (CATS) lidar data. In our talk we will describe novel methods that have been developed for solving lidar inverse problems with high-resolution, lower signal-to-noise ratio observations and that are effective in non-uniform scenes. The new methods are based on state-of-the-art signal processing tools that were originally developed for medical imaging and have been adapted for atmospheric lidar inverse problems. We will present inverted backscatter and extinction cross-section results of the new method, estimated from the UW-Madison HSRL observations, and we will juxtapose the results against the estimates obtained via the standard inversion method. We will also present denoising results for CATS observations, from which the attenuated backscatter cross-section is obtained. We demonstrate the validity of the denoised CATS observations through simulations, and the validity of the HSRL observations is demonstrated through an uncertainty analysis using real data.

  11. A New Approach to Inverting and De-Noising Backscatter from Lidar Observations

    Directory of Open Access Journals (Sweden)

    Marais Willem

    2016-01-01

    Full Text Available Atmospheric lidar observations provide a unique capability to directly observe the vertical profile of cloud and aerosol scattering properties and have proven to be an important capability for the atmospheric science community. For this reason NASA and ESA have put a major emphasis on developing both space- and ground-based lidar instruments. Measurement noise (solar background and detector noise) has proven to be a significant limitation and is typically reduced by temporal and vertical averaging. This approach has significant limitations, as it substantially reduces the spatial information and can introduce biases due to the non-linear relationship between the signal and the retrieved scattering properties. This paper investigates a new approach to de-noising and retrieving cloud and aerosol backscatter properties from lidar observations that leverages a technique developed for medical imaging to de-blur and de-noise images; the accuracy is defined as the error between the true and inverted photon rates. Hence non-linear bias errors can be mitigated and spatial information can be preserved.

  12. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured or non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets...

  13. A novel approach to speckle noise filtering based on Artificial Bee Colony algorithm: an ultrasound image application.

    Science.gov (United States)

    Latifoğlu, Fatma

    2013-09-01

    In this study, a novel approach based on 2D FIR filters is presented for denoising digital images. In this approach, the coefficients of the 2D FIR filters were optimized using the Artificial Bee Colony (ABC) algorithm. To obtain the best filter design, the filter coefficients were tested with different mask sizes (3×3, 5×5, 7×7, 11×11) and connection types (cascade and parallel) during optimization. First, speckle noise with variances of 1, 0.6, 0.8 and 0.2 was added to the synthetic test image. These noisy images were then denoised with both the proposed approach and well-known filter types such as Gaussian, mean and average filters. Metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR) were used to assess image quality. Even for noise with the maximum variance (the noisiest case), the proposed approach performed better than the other filtering methods on the noisy test images. In addition to the test images, speckle noise with a variance of 1 was added to a fetal ultrasound image, and this noisy image was denoised with very high PSNR and SNR values. The performance of the proposed approach was also tested on several clinical ultrasound images, such as those obtained from ovarian, abdominal and liver tissue. The results of this study showed that 2D FIR filters designed using ABC optimization can eliminate speckle noise quite well in both noise-added test images and intrinsically noisy ultrasound images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
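
    While the ABC optimization itself is beyond a short sketch, applying a candidate 2D FIR kernel and scoring it with the metrics named above is straightforward. Below is a minimal sketch in which a plain 5×5 averaging kernel stands in for the optimized coefficients; inside an ABC loop, each candidate kernel would be scored this way.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(9)
        img = rng.uniform(0.2, 0.8, (128, 128))
        # Multiplicative, speckle-like noise.
        noisy = np.clip(img * (1 + 0.5 * rng.standard_normal(img.shape)), 0, 1)

        kernel = np.ones((5, 5)) / 25.0         # stand-in FIR coefficients
        filtered = convolve2d(noisy, kernel, mode="same", boundary="symm")

        mse = np.mean((filtered - img) ** 2)    # fitness terms for the ABC loop
        psnr = 10 * np.log10(1.0 / mse)
        print(f"MSE = {mse:.4f}, PSNR = {psnr:.1f} dB")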

  14. Feedwater flowrate estimation based on the two-step de-noising using the wavelet analysis and an autoassociative neural network

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Choi, Seong Soo; Chang, Soon Heung

    1999-01-01

    This paper proposes an improved signal processing strategy for accurate feedwater flowrate estimation in nuclear power plants. It is generally known that ∼2% thermal power errors occur due to fouling phenomena in feedwater flowmeters. In the proposed strategy, the noises included in the feedwater flowrate signal are classified into rapidly varying noises and gradually varying noises according to their characteristics in the frequency domain. The estimation precision is enhanced by introducing a low-pass filter based on wavelet analysis against the rapidly varying noises, and an autoassociative neural network that corrects only the gradually varying noises. A modified multivariate stratification sampling using the concept of time stratification and the MAXIMIN criterion is developed to overcome the shortcomings of general random sampling. In addition, a multi-stage robust training method is developed to increase the quality and reliability of the training signals. Validations using simulated data from a micro-simulator were carried out. In the validation tests, the proposed methodology removed both rapidly varying noises and gradually varying noises in the respective de-noising steps, and the 5.54% root mean square error of the initial noisy signals was decreased to 0.674% after denoising. These results indicate that it is possible to estimate the reactor thermal power more accurately by adopting this strategy. (author). 16 refs., 6 figs., 2 tabs

  15. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    Science.gov (United States)

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Information on the subcellular localization of apoptosis proteins is very important for understanding the mechanism of programmed cell death and for drug development. Predicting the subcellular localization of an apoptosis protein is still a challenging task, and such predictions can help to understand the protein's function and its role in metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo position-specific scoring matrix (PsePSSM); the extracted feature information is then denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to an SVM classifier to predict the subcellular location of the apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, and it will be a supplementary tool for future proteomics research. PMID:29296195

  16. Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition

    Directory of Open Access Journals (Sweden)

    Rongbing Huang

    2016-01-01

    Full Text Available Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised face recognition methods, the proposed method takes the class label information of the training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with a supervised autoencoder that is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called “bottleneck” neural network that learns to map face images into a low-dimensional vector and to reconstruct the respective corresponding face images from the mapped vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstruction image with individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under severe illumination variation, pose change, and partial occlusion.

  17. [Medical image processing based on wavelet characteristics and edge blur detection].

    Science.gov (United States)

    Zhu, Baihui; Wan, Zhiping

    2014-06-01

    To address noise interference and weak edge signals in existing medical images, we used the two-dimensional wavelet transform to process medical images. Combining the directivity of image edges with the correlation of the wavelet coefficients, we proposed a medical image processing algorithm based on wavelet characteristics and edge blur detection. This algorithm improves noise reduction and the edge response thanks to the wavelet transform and edge blur detection. The experimental results showed that the proposed algorithm, which combines wavelet-based directional correlation with edge blur detection, can effectively reduce the noise in a medical image while preserving the image's edge signal. Its advantages are high definition and strong de-noising ability.

  18. Denoising of electron beam Monte Carlo dose distributions using digital filtering techniques

    International Nuclear Information System (INIS)

    Deasy, Joseph O.

    2000-01-01

    The Monte Carlo (MC) method has long been viewed as the ultimate dose distribution computational technique. The inherent stochastic dose fluctuations (i.e. noise), however, have several important disadvantages: noise will affect estimates of all the relevant dosimetric and radiobiological indices, and noise will degrade the resulting dose contour visualizations. We suggest the use of a post-processing denoising step to reduce statistical fluctuations and also improve dose contour visualization. We report the results of applying four different two-dimensional digital smoothing filters to two-dimensional dose images. The Integrated Tiger Series MC code was used to generate 10 MeV electron beam dose distributions at various depths in two different phantoms. The observed qualitative effects of filtering include: (a) the suppression of voxel-to-voxel (high-frequency) noise and (b) the resulting contour plots are visually more comprehensible. Drawbacks include, in some cases, slight blurring of penumbra near the surface and slight blurring of other very sharp real dosimetric features. Of the four digital filters considered here, one, a filter based on a local least-squares principle, appears to suppress noise with negligible degradation of real dosimetric features. We conclude that denoising of electron beam MC dose distributions is feasible and will yield improved dosimetric reliability and improved visualization of dose distributions. (author)

  19. Speckle processing for OCT image based on Bayesian least mean square error criterion

    Directory of Open Access Journals (Sweden)

    WANG Rong

    2013-06-01

    Full Text Available This paper presents a noise reduction algorithm for speckle noise in optical coherence tomography images based on a Bayesian criterion. First, the noisy imaging data are transformed into the logarithmic space, and samples are extracted from the data, whose noise then follows a Gaussian distribution. Then pixels within a sample are given weights based on the correlation between adjacent pixels in the image. Finally, the posterior distribution is estimated using a weighted histogram approach, and the noise-free data are estimated using a generic Bayesian least mean square error estimate. Compared with traditional wavelet-transform noise reduction and median-filter denoising, this method clearly improves the signal-to-noise ratio (SNR) and the equivalent apparent number (ENL) of the OCT image, enhancing the image quality to some extent.

  20. A simple filter circuit for denoising biomechanical impact signals.

    Science.gov (United States)

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  1. Instance-Wise Denoising Autoencoder for High Dimensional Data

    Directory of Open Access Journals (Sweden)

    Lin Chen

    2016-01-01

    Full Text Available The Denoising Autoencoder (DAE) is one of the most popular models to have reported significant success in recent neural network research. Specifically, a DAE randomly corrupts some features of the data to zero so as to utilize the co-occurrence information while avoiding overfitting. However, existing DAE approaches do not fare well on sparse and high-dimensional data. In this paper, we present a denoising autoencoder, labeled here the Instance-Wise Denoising Autoencoder (IDA), which is designed to work with high-dimensional and sparse data by utilizing the instance-wise co-occurrence relation instead of the feature-wise one. IDA works based on the following corruption rule: if an instance vector with nonzero features is selected, it is forced to become a zero vector. To avoid serious information loss in the event that too many instances are discarded, an ensemble of multiple independent autoencoders built on different corrupted versions of the data is considered. Extensive experimental results on high-dimensional and sparse text data show the superiority of IDA in efficiency and effectiveness. IDA is also tested in the heterogeneous transfer learning setting and on cross-modal retrieval to study its generality on heterogeneous feature representations.
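
    The instance-wise corruption rule is simple to state in code. Below is a minimal sketch of that step alone: whole instance vectors, rather than individual features, are zeroed, and an ensemble would train one autoencoder per corrupted copy. Training itself is omitted, and the data shape is an assumption for illustration.

        import numpy as np

        def instance_wise_corrupt(X, drop_rate, rng):
            """Zero out whole rows (instances) of the data matrix X."""
            Xc = X.copy()
            mask = rng.random(X.shape[0]) < drop_rate
            Xc[mask] = 0.0
            return Xc

        rng = np.random.default_rng(8)
        X = rng.random((200, 5000))       # high-dimensional data, one instance per row
        ensemble_inputs = [instance_wise_corrupt(X, 0.2, rng) for _ in range(5)]
        print([int((c.sum(axis=1) == 0).sum()) for c in ensemble_inputs])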

  2. Semantic-based high resolution remote sensing image retrieval

    Science.gov (United States)

    Guo, Dihua

    High Resolution Remote Sensing (HRRS) imagery has experienced extraordinary development in the past decade. Technological development means increased-resolution imagery is available at lower cost, making it a precious resource for planners, environmental scientists, as well as others who can learn from the ground truth. Image retrieval plays an important role in managing and accessing huge image databases. Current image retrieval techniques cannot satisfy users' requests to retrieve remote sensing images based on semantics. In this dissertation, we make two fundamental contributions to the area of content based image retrieval. First, we propose a novel unsupervised texture-based segmentation approach suitable for accurately segmenting HRRS images. The results of existing segmentation algorithms dramatically deteriorate if simply adopted for HRRS images. This is primarily due to the multiple texture scales and the high level of noise present in these images. Therefore, we propose an effective and efficient segmentation model, which is a two-step process. At the high level, we improve the unsupervised segmentation algorithm by coping with two special features possessed by HRRS images. By preprocessing images with the wavelet transform, we not only obtain multi-resolution images but also denoise the original images. By optimizing the splitting results, we solve the problem of textons in HRRS images existing at different scales. At the fine level, we employ fuzzy classification segmentation techniques with parameters adjusted for different land cover. We implement our algorithm using real-world 1-foot resolution aerial images. Second, we devise methodologies to automatically annotate HRRS images based on semantics. In this, we address the issue of semantic feature selection, the major challenge faced by semantic-based image retrieval. Discovering and making use of the hidden semantics of images is application dependent. One type of the semantics in HRRS image is conveyed by composite

  3. Nanoplatform-based molecular imaging

    National Research Council Canada - National Science Library

    Chen, Xiaoyuan

    2011-01-01

    "Nanoplathform-Based Molecular Imaging provides rationale for using nanoparticle-based probes for molecular imaging, then discusses general strategies for this underutilized, yet promising, technology...

  4. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was applied to the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising by the use of a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    Directory of Open Access Journals (Sweden)

    Chu Zhang

    2016-02-01

    Full Text Available Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. The wavelet transform (WT) was used to reduce noise in the spectral data. The successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build calibration models on the full spectra and the optimal wavelengths. As a result, the BPNN models performed better than the corresponding PLS models, and the SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and a root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results of this study would help to develop an on-line monitoring system for the biomass energy production process by hyperspectral imaging.

  6. MREIT conductivity imaging based on the local harmonic Bz algorithm: Animal experiments

    Science.gov (United States)

    Jeon, Kiwan; Lee, Chang-Ock; Woo, Eung Je; Kim, Hyung Joong; Seo, Jin Keun

    2010-04-01

    From numerous numerical and phantom experiments, MREIT conductivity imaging based on the harmonic Bz algorithm appears to be yet another useful medical imaging modality. However, in animal experiments the conventional harmonic Bz algorithm gives poor results near the boundaries of problematic regions such as bones, lungs, and the gas-filled stomach, and near the subject boundary where electrodes are not attached. Since the amount of injected current is kept low for the safety of the in vivo animal, the measured Bz data are contaminated by severe noise. In order to handle these problems, we use the recently developed local harmonic Bz algorithm to obtain conductivity images in our ROI (region of interest) without involving the defective regions. Furthermore, we adopt a denoising algorithm that preserves the ramp structure of the Bz data, which indicates the location and size of an anomaly. Incorporating these efficient techniques, we provide conductivity images from post-mortem and in vivo animal experiments with high spatial resolution.

  7. An Algorithm for Thresholding in the Context of Signal Denoising with Wavelets

    Directory of Open Access Journals (Sweden)

    Dorel AIORDACHIOAIE

    2008-07-01

    Full Text Available The problem of estimating a signal corrupted by additive noise has been of interest to many researchers for practical as well as theoretical reasons. Wavelet based methods have become increasingly popular, due to a number of advantages over linear methods. A simulation-based analysis of some thresholding functions in the context of the denoising application of the wavelet transform was carried out. A probability based function to compute the threshold parameter is described and implemented in the Matlab-Simulink simulation environment, using an estimate of the probability density function of the wavelet coefficients. The obtained results are at the same quality level as other procedures currently used in denoising.

  8. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    Science.gov (United States)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can achieve nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm in the form of an open-source Matlab package.
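    The modification can be illustrated with a damped truncation of the SVD: instead of keeping the leading singular values untouched, they are shrunk so that weaker (noisier) components contribute less. The damping rule below is a plausible sketch under that idea, not the paper's exact formula.

```python
import numpy as np

def damped_tsvd(D, rank, K=3):
    """Rank-reduction denoising with damped singular values."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    keep = s[:rank]
    tail = s[rank] if rank < len(s) else 0.0      # first discarded value
    shrunk = keep * (1.0 - (tail / keep) ** K)    # damp the weak components
    return (U[:, :rank] * shrunk) @ Vt[:rank]

noisy = np.random.rand(200, 60)
denoised = damped_tsvd(noisy, rank=5)
```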

  9. Electrocardiogram Signal Denoising Using Extreme-Point Symmetric Mode Decomposition and Nonlocal Means

    Directory of Open Access Journals (Sweden)

    Xiaoying Tian

    2016-09-01

    Full Text Available Electrocardiogram (ECG) signals contain a great deal of essential information which can be utilized by physicians for the diagnosis of heart diseases. Unfortunately, ECG signals are inevitably corrupted by noise, which severely affects the accuracy of cardiovascular disease diagnosis. Existing ECG signal denoising methods based on wavelet shrinkage, empirical mode decomposition and nonlocal means (NLM) cannot provide sufficient noise reduction together with good detail preservation, especially under high noise corruption. To address this problem, we have proposed a hybrid ECG signal denoising scheme combining extreme-point symmetric mode decomposition (ESMD) with NLM. In the proposed method, the noisy ECG signal is first decomposed into several intrinsic mode functions (IMFs) and an adaptive global mean using ESMD. The first several IMFs, with the QRS complex detected from them as the dominant feature of the ECG signal, are then filtered by the NLM method according to their frequency, while the remaining IMFs are left unprocessed. The denoised and unprocessed IMFs are combined to produce the final denoised ECG signal. Experiments on both simulated ECG signals and real ECG signals from the MIT-BIH database demonstrate that the proposed method can suppress noise in ECG signals effectively while preserving details very well, and that it outperforms several state-of-the-art ECG signal denoising methods in terms of signal-to-noise ratio (SNR), root mean squared error (RMSE), percent root mean square difference (PRD) and mean opinion score (MOS) error index.

  10. A gradient-based method for segmenting FDG-PET images: methodology and validation

    International Nuclear Information System (INIS)

    Geets, Xavier; Lee, John A.; Gregoire, Vincent; Bol, Anne; Lonneux, Max

    2007-01-01

    A new gradient-based method for segmenting FDG-PET images is described and validated. The proposed method relies on the watershed transform and hierarchical cluster analysis. To allow a better estimation of the gradient intensity, iteratively reconstructed images were first denoised and deblurred with an edge-preserving filter and a constrained iterative deconvolution algorithm. Validation was first performed on computer-generated 3D phantoms containing spheres, then on a real cylindrical Lucite phantom containing spheres of different volumes ranging from 2.1 to 92.9 ml. Moreover, laryngeal tumours from seven patients were segmented on PET images acquired before laryngectomy by the gradient-based method and the thresholding method based on the source-to-background ratio developed by Daisne (Radiother Oncol 2003;69:247-50). For the spheres, the calculated volumes and radii were compared with the known values; for laryngeal tumours, the volumes were compared with the macroscopic specimens. Volume mismatches were also analysed. On computer-generated phantoms, the deconvolution algorithm decreased the mis-estimate of volumes and radii. For the Lucite phantom, the gradient-based method led to a slight underestimation of sphere volumes (by 10-20%), corresponding to negligible radius differences (0.5-1.1 mm); for laryngeal tumours, the segmented volumes by the gradient-based method agreed with those delineated on the macroscopic specimens, whereas the threshold-based method overestimated the true volume by 68% (p = 0.014). Lastly, macroscopic laryngeal specimens were totally encompassed by neither the threshold-based nor the gradient-based volumes. The gradient-based segmentation method applied on denoised and deblurred images proved to be more accurate than the source-to-background ratio method. (orig.)
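    For intuition, a bare-bones gradient/watershed pipeline in scikit-image might look like the following; the Gaussian pre-filter stands in for the paper's edge-preserving denoising and deconvolution, and the marker rule is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

def gradient_segment(pet_slice, sigma=1.0, flat_frac=0.1):
    smooth = gaussian(pet_slice, sigma=sigma)      # stand-in for denoising
    grad = sobel(smooth)                           # gradient magnitude
    markers, _ = ndi.label(grad < flat_frac * grad.max())  # seeds in flat areas
    return watershed(grad, markers)                # watershed on the gradient

labels = gradient_segment(np.random.rand(128, 128))
```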

  11. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    Science.gov (United States)

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original reading and display format.

  12. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application

    Directory of Open Access Journals (Sweden)

    Pengyun Chen

    2017-06-01

    Full Text Available Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information’s relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denoising algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.

  13. Remote Sensing Image Change Detection Based on NSCT-HMT Model and Its Application.

    Science.gov (United States)

    Chen, Pengyun; Zhang, Yichen; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola

    2017-06-06

    Traditional image change detection based on a non-subsampled contourlet transform always ignores the neighborhood information's relationship to the non-subsampled contourlet coefficients, and the detection results are susceptible to noise interference. To address these disadvantages, we propose a denoising method based on the non-subsampled contourlet transform domain that uses the Hidden Markov Tree model (NSCT-HMT) for change detection of remote sensing images. First, the ENVI software is used to calibrate the original remote sensing images. After that, the mean-ratio operation is adopted to obtain the difference image that will be denoised by the NSCT-HMT model. Then, using the Fuzzy Local Information C-means (FLICM) algorithm, the difference image is divided into the change area and unchanged area. The proposed algorithm is applied to a real remote sensing data set. The application results show that the proposed algorithm can effectively suppress clutter noise, and retain more detailed information from the original images. The proposed algorithm has higher detection accuracy than the Markov Random Field-Fuzzy C-means (MRF-FCM), the non-subsampled contourlet transform-Fuzzy C-means clustering (NSCT-FCM), the pointwise approach and graph theory (PA-GT), and the Principal Component Analysis-Nonlocal Means (PCA-NLM) denosing algorithm. Finally, the five algorithms are used to detect the southern boundary of the Gurbantunggut Desert in Xinjiang Uygur Autonomous Region of China, and the results show that the proposed algorithm has the best effect on real remote sensing image change detection.

  14. HARDI denoising using nonlocal means on S2

    Science.gov (United States)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the denoising of HARDI data a problem of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  15. Iris image recognition: wavelet filter-banks based iris feature extraction schemes

    CERN Document Server

    Rahulkar, Amol D

    2014-01-01

    This book provides new results in wavelet filter banks based feature extraction, and in the classifier, in the field of iris image recognition. It provides a broad treatment of the design of separable and non-separable wavelet filter banks, and of the classifier. The design techniques presented in the book are applied to iris image analysis for person authentication. The book also brings together the three strands of research (wavelets, iris image analysis, and classifiers) and compares the performance of the presented techniques with state-of-the-art available schemes. It contains a compilation of basic material on the design of wavelets that avoids the need to read many different books, and therefore provides an easier path for newcomers and researchers to master the contents. In addition, the designed filter banks and classifier can also be used more effectively than existing filter banks in many signal processing applications like pattern classification, data compression, watermarking, denoising etc. that will...

  16. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  17. An effective method to identify various factors for denoising wrist pulse signal using wavelet denoising algorithm.

    Science.gov (United States)

    Garg, Nidhi; Ryait, Hardeep S; Kumar, Amod; Bisht, Amandeep

    2018-01-01

    The wrist pulse signal (WPS) is a non-invasive means to investigate human health. During signal acquisition, noise is recorded along with the WPS. A clean WPS with a high peak signal-to-noise ratio is a prerequisite for use in disease diagnosis. The wavelet transform is a commonly used method in the filtration process. Despite its extensive use, the appropriate factors for the wavelet denoising algorithm are not yet clear in WPS applications. The presented work gives an effective approach to selecting the various factors of the wavelet denoising algorithm. With appropriate selection of the wavelet and factors, it is possible to reduce noise in the WPS. In this work, all the factors of wavelet denoising are varied successively. Evaluation parameters such as MSE, PSNR, PRD and the fit coefficient are used to assess the performance of the wavelet denoising algorithm at each step. The results obtained from computerized WPS illustrate that the presented approach can successfully select the mother wavelet and other factors for the wavelet denoising algorithm. The selection of db9 as the mother wavelet with the SURE threshold function and single rescaling using the UWT was the best option for our database. The empirical results prove that the methodology discussed here can be effective in denoising WPS of any morphological pattern.
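    The factor sweep can be automated along these lines: denoise with each candidate mother wavelet and score the result. The universal threshold and the metric shown are common defaults used here as assumptions, not the paper's selections.

```python
import numpy as np
import pywt

def wavelet_denoise(sig, wavelet, level=5, mode='soft'):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(sig)]

def psnr(ref, est):
    return 10 * np.log10(np.max(ref) ** 2 / np.mean((ref - est) ** 2))

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
for wav in ('db4', 'db9', 'sym8', 'coif3'):               # candidate factors
    print(wav, round(psnr(clean, wavelet_denoise(noisy, wav)), 2))
```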

  18. Assessing denoising strategies to increase signal to noise ratio in spinal cord and in brain cortical and subcortical regions

    Science.gov (United States)

    Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.

    2018-02-01

    Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of the spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed, obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to a limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at the acquisition stage and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools, respectively. The study, performed on healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal to noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain regions.

  19. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests image processing tools for improved visualization and better analysis of remotely sensed images. Methods are already available in the literature for this purpose, but the most important limitation among them is a lack of robustness. We propose an optimal method for image enhancement using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  20. Underwater image quality enhancement of sea cucumbers based on improved histogram equalization and wavelet transform

    Directory of Open Access Journals (Sweden)

    Xi Qiao

    2017-09-01

    Full Text Available Sea cucumbers usually live in an environment where lighting and visibility are generally not controllable, which causes the underwater images of sea cucumbers to be distorted, blurred, and severely attenuated. Therefore, the valuable information in such images cannot be fully extracted for further processing. To solve these problems and improve the quality of underwater images of sea cucumbers, pre-processing of sea cucumber images is attracting increasing interest. This paper presents a new method based on contrast limited adaptive histogram equalization and the wavelet transform (CLAHE-WT) to enhance sea cucumber image quality. CLAHE was used to process the underwater image to increase contrast based on the Rayleigh distribution, and WT was used for de-noising based on a soft threshold. Qualitative analysis indicated that the proposed method exhibited better performance in enhancing the quality and retaining the image details. For quantitative analysis, the test with 120 underwater images showed that for the proposed method, the mean square error (MSE), peak signal to noise ratio (PSNR), and entropy were 49.2098, 13.3909, and 6.6815, respectively. The proposed method outperformed three established methods in enhancing the visual quality of sea cucumber underwater gray images.
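    A compact version of the CLAHE-plus-wavelet idea, assuming scikit-image's CLAHE and a universal soft threshold; the clip limit, wavelet and decomposition level are illustrative choices, not the paper's parameters.

```python
import numpy as np
import pywt
from skimage import exposure

def clahe_wt(img, wavelet='db4', level=2, clip=0.02):
    eq = exposure.equalize_adapthist(img, clip_limit=clip)   # CLAHE contrast step
    coeffs = pywt.wavedec2(eq, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # from finest diagonal
    thr = sigma * np.sqrt(2 * np.log(eq.size))
    den = [coeffs[0]] + [tuple(pywt.threshold(b, thr, 'soft') for b in lvl)
                         for lvl in coeffs[1:]]
    return pywt.waverec2(den, wavelet)

enhanced = clahe_wt(np.random.rand(256, 256))
```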

  1. Improvement image in tomosynthesis

    International Nuclear Information System (INIS)

    Gomi, Tsutomu; Umeda, Tokuo; Takeda, Tohoru; Saito, Kyouko; Sakaguchi, Kazuya; Nakajima, Masahiro; Koshida, Kichirou

    2012-01-01

    We evaluated an X-ray digital tomosynthesis (DT) reconstruction processing method for metal artifact reduction, together with the application of wavelet denoising to selectively remove quantum noise, and we suggest the possibility of image quality improvement with a novel application for the chest. In orthopedic DT imaging, we developed artifact reduction methods based on a modified Shepp and Logan reconstruction filter kernel, realized by taking into account additional weighting by direct-current (DC) components in frequency domain space. The processing leads to an increase in the ratio of low-frequency components in an image. The effectiveness of the method in enhancing the visibility of a prosthetic case was quantified in terms of the removal of ghosting artifacts. In chest DT imaging, the technique was implemented on a DT system and experimentally evaluated through chest phantom measurements and spatial resolution, and compared with an existing post-reconstruction wavelet denoising algorithm by Badea et al. Our balance sparsity-norm wavelet technique effectively decreased quantum noise in the reconstructed images, with an improvement in contrast-to-noise ratio (CNR) when applied to the pre-reconstruction images rather than post-reconstruction. The results of our technique showed that the modulation transfer function (MTF) did not vary (preserving spatial resolution), whereas the existing wavelet denoising algorithm caused MTF deterioration. (author)

  2. Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions

    International Nuclear Information System (INIS)

    Fedkiw, Ronald P.; Sapiro, Guillermo; Shu Chiwang

    2003-01-01

    In this paper we review the algorithm development and applications in high resolution shock capturing methods, level set methods, and PDE based methods in computer vision and image processing. The emphasis is on Stanley Osher's contribution in these areas and the impact of his work. We will start with shock capturing methods and will review the Engquist-Osher scheme, TVD schemes, entropy conditions, ENO and WENO schemes, and numerical schemes for Hamilton-Jacobi type equations. Among level set methods we will review level set calculus, numerical techniques, fluids and materials, variational approach, high codimension motion, geometric optics, and the computation of discontinuous solutions to Hamilton-Jacobi equations. Among computer vision and image processing we will review the total variation model for image denoising, images on implicit surfaces, and the level set method in image processing and computer vision.

  3. Speckle Suppression in Ultrasonic Images Based on Undecimated Wavelets

    Science.gov (United States)

    Argenti, Fabrizio; Torricelli, Gionatan

    2003-12-01

    An original method to denoise ultrasonic images affected by speckle is presented. Speckle is modeled as a signal-dependent noise corrupting the image. Noise reduction is approached as a Wiener-like filtering performed in a shift-invariant wavelet domain by means of an adaptive rescaling of the coefficients of an undecimated octave decomposition. The scaling factor of each coefficient is calculated from local statistics of the degraded image, the parameters of the noise model, and the wavelet filters. Experimental results demonstrate that excellent background smoothing as well as preservation of edge sharpness and fine details can be obtained.

  4. Generalized TV and sparse decomposition of the ultrasound image deconvolution model based on fusion technology.

    Science.gov (United States)

    Wen, Qiaonong; Wan, Suiren

    2013-01-01

    Ultrasound image deconvolution involves noise reduction and image feature enhancement: denoising is equivalent to low-pass filtering, while feature enhancement strengthens the high-frequency parts, and the two requirements are often combined. Since the requirements are contradictory, a reasonable balance must be struck between them. The partial differential equation model for image deconvolution is a method based on diffusion theory, whereas sparse decomposition deconvolution is an image-representation-based method. The mechanisms of these two methods are not the same, and each has its own characteristics. In the contourlet transform domain, we combine the strengths of the two deconvolution methods through image fusion, and introduce the entropy of the local orientation energy ratio into the fusion decision-making, treating the low-frequency and high-frequency coefficients differently according to the actual situation. As the deconvolution process inevitably blurs image edge information, we fuse the edge gray-scale information into the deconvolution results in order to compensate for the missing edge information. Experiments show that our method performs better than either deconvolution method used separately, and restores part of the image edge information.

  5. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    Science.gov (United States)

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging data, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem, but the quality of the reconstructed image still needs to be improved. Though methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai and Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels; in particular, it is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
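    The operator-splitting structure reduces to alternating a gradient step on the data term with a denoising step on the regularizer. The generic proximal-gradient sketch below uses a soft-threshold denoiser as a stand-in for the paper's split Bregman JTV/JL1 solver; A, y and the step size are assumptions for illustration.

```python
import numpy as np

def soft(x, t):
    """Soft threshold: proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def os_solve(y, A, lam=0.05, tau=None, iters=200):
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step size
    x = A.T @ y
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                # gradient step on the data term
        x = soft(x - tau * grad, tau * lam)     # denoising (proximal) step
    return x

A = np.random.randn(80, 120)
y = A @ (0.1 * np.random.randn(120))
x_hat = os_solve(y, A)
```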

  6. Model based image restoration for underwater images

    Science.gov (United States)

    Stephan, Thomas; Frühberger, Peter; Werling, Stefan; Heizmann, Michael

    2013-04-01

    The inspection of offshore parks, dam walls and other infrastructure under water is expensive and time consuming, because such constructions must be inspected manually by divers. Underwater structures have to be examined visually to find small cracks, spallings or other deficiencies. Automation of underwater inspection depends on established waterproof imaging systems. Most underwater imaging systems are based on acoustic sensors (sonar). The disadvantage of such acoustic systems is the loss of the complete visual impression: all information embedded in texture and surface reflectance gets lost. Acoustic sensors are therefore mostly insufficient for this kind of visual inspection task. Imaging systems based on optical sensors, by contrast, offer enormous potential for underwater applications, ranging from the inspection of underwater structures via marine biological applications through to exploration of the seafloor. The reason for the lack of established optical systems for underwater inspection tasks lies in the technical difficulties of underwater image acquisition and processing: poor lighting and highly degraded images make computational post-processing absolutely essential.

  7. Three-Dimensional Velocity Field De-Noising using Modal Projection

    Science.gov (United States)

    Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn

    2017-11-01

    PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g. blood, air, etc) and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, it remains that measured data is noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through ``truncation boundaries'', and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.
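    The projection step itself is a plain least-squares fit onto the precomputed mode matrix. The snippet below assumes the modes are already stacked as columns and the measured field is flattened; computing the divergence-free modes themselves is the harder step and is not shown.

```python
import numpy as np

def modal_denoise(v_noisy, modes):
    """Project a measured velocity field onto precomputed modes.

    v_noisy : (n,) flattened velocity samples
    modes   : (n, m) matrix whose columns are the modes
    """
    coeffs, *_ = np.linalg.lstsq(modes, v_noisy, rcond=None)
    return modes @ coeffs            # denoised, constraint-satisfying field

modes = np.linalg.qr(np.random.randn(300, 12))[0]   # toy orthonormal modes
v = modes @ np.random.randn(12) + 0.05 * np.random.randn(300)
v_clean = modal_denoise(v, modes)
```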

  8. Denoising performance of modified dual-tree complex wavelet transform for processing quadrature embolic Doppler signals.

    Science.gov (United States)

    Serbes, Gorkem; Aydin, Nizamettin

    2014-01-01

    Quadrature signals are dual-channel signals obtained from systems employing quadrature demodulation. Embolic Doppler ultrasound signals obtained from stroke-prone patients by using Doppler ultrasound systems are quadrature signals caused by emboli, which are particles bigger than red blood cells within the circulatory system. Detection of emboli is an important step in diagnosing stroke. The most widely used parameter in the detection of emboli is the embolic signal-to-background signal ratio. Therefore, in order to increase this ratio, denoising techniques are employed in detection systems. The discrete wavelet transform has been used for denoising embolic signals, but it lacks the shift-invariance property. Instead, the dual-tree complex wavelet transform, which has near shift invariance, can be used; however, it is computationally expensive as two wavelet trees are required. The recently proposed modified dual-tree complex wavelet transform, which reduces the computational complexity, can also be used. In this study, the denoising performance of this method is extensively evaluated and compared with the others by using simulated and real quadrature signals. The quantitative results demonstrate that the modified dual-tree-complex-wavelet-transform-based denoising outperforms the conventional discrete wavelet transform with the same level of computational complexity and exhibits almost equal performance to the dual-tree complex wavelet transform at almost half the computational cost.

  9. Imaging liver lesions using grating-based phase-contrast computed tomography with bi-lateral filter post-processing.

    Directory of Open Access Journals (Sweden)

    Julia Herzen

    Full Text Available X-ray phase-contrast imaging shows improved soft-tissue contrast compared to standard absorption-based X-ray imaging. Especially the grating-based method seems to be one promising candidate for clinical implementation due to its extendibility to standard laboratory X-ray sources. Therefore the purpose of our study was to evaluate the potential of grating-based phase-contrast computed tomography in combination with a novel bi-lateral denoising method for imaging of focal liver lesions in an ex vivo feasibility study. Our study shows that grating-based phase-contrast CT (PCCT) significantly increases the soft-tissue contrast in the ex vivo liver specimens. Combining the information of both signals (absorption and phase-contrast), the bi-lateral filtering leads to an improvement of lesion detectability and higher contrast-to-noise ratios. The normal and the pathological tissue can be clearly delineated and even internal structures of the pathological tissue can be visualized, being invisible in the absorption-based CT alone. Histopathology confirmed the presence of the corresponding findings in the analyzed tissue. The results give strong evidence for a sufficiently high contrast for different liver lesions using non-contrast-enhanced PCCT. Thus, ex vivo imaging of liver lesions is possible with a polychromatic X-ray source and at a spatial resolution of ∼100 µm. The post-processing with the novel bi-lateral denoising method improves the image quality by combining the information from the absorption and the phase-contrast images.

  10. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  11. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm improved by the EM algorithm, which jointly processes multiframe adaptive optics images based on expectation-maximization theory. First, we build a mathematical model for the degraded multiframe adaptive optics images. The time-varying point spread function model is deduced based on the phase error. The AO images are denoised using the image power spectral density and support constrain...

  12. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    Science.gov (United States)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
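    In outline, the reweighting replaces energy ordering with an information score and recombines the singular components under a truncated linear weight. The periodic-modulation-intensity index is abstracted here as a user-supplied callable, and the weight shape is a sketch rather than the paper's exact function.

```python
import numpy as np

def rsvd_denoise(X, info_index, n_keep=8):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]
    scores = np.array([info_index(c) for c in comps])    # information, not energy
    order = np.argsort(scores)[::-1][:n_keep]
    w = np.linspace(1.0, 0.2, n_keep)                    # truncated linear weights
    return sum(wi * comps[i] for wi, i in zip(w, order))

def toy_index(c):
    """Toy periodicity proxy: autocorrelation peak of the row-mean signal."""
    m = c.mean(axis=1) - c.mean()
    ac = np.correlate(m, m, mode='full')[m.size:]
    return ac.max() / (np.sum(m ** 2) + 1e-12)

den = rsvd_denoise(np.random.randn(128, 64), toy_index)
```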

  13. Correction of defective pixels for medical and space imagers based on Ising Theory

    Science.gov (United States)

    Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer

    2014-09-01

    We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation; (ii) ultrasonic medical devices degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM in comparison to other filters.
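    One way to realize the Ising-like idea: treat defective pixels as free intensities and anneal them against a smoothness energy over the 4-neighbourhood, starting from a median prediction. Everything below (proposal width, schedule, energy weight) is an illustrative assumption, not the paper's model.

```python
import numpy as np

def anneal_restore(img, mask, beta=0.05, T0=1.0, iters=20000, seed=0):
    """Metropolis restoration of defective pixels (mask == True)."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    out[mask] = np.median(img[~mask])            # median predictor as init
    bad = np.argwhere(mask)
    H, W = out.shape
    for k in range(iters):
        T = T0 / (1.0 + k / 1000.0)              # annealing schedule
        i, j = bad[rng.integers(len(bad))]
        cand = out[i, j] + rng.normal(0.0, 10.0) # random proposal
        nb = [out[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
              if 0 <= a < H and 0 <= b < W]
        dE = beta * sum((cand - v) ** 2 - (out[i, j] - v) ** 2 for v in nb)
        if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
            out[i, j] = cand
    return out
```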

  14. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality. Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  15. Monte Carlo SURE-based parameter selection for parallel magnetic resonance imaging reconstruction.

    Science.gov (United States)

    Weller, Daniel S; Ramani, Sathish; Nielsen, Jon-Fredrik; Fessler, Jeffrey A

    2014-05-01

    Regularizing parallel magnetic resonance imaging (MRI) reconstruction significantly improves image quality but requires tuning parameter selection. We propose a Monte Carlo method for automatic parameter selection based on Stein's unbiased risk estimate that minimizes the multichannel k-space mean squared error (MSE). We automatically tune parameters for image reconstruction methods that preserve the undersampled acquired data, which cannot be accomplished using existing techniques. We derive a weighted MSE criterion appropriate for data-preserving regularized parallel imaging reconstruction and the corresponding weighted Stein's unbiased risk estimate. We describe a Monte Carlo approximation of the weighted Stein's unbiased risk estimate that uses two evaluations of the reconstruction method per candidate parameter value. We reconstruct images using the denoising sparse images from GRAPPA using the nullspace method (DESIGN) and L1 iterative self-consistent parallel imaging (L1-SPIRiT). We validate Monte Carlo Stein's unbiased risk estimate against the weighted MSE. We select the regularization parameter using these methods for various noise levels and undersampling factors and compare the results to those using MSE-optimal parameters. Our method selects nearly MSE-optimal regularization parameters for both DESIGN and L1-SPIRiT over a range of noise levels and undersampling factors. The proposed method automatically provides nearly MSE-optimal choices of regularization parameters for data-preserving nonlinear parallel MRI reconstruction methods. Copyright © 2013 Wiley Periodicals, Inc.
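    The Monte Carlo divergence trick at the heart of this selection rule fits in a few lines: perturb the data with one random probe and difference the reconstructions. The reconstruction function and noise level are assumed inputs, and the expression shown is the standard unweighted SURE, not the paper's weighted variant.

```python
import numpy as np

def mc_sure(y, recon, lam, sigma, seed=0):
    """Monte Carlo SURE: estimate of the MSE risk of recon(y, lam)."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(y.shape)
    eps = 1e-3 * max(y.std(), 1e-12)
    f = recon(y, lam)
    div = np.sum(b * (recon(y + eps * b, lam) - f)) / eps   # divergence probe
    n = y.size
    return np.mean((f - y) ** 2) - sigma ** 2 + 2 * sigma ** 2 * div / n

# pick the lambda minimizing the estimated risk for a toy linear shrinker
recon = lambda y, lam: y / (1.0 + lam)
y = np.random.randn(1000)
best_risk, best_lam = min((mc_sure(y, recon, l, sigma=1.0), l)
                          for l in (0.1, 0.5, 1.0, 2.0))
```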

  16. 3D Data Denoising via Nonlocal Means Filter by Using Parallel GPU Strategies

    Science.gov (United States)

    Cuomo, Salvatore; De Michele, Pasquale; Piccialli, Francesco

    2014-01-01

    The Nonlocal Means (NLM) algorithm is widely considered a state-of-the-art denoising filter in many research fields. Its high computational complexity has led researchers to develop parallel programming approaches and to use massively parallel architectures such as GPUs. In recent years, GPU devices have made it possible to achieve reasonable running times by filtering 3D datasets slice-by-slice with a 2D NLM algorithm. In our approach we design and implement a fully 3D NonLocal Means parallel approach, adopting different algorithm mapping strategies on the GPU architecture and a multi-GPU framework, in order to demonstrate its high applicability and scalability. The experimental results we obtained encourage the usability of our approach in a large spectrum of applicative scenarios such as magnetic resonance imaging (MRI) or video sequence denoising. PMID:25045397

  17. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem directly. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.

  18. A novel SAR image precise-matching method based on SIFT algorithm

    Science.gov (United States)

    Yan, Wenwen; Li, Bin; Yang, Dekun; Tian, Jinwen; Yu, Qiong

    2013-10-01

    SAR images exhibit relatively large geometric distortion and contain a great deal of speckle noise, so much research has been done to find a good method for SAR image matching. SIFT (Scale Invariant Feature Transform) has been proved to be a good algorithm for SAR image matching: this operator can cope with matching problems such as rotation, affine distortion and noise. In this paper, firstly, in the preprocessing step, we use BM3D to denoise the image, which performs well compared with other denoising methods. Then, instead of the traditional SIFT-RANSAC method, SIFT-TC is used to complete the image matching. This method is shown to have advantages in matching efficiency, speed and robustness.

  19. Denoising solar radiation data using coiflet wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia); Hasan, Mohammad Khatim, E-mail: khatim@ftsm.ukm.my [Jabatan Komputeran Industri, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Sulaiman, Jumat, E-mail: jumat@ums.edu.my [Program Matematik dengan Ekonomi, Universiti Malaysia Sabah, Beg Berkunci 2073, 88999 Kota Kinabalu, Sabah (Malaysia); Ismail, Mohd Tahir [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM Minden, Penang (Malaysia)

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained either from experiments or from data collection through observations. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring or collecting apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate over time, there exist unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with four vanishing moments is utilized for our purpose. From the numerical results it can be seen clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.

  20. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    Science.gov (United States)

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimal selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing with real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
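    The selected recipe translates almost directly into PyWavelets. Here a spline biorthogonal wavelet stands in for the spline-based DDWT, and the subband threshold rule is a common default rather than the paper's exact one.

```python
import numpy as np
import pywt

def modified_shrinkage(img, wavelet='bior3.3', levels=3):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]
    for idx, bands in enumerate(coeffs[1:]):          # coarse -> fine details
        if idx == levels - 1:                         # level-1 (finest): zero out
            out.append(tuple(np.zeros_like(b) for b in bands))
        else:                                         # subband-dependent threshold
            out.append(tuple(pywt.threshold(
                b, (np.median(np.abs(b)) / 0.6745) * np.sqrt(2 * np.log(b.size)),
                'soft') for b in bands))
    return pywt.waverec2(out, wavelet)

den = modified_shrinkage(np.random.rand(256, 256))
```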

  1. Computer-assisted counting of retinal cells by automatic segmentation after TV denoising.

    Science.gov (United States)

    Bredies, Kristian; Wagner, Marcus; Schubert, Christian; Ahnelt, Peter

    2013-10-20

    Quantitative evaluation of mosaics of photoreceptors and neurons is essential in studies on development, aging and degeneration of the retina. Manual counting of samples is a time consuming procedure, while attempts at automation are subject to various restrictions arising from biological and preparation variability, leading to both over- and underestimation of cell numbers. Here we present an adaptive algorithm to overcome many of these problems. Digital micrographs were obtained from cone photoreceptor mosaics visualized by anti-opsin immunocytochemistry in retinal wholemounts from a variety of mammalian species including primates. Segmentation of photoreceptors (from background, debris, blood vessels, other cell types) was performed by a procedure based on Rudin-Osher-Fatemi total variation (TV) denoising. Once 3 parameters are manually adjusted based on a sample, similarly structured images can be batch processed. The module is implemented in MATLAB and fully documented online. The object recognition procedure was tested on samples with a typical range of signal and background variations. We obtained results with error ratios of less than 10% in 16 of 18 samples and a mean error of less than 6% compared to manual counts. The presented method provides a traceable module for automated acquisition of retinal cell density data. Remaining errors, including the addition of background items and the splitting or merging of objects, might be further reduced by the introduction of additional parameters. The module may be integrated into extended environments with features such as 3D acquisition and recognition.
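    A condensed version of the counting pipeline, using scikit-image's Chambolle solver for the Rudin-Osher-Fatemi model; the threshold and minimum area below play the role of the manually adjusted parameters and are illustrative.

```python
import numpy as np
from skimage import measure
from skimage.restoration import denoise_tv_chambolle

def count_cells(img, weight=0.1, thresh=0.5, min_area=20):
    smooth = denoise_tv_chambolle(img, weight=weight)   # ROF TV denoising
    labels = measure.label(smooth > thresh)             # segment bright objects
    return sum(1 for r in measure.regionprops(labels) if r.area >= min_area)

n = count_cells(np.random.rand(256, 256))
```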

  2. An Approach to Fault Diagnosis for Gearbox Based on Image Processing

    Directory of Open Access Journals (Sweden)

    Yang Wang

    2016-01-01

    Full Text Available The gearbox is one of the most important parts of mechanical equipment and plays a significant role in many industrial applications. Fault diagnosis of rotating machinery has attracted attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. In recent years, fault diagnosis has developed in the direction of multidisciplinary integration. This work addresses a fault diagnosis method for a gearbox based on image processing, which overcomes the limitations of manual feature selection. Differing from analysis methods in a one-dimensional space, computing methods from the field of image processing in a 2-dimensional space are applied to accomplish automatic feature extraction and fault diagnosis of a gearbox. The image-processing-based diagnostic flow consists of the following steps: first, the vibration signal, after noise reduction by wavelet denoising and signal demodulation by the Hilbert transform, is transformed into an image by bispectrum analysis. Then, speeded-up robust features (SURF) is applied to automatically extract the image feature points of the bispectrum contour map, and the feature dimension is reduced by principal component analysis (PCA). Finally, an extreme learning machine (ELM) is introduced to identify the fault types of the gearbox. From the experimental results, the proposed method appears to be able to accurately diagnose and identify different types of faults of the gearbox.
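    The final classification stage, an extreme learning machine, is just a random hidden layer followed by a least-squares readout. The sketch below assumes the feature extraction (bispectrum, SURF, PCA) is already done; the hidden size and activation are illustrative defaults.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=200, seed=0):
    """X: (n, d) feature vectors; Y: (n, k) one-hot fault labels."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                              # random hidden layer
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)        # analytic output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

X = np.random.rand(300, 64)
Y = np.eye(4)[np.random.randint(0, 4, 300)]             # 4 toy fault classes
pred = elm_predict(X, elm_fit(X, Y))
```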

  3. Enhancement of low light level images using color-plus-mono dual camera.

    Science.gov (United States)

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving imaging quality in low-light shooting is one of the users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light-level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and a mono image pair of the same scene, can be useful for improving the quality of low-light-level images. However, an incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving image quality during low-light shooting.

  4. Independent component analysis and decision trees for ECG holter recording de-noising.

    Directory of Open Access Journals (Sweden)

    Jakub Kuzilek

    Full Text Available We have developed a method focusing on ECG signal de-noising using independent component analysis (ICA). This approach combines JADE source separation and a binary decision tree for identification and subsequent removal of ECG noise. To test the efficiency of this method, a wavelet-based de-noising method was used as a standard filtering reference. Freely available data from the PhysioNet medical data storage were evaluated. The evaluation criterion was the root mean square error (RMSE) between the original ECG and the filtered data contaminated with artificial noise. The proposed algorithm achieved comparable results for standard noises (power line interference, baseline wander, EMG), but noticeably better results were achieved when an uncommon noise (electrode cable movement artefact) was compared.
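
    A minimal sketch of the separation-and-removal idea, with scikit-learn's FastICA standing in for JADE and a simple kurtosis rule standing in for the binary decision tree; signals and thresholds are synthetic assumptions:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 5000)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63           # crude spiky ECG-like source
mains = 0.5 * np.sin(2 * np.pi * 50 * t)          # power-line interference
drift = 0.8 * np.sin(2 * np.pi * 0.3 * t)         # baseline wander
S = np.c_[ecg, mains, drift]
X = S @ rng.normal(size=(3, 3))                   # mixed multichannel recording

ica = FastICA(n_components=3, random_state=0)
est = ica.fit_transform(X)

# Heuristic stand-in for the decision tree: the ECG component is strongly
# super-Gaussian (high kurtosis); treat low-kurtosis components as noise.
keep = kurtosis(est, axis=0) > 5
est_clean = est * keep                            # zero out noise components
X_clean = ica.inverse_transform(est_clean)        # mix back to channel space
```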

  5. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate this approach. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang's EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time-consuming. Thus, in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, and texture analysis.

  6. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Du, Hongchu

    2015-01-01

    Noise reduction of micrographs is often an essential task in high-resolution (scanning) transmission electron microscopy (HR(S)TEM), either for higher visual quality or for more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviations at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on the widely used low-pass filter and Wiener filter, which can efficiently reduce noise without noticeable artifacts, even in HR(S)TEM micrographs with varying background contrast and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam-sensitive samples, in situ analyses, and atomic-resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam-sensitive samples, in situ analyses, and atomic-resolution EFTEM.
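
    A simplified sketch of combining a low-pass background estimate with Wiener filtering of the detail part, assuming SciPy; this recombination is an illustrative reading of the approach, not the authors' exact algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

def denoise_hrtem(img, bg_sigma=15.0, win=5):
    """Split the micrograph into a slowly varying background (low-pass)
    and a lattice/detail part, Wiener-filter only the detail part, and
    recombine.  This keeps background variation while suppressing noise."""
    background = gaussian_filter(img, bg_sigma)
    detail = img - background
    return background + wiener(detail, mysize=win)

# Toy HRTEM-like frame: a periodic lattice on a sloped background plus noise.
x, y = np.mgrid[0:256, 0:256]
lattice = 0.3 * np.cos(2 * np.pi * x / 8) * np.cos(2 * np.pi * y / 8)
frame = lattice + 0.002 * x + np.random.default_rng(4).normal(0, 0.2, x.shape)
clean = denoise_hrtem(frame)
```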

  7. Standardized processing of MALDI imaging raw data for enhancement of weak analyte signals in mouse models of gastric cancer and Alzheimer's disease.

    Science.gov (United States)

    Schwartz, Matthias; Meyer, Björn; Wirnitzer, Bernhard; Hopf, Carsten

    2015-03-01

    Conventional mass spectrometry image preprocessing methods used for denoising, such as Savitzky-Golay smoothing or discrete wavelet transformation, typically remove not only noise but also weak signals. Recently, memory-efficient principal component analysis (PCA) in conjunction with random projections (RP) has been proposed for reversible compression and analysis of large mass spectrometry imaging datasets. It considers single-pixel spectra in their local context and consequently offers the prospect of using information from the spectra of adjacent pixels for denoising or signal enhancement. However, little systematic analysis of key RP-PCA parameters has been reported so far, and the utility and validity of this method for context-dependent enhancement of known medically or pharmacologically relevant weak analyte signals in linear-mode matrix-assisted laser desorption/ionization (MALDI) mass spectra has not yet been explored. Here, we investigate MALDI imaging datasets from mouse models of Alzheimer's disease and gastric cancer to systematically assess the importance of selecting the right number of random projections k and of principal components (PCs) L for reconstructing reproducibly denoised images after compression. We provide detailed quantitative data for comparison of RP-PCA denoising with Savitzky-Golay and wavelet-based denoising in these mouse models as a resource for the mass spectrometry imaging community. Most importantly, we demonstrate that RP-PCA preprocessing can enhance signals of low-intensity amyloid-β peptide isoforms such as Aβ1-26, even in sparsely distributed Alzheimer's β-amyloid plaques, and that it enables enhanced imaging of multiply acetylated histone H4 isoforms in response to pharmacological histone deacetylase inhibition in vivo. We conclude that RP-PCA denoising may be a useful preprocessing step in biomarker discovery workflows.
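
    A sketch of the RP-PCA compression/denoising chain on a pixels-by-channels matrix, assuming scikit-learn; the sizes, k, and L are toy values, and mapping back to the original channel space via a pseudo-inverse is an approximation:

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_pixels, n_channels = 2000, 5000           # pixels x m/z bins (toy sizes)
low_rank = rng.normal(size=(n_pixels, 10)) @ rng.normal(size=(10, n_channels))
X = low_rank + rng.normal(0, 0.5, size=(n_pixels, n_channels))  # noisy spectra

k, L = 300, 10                               # random projections, then PCs
rp = GaussianRandomProjection(n_components=k, random_state=0).fit(X)
Xk = rp.transform(X)                         # memory-efficient compressed data

pca = PCA(n_components=L).fit(Xk)
Xk_den = pca.inverse_transform(pca.transform(Xk))   # keep only L PCs

# Approximate mapping back to the original m/z space via the RP pseudo-inverse.
C = rp.components_                           # shape (k, n_channels)
X_den = Xk_den @ np.linalg.pinv(C.T)
```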

  8. Microprocessor based image processing system

    International Nuclear Information System (INIS)

    Mirza, M.I.; Siddiqui, M.N.; Rangoonwala, A.

    1987-01-01

    Rapid developments in the production of integrated circuits and the introduction of sophisticated 8-, 16- and now 32-bit microprocessor-based computers have set new trends in computer applications. Nowadays, users can, with a much smaller investment, make optimal use of smaller systems custom-tailored to their requirements. During the past decade there have been great advancements in the field of computer graphics, and consequently 'image processing' has emerged as a separate, independent field. Image processing is being used in a number of disciplines. In the medical sciences, it is used to construct pseudo-color images from computer-aided tomography (CAT) or positron emission tomography (PET) scanners. Art, advertising and publishing people use pseudo-colors in pursuit of more effective graphics. Structural engineers use image processing to examine weld X-rays in a search for imperfections. Photographers use image processing for various enhancements which are difficult to achieve in a conventional darkroom. (author)

  9. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capture devices. Achieving further increases in resolution raises numerous challenges. As pixel size shrinks, the amount of captured light decreases, leading to an increased noise level. Moreover, the reduced pixel size makes lens imperfections more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels also increase at higher frame rates. To reduce the complexity and the price of the camera, a single sensor captures all three colors by relying on a color filter array. In order to obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than in the lower-resolution case due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly performing the reduction of all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. To reduce possible flicker, we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  10. Metadata for Content-Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Adrian Sterca

    2010-12-01

    Full Text Available This paper presents an image retrieval technique that combines content-based image retrieval with pre-computed metadata-based image retrieval. The resulting system has the advantages of both approaches: the speed/efficiency of metadata-based image retrieval and the accuracy/power of content-based image retrieval.

  11. Medical Image Fusion Algorithm Based on Nonlinear Approximation of Contourlet Transform and Regional Features

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2017-01-01

    Full Text Available Considering the pros and cons of the contourlet transform, as well as the characteristics of multimodality medical imaging, here we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also elaborated to fuse medical images. The results strongly suggest that the proposed algorithm can improve the visual effect and quality of medical image fusion, as well as support image denoising and enhancement.

  12. Wavelength conversion based spectral imaging

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin

    There has been a strong, application-driven development of Si-based cameras and spectrometers for imaging and spectral analysis of light in the visible and near-infrared spectral range. This has resulted in very efficient devices with high quantum efficiency, good signal-to-noise ratio and high resolution for this spectral region. Today, an increasing number of applications exist outside the spectral region covered by Si-based devices, e.g. within cleantech, medical or food imaging. We present a technology based on wavelength conversion which will extend the spectral coverage of state-of-the-art visible or near-infrared cameras and spectrometers to include other spectral regions of interest.

  13. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    Directory of Open Access Journals (Sweden)

    Li Li

    2014-01-01

    Full Text Available Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images that provides a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image retains its features well after scaling. The flippable-pixel evaluation mechanism ensures that pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  14. A new pixels flipping method for huge watermarking capacity of the invoice font image.

    Science.gov (United States)

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Xu, Qishuai; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two-color printing, so an invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels must be flipped. We propose a new pixel-flipping method for invoice images that provides a large watermarking capacity. The method includes a novel interpolation method for binary images, a flippable-pixel evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image retains its features well after scaling. The flippable-pixel evaluation mechanism ensures that pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity.

  15. Edge-based correlation image registration for multispectral imaging

    Science.gov (United States)

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be derived by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase-correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
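
    A minimal sketch of the edge-then-phase-correlate idea, assuming scikit-image's Sobel filter and phase correlation; the two "bands" are simulated by shifting and rescaling one image:

```python
import numpy as np
from skimage.data import camera
from skimage.filters import sobel
from skimage.registration import phase_cross_correlation

# Two "spectral bands" simulated by shifting one image and changing its gain;
# edge filtering makes the correlation robust to such intensity differences.
band_a = camera().astype(float)
band_b = 0.6 * np.roll(np.roll(band_a, 12, axis=0), -7, axis=1)

shift, error, _ = phase_cross_correlation(sobel(band_a), sobel(band_b))
print("estimated (row, col) shift:", shift)   # close to (-12, 7)
```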

  16. Study of Denoising in TEOAE Signals Using an Appropriate Mother Wavelet Function

    Directory of Open Access Journals (Sweden)

    Habib Alizadeh Dizaji

    2007-06-01

    Full Text Available Background and Aim: Matching a mother wavelet to a class of signals can be of interest in signal analysis and denoising based on wavelet multiresolution analysis and decomposition. As transient evoked otoacoustic emissions (TEOAEs) are contaminated with noise, the aim of this work was to provide a quantitative approach to the problem of matching a mother wavelet to TEOAE signals by using tuning curves, and to use it for analysis and denoising of TEOAE signals. An approximated mother wavelet for TEOAE signals was calculated using an algorithm for designing a wavelet to match a specified signal. Materials and Methods: In this paper a tuning curve is used as a template for designing a mother wavelet that has maximum matching to the tuning curve. The mother wavelet matching was performed on the tuning curve's spectrum magnitude and phase independently of one another. The scaling function was calculated from the matched mother wavelet, and by using these functions, lowpass and highpass filters were designed for a filter bank for otoacoustic emission signal analysis and synthesis. After signal analysis, denoising was performed by time-windowing the signal's time-frequency components. Results: The analysis indicated greater signal reconstruction improvement in comparison with the coiflet mother wavelet, and by using the proposed denoising algorithm it is possible to enhance the signal-to-noise ratio up to dB. Conclusion: The wavelet generated from this algorithm was remarkably similar to the biorthogonal wavelets. Therefore, by matching a biorthogonal wavelet to the tuning curve and using wavelet packet analysis, a high-resolution time-frequency analysis of otoacoustic emission signals is possible.

  17. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Science.gov (United States)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
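
    A toy sketch of the masking-operator idea: with a fixed DCT dictionary standing in for the learned double-sparsity dictionary, denoising and interpolation reduce to a single sparse-recovery problem over the observed rows only (scikit-learn's OMP is assumed):

```python
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
n = 128
D = dct(np.eye(n), axis=0, norm="ortho")      # fixed DCT dictionary (stand-in)

# Sparse synthetic trace: a few DCT atoms, plus noise, with ~40% samples missing.
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(0, 2, 5)
y = D @ x_true + 0.05 * rng.normal(size=n)
mask = rng.random(n) > 0.4                    # True = observed sample

# The masking operator simply selects dictionary rows at observed positions,
# so denoising and interpolation become one sparse-recovery problem.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D[mask], y[mask])
y_recovered = D @ omp.coef_                   # full, denoised + interpolated trace
```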

  18. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Full Text Available Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With breakthroughs in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and the deep convolutional neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods properly deal with the noise that exists in practical situations, and the corresponding denoising methods require extensive professional experience. Accordingly, rethinking fault diagnosis methods based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE), trained in a greedy layer-wise fashion, is applied both to denoise random noise in the raw signals and to represent fault features for fault pattern diagnosis of both rolling bearing and gearbox faults. Finally, the experimental validation demonstrates that the proposed method has better diagnosis accuracy than DBN, particularly in the presence of noise, outperforming it by approximately 7% in fault diagnosis accuracy.
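
    A minimal single denoising autoencoder in PyTorch; the paper stacks several of these and trains them greedily layer-wise, so this is only the core corrupt-and-reconstruct step, with hypothetical sizes and noise level:

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in=256, n_hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
x = torch.sin(torch.linspace(0, 50, 256)).repeat(512, 1)   # toy vibration frames
model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    x_noisy = x + 0.3 * torch.randn_like(x)    # corrupt input, reconstruct clean
    loss = loss_fn(model(x_noisy), x)
    opt.zero_grad()
    loss.backward()
    opt.step()

features = model.encoder(x)  # denoised features, usable by a downstream classifier
```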

  19. Robust x-ray image segmentation by spectral clustering and active shape model.

    Science.gov (United States)

    Wu, Jing; Mahfouz, Mohamed R

    2016-07-01

    Extraction of bone contours from x-ray radiographs plays an important role in joint space width assessment, preoperative planning, and kinematics analysis. We present a robust segmentation method to accurately extract the distal femur and proximal tibia in knee radiographs of varying image quality. A spectral clustering method based on the eigensolution of an affinity matrix is utilized for x-ray image denoising. An active shape model-based segmentation method is employed for robust and accurate segmentation of the denoised x-ray images. The performance of the proposed method is evaluated with x-ray images from the public-use Osteoarthritis Initiative dataset, achieving a root mean square error of [Formula: see text] for the femur and [Formula: see text] for the tibia. The results demonstrate that this method outperforms previous segmentation methods in capturing anatomical shape variations, accounting for image quality differences, and guiding accurate segmentation.

  20. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, a multithreaded Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes mutual information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalized correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
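
    A single-threaded rigid-ICP sketch using SciPy's KD-tree and an SVD-based update; the paper's version is multithreaded and drives an affine transform, so this is a simplified stand-in:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=30):
    """Align point cloud `src` (N,3) to `dst` (M,3) with rotation + translation."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t
        _, idx = cKDTree(dst).query(cur)          # nearest-neighbour matches
        matched = dst[idx]
        cs, cd = cur.mean(0), matched.mean(0)
        H = (cur - cs).T @ (matched - cd)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:             # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        # Compose the incremental update with the running transform.
        R, t = R_step @ R, R_step @ (t - cs) + cd
    return R, t

rng = np.random.default_rng(7)
pts = rng.normal(size=(500, 3))
angle = np.pi / 8
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
R, t = icp_rigid(pts, pts @ R_true.T + np.array([0.3, -0.1, 0.2]))
```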

  1. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    Science.gov (United States)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.

  2. Boundary denoising for open surface meshes

    Science.gov (United States)

    Lee, Wei Zhe; Lim, Wee Keong; Soo, Wooi King

    2013-04-01

    Recently, applications of open surfaces in 3D have emerged as an interesting research topic due to the popularity of range cameras such as the Microsoft Kinect. However, surface meshes representing such open surfaces are often corrupted with noise, especially at the boundary. Such deformities need to be treated to facilitate further applications such as texture mapping and zippering of multiple open surface meshes. Conventional methods perform denoising by removing components with high frequencies, thus smoothing the boundaries. However, this may result in loss of information, as not all high-frequency transitions at the boundaries correspond to noise. To overcome this shortcoming, we propose a combination of local information and geometric features to single out the noisy or unusual vertices at the mesh boundaries. The local shape of selected mesh boundary regions, characterized by the mean curvature value, is compared with that of the neighbouring interior region. The neighbouring interior region is chosen such that it is the closest to the corresponding boundary region, while the curvature evaluation is independent of the boundary. Smoothing is done via Laplacian smoothing with our modified weights to reduce boundary shrinkage. The algorithm is evaluated on noisy meshes generated from clean model meshes under controlled conditions, with the Hausdorff distance used as the measurement between meshes. We show that our method produces better results than conventional smoothing of the whole boundary loop.

  3. Upconversion based MIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Junaid, Saher; Tidemand-Lichtenberg, Peter; Pedersen, Christian

    2017-01-01

    Mid-infrared (MIR) hyperspectral imaging has great potential as a tool for medical diagnostics, featuring a combination of imaging and spectroscopy. In hyperspectral imaging, the images of the (biomedical) samples contain both spectral and spatial information.

  4. A Visual Attention Model Based Image Fusion

    OpenAIRE

    Rishabh Gupta; M.R.Vimala Devi; M. Devi

    2013-01-01

    To develop an efficient image fusion algorithm based on a visual attention model for images with distinct objects. Image fusion is a process of combining complementary information from multiple images of the same scene into one image, so that the resultant image contains a more accurate description of the scene than any of the individual source images. The two basic fusion techniques are pixel-level and region-level fusion. Pixel-level fusion deals with operations on each and every pixel separately ...

  5. Single image super-resolution based on image patch classification

    Science.gov (United States)

    Xia, Ping; Yan, Hua; Li, Jing; Sun, Jiande

    2017-06-01

    This paper proposes a single-image super-resolution algorithm based on image patch classification and sparse representation, where gradient information is used to classify image patches into three different classes in order to reflect the differences between the types of image patches. Compared with other classification algorithms, the gradient-information-based algorithm is simpler and more effective. In this paper, each class is learned to obtain a corresponding sub-dictionary. A high-resolution image patch can then be reconstructed from the sub-dictionary and the sparse representation coefficients of the corresponding class of image patches. The experimental results demonstrate that the proposed algorithm performs better than the compared algorithms.
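
    A hedged sketch of gradient-based three-way patch classification (smooth / texture / edge); the thresholds and the structure-tensor coherence rule are illustrative assumptions, and the per-class sub-dictionary learning is omitted:

```python
import numpy as np

def classify_patches(img, patch=8, t_low=0.02, t_coh=0.8):
    """Label each patch 0=smooth, 1=texture, 2=edge using gradient statistics:
    mean gradient magnitude separates smooth patches, and the dominance of a
    single gradient orientation separates edges from texture."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    h, w = img.shape
    labels = np.zeros((h // patch, w // patch), dtype=int)
    for i in range(h // patch):
        for j in range(w // patch):
            sl = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            if mag[sl].mean() < t_low:
                labels[i, j] = 0                  # smooth
            else:
                # Coherence of the structure tensor: near 1 for a clean edge.
                jxx, jyy = (gx[sl] ** 2).sum(), (gy[sl] ** 2).sum()
                jxy = (gx[sl] * gy[sl]).sum()
                tr = jxx + jyy
                det = jxx * jyy - jxy ** 2
                coherence = np.sqrt(max(tr * tr - 4 * det, 0.0)) / (tr + 1e-12)
                labels[i, j] = 2 if coherence > t_coh else 1
    return labels

labels = classify_patches(np.random.default_rng(15).random((64, 64)))
```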

  6. Temporal and Spatial Denoising of Depth Maps

    Directory of Open Access Journals (Sweden)

    Bor-Shing Lin

    2015-07-01

    Full Text Available This work presents a procedure for refining depth maps acquired using RGB-D (depth) cameras. With numerous new structured-light RGB-D cameras, acquiring high-resolution depth maps has become easy. However, there are problems such as undesired occlusions, inaccurate depth values, and temporal variation of pixel values when using these cameras. In this paper, a method based on exemplar-based inpainting is proposed to remove artefacts in depth maps obtained with RGB-D cameras. Exemplar-based inpainting has been used to repair object-removed images, and the concept underlying it is similar to that underlying the procedure for padding the occlusions in depth data obtained with RGB-D cameras. Therefore, our method enhances and modifies the inpainting method for refining the quality of RGB-D depth data. The proposed method was tested on the Tsukuba Stereo Dataset, which contains a 3D video with ground-truth depth maps, occlusion maps, and RGB images; the peak signal-to-noise ratio and the computation time were used as evaluation metrics. Moreover, a set of self-recorded RGB-D depth maps and their refined versions are presented to show the effectiveness of the proposed method.
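
    A lightweight sketch of hole filling in a depth map, with OpenCV's Telea inpainting standing in for the exemplar-based inpainting used above; cv2.inpaint requires 8-bit input, hence the scaling round-trip:

```python
import numpy as np
import cv2

rng = np.random.default_rng(8)
depth = np.fromfunction(lambda r, c: 1000 + 2 * r + c, (240, 320))  # toy ramp (mm)
depth += rng.normal(0, 3, depth.shape)
holes = rng.random(depth.shape) < 0.05      # simulate occlusion dropouts
depth[holes] = 0

# cv2.inpaint needs 8-bit input, so work on a scaled copy and map back.
valid = depth > 0
lo, hi = depth[valid].min(), depth[valid].max()
d8 = np.zeros(depth.shape, dtype=np.uint8)
d8[valid] = np.clip(255 * (depth[valid] - lo) / (hi - lo), 0, 255).astype(np.uint8)
mask = (~valid).astype(np.uint8)

filled8 = cv2.inpaint(d8, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
filled = lo + filled8.astype(float) * (hi - lo) / 255.0   # back to depth units
```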

  7. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorization, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.

  8. GPR Signal Denoising and Target Extraction With the CEEMD Method

    KAUST Repository

    Li, Jing

    2015-04-17

    In this letter, we apply a time and frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) method in ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components, with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode-mixing problem in the EMD method and improve the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the difference between the basic theory of EMD, EEMD, and CEEMD. Then, we compare the time and frequency analysis with Hilbert-Huang transform to test the results of different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.
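
    A short sketch of CEEMD-style denoising, assuming the PyEMD package (pip name EMD-signal) and its callable CEEMDAN class; discarding only the first IMF is an illustrative rule, not the letter's exact procedure:

```python
import numpy as np
from PyEMD import CEEMDAN   # assumed API of the PyEMD ("EMD-signal") package

t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.4 * np.random.default_rng(9).normal(size=t.size)

# CEEMDAN averages EMD modes over many noise-assisted realizations,
# which mitigates the mode mixing seen with plain EMD.
ceemdan = CEEMDAN(trials=50)
imfs = ceemdan(noisy)             # rows are IMFs, ordered high to low frequency

denoised = imfs[1:].sum(axis=0)   # drop the first IMF, where noise concentrates
```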

  9. Fringe pattern denoising using coherence-enhancing diffusion.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian; Gao, Wenjing; Lin, Feng; Seah, Hock Soon

    2009-04-15

    Electronic speckle pattern interferometry is one of the methods for measuring displacement on object surfaces, in which fringe patterns need to be evaluated. Noise is one of the key problems affecting further processing and reducing measurement quality. We propose an application of coherence-enhancing diffusion to fringe-pattern denoising. It smoothes a fringe pattern along directions both parallel and perpendicular to the fringe orientation, with suitable diffusion speeds, to more effectively reduce noise and improve fringe-pattern quality. It generalizes the model of Tang et al. [Opt. Lett. 33, 2179 (2008)], which only smoothes a fringe pattern along the fringe orientation. Since our model diffuses a fringe pattern along an additional direction, it is able to denoise low-density fringes as well as improve denoising effectiveness for high-density fringes. Theoretical analysis as well as simulation and experimental verifications are presented.

  10. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    Science.gov (United States)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose computed tomography (CT) imaging is how best to maintain image quality. As image quality decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to their denoised, normal-dose counterparts. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with those of two other state-of-the-art methods in terms of the peak signal-to-noise ratio, root mean square error, and a structural similarity index.

  11. Quantitative accuracy of denoising techniques applied to dynamic 82Rb myocardial blood flow PET/CT scans

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Bouchelouche, Kirsten

    … This increases the radiation dose to the patient, reduces Sr/Rb generator lifetimes, and increases the risk of scanner saturation. The aim of this study was to evaluate the quantitative performance of several image denoising techniques which could ultimately be used to reduce administered activity. Methods: Fifty patients … [denoising could] be performed successfully for all scans. Quantitative performance was suboptimal when 5 or fewer factors were used for the Hotelling method, both in 2D and 3D. For 6 or more factors and for HYPR-LR, excellent quantitative accuracy was obtained when compared to non-denoised data (r2 = 0.996, slope = 1.013 for 2D Hotelling with 6 factors; r2 = 0.994, slope = 0.998 for 3D Hotelling with 6 factors; r2 = 0.995, slope = 0.996 for HYPR-LR). Conclusions: Both HYPR-LR and Hotelling denoising methods can be applied to 82Rb data with excellent quantitative accuracy. This enables an improvement in image quality or potentially …

  12. Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation.

    Science.gov (United States)

    Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; Krishnamurthi, Ganapathy

    2017-10-01

    The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.

  13. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    ... an image based on a feature-based attention model which mimics the viewer's attention. The curvelet transform in combination with colour descriptors is used to represent each significant region in an image. Experimental results are analysed and compared with the state-of-the-art region-based image retrieval technique.

  14. Non Local Spatial and Angular Matching: Enabling higher spatial resolution diffusion MRI datasets through adaptive denoising.

    Science.gov (United States)

    St-Jean, Samuel; Coupé, Pierrick; Descoteaux, Maxime

    2016-08-01

    Diffusion magnetic resonance imaging (MRI) datasets suffer from a low signal-to-noise ratio (SNR), especially at high b-values. Acquiring data at high b-values yields relevant information and is now of great interest for microstructural and connectomics studies. High noise levels bias the measurements due to the non-Gaussian nature of the noise, which in turn can lead to false and biased estimation of the diffusion parameters. Additionally, the use of in-plane acceleration techniques during acquisition leads to a spatially varying noise distribution, which depends on the parallel acceleration method implemented on the scanner. This paper proposes a novel diffusion MRI denoising technique that can be used on all existing data without adding to the scanning time. We first apply a statistical framework to convert both stationary and non-stationary Rician and noncentral chi distributed noise to Gaussian distributed noise, effectively removing the bias. We then introduce a spatially and angularly adaptive denoising technique, the Non Local Spatial and Angular Matching (NLSAM) algorithm. Each volume is first decomposed into small 4D overlapping patches, thus capturing the spatial and angular structure of the diffusion data, and a dictionary of atoms is learned on those patches. A local sparse decomposition is then found by bounding the reconstruction error with the local noise variance. We compare against three other state-of-the-art denoising methods and show quantitative local and connectivity results on a synthetic phantom and on an in-vivo high-resolution dataset. Overall, our method restores perceptual information, removes the noise bias in common diffusion metrics, restores the coherence of the extracted peaks, and improves the reproducibility of tractography on the synthetic dataset. On the 1.2 mm high-resolution in-vivo dataset, our denoising improves the visual quality of the data and reduces the number of spurious tracts when compared to the noisy acquisition.

  15. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Yuxing Li

    2017-12-01

    Full Text Available The sound signal of a ship obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of the ship, so research into denoising algorithms and their applications is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with a correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm using secondary VMD combined with a CC is proposed. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number used by VMD is set equal to the number obtained by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed, realizing the first denoising pass. Secondary denoising of the simulation signal is then accomplished by repeating the above steps of decomposition, screening and reconstruction, with the final denoising result determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and different VMD decomposition counts. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed denoising algorithm is applied to feature extraction and classification of SN signals, and can effectively improve the recognition rate of different kinds of ships.
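
    A decomposition-agnostic sketch of the CC screening-and-reconstruction pass; synthetic narrow-band components stand in for VMD modes, and the threshold is illustrative:

```python
import numpy as np

def screen_and_reconstruct(imfs, signal, cc_thresh=0.2):
    """Keep only the modes whose correlation coefficient (CC) with the
    original signal exceeds the threshold, then sum them (one denoising pass)."""
    ccs = np.array([np.corrcoef(m, signal)[0, 1] for m in imfs])
    return imfs[np.abs(ccs) >= cc_thresh].sum(axis=0), ccs

rng = np.random.default_rng(10)
t = np.linspace(0, 1, 2000)
modes = np.array([
    np.sin(2 * np.pi * 5 * t),             # genuine low-frequency mode
    0.8 * np.sin(2 * np.pi * 55 * t),      # genuine mid-frequency mode
    0.15 * rng.normal(size=t.size),        # noise-dominated modes: the noise
    0.15 * rng.normal(size=t.size),        # energy is spread thinly across
    0.15 * rng.normal(size=t.size),        # several modes, so each has a low CC
])
signal = modes.sum(axis=0)                 # the "measured" ship-radiated noise

denoised, ccs = screen_and_reconstruct(modes, signal)
print("per-mode CC with the signal:", np.round(ccs, 2))
```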

  16. Denoising and dimensionality reduction of genomic data

    Science.gov (United States)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed, inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes, whose expression levels are experimentally measured, can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, and likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical …

  17. Image Retrieval Based on Fractal Dictionary Parameters

    Directory of Open Access Journals (Sweden)

    Yuanyuan Sun

    2013-01-01

    Full Text Available Content-based image retrieval is a branch of computer vision. It is important for efficient management of a visual database. In most cases, image retrieval is based on image compression. In this paper, we use a fractal dictionary to encode images. Based on this technique, we propose a set of statistical indices for efficient image retrieval. Experimental results on a database of 416 texture images indicate that the proposed method provides a competitive retrieval rate, compared to the existing methods.

  18. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principles of quantum states, and a whole quantum image is divided into a series of sub-images. These sub-images are stored in a complete binary tree array constructed previously and then randomly transformed by one of the operations of the quantum random-phase gate, the quantum rotation gate, and the Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of the random-phase gate, the rotation angle, the binary sequence, and the orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  19. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    Science.gov (United States)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

    The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach; thus, improvements in bonding quality are important objectives. A higher-quality bond can only be achieved with improved measurement and testing capabilities. The traditional testing methods mainly include infrared testing, tensile testing, and strength testing, but using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments on three prebonding factors: the prebonding temperature, the positive pressure value, and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.

  20. Biometric Image Recognition Based on Optical Correlator

    Directory of Open Access Journals (Sweden)

    David Solus

    2017-01-01

    Full Text Available The aim of this paper is to design a biometric image recognition system able to recognize biometric images: the eye and the DNA marker. The input scenes are processed by user-friendly software created in the C# programming language and then compared with reference images stored in a database. In this system, a Cambridge optical correlator is used as an image comparator based on the similarity of images in the recognition phase.

  1. Wavelets in medical imaging

    International Nuclear Information System (INIS)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-01-01

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, the electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, are carried out. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the stationary wavelet transform (SWT).
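
    A minimal sketch of SWT denoising of an EEG-like trace, assuming PyWavelets; the wavelet, level, and universal threshold are illustrative choices:

```python
import numpy as np
import pywt

rng = np.random.default_rng(11)
n = 1024                                  # SWT needs a length divisible by 2**level
t = np.arange(n) / 256.0                  # ~4 s at 256 Hz
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)  # alpha + theta
noisy = eeg + 0.6 * rng.normal(size=n)

level = 3
coeffs = pywt.swt(noisy, "db4", level=level)          # list of (cA, cD) per level
sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745     # noise from finest details
thr = sigma * np.sqrt(2.0 * np.log(n))
coeffs = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
denoised = pywt.iswt(coeffs, "db4")
```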

  2. Wavelets in medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H. [Sharda University, SET, Department of Electronics and Communication, Knowledge Park 3rd, Gr. Noida (India); University of Kocaeli, Department of Mathematics, 41380 Kocaeli (Turkey); Istanbul Aydin University, Department of Computer Engineering, 34295 Istanbul (Turkey); Sharda University, SET, Department of Mathematics, 32-34 Knowledge Park 3rd, Greater Noida (India)

    2012-07-17

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, the electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, are carried out. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the stationary wavelet transform (SWT).

  3. Wavelet based image visibility enhancement of IR images

    Science.gov (United States)

    Jiang, Qin; Owechko, Yuri; Blanton, Brendan

    2016-05-01

    Enhancing the visibility of infrared images obtained in a degraded visibility environment is very important for many applications such as surveillance, visual navigation in bad weather, and helicopter landing in brownout conditions. In this paper, we present an IR image visibility enhancement system based on adaptively modifying the wavelet coefficients of the images. In our proposed system, input images are first filtered by a histogram-based dynamic range filter in order to remove sensor noise and convert the input images into 8-bit dynamic range for efficient processing and display. By utilizing a wavelet transformation, we modify the image intensity distribution and enhance image edges simultaneously. In the wavelet domain, low frequency wavelet coefficients contain original image intensity distribution while high frequency wavelet coefficients contain edge information for the original images. To modify the image intensity distribution, an adaptive histogram equalization technique is applied to the low frequency wavelet coefficients while to enhance image edges, an adaptive edge enhancement technique is applied to the high frequency wavelet coefficients. An inverse wavelet transformation is applied to the modified wavelet coefficients to obtain intensity images with enhanced visibility. Finally, a Gaussian filter is used to remove blocking artifacts introduced by the adaptive techniques. Since wavelet transformation uses down-sampling to obtain low frequency wavelet coefficients, histogram equalization of low-frequency coefficients is computationally more efficient than histogram equalization of the original images. We tested the proposed system with degraded IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective for enhancing the visibility of degraded IR images.
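
    A single-level sketch of this scheme, assuming PyWavelets and scikit-image: adaptive histogram equalization is applied to the approximation band and a fixed gain to the detail bands; the gain value and the toy frame are assumptions:

```python
import numpy as np
import pywt
from skimage.exposure import equalize_adapthist

def enhance_ir(img, wavelet="db2", edge_gain=1.5):
    """Single-level version of the scheme above: adaptive histogram
    equalization on the low-frequency band, gain on the high-frequency bands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    lo, hi = cA.min(), cA.max()
    cA_eq = equalize_adapthist((cA - lo) / (hi - lo + 1e-12))  # CLAHE in [0, 1]
    cA_eq = cA_eq * (hi - lo) + lo                             # back to coeff range
    details = tuple(edge_gain * d for d in (cH, cV, cD))
    return pywt.idwt2((cA_eq, details), wavelet)

# Toy low-contrast IR frame: a dim target on a hazy background.
x, y = np.mgrid[0:128, 0:128]
frame = 0.4 + 0.05 * np.exp(-((x - 80) ** 2 + (y - 50) ** 2) / 60.0)
frame += np.random.default_rng(12).normal(0, 0.01, frame.shape)
enhanced = enhance_ir(np.clip(frame, 0, 1))
```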

  4. Medical Image Tamper Detection Based on Passive Image Authentication.

    Science.gov (United States)

    Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa

    2017-12-01

    Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make a patient's history accessible to medical staff from anywhere. Therefore, integrity protection of medical images is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions in medical images. Structural texture information is obtained from the medical image by using the rotation-invariant local binary pattern (LBPROT) to make keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). Tampered regions are detected by matching the keypoints. The method improves on keypoint-based passive image authentication mechanisms (which do not detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions in medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.

  5. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis, an exploratory approach to analysing the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel-based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains and demonstrates the superiority of the kernel-based MAF method.

  6. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    Directory of Open Access Journals (Sweden)

    Muran Guo

    2017-05-01

    Full Text Available Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limited number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix caused by the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with higher accuracy. The effectiveness of the proposed method rests on the capability of MC to fill the holes in the virtual sensors and of the denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.

  7. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach.

    Directory of Open Access Journals (Sweden)

    Pasquale Borrelli

    Full Text Available In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.
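
    The central idea, denoising the real and imaginary components jointly rather than one at a time, can be sketched with scikit-image's multichannel non-local means as a stand-in for the authors' MNLM filter; the parameters are common defaults and assumptions here.

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def denoise_complex(mr_complex):
        # Stack real and imaginary parts as two "channels" so that patch
        # similarity is assessed on both components at once.
        stack = np.stack([mr_complex.real, mr_complex.imag], axis=-1)
        sigma = float(np.mean(estimate_sigma(stack, channel_axis=-1)))
        den = denoise_nl_means(stack, h=1.15 * sigma, sigma=sigma,
                               patch_size=5, patch_distance=6, channel_axis=-1)
        return den[..., 0] + 1j * den[..., 1]
    ```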

  8. Statistical x-ray computed tomography imaging from photon-starved measurements

    Science.gov (United States)

    Chang, Zhiqian; Zhang, Ruoqiao; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-03-01

    Dose reduction in clinical X-ray computed tomography (CT) causes low signal-to-noise ratio (SNR) in photon-sparse situations. Statistical iterative reconstruction algorithms have the advantage of retaining image quality while reducing input dosage, but they reach the limits of their practicality when significant portions of the sinogram approach photon starvation. Corruption by electronic noise leads to measured photon counts taking on negative values, posing a problem for the log() operation in the preprocessing of the data. In this paper, we propose two categories of projection correction methods: an adaptive denoising filter and Bayesian inference. The denoising filter is easy to implement and preserves local statistics, but it introduces correlation between channels and may affect image resolution. Bayesian inference is a point-wise estimation based on measurements and prior information. Both approaches help improve diagnostic image quality at dramatically reduced dosage.

  9. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents an efficient and portable implementation of a useful image segmentation technique based on a faster variant of the conventional connected-components algorithm, which we call parallel components. Many medical practitioners need image segmentation as a service for various purposes, and they expect the system to run quickly and securely; however, conventional segmentation algorithms, despite ongoing research, are often not fast enough. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single-address-space, distributed-memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The segmentation algorithm makes use of an efficient cluster process with a novel approach to parallel merging. Our experimental results are consistent with the theoretical analysis and show faster execution times for segmentation than the conventional method. Our test data are CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.

  10. Different source image fusion based on FPGA

    Science.gov (United States)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion uses technical means to make video obtained by different image sensors complement each other, yielding video that is rich in information and well suited to the human visual system. Infrared cameras have strong penetrating power in harsh environments such as smoke, fog, and low-light conditions, but their ability to capture image detail is poor and the result does not suit the human visual system; visible-light imaging provides detailed, high-resolution images that suit the visual system, but is easily affected by the external environment. The fusion algorithms involved in fusing infrared and visible video are complex and computationally demanding, occupy considerable memory, and place high demands on clock rates; most implementations are in software (C, C++, etc.), with few on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in software with MATLAB, and gray-level weighted-average fusion is implemented on a hardware platform. The resulting fused images effectively improve information acquisition by increasing the amount of information in the image.
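
    The gray-level weighted-average fusion itself is straightforward; a minimal NumPy sketch follows, with registration assumed already applied and the weight w_ir an illustrative choice.

    ```python
    import numpy as np

    def fuse_weighted(ir, vis, w_ir=0.5):
        # Both inputs: registered 8-bit grayscale images of identical size.
        fused = w_ir * ir.astype(np.float32) + (1.0 - w_ir) * vis.astype(np.float32)
        return np.clip(fused, 0, 255).astype(np.uint8)
    ```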

  11. Seismic data interpolation and denoising by learning a tensor tight frame

    International Nuclear Information System (INIS)

    Liu, Lina; Ma, Jianwei; Plonka, Gerlind

    2017-01-01

    Seismic data interpolation and denoising play a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it considerably more efficient. (paper)
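
    The hard-thresholding step of the first stage can be sketched generically as below; the coefficient array and the threshold lam are placeholders, and the KronTF dictionary update of the second stage is omitted.

    ```python
    import numpy as np

    def hard_threshold(coeffs, lam):
        # Keep frame coefficients whose magnitude exceeds lam; zero the rest.
        out = coeffs.copy()
        out[np.abs(out) < lam] = 0.0
        return out
    ```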

  12. Portfolio Value at Risk Estimate for Crude Oil Markets: A Multivariate Wavelet Denoising Approach

    Directory of Open Access Journals (Sweden)

    Kin Keung Lai

    2012-04-01

    Full Text Available In the increasingly globalized economy these days, the major crude oil markets worldwide are seeing a higher level of integration, which results in a higher level of dependency and transmission of risks among different markets. Thus the risk of the typical multi-asset crude oil portfolio is influenced by the dynamic correlation among different assets, which has both normal and transient behaviors. This paper proposes a novel multivariate wavelet denoising based approach for estimating Portfolio Value at Risk (PVaR). Multivariate wavelet analysis is introduced to analyze the multi-scale behaviors of the correlation among different markets and the portfolio volatility behavior in the higher dimensional time scale domain. The heterogeneous data and noise behavior are addressed in the proposed multi-scale denoising based PVaR estimation algorithm, which also incorporates mainstream time series models to address other well-known data features such as autocorrelation and volatility clustering. Empirical studies suggest that the proposed algorithm outperforms the benchmark Exponentially Weighted Moving Average (EWMA) and DCC-GARCH models in terms of conventional performance evaluation criteria for model reliability.

  13. Magnetic resonance imaging based functional imaging in paediatric oncology.

    Science.gov (United States)

    Manias, Karen A; Gill, Simrandip K; MacPherson, Lesley; Foster, Katharine; Oates, Adam; Peet, Andrew C

    2017-02-01

    Imaging is central to management of solid tumours in children. Conventional magnetic resonance imaging (MRI) is the standard imaging modality for tumours of the central nervous system (CNS) and limbs and is increasingly used in the abdomen. It provides excellent structural detail, but imparts limited information about tumour type, aggressiveness, metastatic potential or early treatment response. MRI based functional imaging techniques, such as magnetic resonance spectroscopy, diffusion and perfusion weighted imaging, probe tissue properties to provide clinically important information about metabolites, structure and blood flow. This review describes the role of and evidence behind these functional imaging techniques in paediatric oncology and implications for integrating them into routine clinical practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi; Pambudi, I. R.; Woran, M.; Naa, C. F.; Srigutomo, W. [Department of Physics, FMIPA, Institut Teknologi Bandung, Jl. Ganesha No. 10, Bandung 40132, Indonesia, supri@fi.itb.ac.id (Indonesia)]

    2015-04-16

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have grown rapidly with increasing hardware and microprocessor performance, and many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that yield 3-dimensional images or video are very interesting, but there are few applications in control systems. A stereo image carries pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  15. Patch-based models and algorithms for image processing: a review of the basic principles and methods, and their application in computed tomography.

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K

    2016-10-01

    Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when the greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been fully appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.

  16. Detail Enhancement for Infrared Images Based on Propagated Image Filter

    Directory of Open Access Journals (Sweden)

    Yishu Peng

    2016-01-01

    Full Text Available For displaying high-dynamic-range images acquired by thermal camera systems, 14-bit raw infrared data must be mapped to 8-bit gray values. This paper presents a new method for detail enhancement of infrared images that displays the image with satisfactory contrast and brightness, rich detail information, and no artifacts caused by the processing. We first adopt a propagated image filter to smooth the input image and separate it into a base layer and a detail layer. Then, we refine the base layer using modified histogram projection for compression, while adaptive weights derived from the layer decomposition serve as a strict gain control for the detail layer. The final display result is obtained by recombining the two modified layers. Experimental results on both cooled and uncooled infrared data verify that the proposed method outperforms methods based on log-power histogram modification and bilateral-filter-based detail enhancement in both detail enhancement and visual effect.
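
    A hedged sketch of the layer-decomposition idea follows, substituting a bilateral filter for the propagated image filter and plain histogram equalization for the modified histogram projection; both are stand-ins, and all parameters are illustrative.

    ```python
    import cv2
    import numpy as np

    def enhance_ir_detail(raw14, detail_gain=2.0):
        img = raw14.astype(np.float32)
        # Edge-preserving smoothing gives the base layer; the residual is detail.
        base = cv2.bilateralFilter(img, 9, 200.0, 9.0)
        detail = img - base

        # Compress the base layer to 8 bits (plain histogram equalization as a
        # stand-in for the modified histogram projection).
        base8 = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        base8 = cv2.equalizeHist(base8).astype(np.float32)

        # Rescale the detail layer to the 8-bit range and boost it.
        scale = 255.0 / (img.max() - img.min() + 1e-9)
        out = base8 + detail_gain * detail * scale
        return np.clip(out, 0, 255).astype(np.uint8)
    ```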

  17. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are frequently corrupted by noise, which complicates data postprocessing. To remove noise while preserving as much image detail as possible, this paper proposes an image filtering algorithm that combines the merits of the Shearlet transform and the particle swarm optimization (PSO) algorithm. First, the classical Shearlet transform decomposes the noisy image into subbands at multiple scales and orientations. Second, each subband is assigned a weighting factor, and a composite image is reconstructed from the weighted subbands via the classical inverse Shearlet transform. We then design a fast, rough method to evaluate the noise level of the new image; using this evaluation as the fitness function, PSO searches for the optimal weighting factors. After many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).

  18. Comparative study of wavelet denoising in myoelectric control applications.

    Science.gov (United States)

    Sharma, Tanu; Veer, Karan

    2016-01-01

    Here, wavelet analysis is investigated to improve the quality of the myoelectric signal before its use in prosthetic design. Effective surface electromyogram (SEMG) signals were estimated by first decomposing the obtained signal using the wavelet transform and then analysing the decomposed coefficients with threshold methods. With an appropriate choice of wavelet, interference noise in the SEMG signal can be reduced effectively. The most effective wavelet for SEMG denoising is chosen by calculating the root mean square value and signal power. The combined results of root mean square value and signal power show that wavelet db4 performs the best denoising among the wavelets considered. Furthermore, time-domain and frequency-domain methods were applied to investigate the effect of muscle-force contraction on the signal. It was found that, during sustained contractions, the mean frequency (MNF) and median frequency (MDF) increase as muscle force levels increase.
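
    A minimal sketch of db4 wavelet denoising of a 1D SEMG record with PyWavelets follows; the soft-thresholding rule and the universal threshold are common choices assumed here, not necessarily those of the study.

    ```python
    import numpy as np
    import pywt

    def denoise_semg(signal, wavelet="db4", level=4):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Noise scale estimated from the finest detail band (median rule).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        # Soft-threshold every detail band, keep the approximation untouched.
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)
    ```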

  19. Image-based stained glass.

    Science.gov (United States)

    Brooks, Stephen

    2006-01-01

    We present a method of restyling an image so that it approximates the visual appearance of a work of stained glass. To this end, we develop a novel approach which involves image warping, segmentation, querying, and colorization along with texture synthesis. In our method, a given input image is first segmented. Each segment is subsequently transformed to match real segments of stained glass queried from a database of image exemplars. By using real sources of stained glass, our method produces high quality results in this nascent area of nonphotorealistic rendering. The generation of the stained glass requires only modest amounts of user interaction. This interaction is facilitated with a unique region-merging tool.

  20. De-noising of GPS structural monitoring observation error using wavelet analysis

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-03-01

    Full Text Available In continuous monitoring of a structure's state properties, such as static and dynamic responses, using the Global Positioning System (GPS), there are unavoidable errors in the observation data. These GPS errors and measurement noises are a drawback in precise monitoring applications because they mask the signals of interest. The current study applies three widely used methods to mitigate sensor observation errors, all based on wavelet analysis: the principal component analysis method, the wavelet compression method, and the de-noising method. These methods are used to de-noise the GPS observation errors and to prove their performance using GPS measurements collected from the short-term monitoring system designed for the Mansoura Railway Bridge in Egypt. The results show that GPS errors can be removed effectively, while the full movement components of the structure can be extracted from the original signals using wavelet analysis.

  1. Automated Protein NMR Structure Determination Using Wavelet De-noised NOESY Spectra

    International Nuclear Information System (INIS)

    Dancea, Felician; Guenther, Ulrich

    2005-01-01

    A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross peak lists which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of a different noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking

  2. Fourier and wavelet domain denoising of active sonar echoes

    Science.gov (United States)

    Cable, Peter G.; Shah, Sheila; Butler, Gary

    2004-05-01

    Active sonar classification performance improves significantly when echo-to-background ratios increase above 10-15 dB. To achieve the improved echo waveform fidelity implied by increasing echo-to-background, preclassification processing methods are sought to improve echo waveform estimates. For this purpose a class of nonlinear techniques termed denoising, applied to efficient Hilbert space representations of transient signals, has been shown to yield nearly optimal estimation procedures for noise corrupted signals of unknown smoothness [D. L. Donoho and I. M. Johnstone, Biometrika 81 (1994)]. We have applied several versions of Fourier and wavelet domain denoising to noisy low-frequency target echoes and, for echoes near detection threshold, have demonstrated signal representation improvements equivalent to increases in echo-to-background of 4 dB. The theoretical foundations of denoising, including a new threshold algorithm, will be outlined and measures of performance for waveform estimation will be reviewed and discussed. The experimental methodology used and the results obtained for the test sonar echoes will be summarized and target classification implications of the results obtained from the analysis discussed. [Work supported by ONR.]

  3. Image Based Rendering and Virtual Reality

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    The presentation gives an overview of image-based rendering approaches and their use in virtual reality, including virtual photography and cinematography, and mobile robot navigation.

  4. Image based SAR product simulation for analysis

    Science.gov (United States)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR image as input; this can be termed 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  5. Content-Based Image Retrieval Based on Hadoop

    Directory of Open Access Journals (Sweden)

    DongSheng Yin

    2013-01-01

    Full Text Available Generally, the time complexity of algorithms for content-based image retrieval is extremely high. In order to retrieve images from large-scale databases efficiently, a new retrieval approach based on the Hadoop distributed framework is proposed. First, a database of image features is built using the Speeded Up Robust Features algorithm and Locality-Sensitive Hashing; the search is then performed on the Hadoop platform in a specially designed parallel way. Considerable experimental results show that the approach is able to retrieve images by content effectively on large-scale clusters and image sets.

  6. Graph Cuts based Image Segmentation using Fuzzy Rule Based System

    Directory of Open Access Journals (Sweden)

    M. R. Khokher

    2012-12-01

    Full Text Available This work deals with the segmentation of gray-scale, color and texture images using graph cuts. From the input image, a graph is constructed using the intensity, color and texture profiles of the image simultaneously. Based on the nature of the image, a fuzzy rule-based system is designed to find the weight that should be given to each image feature during graph development. The graph obtained from the fuzzy rule-based weighted average of the different image features is then used in the normalized graph cuts framework. The graph is iteratively bipartitioned through the normalized graph cuts algorithm to obtain optimum partitions, resulting in the segmented image. The Berkeley segmentation database is used to test our algorithm, and the segmentation results are evaluated through the probabilistic Rand index, global consistency error, sensitivity, positive predictive value and Dice similarity coefficient. It is shown that the presented segmentation method provides effective results for most types of images.

  7. Retinal image quality assessment based on image clarity and content

    Science.gov (United States)

    Abdel-Hamid, Lamiaa; El-Rafei, Ahmed; El-Ramly, Salwa; Michelson, Georg; Hornegger, Joachim

    2016-09-01

    Retinal image quality assessment (RIQA) is an essential step in automated screening systems to avoid misdiagnosis caused by processing poor quality retinal images. A no-reference transform-based RIQA algorithm is introduced that assesses images based on five clarity and content quality issues: sharpness, illumination, homogeneity, field definition, and content. Transform-based RIQA algorithms have the advantage of considering retinal structures while being computationally inexpensive. Wavelet-based features are proposed to evaluate the sharpness and overall illumination of the images. A retinal saturation channel is designed and used along with wavelet-based features for homogeneity assessment. The presented sharpness and illumination features are utilized to assure adequate field definition, whereas color information is used to exclude nonretinal images. Several publicly available datasets of varying quality grades are utilized to evaluate the feature sets resulting in area under the receiver operating characteristic curve above 0.99 for each of the individual feature sets. The overall quality is assessed by a classifier that uses the collective features as an input vector. The classification results show superior performance of the algorithm in comparison to other methods from literature. Moreover, the algorithm addresses efficiently and comprehensively various quality issues and is suitable for automatic screening systems.

  8. An overview of medical image data base

    International Nuclear Information System (INIS)

    Nishihara, Eitaro

    1992-01-01

    Recently, systematization using computers in medical institutions has advanced, and the introduction of hospital information systems has been almost completed in large hospitals with more than 500 beds. However, these hospital information systems manage text information and do not include the management of the enormous quantity of images. With the progress of diagnostic imaging equipment, the digitization of medical images has advanced, but image management in hospitals does not yet exploit the merits of digital images. To solve these problems, the picture archiving and communication system (PACS) was proposed about ten years ago; it organizes medical images into a database and enables on-line access to images from various places in a hospital, and studies to realize it have continued. The features of medical image data, the present status of their utilization, the outline of PACS, the image database for PACS, the problems in realizing such a database and the technical trends, and the state of actual construction of PACS are reported. (K.I.)

  9. Image content authentication based on channel coding

    Science.gov (United States)

    Zhang, Fan; Xu, Lei

    2008-03-01

    Content authentication determines whether an image has been tampered with and, if necessary, locates malicious alterations made to the image. Authentication of a still image or a video is motivated by the recipient's interest, and its principle is that a receiver must be able to identify the source of the document reliably. Several techniques and concepts based on data hiding or steganography have been designed as means for image authentication. This paper presents a color image authentication algorithm based on convolution coding. The high bits of the color digital image are coded by convolution codes for tamper detection and localization, while the authentication messages are hidden in the low bits of the image to keep the authentication invisible. All communication channels are subject to errors introduced by additive Gaussian noise in their environment. Such data perturbations cannot be eliminated, but their effect can be minimized by the use of Forward Error Correction (FEC) techniques in the transmitted data stream and decoders in the receiving system that detect and correct bits in error. The message of each pixel is convolution encoded, and after parity check and block interleaving, the redundant bits are embedded in the image offset. Tampering can be detected and restored without accessing the original image.

  10. Manifold-Based Image Understanding

    Science.gov (United States)

    2010-06-30

    [3] employs a Texas Instruments digital micromirror device (DMD), which consists of an array of N electrostatically actuated micromirrors. The camera ... (the image x) is reflected off the DMD array, whose mirror orientations are modulated in the pseudorandom pattern φm supplied by a ...

  11. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio and the high cost of developing real-time systems. Acoustic-lens-based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera, and its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  12. Integration of speckle de-noising and image segmentation using ...

    Indian Academy of Sciences (India)

    In addition to speckle suppression, an ideal filter must preserve edges and texture information. In the literature, adaptive filters are preferred for speckle removal, since most of the well-known speckle removal filters calculate the local observed mean along with the normalized standard deviation.

  13. Partial volume correction of brain PET studies using iterative deconvolution in combination with HYPR denoising.

    Science.gov (United States)

    Golla, Sandeep S V; Lubberink, Mark; van Berckel, Bart N M; Lammertsma, Adriaan A; Boellaard, Ronald

    2017-12-01

    Accurate quantification of PET studies depends on the spatial resolution of the PET data. The commonly limited PET resolution results in partial volume effects (PVE). Iterative deconvolution methods (IDM) have been proposed as a means to correct for PVE. IDM improves the spatial resolution of PET studies without the need for structural information (e.g. MR scans). On the other hand, deconvolution also increases noise, which results in lower signal-to-noise ratios (SNR). The aim of this study was to implement IDM in combination with HighlY constrained back-PRojection (HYPR) denoising to mitigate the poor SNR properties of conventional IDM. An anthropomorphic Hoffman brain phantom was filled with an [18F]FDG solution of ~25 kBq mL⁻¹ and scanned for 30 min on a Philips Ingenuity TF PET/CT scanner (Philips, Cleveland, USA) using a dynamic brain protocol with frame durations ranging from 10 to 300 s. Van Cittert IDM was used for PVC of the scans. In addition, HYPR was used to improve the SNR of the dynamic PET images, applying it both before and/or after IDM. The Hoffman phantom dataset was used to optimise the IDM parameters (number of iterations, type of algorithm, with/without HYPR) and the order of HYPR implementation, based on the best average agreement between measured and actual activity concentrations in the regions. Next, dynamic [11C]flumazenil (five healthy subjects) and [11C]PIB (four healthy subjects and four patients with Alzheimer's disease) scans were used to assess the impact of IDM with and without HYPR on plasma input-derived distribution volumes (VT) across various regions of the brain. In the case of the [11C]flumazenil scans, HYPR-IDM-HYPR showed an increase of 5 to 20% in the regional VT, whereas a 0 to 10% increase or decrease was seen in the case of [11C]PIB, depending on the volume of interest and type of subject (healthy or patient). References for these comparisons were the VT values from the PVE-uncorrected scans. IDM improved quantitative accuracy

  14. Edge-based perceptual image coding.

    Science.gov (United States)

    Niu, Yi; Wu, Xiaolin; Shi, Guangming; Wang, Xiaotian

    2012-04-01

    We develop a novel psychovisually motivated edge-based low-bit-rate image codec. It offers a compact description of scale-invariant second-order statistics of natural images, the preservation of which is crucial to the perceptual quality of coded images. Although being edge based, the codec does not explicitly code the edge geometry. To save bits on edge descriptions, a background layer of the image is first coded and transmitted, from which the decoder estimates the trajectories of significant edges. The edge regions are then refined by a residual coding technique based on edge dilation and sequential scanning in the edge direction. Experimental results show that the new image coding technique outperforms the existing ones in both objective and perceptual quality, particularly at low bit rates.

  15. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    Abstract. Multimedia information retrieval systems continue to be an active research area in the world of huge and voluminous data. The paramount challenge is to translate or convert a visual query from a human and find similar images or videos in large digital collection. In this paper, a technique of region based image.

  16. Material Recognition for Content Based Image Retrieval

    NARCIS (Netherlands)

    Geusebroek, J.M.

    2002-01-01

    One of the open problems in content-based Image Retrieval is the recognition of material present in an image. Knowledge about the set of materials present gives important semantic information about the scene under consideration. For example, detecting sand, sky, and water certainly classifies the

  17. Infrared Imaging for Inquiry-Based Learning

    Science.gov (United States)

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  18. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    Science.gov (United States)

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. Thanks to its nonlinear framework, this algorithm does not share the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations, and it has lower computational complexity than the particle filter. The filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity through the EKF framework. An automatic particle weighting strategy is also proposed that controls the reliance of the framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises, as well as nonstationary real muscle artifact (MA) noise, were added to these normal ECG segments over a range of low SNRs from 10 to -5 dB. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs, where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as in the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed

  19. EVALUATION OF WAVELET AND NON-LOCAL MEAN DENOISING OF TERRESTRIAL LASER SCANNING DATA FOR SMALL-SCALE JOINT ROUGHNESS ESTIMATION

    Directory of Open Access Journals (Sweden)

    M. Bitenc

    2016-06-01

    Full Text Available Terrestrial Laser Scanning (TLS) is a well-known remote sensing tool that enables precise 3D acquisition of surface morphology from distances of a few meters to a few kilometres. The morphological representations obtained are important in engineering geology and rock mechanics, where surface morphology details are of particular interest in rock stability problems and engineering construction. The actual size of the discernible surface detail depends on the instrument range error (noise effect) and the effective data resolution (smoothing effect). Range error can be (partly) removed by applying a denoising method. Based on the positive results from previous studies, two denoising methods, namely the 2D wavelet transform (WT) and non-local mean (NLM), are tested here, with the goal of obtaining roughness estimations that are suitable in the context of rock engineering practice. Both methods are applied in two variants: conventional Discrete WT (DWT) and Stationary WT (SWT), classic NLM (NLM) and probabilistic NLM (PNLM). The noise effect and denoising performance are studied in relation to the TLS effective data resolution. Analyses are performed on reference data acquired by a highly precise Advanced TOpometric Sensor (ATOS) on a 20x30 cm rock joint sample. The roughness ratio is computed by comparing the noisy and denoised surfaces to the original ATOS surface. The roughness ratio indicates the success of all denoising methods. Moreover, it shows that SWT oversmoothes the surface, and that the performance of the DWT, NLM and PNLM varies with the noise level and data resolution. The noise effect becomes less prominent when the data resolution decreases.

  20. Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes.

    Science.gov (United States)

    Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong

    2008-10-01

    We derive second-order oriented partial-differential equations (PDEs) for denoising electronic-speckle-pattern interferometry fringe patterns from two points of view. The first is based on variational methods, and the second is based on controlling the diffusion direction. Our oriented PDE models make the diffusion act only along the fringe orientation. The main advantage of our filtering method, based on the oriented PDE models, is that it is very easy to implement compared with published methods for filtering along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to computer-simulated and experimentally obtained speckle fringes and compare with related PDE models.
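
    An explicit-scheme sketch of diffusion restricted to the fringe orientation is given below (not the authors' exact PDE): the local orientation is taken perpendicular to the intensity gradient, and the image is updated with the second directional derivative along that orientation. Step size and iteration count are assumptions.

    ```python
    import numpy as np

    def oriented_diffuse(u, iters=50, dt=0.1):
        u = u.astype(float).copy()
        for _ in range(iters):
            gy, gx = np.gradient(u)
            # Fringe orientation: perpendicular to the intensity gradient.
            theta = np.arctan2(gy, gx) + np.pi / 2.0
            c, s = np.cos(theta), np.sin(theta)
            uyy, uyx = np.gradient(gy)   # d2u/dy2 and mixed derivative
            _, uxx = np.gradient(gx)     # d2u/dx2
            # Second directional derivative along the fringe orientation.
            u_tt = c * c * uxx + 2.0 * c * s * uyx + s * s * uyy
            u += dt * u_tt
        return u
    ```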

  1. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    Science.gov (United States)

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores the differences between two methods for blind source separation within the frame of ECG de-noising. The first method is joint approximate diagonalization of eigenmatrices (JADE), which is based on estimation of the fourth-order cross-cumulant tensor and its diagonalization. The second is the statistical method known as canonical correlation analysis (CCA), which is based on estimation of correlation matrices between two multidimensional variables. Both methods were used within a scheme that combines the blind source separation algorithm with a decision tree. The evaluation was made on a large database of 382 long-term ECG signals and the results were examined. The biggest difference was found in the results for 50 Hz power line interference, where the CCA algorithm completely failed; thus the main strength of CCA lies in the estimation of unstructured noise within the ECG. The JADE algorithm has higher computational complexity, so CCA performed faster when estimating the components.

  2. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded...

  3. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not present in all hand-held devices, nor is it accurate in all situations, for example in urban environments. We propose a system to provide location-based services using image searches without requiring GPS. The goal of this system is to assist tourists in cities with additional information using their mobile phones and built-in cameras. Based upon the results of the image search engine and knowledge of database image locations, the location of the query image is determined and associated data can be presented to the user.

  4. Fast single image dehazing based on image fusion

    Science.gov (United States)

    Liu, Haibo; Yang, Jie; Wu, Zhengping; Zhang, Qingnian

    2015-01-01

    Images captured in foggy weather conditions often fade the colors and reduce the contrast of the observed objects. An efficient image fusion method is proposed to remove haze from a single input image. First, the initial medium transmission is estimated based on the dark channel prior. Second, the method adopts an assumption that the degradation level affected by haze of each region is the same, which is similar to the Retinex theory, and uses a simple Gaussian filter to get the coarse medium transmission. Then, pixel-level fusion is achieved between the initial medium transmission and coarse medium transmission. The proposed method can recover a high-quality haze-free image based on the physical model, and the complexity of the proposed method is only a linear function of the number of input image pixels. Experimental results demonstrate that the proposed method can allow a very fast implementation and achieve better restoration for visibility and color fidelity compared to some state-of-the-art methods.

  5. Contourlet Filter Design Based on Chebyshev Best Uniform Approximation

    Directory of Open Access Journals (Sweden)

    Ming Hou

    2010-01-01

    Full Text Available The contourlet transform can deal effectively with images which have directional information such as contour and texture. In contrast to wavelets, for which many good filters exist, contourlet filter design for image processing applications is still ongoing work. This paper therefore presents an approach to designing contourlet filters based on Chebyshev best uniform approximation, targeting efficient image denoising using hidden Markov tree models in the contourlet domain. We design both optimal 9/7 wavelet filter banks with rational coefficients and a new pkva 12 filter. The Laplacian pyramid followed by directional filter bank decomposition in the contourlet transform using the two filter banks above, and the image denoising application in the contourlet hidden Markov tree model, are implemented, respectively. The experimental results show that the denoising performance on the test image Zelda, in terms of peak signal-to-noise ratio, is improved by 0.33 dB over the CDF 9/7 filter banks with irrational coefficients of the JPEG2000 standard and the standard pkva 12 filter, with visual effects as good as the results of Duncan D.-Y. Po and Minh N. Do.

  7. A multicore based parallel image registration method.

    Science.gov (United States)

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points, which usually requires an extensive and computationally expensive search. We introduce a nonregular data partition algorithm that uses K-means clustering to group the landmarks according to the number of available processing cores; this step optimizes memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform.
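
    The landmark-partitioning step can be sketched with scikit-learn's K-means; the core count and the (N, 2) landmark layout are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def partition_landmarks(landmarks, n_cores=8):
        # landmarks: (N, 2) array of 2D point coordinates. Each spatially
        # coherent cluster is handed to one processing core, which balances
        # memory usage and data transfer during the correspondence search.
        labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(landmarks)
        return [landmarks[labels == k] for k in range(n_cores)]
    ```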

  8. GRAPHIE: graph based histology image explorer.

    Science.gov (United States)

    Ding, Hao; Wang, Chao; Huang, Kun; Machiraju, Raghu

    2015-01-01

    Histology images comprise one of the important sources of knowledge for phenotyping studies in systems biology. However, the annotation and analysis of histological data have remained a manual, subjective and relatively low-throughput process. We introduce the Graph based Histology Image Explorer (GRAPHIE), a visual analytics tool to explore, annotate and discover potential relationships in histology image collections within a biologically relevant context. The design of GRAPHIE is guided by domain experts' requirements and well-known InfoVis mantras. By representing each image with informative features and then visualizing the image collection with a graph, GRAPHIE allows users to explore the image collection effectively. The features were designed to capture localized morphological properties in the given tissue specimen. More importantly, users can perform feature selection interactively to improve the visualization of the image collection and the overall annotation process. Finally, the annotation allows for a better prospective examination of datasets, as demonstrated in the user study. Thus, the design of GRAPHIE allows users to navigate and explore large collections of histology image datasets. We demonstrate the usefulness of our visual analytics approach through two case studies, both of which show efficient annotation and analysis of histology image collections.

  9. A fractal-based image encryption system

    KAUST Repository

    Abd-El-Hafiz, S. K.

    2014-12-01

    This study introduces a novel image encryption system based on diffusion and confusion processes in which the image information is hidden inside the complex details of fractal images. A simplified encryption technique is, first, presented using a single-fractal image and statistical analysis is performed. A general encryption system utilising multiple fractal images is, then, introduced to improve the performance and increase the encryption key up to hundreds of bits. This improvement is achieved through several parameters: feedback delay, multiplexing and independent horizontal or vertical shifts. The effect of each parameter is studied separately and, then, they are combined to illustrate their influence on the encryption quality. The encryption quality is evaluated using different analysis techniques such as correlation coefficients, differential attack measures, histogram distributions, key sensitivity analysis and the National Institute of Standards and Technology (NIST) statistical test suite. The obtained results show great potential compared to other techniques.

  10. PIXEL PATTERN BASED STEGANOGRAPHY ON IMAGES

    Directory of Open Access Journals (Sweden)

    R. Rejani

    2015-02-01

    Full Text Available One of the drawbacks of most existing steganography methods is that they alter the bits used for storing color information; examples include LSB- or MSB-based steganography. There are also various existing methods, such as the Dynamic RGB Intensity Based Steganography Scheme and Secure RGB Image Steganography from Pixel Indicator to Triple Algorithm, that can be used to determine the steganography method used and break it. Another drawback of existing methods is that they add noise to the image, which makes the image look dull or grainy and makes a person suspicious about the existence of a hidden message within the image. To overcome these shortcomings, we have developed a pixel-pattern-based steganography which hides the message within an image by using the existing RGB values at the pixel level whenever possible, or with minimal changes. Along with the image, a key is used to decrypt the message stored at the pixel level. For further protection, both the stored message and the key file are in encrypted format, which can have the same or different keys for decryption. Hence we call it RGB pixel-pattern-based steganography.

  11. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    Science.gov (United States)

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, the manual tumor segmentation commonly used in clinics is time-consuming and challenging, and none of the existing automated methods is highly robust, reliable and efficient in clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. First, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, vertical and horizontal sliding-window techniques were applied in turn: two windows, one in the left and one in the right part of the brain image, were moved simultaneously pixel by pixel while the correlation coefficient between them was calculated; the pair of windows with the minimal correlation coefficient was identified, the window with the larger average gray value gives the location of the tumor, and its pixel with the largest gray value is the locating point of the tumor. Finally, the segmentation threshold was determined from the average gray value of the pixels in the square centered at the locating point with a side length of 10 pixels, and threshold segmentation and morphological operations were used to acquire the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. As a result, the average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
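
    A simplified sketch of the symmetry-based localization step follows; the window size and stride are illustrative, and the slice is assumed already normalized and midline-aligned.

    ```python
    import numpy as np

    def locate_asymmetry(slice2d, win=32):
        h, w = slice2d.shape
        mid = w // 2
        # Mirror the right half so corresponding anatomy lines up with the left.
        left, right = slice2d[:, :mid], slice2d[:, mid:][:, ::-1]
        best, best_pos = 1.0, None
        for y in range(0, h - win, 4):          # stride of 4 pixels
            for x in range(0, mid - win, 4):
                a = left[y:y + win, x:x + win].ravel()
                b = right[y:y + win, x:x + win].ravel()
                if a.std() < 1e-6 or b.std() < 1e-6:
                    continue  # skip flat (background) windows
                r = np.corrcoef(a, b)[0, 1]
                if r < best:
                    best, best_pos = r, (y, x)
        # The window pair with minimal left/right correlation; the side with
        # the larger mean gray value would then be taken as the tumor side.
        return best_pos, best
    ```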

  12. Stacked Denoising Tensor Auto-Encoder for Action Recognition With Spatiotemporal Corruptions.

    Science.gov (United States)

    Jia, Chengcheng; Shao, Ming; Li, Sheng; Zhao, Handong; Fu, Yun

    2018-04-01

    Spatially or temporally corrupted action videos are impractical for recognition via vision or learning models. Such corruption usually happens when streaming data are captured from unintended moving cameras, which introduce occlusion or camera vibration and accordingly cause arbitrary loss of spatiotemporal information. In reality, it is intractable to deal with both spatial and temporal corruptions at the same time. In this paper, we propose a coupled stacked denoising tensor auto-encoder (CSDTAE) model, which approaches this corruption problem in a divide-and-conquer fashion by joining the spatial and temporal schemes together. In particular, each scheme is an SDTAE designed to handle either spatial or temporal corruption. An SDTAE is composed of several blocks, each of which is a denoising tensor auto-encoder (DTAE). CSDTAE is therefore built from several DTAE building blocks to solve the spatiotemporal corruption problem simultaneously. In one DTAE, the video features are represented as a high-order tensor to preserve the spatiotemporal structure of the data, where the temporal and spatial information are processed separately in different hidden layers via tensor unfolding. In summary, DTAE explores the spatial and temporal structure of the tensor representation, and SDTAE handles different corruption ratios progressively to extract more discriminative features. CSDTAE couples the temporal and spatial corruptions of the same data through a step-by-step procedure based on canonical correlation analysis, which integrates the two sub-problems into one. The key point is solving the spatiotemporal corruption in one model by treating the corruptions as noise in either the spatial or the temporal direction. Extensive experiments on three action data sets demonstrate the effectiveness of our model, especially in the presence of large volumes of corruption in the video.

  13. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for the reconstruction of images with high contrast and precision from a small number of noisy projections. In practice, however, these methods are not widely used due to the high computational cost of their implementation. Current technology makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms can reconstruct images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems

  14. Jet-Based Local Image Descriptors

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Darkner, Sune; Dahl, Anders Lindbjerg

    2012-01-01

    We present a novel general image descriptor based on higher-order differential geometry and investigate the effect of common descriptor choices. Our investigation is twofold in that we develop a jet-based descriptor and perform a comparative evaluation with current state-of-the-art descriptors on ...

  15. [PACS-based endoscope image acquisition workstation].

    Science.gov (United States)

    Liu, J B; Zhuang, T G

    2001-01-01

    A practical PACS-based endoscope image acquisition workstation is introduced here. Using a multimedia video card, the endoscope video is digitized and captured, dynamically or statically, into the computer. The workstation realizes a variety of functions, such as acquisition and display of the endoscope video, as well as editing, processing, management, storage, printing and communication of the related information. Together with other medical image workstations, it can constitute an image source of a hospital PACS. In addition, it can also act as an independent endoscopy diagnostic system.

  16. Computer vision for image-based transcriptomics.

    Science.gov (United States)

    Stoeger, Thomas; Battich, Nico; Herrmann, Markus D; Yakimovich, Yauhen; Pelkmans, Lucas

    2015-09-01

    Single-cell transcriptomics has recently emerged as one of the most promising tools for understanding the diversity of the transcriptome among single cells. Image-based transcriptomics is unique compared to other methods as it does not require conversion of RNA to cDNA prior to signal amplification and transcript quantification. Thus, its efficiency in transcript detection is unmatched by other methods. In addition, image-based transcriptomics allows the study of the spatial organization of the transcriptome in single cells at single-molecule resolution and, when combined with superresolution microscopy, at nanometer resolution. However, in order to unlock the full power of image-based transcriptomics, robust computer vision of single molecules and cells is required. Here, we briefly discuss the setup of the experimental pipeline for image-based transcriptomics, and then describe in detail the algorithms that we developed to extract, at high throughput, robust multivariate feature sets of transcript molecule abundance, localization and patterning in tens of thousands of single cells across the transcriptome. These computer vision algorithms and pipelines can be downloaded from: https://github.com/pelkmanslab/ImageBasedTranscriptomics.

  17. De-noising of microwave satellite soil moisture time series

    Science.gov (United States)

    Su, Chun-Hsu; Ryu, Dongryeol; Western, Andrew; Wagner, Wolfgang

    2013-04-01

    Technology) ASCAT data sets to identify two types of errors that are spectrally distinct. Based on a semi-empirical model of soil moisture dynamics, we consider possible digital filter designs to improve the accuracy of the soil moisture products by reducing systematic periodic errors and stochastic noise. We describe a methodology to design bandstop filters to remove artificial resonances, and a Wiener filter to remove stochastic white noise present in the satellite data. The utility of these filters is demonstrated by comparing de-noised data against in-situ observations from ground monitoring stations in the Murrumbidgee Catchment (Smith et al., 2012), southeast Australia.

    References:
    Albergel, C., de Rosnay, P., Gruhier, C., Muñoz Sabater, J., Hasenauer, S., Isaksen, L., Kerr, Y. H., & Wagner, W. (2012). Evaluation of remotely sensed and modelled soil moisture products using global ground-based in situ observations. Remote Sensing of Environment, 118, 215-226.
    Scipal, K., Holmes, T., de Jeu, R., Naeimi, V., & Wagner, W. (2008). A possible solution for the problem of estimating the error structure of global soil moisture data sets. Geophysical Research Letters, 35, L24403.
    Smith, A. B., Walker, J. P., Western, A. W., Young, R. I., Ellett, K. M., Pipunic, R. C., Grayson, R. B., Siriwardena, L., Chiew, F. H. S., & Richter, H. (2012). The Murrumbidgee soil moisture network data set. Water Resources Research, 48, W07701.
    Su, C.-H., Ryu, D., Young, R., Western, A. W., & Wagner, W. (2012). Inter-comparison of microwave satellite soil moisture retrievals over Australia. Submitted to Remote Sensing of Environment.

  18. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  19. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of a human walking sequence contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
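
    A minimal sketch of AGDI construction, assuming a (T, H, W) stack of aligned binary silhouettes; 2DPCA feature extraction would follow on the resulting feature image.

        import numpy as np

        def agdi(silhouettes):
            # Average gait differential image: accumulate absolute differences
            # between adjacent silhouette frames, normalised by the number of
            # frame pairs. `silhouettes` is a (T, H, W) array.
            s = silhouettes.astype(np.float32)
            diffs = np.abs(np.diff(s, axis=0))  # (T-1, H, W) frame differences
            return diffs.mean(axis=0)           # average over adjacent pairs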

  20. Neural network based multiscale image restoration approach

    Science.gov (United States)

    de Castro, Ana Paula A.; da Silva, José D. S.

    2007-02-01

    This paper describes a neural network based multiscale image restoration approach. Multilayer perceptrons are trained with artificial images of degraded gray level circles, in an attempt to make the neural network learn inherent spatial relations of the degraded pixels. The present approach simulates the degradation by a low-pass Gaussian blurring operation and the addition of noise to the pixels at pre-established rates. The training process considers the degraded image as input and the non-degraded image as output for the supervised learning process. The neural network thus performs an inverse operation by recovering a quasi non-degraded image in the least-squares sense. The main difference from existing approaches is that the spatial relations are taken from different scales, thus providing relational spatial data to the neural network. The approach is an attempt to come up with a simple method that leads to an optimal solution to the problem. The multiscale operation is simulated by considering different window sizes around a pixel. In the generalization phase, the neural network is exposed to indoor, outdoor and satellite degraded images, following the same steps used for the artificial circle images.

  1. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions. First, we introduce the cluster ensemble concept to effectively fuse the segmentation results from different types of visual features, which delivers a better final result and achieves much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task, which improves the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type of feature or on multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method proves very competitive in comparison with other state-of-the-art segmentation algorithms.

  2. Image segmentation algorithm based on improved PCNN

    Science.gov (United States)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified pulse coupled neural network (PCNN) model is proposed in this article. Some work has been done to enrich the model, such as imposing restriction terms on the inputs and improving the linking inputs and the internal activity of the PCNN. A self-adaptive method for setting the linking coefficient and the threshold decay time constant is also proposed. Finally, an image segmentation algorithm based on the proposed simplified PCNN model and particle swarm optimization (PSO) was applied to five test images. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.

  3. Learning Convex Regularizers for Optimal Bayesian Denoising

    Science.gov (United States)

    Nguyen, Ha Q.; Bostan, Emrah; Unser, Michael

    2018-02-01

    We propose a data-driven algorithm for the maximum a posteriori (MAP) estimation of stochastic processes from noisy observations. The primary statistical properties of the sought signal are specified by the penalty function (i.e., the negative logarithm of the prior probability density function). Our alternating direction method of multipliers (ADMM)-based approach translates the estimation task into successive applications of the proximal mapping of the penalty function. Capitalizing on this direct link, we define the proximal operator as a parametric spline curve and optimize the spline coefficients by minimizing the average reconstruction error for a given training set. The key aspects of our learning method are that the associated penalty function is constrained to be convex and the convergence of the ADMM iterations is proven. As a result of these theoretical guarantees, adaptation of the proposed framework to different levels of measurement noise is extremely simple and does not require any retraining. We apply our method to the estimation of both sparse and non-sparse models of Lévy processes, for which the minimum mean square error (MMSE) estimators are available. We carry out a single training session and perform comparisons at various signal-to-noise ratio (SNR) values. Simulations illustrate that the performance of our algorithm is practically identical to that of the MMSE estimator irrespective of the noise power.
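
    A generic sketch of the ADMM loop for this MAP problem with an identity regularization operator; the proximal mapping is passed in as a function, whereas the paper learns it as a parametric spline. The soft-thresholding example corresponds to a sparse (l1) prior.

        import numpy as np

        def admm_map_denoise(y, prox, lam=1.0, rho=1.0, iters=50):
            # Solves min_x 0.5*||x - y||^2 + lam*phi(x) by ADMM, with the
            # proximal mapping of phi supplied as a function.
            x, z, u = y.copy(), y.copy(), np.zeros_like(y)
            for _ in range(iters):
                x = (y + rho * (z - u)) / (1.0 + rho)  # quadratic data step
                z = prox(x + u, lam / rho)             # proximal step
                u = u + x - z                          # dual ascent step
            return x

        # Example prox: soft-thresholding, i.e. an l1 (sparsity) prior.
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)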

  4. Intelligent image retrieval based on radiology reports.

    Science.gov (United States)

    Gerstmair, Axel; Daumke, Philipp; Simon, Kai; Langer, Mathias; Kotter, Elmar

    2012-12-01

    To create an advanced image retrieval and data-mining system based on in-house radiology reports. Radiology reports are semantically analysed using natural language processing (NLP) techniques and stored in a state-of-the-art search engine. Images referenced by sequence and image number in the reports are retrieved from the picture archiving and communication system (PACS) and stored for later viewing. A web-based front end is used as an interface to query for images and to show the results with the retrieved images and report text. Using a comprehensive radiological lexicon for the underlying terminology, the search algorithm also finds results for synonyms, abbreviations and related topics. The test set was 108 manually annotated reports analysed under different system configurations. Best results were achieved using full syntactic and semantic analysis, with a precision of 0.929 and a recall of 0.952. The system has been operating successfully since October 2010; 258,824 reports have been indexed and a total of 405,146 preview images are stored in the database. Data-mining and NLP techniques provide quick access to a vast repository of images and radiology reports with both high precision and high recall. Consequently, the system has become a valuable tool in daily clinical routine, education and research. Radiology reports can now be analysed using sophisticated natural language processing techniques. Semantic text analysis is backed by the terminology of a radiological lexicon. The search engine includes results for synonyms, abbreviations and compositions. Key images are automatically extracted from radiology reports and fetched from PACS. Such systems help to find diagnoses, improve report quality and save time.

  5. Independent component analysis-based artefact reduction: application to the electrocardiogram for improved magnetic resonance imaging triggering

    International Nuclear Information System (INIS)

    Oster, Julien; Pietquin, Olivier; Felblinger, Jacques; Abächerli, Roger; Kraemer, Michel

    2009-01-01

    Electrocardiogram (ECG) is required during magnetic resonance (MR) examination for monitoring patients under anaesthesia or with heart diseases and for synchronizing image acquisition with heart activity (triggering). Accurate and fast QRS detection is therefore desirable, but this task is complicated by artefacts related to the complex MR environment (high magnetic field, radio-frequency pulses and fast switching magnetic gradients). Specific signal processing has been proposed, whether using specific MR QRS detectors or ECG denoising methods. Most state-of-the-art techniques use a connection to the MR system for achieving their task, which is a major drawback since access to the MR system is often restricted. This paper introduces a new method for on-line ECG signal enhancement, called ICARE, which takes advantage of using multi-lead ECG and does not require any connection to the MR system. It is based on independent component analysis (ICA) and applied in real time. This algorithm yields accurate QRS detection for efficient triggering

  6. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imaging sensors have become an indispensable part of detection and recognition systems. They are widely used in surveillance, navigation, control and guidance, etc. However, different imaging sensors depend on diverse imaging mechanisms, work within diverse ranges of the spectrum, perform diverse functions and have diverse environmental requirements. It is therefore impractical to accomplish the task of detection or recognition with a single imaging sensor under different circumstances, backgrounds and targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solve this problem, and image fusion has become one of the main technical routes used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so a central concern in image fusion is how to preserve the useful information to the utmost; that is, before designing a fusion scheme one should consider how to avoid the loss of useful information and how to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. In this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and fractal-based detection is then applied to each decomposed image. At this step, natural background and man-made targets are distinguished by means of fractal models that can imitate natural objects well. Special fusion operators are employed for areas that contain man-made targets, so that useful information is preserved and target features are emphasized. The final fused image is reconstructed from the

  7. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the content of the processed images and the characteristics of the noise. Among these parameters, the noise level is a fundamental one that is always assumed to be known by most existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits their applicability. Moreover, these nonblind algorithms do not always achieve the best visual quality in the denoised images even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware feature extraction and optimal noise level parameter prediction. First, considering that local contrast features convey important structural information that is closely related to perceptual image quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed features have very low computational complexity, making them well suited for time-constrained applications. We then propose a learning-based framework in which the noise level parameter is estimated from the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional filtering algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
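
    A sketch of the feature-extraction stage under stated assumptions: the quality-aware features are taken as normalised histograms of gradient-magnitude and LoG responses, with the bin count and Gaussian scale as illustrative choices rather than the paper's settings.

        import numpy as np
        from scipy import ndimage

        def quality_aware_features(img, sigma=0.5, bins=32):
            # Marginal statistics (normalised histograms) of gradient
            # magnitude and LoG responses; bins and sigma are assumed values.
            img = img.astype(np.float32)
            gm = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
            log = ndimage.gaussian_laplace(img, sigma=sigma)
            f1, _ = np.histogram(gm, bins=bins, density=True)
            f2, _ = np.histogram(log, bins=bins, density=True)
            return np.concatenate([f1, f2])  # input to a learned regressor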

  8. LSB Based Quantum Image Steganography Algorithm

    Science.gov (United States)

    Jiang, Nan; Zhao, Na; Wang, Luo

    2016-01-01

    Quantum steganography is the technique of hiding a secret message in quantum covers such as quantum images. In this paper, two blind LSB steganography algorithms in the form of quantum circuits are proposed based on the novel enhanced quantum representation (NEQR) of quantum images. One algorithm is plain LSB, which uses the message bits to substitute for the pixels' LSBs directly. The other is block LSB, which embeds a message bit into a number of pixels that belong to one image block. The extracting circuits can recover the secret message using only the stego cover. Analysis and simulation-based experimental results demonstrate that the invisibility is good, and that the balance between capacity and robustness can be adjusted according to the needs of applications.
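
    For intuition, a classical analogue of the plain-LSB variant; the paper itself realises the substitution as reversible quantum circuits over NEQR images.

        import numpy as np

        def embed_lsb(cover, bits):
            # Overwrite the least significant bit of the first len(bits)
            # pixels of a uint8 cover image with the message bits.
            stego = cover.ravel().copy()
            bits = np.asarray(bits, dtype=np.uint8)
            stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
            return stego.reshape(cover.shape)

        def extract_lsb(stego, n_bits):
            # Blind extraction: read the LSBs back from the stego image.
            return stego.ravel()[:n_bits] & 1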

  9. Spatiotemporal-atlas-based dynamic speech imaging

    Science.gov (United States)

    Fu, Maojing; Woo, Jonghye; Liang, Zhi-Pei; Sutton, Bradley P.

    2016-03-01

    Dynamic speech magnetic resonance imaging (DS-MRI) has been recognized as a promising method for visualizing articulatory motion of speech in scientific research and clinical applications. However, characterization of the gestural and acoustical properties of the vocal tract remains a challenging task for DS-MRI because it requires: 1) reconstructing high-quality spatiotemporal images by incorporating stronger prior knowledge; and 2) quantitatively interpreting the reconstructed images, which contain great motion variability. This work presents a novel imaging method that simultaneously meets both requirements by integrating a spatiotemporal atlas into a partial separability (PS) model-based imaging framework. Through the use of an atlas-driven sparsity constraint, this method is capable of capturing high-quality articulatory dynamics at an imaging speed of 102 frames per second and a spatial resolution of 2.2 × 2.2 mm². Moreover, the proposed method enables quantitative characterization of the variability of speech motion, relative to the generic motion pattern across all subjects, through the spatial residual components.

  10. Temporal fusion for de-noising of RGB video received from small UAVs

    Science.gov (United States)

    Fischer, Amber D.; Hildebrand, Kyle J.

    2010-04-01

    Monitoring video received from UAVs is especially challenging because of the quality of the video. Due to the individual characteristics of the unmanned platform and the changing environment, the important elements in the scene are not always observable or easily identified. In addition to typical sensor noise, significant image degradation can occur during transmission of the video from an airborne platform. Interference from other transmitters, analog noise in the embedded avionics, and multi-path effects can corrupt the video signal during transmission, introducing distortion in the video received at the ground. In some cases the loss of signal is so severe that no information is received in portions of an image frame. To improve the corrupted video, we capitalize on the oversampling in the temporal domain (across video frames), applying a data fusion approach to de-noise the video. The resulting video retains the significant scene content and dynamics, without distracting noise artifacts. This allows humans to easily ingest the information from the video, and makes it possible to utilize further video exploitation algorithms such as object detection and tracking.

  11. An image registration based ultrasound probe calibration

    Science.gov (United States)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of the prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct a 3D TRUS (transrectal ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, the axis of the probe does not align exactly with the designed axis of rotation, resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at an angular separation of 180 degrees, and registers corresponding image pairs to compute the deviation from the designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registrations between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions and different misalignment settings from two ultrasound machines. Screenshots from the 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between the automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  12. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on established experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is considered, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of multi-resolution analysis by wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time

  13. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    ... performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as needs arise. Keywords: Content-based, Multimedia, Search Engine, Image-based, Texture ...

  14. Wind Statistics Offshore based on Satellite Images

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Mouche, Alexis; Badger, Merete

    2009-01-01

    Ocean wind maps from satellites are routinely processed both at Risø DTU and CLS based on the European Space Agency Envisat ASAR data. At Risø the a priori wind direction is taken from the atmospheric model NOGAPS (Navy Operational Global Atmospheric Prediction System) provided by the U.S. Navy ... -based observations become available. At present preliminary results are obtained using the routine methods. The first step in the process is to retrieve raw SAR data, calibrate the images and use the a priori wind direction as input to the geophysical model function. From this process the wind speed maps are produced. The wind maps are geo-referenced. The second process is the analysis of a series of geo-referenced SAR-based wind maps. Previous research has shown that a relatively large number of images is needed to achieve certain accuracies on mean wind speed, Weibull A and k (scale and shape parameters...

  15. Software for medical image based phantom modelling

    International Nuclear Information System (INIS)

    Possani, R.G.; Massicano, F.; Coelho, T.S.; Yoriyaz, H.

    2011-01-01

    The latest treatment planning systems depend strongly on CT images, so the tendency is for dosimetry procedures in nuclear medicine therapy to also be based on images, such as magnetic resonance imaging (MRI) or computed tomography (CT), to extract anatomical and histological information, as well as on functional imaging or activity maps such as PET or SPECT. This information, associated with radiation transport simulation software, is used to estimate the internal dose in patients undergoing treatment in nuclear medicine. This work aims to re-engineer the software SCMS, which is an interface between the Monte Carlo code MCNP and the medical images that carry information from the patient under treatment. In other words, the necessary information contained in the images is interpreted and presented in a specific format to the Monte Carlo MCNP code to perform the simulation of radiation transport. The user therefore does not need to understand the complex process of inputting data into MCNP, as the SCMS is responsible for automatically constructing the anatomical data of the patient, as well as the radioactive source data. The SCMS was originally developed in Fortran 77. In this work it was rewritten in an object-oriented language (Java). New features and data options have also been incorporated into the software. Thus, the new software has a number of improvements, such as an intuitive GUI and a menu for selecting the energy spectrum corresponding to a specific radioisotope stored in an XML data bank. The new version also supports new materials, and the user can specify an image region of interest for the calculation of absorbed dose. (author)

  16. Curvelet based offline analysis of SEM images.

    Directory of Open Access Journals (Sweden)

    Syed Hamad Shirazi

    Full Text Available Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image processing based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculation, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm.
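
    A sketch of the box-counting quantification stage on a non-empty binary segmentation mask; the power-of-two box schedule is an assumed choice.

        import numpy as np

        def box_count_dimension(mask):
            # Box-counting fractal dimension: count occupied boxes at a
            # series of box sizes; the slope of log N(s) versus log(1/s)
            # estimates the fractal dimension FD.
            sizes, counts = [], []
            s = 2
            while s <= min(mask.shape) // 2:
                h = (mask.shape[0] // s) * s
                w = (mask.shape[1] // s) * s
                blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(blocks.any(axis=(1, 3)).sum())
                sizes.append(s)
                s *= 2
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                  np.log(counts), 1)
            return slope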

  17. Fluorescence based molecular in vivo imaging

    International Nuclear Information System (INIS)

    Ebert, Bernd

    2008-01-01

    Molecular imaging is a modern research area that allows the in vivo study of the kinetics of molecular biological processes using appropriate probes and visualization methods. Apart from the injection of contrast media, the methodology may be regarded as non-invasive. In order to image in vivo molecular processes as accurately as possible, the effects of the probes used on the biological system should not be too large. Contrast media, as an important part of molecular imaging, can contribute significantly to the understanding of molecular processes and to the development of tailored diagnostics and therapy. For more than 15 years, PTB has been developing optical imaging systems that may be used for fluorescence-based visualization of tissue phantoms and small animal models, for the localization of tumors and their precursors, and for the early recognition of inflammatory processes in clinical trials. Cellular changes occur in many diseases, so molecular imaging might be important for the early diagnosis of chronic inflammatory diseases. Fluorescent dyes can be used as unspecific or as specific contrast media, which allows enhanced detection sensitivity

  18. Steganographic Capacity of Images, based on Image Equivalence Classes

    DEFF Research Database (Denmark)

    Hansen, Klaus; Hammer, Christian; Andersen, Jens Damgaard

    2001-01-01

    The problem of hiding information imperceptibly can be formulated as the problem of determining if a given image is a member of a sufficiently large equivalence class of images which, to the human visual system, appear to be the same image. This makes it possible to replace the given image with a modified image similar in appearance but carrying imperceptibly coded information. This paper presents a framework and an experimental algorithm to estimate upper bounds for the size of an equivalence class.

  19. Web Based Distributed Coastal Image Analysis System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project develops a Web-based distributed image analysis system that processes Moderate Resolution Imaging Spectroradiometer (MODIS) data to provide decision...

  20. Effect of wavelet denoising techniques on the determination of navel orange sugar content with near-infrared spectra

    Science.gov (United States)

    Xie, Lijuan; Huang, Haibo; Ying, Yibin; Lu, Huishan; Fu, Xiaping

    2006-10-01

    Near infrared (NIR) spectroscopy is an ideal analytical method for rapid and nondestructive measurement of the properties of agricultural products. The efficient use of this method depends on multivariate calibration, whose performance is determined by its sensitivity to variations. However, background fluctuations and noise are unavoidable when collecting spectra; they not only worsen the precision of prediction but also complicate the multivariate models. Therefore, the first step of a multivariate calibration based on NIR spectral data is often to preprocess the data to remove the varying background and the noise. In this study, the wavelet transform (WT) was used to eliminate the varying background and the noise simultaneously in the near infrared spectroscopy signals of 55 navel oranges. Three families of mother wavelets (Symlets, Daubechies and Coiflets), four threshold selection rules (Rigrsure, Heursure, Minimaxi and fixed-form threshold), and two threshold functions (soft and hard) were compared. The sugar content of the intact navel oranges was calculated by partial least squares regression (PLSR) on the reconstructed, denoised spectra. The results show that the best denoising performance was reached with the combination of Daubechies 5, the fixed-form threshold selection rule, and the hard threshold function. Based on the optimized parameters, wavelet regression models for sugar content in navel oranges were also developed, resulting in a smaller prediction error than a traditional PLSR model.
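
    A sketch of the winning combination (Daubechies 5, fixed-form/universal threshold, hard thresholding) using PyWavelets; the decomposition level and the MAD-based noise estimate are assumed choices.

        import numpy as np
        import pywt

        def denoise_spectrum(y, wavelet="db5", level=4):
            # Wavelet-shrinkage denoising of a 1-D spectrum.
            coeffs = pywt.wavedec(y, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(y)))     # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="hard")
                          for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(y)]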

  1. Canny edge-based deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-07

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth, uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity-corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.

  2. Somatostatin Receptor Based Imaging and Radionuclide Therapy

    Directory of Open Access Journals (Sweden)

    Caiyun Xu

    2015-01-01

    Full Text Available Somatostatin (SST) receptors (SSTRs) belong to the typical 7-transmembrane domain family of G-protein-coupled receptors. Five distinct subtypes (termed SSTR1-5) have been identified, with SSTR2 showing the highest affinity for natural SST and synthetic SST analogs. Most neuroendocrine tumors (NETs) have high expression levels of SSTRs, which opens the possibility for tumor imaging and therapy with radiolabeled SST analogs. A number of tracers have been developed for the diagnosis, staging and treatment of NETs with impressive results, which facilitates the application of human SSTR subtype 2 (hSSTr2) reporter gene based imaging and therapy in SSTR negative or weakly positive tumors, providing a novel approach for the management of tumors. The hSSTr2 gene can act not only as a reporter gene for in vivo imaging, but also as a therapeutic gene for local radionuclide therapy. A second therapeutic gene can even be transfected into the same tumor cells together with the hSSTr2 reporter gene to obtain a synergistic therapeutic effect. However, additional preclinical and especially translational and clinical research is needed to confirm the value of hSSTr2 reporter gene based imaging and therapy in tumors.

  3. Somatostatin Receptor Based Imaging and Radionuclide Therapy

    Science.gov (United States)

    Zhang, Hong

    2015-01-01

    Somatostatin (SST) receptors (SSTRs) belong to the typical 7-transmembrane domain family of G-protein-coupled receptors. Five distinct subtypes (termed SSTR1-5) have been identified, with SSTR2 showing the highest affinity for natural SST and synthetic SST analogs. Most neuroendocrine tumors (NETs) have high expression levels of SSTRs, which opens the possibility for tumor imaging and therapy with radiolabeled SST analogs. A number of tracers have been developed for the diagnosis, staging and treatment of NETs with impressive results, which facilitates the application of human SSTR subtype 2 (hSSTr2) reporter gene based imaging and therapy in SSTR negative or weakly positive tumors, providing a novel approach for the management of tumors. The hSSTr2 gene can act not only as a reporter gene for in vivo imaging, but also as a therapeutic gene for local radionuclide therapy. A second therapeutic gene can even be transfected into the same tumor cells together with the hSSTr2 reporter gene to obtain a synergistic therapeutic effect. However, additional preclinical and especially translational and clinical research is needed to confirm the value of hSSTr2 reporter gene based imaging and therapy in tumors. PMID:25879040

  4. Image Fakery Detection Based on Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    T. Basaruddin

    2009-11-01

    Full Text Available The growth of image processing technology nowadays makes it easier for users to modify and fake images. Image fakery is a process of manipulating parts or whole areas of an image, either in its content or context, with the help of digital image processing techniques. Image fakery is barely recognizable because the fake image looks so natural. Yet by using numerical computation techniques it is possible to detect the evidence of faking. This research successfully applied the singular value decomposition (SVD) method to detect image fakery. The image preprocessing algorithm prior to the detection process yields two vectors orthogonal to the singular value vector, which are important for detecting fake images. Experiments on images under several conditions successfully detected the fake images with a threshold value of 0.2. Singular value decomposition based detection of image fakery can be used to accurately investigate fake images modified from original images.
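
    A sketch of one plausible SVD-based comparison consistent with the description above: normalised singular-value spectra of a suspect and a reference image are compared against the reported 0.2 threshold. The paper's specific preprocessing, which yields the two vectors orthogonal to the singular value vector, is not reproduced here.

        import numpy as np

        def sv_signature(img):
            # Normalised singular-value spectrum of a grayscale image.
            s = np.linalg.svd(img.astype(float), compute_uv=False)
            return s / np.linalg.norm(s)

        def is_fake(suspect, reference, threshold=0.2):
            # Flag the suspect image when its spectrum deviates from the
            # reference by more than the threshold (assumed decision rule).
            sa, sb = sv_signature(suspect), sv_signature(reference)
            k = min(sa.size, sb.size)
            return np.linalg.norm(sa[:k] - sb[:k]) > threshold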

  5. Illumination compensation in ground based hyperspectral imaging

    Science.gov (United States)

    Wendel, Alexander; Underwood, James

    2017-07-01

    Hyperspectral imaging has emerged as an important tool for analysing vegetation data in agricultural applications. Recently, low altitude and ground based hyperspectral imaging solutions have come to the fore, providing very high resolution data for mapping and studying large areas of crops in detail. However, these platforms introduce a unique set of challenges that need to be overcome to ensure consistent, accurate and timely acquisition of data. One particular problem is dealing with changes in environmental illumination while operating with natural light under cloud cover, which can have considerable effects on spectral shape. In the past this has been commonly achieved by imaging known reference targets at the time of data acquisition, direct measurement of irradiance, or atmospheric modelling. While capturing a reference panel continuously or very frequently allows accurate compensation for illumination changes, this is often not practical with ground based platforms, and impossible in aerial applications. This paper examines the use of an autonomous unmanned ground vehicle (UGV) to gather high resolution hyperspectral imaging data of crops under natural illumination. A process of illumination compensation is performed to extract the inherent reflectance properties of the crops, despite variable illumination. This work adapts a previously developed subspace model approach to reflectance and illumination recovery. Though tested on a ground vehicle in this paper, it is applicable to low altitude unmanned aerial hyperspectral imagery also. The method uses occasional observations of reference panel training data from within the same or other datasets, which enables a practical field protocol that minimises in-field manual labour. This paper tests the new approach, comparing it against traditional methods. Several illumination compensation protocols for high volume ground based data collection are presented based on the results. The findings in this paper are

  6. ImageSURF: An ImageJ Plugin for Batch Pixel-Based Image Segmentation Using Random Forests

    Directory of Open Access Journals (Sweden)

    Aidan O'Mara

    2017-11-01

    Full Text Available Image segmentation is a necessary step in automated quantitative imaging. ImageSURF is a macro-compatible ImageJ2/FIJI plugin for pixel-based image segmentation that considers a range of image derivatives to train pixel classifiers which are then applied to image sets of any size to produce segmentations without bias in a consistent, transparent and reproducible manner. The plugin is available from ImageJ update site http://sites.imagej.net/ImageSURF/ and source code from https://github.com/omaraa/ImageSURF. Funding statement: This research was supported by an Australian Government Research Training Program Scholarship.

  7. MATLAB-Based Applications for Image Processing and Image Quality Assessment – Part I: Software Description

    Directory of Open Access Journals (Sweden)

    L. Krasula

    2011-12-01

    Full Text Available This paper describes several MATLAB-based applications useful for image processing and image quality assessment. The Image Processing Application helps the user to easily modify images; the Image Quality Adjustment Application enables the creation of series of pictures with different quality. The Image Quality Assessment Application contains objective full-reference quality metrics that can be used for image quality assessment. The Image Quality Evaluation Applications represent an easy way to compare subjectively the quality of distorted images with a reference image. The results of these subjective tests can be processed using the Results Processing Application. All applications provide a graphical user interface (GUI) for intuitive usage.

  8. Hybrid de-noising approach for fiber optic gyroscopes combining improved empirical mode decomposition and forward linear prediction algorithms

    Science.gov (United States)

    Shen, Chong; Cao, Huiliang; Li, Jie; Tang, Jun; Zhang, Xiaoming; Shi, Yunbo; Yang, Wei; Liu, Jun

    2016-03-01

    A noise reduction algorithm based on an improved empirical mode decomposition (EMD) and forward linear prediction (FLP) is proposed for the fiber optic gyroscope (FOG). Referred to as the EMD-FLP algorithm, it was developed to decompose the FOG outputs into a number of intrinsic mode functions (IMFs), after which mode manipulations are performed to select noise-only IMFs, mixed IMFs and residual IMFs. The FLP algorithm is then employed to process the mixed IMFs, from which the refined IMF components are reconstructed to produce the final de-noising results. This hybrid approach is applied to, and verified using, both simulated signals and experimental FOG outputs. The results show that the method eliminates noise more effectively than the conventional EMD or FLP methods alone, decreasing the standard deviation of the FOG outputs after de-noising from 0.17 to 0.026 under sweep frequency vibration and from 0.22 to 0.024 under fixed frequency vibration.
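
    A sketch of the FLP stage in isolation: an autoregressive predictor fitted by least squares replaces each sample with its one-step-ahead prediction. The model order is an assumed value; in the hybrid scheme this filtering is applied only to the mixed IMFs produced by EMD.

        import numpy as np

        def flp_predict(x, order=8):
            # Fit autoregressive coefficients by least squares and replace
            # each sample with its one-step-ahead forward prediction.
            n = len(x)
            X = np.column_stack(
                [x[order - k - 1:n - k - 1] for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            smoothed = x.astype(float).copy()
            smoothed[order:] = X @ a
            return smoothed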

  9. Image-Based Multiresolution Implicit Object Modeling

    Directory of Open Access Journals (Sweden)

    Sarti Augusto

    2002-01-01

    Full Text Available We discuss two image-based 3D modeling methods based on a multiresolution evolution of a volumetric function's level set. In the former method, the role of the level set implosion is to fuse ("sew" and "stitch") together several partial reconstructions (depth maps) into a closed model. In the latter, the level set's implosion is steered directly by the texture mismatch between views. Both solutions share the characteristic of operating in an adaptive multiresolution fashion, in order to boost computational efficiency and robustness.

  10. Content based image retrieval based on wavelet transform coefficients distribution.

    Science.gov (United States)

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images, without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process.

  11. Content Based Image Retrieval based on Wavelet Transform coefficients distribution

    Science.gov (United States)

    Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice

    2007-01-01

    In this paper we propose a content based image retrieval method for diagnosis aid in medical fields. We characterize images, without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process. PMID:18003013

  12. A hash-based image encryption algorithm

    Science.gov (United States)

    Cheddad, Abbas; Condell, Joan; Curran, Kevin; McKevitt, Paul

    2010-03-01

    There exist several algorithms that deal with text encryption. However, little research has been carried out to date on encrypting digital images or video files. This paper describes a novel way of encrypting digital images with password protection using the 1D SHA-2 algorithm coupled with a compound forward transform. A spatial mask is generated from the frequency domain by taking advantage of the conjugate symmetry of the complex imaginary part of the Fourier transform. This mask is then XORed with the bit stream of the original image. Exclusive OR (XOR) is a symmetric logical operation that yields 0 if both binary pixels are zero or both are one, and 1 otherwise; it can be computed simply as (pixel1 + pixel2) mod 2. Finally, confusion is applied based on the displacement of the cipher's pixels in accordance with a reference mask. Both the security and the performance aspects of the proposed method are analyzed, which proves that the method is efficient and secure from a cryptographic point of view. One of the merits of the algorithm is that it forces a continuous-tone payload, a steganographic term, to map onto a balanced bit distribution sequence. This bit balance is needed in certain applications, such as steganography and watermarking, since it is likely to have a balanced perceptibility effect on the cover image when embedding.
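
    A sketch of a hash-driven XOR stage under stated assumptions: SHA-256 in counter mode expands the password into a pseudo-random byte mask, whereas the paper derives its mask in the Fourier domain.

        import hashlib
        import numpy as np

        def password_mask(password, shape, salt=b"iv"):
            # Expand a password (bytes) into a pseudo-random byte mask by
            # hashing salt || password || counter with SHA-256.
            n = int(np.prod(shape))
            stream, counter = b"", 0
            while len(stream) < n:
                stream += hashlib.sha256(
                    salt + password + counter.to_bytes(4, "big")).digest()
                counter += 1
            return np.frombuffer(stream[:n], dtype=np.uint8).reshape(shape)

        def encrypt(img, password):
            # XOR stage only; img is a uint8 array. XOR is self-inverse, so
            # the same call with the same password decrypts.
            return np.bitwise_xor(img, password_mask(password, img.shape))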

  13. A fuzzy art neural network based color image processing and ...

    African Journals Online (AJOL)

    A fuzzy art neural network based color image processing and recognition scheme. ... color image pixels, which enables a Fuzzy ART neural network to process the RGB color images. The application of the algorithm was implemented and tested on a set of RGB color face images. Keywords: Color image processing, RGB, ...

  14. Tracking latency in image-based dynamic MLC tracking with direct image access.

    Science.gov (United States)

    Fledelius, Walther; Keall, Paul J; Cho, Byungchul; Yang, Xinhui; Morf, Daniel; Scheib, Stefan; Poulsen, Per R

    2011-08-01

    Target tracking is a promising method for motion compensation in radiotherapy. For image-based dynamic multileaf collimator (DMLC) tracking, latency has been shown to be the main contributor to geometric errors in tracking of respiratory motion, specifically due to slow transfer of image data from the image acquisition system to the tracking system via image file storage on a hard disk. The purpose of the current study was to integrate direct image access with a DMLC tracking system and to quantify the tracking latency of the integrated system for both kV and MV image-based tracking. A DMLC tracking system integrated with a linear accelerator was used for tracking of a motion phantom with an embedded tungsten marker. Real-time target localization was based on x-ray images acquired either with a portal imager or with a kV imager mounted orthogonal to the treatment beam. Images were processed directly without intermediate disk access. Continuous portal images and system log files were stored during treatment delivery for detailed offline analysis of the tracking latency. The mean tracking system latency for kV and MV image-based tracking as a function of the imaging interval ΔT(image) increased linearly with ΔT(image) as 148 ms + 0.58 * ΔT(image) (kV) and 162 ms + 1.1 * ΔT(image) (MV). The latency contribution from image acquisition and image transfer for kV image-based tracking was independent of ΔT(image) at 103 ± 14 ms. For MV-based tracking, it increased with ΔT(image) as 124 ms + 0.44 * ΔT(image). For ΔT(image) = 200 ms (5 Hz imaging), the total latency was reduced from 550 ms to 264 ms for kV image-based tracking and from 500 ms to 382 ms for MV image-based tracking, as compared to the previously used indirect image transfer via image file storage on a hard disk. kV and MV image-based DMLC tracking was successfully integrated with direct image access. It resulted in substantial tracking latency reductions compared with image-based tracking without direct
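
    The fitted latency models can be evaluated directly; at ΔT(image) = 200 ms (5 Hz imaging) they reproduce the reported totals of 264 ms (kV) and 382 ms (MV):

        def tracking_latency_ms(dt_image_ms, modality="kV"):
            # Linear latency models fitted in the study.
            if modality == "kV":
                return 148 + 0.58 * dt_image_ms
            return 162 + 1.1 * dt_image_ms

        print(tracking_latency_ms(200, "kV"))  # 264.0 ms
        print(tracking_latency_ms(200, "MV"))  # 382.0 ms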

  15. Dictionary-enhanced imaging cytometry

    Science.gov (United States)

    Orth, Antony; Schaak, Diane; Schonbrun, Ethan

    2017-02-01

    State-of-the-art high-throughput microscopes are now capable of recording image data at a phenomenal rate, imaging entire microscope slides in minutes. In this paper we investigate how a large image set can be used to perform automated cell classification and denoising. To this end, we acquire an image library consisting of over one quarter-million white blood cell (WBC) nuclei together with CD15/CD16 protein expression for each cell. We show that the WBC nucleus images alone can be used to replicate CD expression-based gating, even in the presence of significant imaging noise. We also demonstrate that accurate estimates of white blood cell images can be recovered from extremely noisy images by comparing with a reference dictionary. This has implications for dose-limited imaging when samples belong to a highly restricted class such as a well-studied cell type. Furthermore, large image libraries may endow microscopes with capabilities beyond their hardware specifications in terms of sensitivity and resolution. We call for researchers to crowd source large image libraries of common cell lines to explore this possibility.

  16. Image superresolution of cytology images using wavelet based patch search

    Science.gov (United States)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

    Telecytology is a new research area that holds the potential to significantly reduce the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high- and low-frequency information in order to reduce the bandwidth consumed by cervical image transmission. The proposed approach starts by decomposing the high-resolution images into wavelets and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the original transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method over simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
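
    The transmit-then-reconstruct step described above can be sketched at a single 2D wavelet level. A minimal Python sketch using PyWavelets (the 'db2' wavelet and image size are illustrative assumptions, and the iterative patch-replacement stage is omitted):

        import numpy as np
        import pywt

        def encode(image, wavelet="db2"):
            # Keep only the low-frequency (approximation) band for transmission.
            cA, _details = pywt.dwt2(image, wavelet)
            return cA

        def reconstruct(cA, wavelet="db2"):
            # PyWavelets treats detail bands passed as None as zeros.
            return pywt.idwt2((cA, (None, None, None)), wavelet)

        img = np.random.rand(256, 256)   # stand-in for a cytology image
        approx = encode(img)             # roughly 4x fewer coefficients to send
        upscaled = reconstruct(approx)   # same size as the original image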

  17. Denoising for Different Noisy Chaotic Signal Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Jun Ma

    2014-01-01

    Full Text Available In a complete chaotic radar ranging system, the effective range is often limited by the randomness of the chaotic signal itself and by noise or interference in the transmission channel. In order to improve the precision and accuracy of the radar ranging system, wavelet transform is proposed to remove different kinds of noise embedded in chaotic signals. White Gaussian noise, colored Gaussian noise, and a sine-wave interference are respectively applied for simulation analysis. The experimental results show that the wavelet transform can not only remove the different types of noise mixed into the chaotic signal but can also improve the signal-to-noise ratio.
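
    As a concrete illustration of this family of methods, a generic wavelet soft-threshold denoising sketch in Python with PyWavelets (the wavelet, level, and universal-threshold rule are common defaults assumed here, not the paper's exact settings):

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate from finest scale
            thr = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        t = np.linspace(0, 1, 1024)
        noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
        clean = wavelet_denoise(noisy)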

  18. Electrocardiogram de-noising based on forward wavelet transform ...

    Indian Academy of Sciences (India)

    Formerly, Yao adopted a nonlinear auditory model (Yao & Zhang 2000) in which each basilar membrane point is modelled by the following equation: d̈(x,t) + ... exhibits visual artefacts such as Gibbs phenomena in the neighbourhood of discontinuities, and this is due to the lack of translation invariance of the ...

  19. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

    A compressive sensing (CS) based ptychography combined with optical image encryption is proposed. The diffraction pattern is recorded with the ptychography technique and further compressed by non-uniform sampling within the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern as well as the small number of measurements of the encrypted samples serve as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  20. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely the Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and the Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved by NBCC, which is better than earlier methods.

  1. Earth-based optical imaging of Mercury

    Science.gov (United States)

    Ksanfomality, L. V.

    2006-01-01

    In recent years, considerable progress has been achieved in producing resolved images of Mercury electronically with short exposures at Earth-based telescopes. For the purpose of obtaining images of the unknown portion of Mercury, the previously started series of observations of this planet by the short-exposure method was continued. About 20,000 electronic images of Mercury were acquired on 1-2 May 2002 under good meteorological conditions during the evening elongation. The phase angle of Mercury was 95-99° and the observed range of longitudes was 210-285°W. Observations were carried out using a Ritchey-Chrétien telescope (D = 1.29 m, F = 9.86 m) with the KS 19 filter cutting wavelengths shorter than about 700 nm. The planet's disk was seen, on average, at an angle of 7.7″. A CCD with a pixel size of 7.4 × 7.4 μm was used in the short-exposure regime. By processing a great number of electronic images, a sufficiently distinct synthesized image of the unknown portion of Mercury's surface was obtained. The most prominent formation in this region is a giant basin (or cratered "mare") centered at about 8°N, 280°W, which was given the working name "Skinakas basin" (after the name of the observatory where the observations were made). The interior part of this basin exceeds the largest lunar mare, Mare Imbrium, in size. As opposed to Mare Imbrium, the Skinakas basin is presumably of impact origin. Its relief resembles that of Caloris Planitia, but its size is much larger. A series of smaller formations are also seen in the synthesized images. The resolution obtained on the surface of Mercury is about 100 km, which is close to the telescope diffraction limit. Also considered is the synthesized image obtained at the Mount Bigelow Observatory on December 4, 2003 (Ritchey-Chrétien telescope, D = 1.54 m, F = 20.79 m, using the same CCD camera).

  2. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT), an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom reconstructed from 128 rebinned projections had 10% error with a conventional CS method, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 s were obtained using a standard desktop computer without numerical optimization.

  3. Spectrum image analysis tool - A flexible MATLAB solution to analyze EEL and CL spectrum images.

    Science.gov (United States)

    Schmidt, Franz-Philipp; Hofer, Ferdinand; Krenn, Joachim R

    2017-02-01

    Spectrum imaging techniques, which simultaneously acquire structural (image) and spectroscopic data, require appropriate and careful processing to extract the information in the dataset. In this article we introduce MATLAB based software that takes three-dimensional data (an EEL/CL spectrum image in dm3 format, Gatan Inc.'s DigitalMicrograph®) as input. A graphical user interface enables fast and easy mapping of spectrally dependent images and position-dependent spectra. First, data processing such as background subtraction, deconvolution, and denoising; second, multiple display options including an EEL/CL moviemaker; and third, applicability to a large number of data sets with a small workload make this program an interesting tool to visualize otherwise hidden details.

  4. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    are reconstruction from an organised set of linear cross sections and reconstruction from an arbitrary set of linear cross sections. The first problem is looked upon in the context of acoustic signals wherein the cross sections show a definite geometric arrangement. A reconstruction in this case can take advantage ... GPU (Graphics Processing Unit) based methods are suggested for streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to the unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object ...

  5. Three-dimensional Segmentation of Retinal Cysts from Spectral-domain Optical Coherence Tomography Images by the Use of Three-dimensional Curvelet Based K-SVD.

    Science.gov (United States)

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2016-01-01

    This paper presents a new three-dimensional curvelet transform based dictionary learning method for automatic segmentation of intraretinal cysts, the most relevant prognostic biomarker in neovascular age-related macular degeneration, from 3D spectral-domain optical coherence tomography (SD-OCT) images. In particular, we focus on the Spectralis SD-OCT (Heidelberg Engineering, Heidelberg, Germany) system and show the applicability of our algorithm to the segmentation of these features. For this purpose, we use a recursive Gaussian filter and approximate corrupted pixels from their surroundings. Then, to enhance the cystoid dark-space regions and further suppress noise, we introduce a new dictionary-learning scheme: we take the curvelet transform of the filtered image and then denoise and modify each noisy coefficient matrix at each scale with a predefined initial 3D sparse dictionary. Dark pixels between the retinal pigment epithelium and the nerve fiber layer, extracted with graph theory, are considered cystoid spaces. The average Dice coefficients for the segmentation of cystoid regions in the whole 3D volume and within the central 3 mm diameter on the MICCAI 2015 OPTIMA Cyst Segmentation Challenge dataset were found to be 0.65 and 0.77, respectively.

  6. An Innovative Wavelet Threshold Denoising Method for Environmental Drift of Fiber Optic Gyro

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2016-01-01

    Full Text Available The fiber optic gyroscope (FOG) is a core component in modern inertial technology. However, the precision and performance of a FOG are degraded by environmental drift, especially in complex temperature environments. As the modeling performance is affected by the noise in the FOG output data, an improved wavelet threshold based on the Allan variance and the classical variance is proposed for discrete wavelet analysis to decompose the temperature drift trend item and the noise items. Firstly, the relationship between the Allan variance and the classical variance is introduced by analyzing the drawback of the traditional wavelet threshold. Secondly, an improved threshold is put forward based on the Allan variance and the classical variance, which overcomes the shortcoming of the traditional wavelet threshold method. Finally, the innovative threshold algorithm is experimentally evaluated on a FOG. The mathematical evaluation results show that the new method achieves a better signal-to-noise ratio (SNR) and yields a reconstructed signal with a higher correlation coefficient (CC). As an experimental validation, the nonlinear fitting capability of an error back-propagation (BP) neural network is used to fit the drift trend item and uncover the complex relationship between the FOG drift and temperature, and the final processing results indicate that the new denoising method achieves a better root mean square error (RMSE).
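
    The Allan variance that drives the improved threshold can be computed in a few lines. A Python sketch of the textbook non-overlapping Allan variance for cluster size m (the standard definition, not the paper's specific threshold rule):

        import numpy as np

        def allan_variance(y, m):
            # Average the series in non-overlapping clusters of length m,
            # then take half the mean squared difference of adjacent clusters.
            n = len(y) // m
            bins = y[: n * m].reshape(n, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(bins) ** 2)

        fog = np.cumsum(0.01 * np.random.randn(10000)) + 0.1 * np.random.randn(10000)
        for m in (1, 10, 100):
            print(m, allan_variance(fog, m))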

  7. Medical image noise reduction using the Sylvester-Lyapunov equation.

    Science.gov (United States)

    Sanches, João M; Nascimento, Jacinto C; Marques, Jorge S

    2008-09-01

    Multiplicative noise is often present in medical and biological imaging, such as magnetic resonance imaging (MRI), Ultrasound, positron emission tomography (PET), single photon emission computed tomography (SPECT), and fluorescence microscopy. Noise reduction in medical images is a difficult task in which linear filtering algorithms usually fail. Bayesian algorithms have been used with success but they are time consuming and computationally demanding. In addition, the increasing importance of the 3-D and 4-D medical image analysis in medical diagnosis procedures increases the amount of data that must be efficiently processed. This paper presents a Bayesian denoising algorithm which copes with additive white Gaussian and multiplicative noise described by Poisson and Rayleigh distributions. The algorithm is based on the maximum a posteriori (MAP) criterion, and edge preserving priors which avoid the distortion of relevant anatomical details. The main contribution of the paper is the unification of a set of Bayesian denoising algorithms for additive and multiplicative noise using a well-known mathematical framework, the Sylvester-Lyapunov equation, developed in the context of the Control theory.

  8. Super-Drizzle: Applications of Adaptive Kernel Regression in Astronomical Imaging

    National Research Council Canada - National Science Library

    Takeda, Hiroyuki; Farsiu, Sina; Christou, Julian; Milanfar, Peyman

    2006-01-01

    .... For example, a very popular implementation of this method, as studied by Fruchter and Hook, has been used to fuse, denoise, and increase the spatial resolution of the images captured by the Hubble Space Telescope (HST...

  9. Multi-radar super-resolution imaging based compressed sensing

    Science.gov (United States)

    Ye, Fan; Liu, JiYing; Zhu, Jubo

    2017-07-01

    In this paper, a technique for multi-radar imaging based on compressed sensing is proposed to improve image resolution. By constructing a sample matrix, multi-radar super-resolution imaging is transformed into a compressed sensing problem. Utilizing the signal's sparsity, a super-resolution image can be obtained by solving an optimization problem. Simulations show the effectiveness of this technique.
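
    The sparse-recovery step can be illustrated with the classic iterative soft-thresholding algorithm (ISTA) for min_x 0.5*||y - Ax||^2 + lam*||x||_1. In the Python sketch below, a random Gaussian matrix stands in for the paper's sample matrix:

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def ista(A, y, lam=0.05, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft(x + A.T @ (y - A @ x) / L, lam / L)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256))         # compressed measurements, 4x undersampled
        x_true = np.zeros(256)
        x_true[rng.choice(256, 8, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true)                # recovers the sparse scene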

  10. Automatic Matching of High Resolution Satellite Images Based on RFM

    OpenAIRE

    JI Shunping; YUAN Xiuxiao

    2016-01-01

    A matching method for high resolution satellite images based on RFM is presented. Firstly, the RFM parameters are used to predict the initial parallax of corresponding points, and the prediction accuracy is analyzed. Secondly, the approximate epipolar equation is constructed based on projection tracking, and its accuracy is analyzed. Thirdly, approximate 1D image matching is executed on pyramid images and least squares matching on the base images. Finally, RANSAC is embedded to eliminate mis-matching points...

  11. Adaptive radiation image enhancement based on different image quality evaluation standards

    International Nuclear Information System (INIS)

    Guo Xiaojing; Wu Zhifang

    2012-01-01

    A genetic algorithm based on the incomplete Beta function was realized, and an adaptive gray transform based on this genetic algorithm was implemented. On this basis, three image quality evaluation standards were applied to the adaptive gray transform of radiation images, and their effects on processing time, stability, generation number, and so on were compared. The better algorithm scheme was applied in the image processing module of a container DR/CT inspection system to obtain effective adaptive image enhancement. (authors)

  12. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^{m-1}(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.

  13. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoW) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies as well as a multiple kernel learning algorithm at the classification stage. Based on the internal evaluation, the codebook size of the BoW representation and the number of layers of the mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and achieves record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at a frame rate ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.

  14. The Calibration Home Base for Imaging Spectrometers

    Directory of Open Access Journals (Sweden)

    Johannes Felix Simon Brachmann

    2016-08-01

    Full Text Available The Calibration Home Base (CHB) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral, and geometric calibration as well as the characterization of sensor signal dependency on polarization are realized in a precise and highly automated fashion. This allows a wide range of time-consuming measurements to be carried out in an efficient way. The implementation of ISO 9001 standards in all procedures ensures a traceable quality of results. Spectral measurements in the wavelength range 380-1000 nm are performed to a wavelength uncertainty of ±0.1 nm, while an uncertainty of ±0.2 nm is reached in the wavelength range 1000-2500 nm. Geometric measurements are performed at increments of 1.7 µrad across track and 7.6 µrad along track. Radiometric measurements reach an absolute uncertainty of ±3% (k=1). Sensor artifacts, such as those caused by stray light, will be characterizable and correctable in the near future. For now, the CHB is suitable for the characterization of pushbroom sensors, spectrometers, and cameras. However, it is planned to extend the CHB's capabilities in the near future so that snapshot hyperspectral imagers can be characterized as well. The calibration services of the CHB are open to third-party customers from research institutes as well as industry.

  15. CLUE: cluster-based retrieval of images by unsupervised learning.

    Science.gov (United States)

    Chen, Yixin; Wang, James Z; Krovetz, Robert

    2005-08-01

    In a typical content-based image retrieval (CBIR) system, target images (images in the database) are sorted by feature similarities with respect to the query. Similarities among target images are usually ignored. This paper introduces a new technique, cluster-based retrieval of images by unsupervised learning (CLUE), for improving user interaction with image retrieval systems by fully exploiting the similarity information. CLUE retrieves image clusters by applying a graph-theoretic clustering algorithm to a collection of images in the vicinity of the query. Clustering in CLUE is dynamic. In particular, clusters formed depend on which images are retrieved in response to the query. CLUE can be combined with any real-valued symmetric similarity measure (metric or nonmetric). Thus, it may be embedded in many current CBIR systems, including relevance feedback systems. The performance of an experimental image retrieval system using CLUE is evaluated on a database of around 60,000 images from COREL. Empirical results demonstrate improved performance compared with a CBIR system using the same image similarity measure. In addition, results on images returned by Google's Image Search reveal the potential of applying CLUE to real-world image data and integrating CLUE as a part of the interface for keyword-based image retrieval systems.
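
    CLUE's core step, clustering the images retrieved in the vicinity of a query with a graph-theoretic algorithm, can be approximated off the shelf with spectral clustering on a precomputed similarity matrix. A Python/scikit-learn sketch (the mock features, similarity kernel, and cluster count are illustrative assumptions, not CLUE's actual components):

        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(1)
        features = rng.random((40, 16))    # mock feature vectors for retrieved images
        # Symmetric similarity matrix in (0, 1] from pairwise feature distances.
        S = np.exp(-np.linalg.norm(features[:, None] - features[None, :], axis=2))
        labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                                    random_state=0).fit_predict(S)
        print(labels)   # cluster assignment for each retrieved image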

  16. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    Science.gov (United States)

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper we propose an efficient method for denoising ECG signals and extracting their fiducial points (FPs). The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex, and T-wave) in the ECG dynamical model are considered state variables. In this paper, the dynamic time warping method is used to estimate the nonlinear ECG phase observation. We compare this new approach with linear phase observation models. The main contributions of this paper are the use of linear and nonlinear EKF25 for ECG denoising and of nonlinear EKF25 for fiducial point extraction and ECG interval analysis. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than the errors obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability with respect to other methods.

  17. Color image coding based on recurrent iterated function systems

    Science.gov (United States)

    Kim, Kwon; Park, Rae-Hong

    1998-02-01

    This paper proposes a color image coding method based on recurrent iterated function systems (RIFSs). To encode a set of multispectral images, we apply an RIFS to multiset data consisting of three images. In the proposed method, mappings not only between blocks within an individual spectral image but also between blocks of different spectral images are performed under a contraction constraint. Simulation results show that fractal coding based on the RIFS is useful for concurrently encoding a set of images by describing the similarity between pairs of images. In addition, the proposed color coding method can be applied to subband images and to moving image sequences consisting of a set of images having similar gray patterns.

  18. Image based book cover recognition and retrieval

    Science.gov (United States)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is taken through the GUI; an MSER algorithm then automatically detects features in the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. We implement a text-character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR that gives better detection results; a post-detection algorithm and natural language processing are applied for word correction and false-detection inhibition. Finally, the detection result is linked to the internet to perform online matching. More than 86% accuracy can be obtained by this algorithm.

  19. Image-based reflectance conversion of ASTER and IKONOS ...

    African Journals Online (AJOL)

    Spectral signatures derived from different image-based models for ASTER and IKONOS were inspected visually as first departure. This was followed by comparison of the total accuracy and Kappa index computed from supervised classification of images that were derived from different image-based atmospheric correction ...

  20. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental task in image processing, and its use depends on the application. This paper proposes an original and simple segmentation strategy based on the EM approach that resolves many of the informatics problems posed by hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its colors. This approach provides an effective PSNR for the segmented image. The segmented images compare favorably with those of the watershed and region-growing algorithms and provide effective segmentation for spectral and medical images.
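
    A minimal illustration of EM-based pixel classification is to fit a Gaussian mixture to per-pixel color features and label each pixel by its most likely component. A Python/scikit-learn sketch (the random image, raw-RGB features, and component count are assumptions, and the paper's spatial constraint is omitted):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        h, w = 64, 64
        img = np.random.rand(h, w, 3)        # stand-in for a color image
        features = img.reshape(-1, 3)        # one RGB feature vector per pixel
        gmm = GaussianMixture(n_components=4, random_state=0).fit(features)  # EM fit
        segmentation = gmm.predict(features).reshape(h, w)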

  1. Edge Detection from RGB-D Image Based on Structured Forests

    Directory of Open Access Journals (Sweden)

    Heng Zhang

    2016-01-01

    Full Text Available This paper looks into a fundamental problem in computer vision: edge detection. We propose a new edge detector using structured random forests as the classifier, which can make full use of the RGB-D image information from a Kinect. Before classification, an adaptive bilateral filter is used to denoise the depth image. As data sources, 13 channels of information are computed from the RGB-D image. To train the random forest classifier, an approximate measure of the information gain is used. All the structured labels at a given node are mapped to a discrete set of labels using the Principal Component Analysis (PCA) method. The NYUD2 dataset is used to train our structured random forests. The random forest algorithm is then used to classify the RGB-D image information and extract the edges of the image. In addition to the proposed methodology, quantitative comparisons of different algorithms are presented. The results of the experiments demonstrate the significant improvements of our algorithm over the state of the art.

  2. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    Science.gov (United States)

    Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi

    2017-04-01

    In recent years, deep learning algorithms, with the characteristics of strong adaptability, high accuracy, and structural complexity, have become more and more popular, but they have not yet been widely used in astronomy. In order to address the problem that star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduce a new deep learning algorithm, namely the stacked denoising autoencoder (SDA) neural network with the dropout fine-tuning technique, which can greatly improve robustness and anti-noise performance. We randomly selected bright source sets and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements and preprocessed them. Then, we randomly selected training sets and testing sets without replacement from the bright source sets and faint source sets. Finally, using these training sets we trained SDA models of the bright sources and faint sources in SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the test results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test result of the SDA model on the DR7 testing set with the test results of six kinds of decision trees. The experiments show that the SDA has a better classification accuracy than the other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when the completeness function is used as the evaluation index, the correctness rate of the SDA improves by about 15% over the decision tree algorithms for the faint source set of SDSS-DR7.
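
    The building block of such a network is a single denoising autoencoder trained to reconstruct clean inputs from corrupted ones. A PyTorch sketch of one layer (layer sizes, noise level, and training loop are placeholders; the paper stacks several such layers and fine-tunes with dropout):

        import torch
        import torch.nn as nn

        class DAE(nn.Module):
            def __init__(self, d_in=64, d_hidden=32):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
                self.dec = nn.Linear(d_hidden, d_in)

            def forward(self, x):
                return self.dec(self.enc(x))

        model = DAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        x = torch.rand(256, 64)                      # mock photometric feature vectors
        for _ in range(100):
            x_noisy = x + 0.1 * torch.randn_like(x)  # corrupt the input
            loss = loss_fn(model(x_noisy), x)        # reconstruct the clean input
            opt.zero_grad()
            loss.backward()
            opt.step()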

  3. Geometric de-noising of protein-protein interaction networks.

    Directory of Open Access Journals (Sweden)

    Oleksii Kuchaiev

    2009-08-01

    Full Text Available Understanding complex networks of protein-protein interactions (PPIs) is one of the foremost challenges of the post-genomic era. Due to recent advances in experimental bio-technology, including yeast-2-hybrid (Y2H), tandem affinity purification (TAP), and other high-throughput methods for protein-protein interaction (PPI) detection, huge amounts of PPI network data are becoming available. Of major concern, however, are the levels of noise and incompleteness. For example, for Y2H screens, it is thought that the false positive rate could be as high as 64%, and the false negative rate may range from 43% to 71%. TAP experiments are believed to have comparable levels of noise. We present a novel technique to assess the confidence levels of interactions in PPI networks obtained from experimental studies. We use it for predicting new interactions and thus for guiding future biological experiments. This technique is the first to utilize the currently best-fitting network model for PPI networks, geometric graphs. Our approach achieves a specificity of 85% and a sensitivity of 90%. We use it to assign confidence scores to physical protein-protein interactions in the human PPI network downloaded from BioGRID. Using our approach, we predict 251 interactions in the human PPI network, a statistically significant fraction of which correspond to protein pairs sharing common GO terms. Moreover, we validate a statistically significant portion of our predicted interactions in the HPRD database and the newer release of BioGRID. The data and Matlab code implementing the methods are freely available from the web site: http://www.kuchaev.com/Denoising.

  4. IP-based storage of image information

    Science.gov (United States)

    Fu, Xianglin; Xie, Changsheng; Liu, Zhaobin

    2001-09-01

    With the fast growth of data in multispectral image processing, the traditional storage architecture has been challenged. It is currently being replaced by Storage Area Networks (SANs), which externalize storage devices from servers. A SAN is a separate network for storage, isolated from the messaging network and optimized for the movement of data between servers and storage devices. Most current SANs use Fibre Channel to move data between servers and storage devices (FC-SAN), but the drawbacks of FC-SAN, including poor interoperability, a lack of skilled professionals and management tools, and high implementation cost, have obstructed its development and application. In this paper, we introduce an IP-based Storage Area Network architecture that retains the good qualities of FC-SAN but overcomes its shortcomings. The principle is to use IP technology to move data between servers and storage devices, to build the SAN with IP-based (rather than FC-based) network devices, and, through the switch, to attach the SAN to the LAN (Local Area Network) with multiple access. In particular, these storage devices can act as commercial NAS devices and PCs.

  5. Machine learning based analysis of cardiovascular images

    NARCIS (Netherlands)

    Wolterink, JM

    2017-01-01

    Cardiovascular diseases (CVDs), including coronary artery disease (CAD) and congenital heart disease (CHD), are the global leading cause of death. Computed tomography (CT) and magnetic resonance imaging (MRI) allow non-invasive imaging of cardiovascular structures. This thesis presents machine learning based methods for the analysis of cardiovascular images.

  6. Chaos-based image encryption algorithm

    International Nuclear Information System (INIS)

    Guan Zhihong; Huang Fangjun; Guan Wenjie

    2005-01-01

    In this Letter, a new image encryption scheme is presented, in which shuffling the positions and changing the grey values of image pixels are combined to confuse the relationship between the cipher-image and the plain-image. Firstly, the Arnold cat map is used to shuffle the positions of the image pixels in the spatial domain. Then the discrete output signal of Chen's chaotic system is preprocessed to be suitable for grayscale image encryption, and the shuffled image is encrypted by the preprocessed signal pixel by pixel. The experimental results demonstrate that the key space is large enough to resist brute-force attack and that the distribution of grey values in the encrypted image has random-like behavior.
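
    The position-shuffling stage is easy to sketch: the Arnold cat map sends pixel (x, y) to ((x + y) mod N, (x + 2y) mod N) on an N x N image. A Python sketch of that permutation alone (the grey-value diffusion stage driven by Chen's chaotic system is omitted):

        import numpy as np

        def arnold_cat(img, iterations=1):
            n = img.shape[0]                 # assumes a square N x N image
            out = img.copy()
            for _ in range(iterations):
                x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
                out = out[(x + y) % n, (x + 2 * y) % n]   # permute pixel positions
            return out

        img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
        shuffled = arnold_cat(img, iterations=5)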

  7. Multi region based image retrieval system

    Indian Academy of Sciences (India)

    Wang et al (2002) generated a code book from training images to segment images ... code book from different categories of training images. The code book is then used to segment ... Frintrop S, Rome E and Christensen H I 2010 Computational visual attention systems and their cognitive foundations: A Survey. ACM Trans.

  8. Quantum Image Steganography and Steganalysis Based On LSQu-Blocks Image Information Concealing Algorithm

    Science.gov (United States)

    A. AL-Salhi, Yahya E.; Lu, Songfeng

    2016-08-01

    Quantum steganography can solve some problems that are considered inefficient in image information concealing. Research on quantum image information concealing has been widely pursued in recent years. Quantum image information concealing can be categorized into quantum image digital blocking, quantum image steganography, anonymity, and other branches. Least significant bit (LSB) information concealing plays a vital role in the classical world because many image information concealing algorithms are designed based on it. Firstly, based on the novel enhanced quantum representation (NEQR) and image uniform block clustering, the least significant Qu-block (LSQB) information concealing algorithm for quantum image steganography is presented. Secondly, a clustering algorithm is proposed to optimize the concealment of important data. Finally, the Con-Steg algorithm is used to conceal the clustered image blocks. Since information concealing in the Fourier domain of an image can achieve security of the image information, we further discuss a Fourier-domain LSQu-block information concealing algorithm for quantum images based on quantum Fourier transforms. In our algorithms, the corresponding unitary transformations are designed to conceal the secret information in the least significant Qu-block representing the color of the quantum cover image. Finally, the procedures for extracting the secret information are illustrated. The quantum image LSQu-block information concealing algorithm can be applied in many fields according to different needs.
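
    For intuition only, the classical LSB embedding that the LSQu-block idea generalizes can be sketched in a few lines of Python (this is the classical analogue, not the quantum algorithm itself):

        import numpy as np

        def lsb_embed(cover, bits):
            flat = cover.flatten()
            # Clear each carrier pixel's least significant bit, then write the secret bit.
            flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
            return flat.reshape(cover.shape)

        def lsb_extract(stego, n):
            return stego.flatten()[:n] & 1

        cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
        secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
        stego = lsb_embed(cover, secret)
        assert np.array_equal(lsb_extract(stego, len(secret)), secret)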

  9. Parallel image encryption algorithm based on discretized chaotic map

    International Nuclear Information System (INIS)

    Zhou Qing; Wong Kwokwo; Liao Xiaofeng; Xiang Tao; Hu Yue

    2008-01-01

    Recently, a variety of chaos-based algorithms have been proposed for image encryption. Nevertheless, none of them works efficiently in a parallel computing environment. In this paper, we propose a framework for parallel image encryption. Based on this framework, a new algorithm is designed using the discretized Kolmogorov flow map. It fulfills all the requirements for a parallel image encryption algorithm. Moreover, it is secure and fast. These properties make it a good choice for image encryption on parallel computing platforms.

  10. Investigation of noise sources in upconversion based infrared hyperspectral imaging

    DEFF Research Database (Denmark)

    Kehlet, Louis Martinus; Tidemand-Lichtenberg, Peter; Beato, Pablo

    2016-01-01

    Noise sources in infrared hyperspectral imaging based on nonlinear frequency upconversion are investigated. The effects on the spectral and spatial content of the images are evaluated and methods of combating them are suggested.

  11. Fast MR image reconstruction for partially parallel imaging with arbitrary k-space trajectories.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Lin, Wei; Huang, Feng

    2011-03-01

    Both acquisition and reconstruction speed are crucial for magnetic resonance (MR) imaging in clinical applications. In this paper, we present a fast reconstruction algorithm for SENSE in partially parallel MR imaging with arbitrary k-space trajectories. The proposed method is a combination of variable splitting, the classical penalty technique and the optimal gradient method. Variable splitting and the penalty technique reformulate the SENSE model with sparsity regularization as an unconstrained minimization problem, which can be solved by alternating two simple minimizations: One is the total variation and wavelet based denoising that can be quickly solved by several recent numerical methods, whereas the other one involves a linear inversion which is solved by the optimal first order gradient method in our algorithm to significantly improve the performance. Comparisons with several recent parallel imaging algorithms indicate that the proposed method significantly improves the computation efficiency and achieves state-of-the-art reconstruction quality.

  12. A real-time de-noising algorithm for e-noses in a wireless sensor network.

    Science.gov (United States)

    Qu, Jianfeng; Chai, Yi; Yang, Simon X

    2009-01-01

    A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip-windows average method. The optimal system noise variance of the filter is obtained using experimental data. The Kalman filter theory underlying the acquisition of MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors.
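
    A scalar Kalman filter with a windowed measurement-noise estimate captures the flavor of this scheme. In the Python sketch below, R is re-estimated over a sliding window, loosely mirroring the slip-windows average idea, while the process noise q is an arbitrary tuning constant rather than the paper's optimized value:

        import numpy as np

        def kalman_denoise(z, q=1e-4, window=20):
            x, p = z[0], 1.0
            out = np.empty_like(z)
            for k in range(len(z)):
                lo = max(0, k - window)
                r = np.var(z[lo:k + 1]) + 1e-9   # windowed measurement-noise estimate
                p = p + q                         # predict
                g = p / (p + r)                   # Kalman gain
                x = x + g * (z[k] - x)            # update state with new sample
                p = (1 - g) * p
                out[k] = x
            return out

        t = np.linspace(0, 10, 500)
        reading = 2 + np.sin(0.5 * t) + 0.2 * np.random.randn(t.size)
        filtered = kalman_denoise(reading)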

  13. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yi Chai

    2009-02-01

    Full Text Available A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip-windows average method. The optimal system noise variance of the filter is obtained using experimental data. The Kalman filter theory underlying the acquisition of MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors.

  14. Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system

    Science.gov (United States)

    Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.

    2016-12-01

    In order to achieve optimum operation and a precise control system at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility and tested it experimentally at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we studied different de-noising methods such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. An evaluation of the noise reduction was performed to test the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform is better than the other algorithms, and that the method is suitable for signal noise reduction in beam position monitoring systems.

  15. Virtual bronchoscopy based on spiral CT images

    Science.gov (United States)

    Englmeier, Karl-Hans; Haubner, Michael; Krapichler, Christian; Schuhmann, Dietrich; Seemann, Mark; Fuerst, H.; Reiser, Maximilian

    1998-06-01

    Purpose: To improve the diagnosis of pathologically modified airways, a visualization system has been developed and tested based on the techniques of digital image analysis and synthesis of spiral CT data and on visualization by methods of virtual reality. Materials and Methods: 20 patients with pathologic modifications of the airways (tumors, obstructions) were examined with spiral CT. The three-dimensional shape of the airways and the lung tissue is defined by a semiautomatic volume-growing method and a subsequent geometric surface reconstruction. This is the basis of a multidimensional display system which visualizes volumes, surfaces, and computation results simultaneously. To enable intuitive and immersive inspection of the airways, a virtual reality system consisting of two graphics engines, a head-mounted display system, data gloves, and specialized software was integrated. Results: In 20 cases the extent of the pathologic modification of the airways could be visualized with the virtual bronchoscopy. The user interacts with and manipulates the 3D model of the airways in an intuitive and immersive way. In contrast to previously proposed virtual bronchoscopy systems, the described method permits truly interactive navigation and detailed exploration of anatomic structures. The system enables a user-oriented and fast inspection of the volumetric image data. Conclusion: To support radiological diagnosis with additional information in an easy-to-use and fast way, a virtual bronchoscopy system was developed. It enables immersive and intuitive interaction with 3D spiral CTs through truly 3D navigation within the airway system. The complex anatomy of the central tracheobronchial system could be clearly visualized. Peripheral bronchi are displayed up to the 5th degree.

  16. Image encryption based on permutation-substitution using chaotic map and Latin Square Image Cipher

    Science.gov (United States)

    Panduranga, H. T.; Naveen Kumar, S. K.; Kiran

    2014-06-01

    In this paper we present an image encryption method based on permutation-substitution using a chaotic map and a Latin square image cipher. The proposed method consists of permutation and substitution processes. In the permutation process, the plain image is permuted according to a chaotic sequence generated using a chaotic map. In the substitution process, a Latin Square Image Cipher (LSIC) is generated from a 256-bit secret key and used as a key image, and an XOR operation is performed between the permuted image and the key image. The proposed method can be applied to any plain image, including those with unequal width and height, and it resists statistical and differential attacks. Experiments were carried out on different images of different sizes. The proposed method possesses a large key space to resist brute-force attacks.

  17. Infrared image detail enhancement based on the gradient field specification.

    Science.gov (United States)

    Zhao, Wenda; Xu, Zhijun; Zhao, Jian; Zhao, Fan; Han, Xizhen

    2014-07-01

    Human vision is sensitive to changes in local image details, which are actually image gradients. To enhance faint infrared image details, this article proposes a gradient field specification algorithm. First we define the image gradient field and the gradient histogram. Then, by analyzing the characteristics of the gradient histogram, we construct a Gaussian function to obtain the gradient histogram specification and thereby the transformed gradient field. In addition, subhistogram equalization, based on histogram equalization, is proposed to improve the contrast of infrared images. The experimental results show that the algorithm can effectively improve image contrast and enhance weak infrared image details and edges. As a result, it can provide qualified image information for different applications of infrared imaging. In addition, it can also be applied to enhance other types of images, such as visible, medical, and lunar-surface images.
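
    The first step, forming the gradient field and its histogram together with a Gaussian target to specify against, can be sketched as follows in Python (the bin count and Gaussian parameters are illustrative; the actual specification/matching step is omitted):

        import numpy as np

        def gradient_histogram(img, bins=64):
            gy, gx = np.gradient(img.astype(float))      # image gradient field
            mag = np.hypot(gx, gy)                       # gradient magnitudes
            hist, edges = np.histogram(mag, bins=bins, density=True)
            return hist, edges

        def gaussian_target(edges, mu, sigma):
            centers = 0.5 * (edges[:-1] + edges[1:])
            g = np.exp(-((centers - mu) ** 2) / (2 * sigma ** 2))
            return g / g.sum()                           # target gradient histogram

        img = np.random.rand(128, 128)                   # stand-in for an IR image
        hist, edges = gradient_histogram(img)
        target = gaussian_target(edges, mu=edges.mean(), sigma=edges.std())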

  18. Molecular–Genetic Imaging: A Nuclear Medicine–Based Perspective

    Directory of Open Access Journals (Sweden)

    Ronald G. Blasberg

    2002-07-01

    Full Text Available Molecular imaging is a relatively new discipline, which developed over the past decade, initially driven by in situ reporter imaging technology. Noninvasive in vivo molecular–genetic imaging developed more recently and is based on nuclear (positron emission tomography [PET], gamma camera, autoradiography) imaging as well as magnetic resonance (MR) and in vivo optical imaging. Molecular–genetic imaging has its roots in both molecular biology and cell biology, as well as in new imaging technologies. The focus of this presentation will be nuclear-based molecular–genetic imaging, but it will comment on the value and utility of combining different imaging modalities. Nuclear-based molecular imaging can be viewed in terms of three different imaging strategies: (1) “indirect” reporter gene imaging; (2) “direct” imaging of endogenous molecules; or (3) “surrogate” or “bio-marker” imaging. Examples of each imaging strategy will be presented and discussed. The rapid growth of in vivo molecular imaging is due to the established base of in vivo imaging technologies, the established programs in molecular and cell biology, and the convergence of these disciplines. The development of versatile and sensitive assays that do not require tissue samples will be of considerable value for monitoring molecular–genetic and cellular processes in animal models of human disease, as well as for studies in human subjects in the future. Noninvasive imaging of molecular–genetic and cellular processes will complement established ex vivo molecular–biological assays that require tissue sampling, and will provide a spatial as well as a temporal dimension to our understanding of various diseases and disease processes.

  19. Combination of Spatial Domain Filters for Speckle Noise Reduction in Ultrasound Medical Images

    Directory of Open Access Journals (Sweden)

    Amit Garg

    2017-01-01

    Full Text Available The occurrence of speckle noise in medical ultrasound (US) images has posed a big challenge to medical practitioners over the last several years. Speckle noise obscures the fine details present in the images and hence makes diagnosis more difficult. In this paper, a novel method based on the combination of three spatial-domain filters is presented. The outputs of these filters are combined on the basis of an Intensity Classifier Map (ICF) formed using the Coefficient of Dispersion (CoD) parameter. Experiments were conducted on synthetic and real US images. Quantitative and qualitative analyses of the proposed method are carried out in comparison to six other existing methods. The obtained results show that the proposed method delivers observable improvement in all quantitative parameters for the synthetic US image and in the MVR value for real US images. Also, the effectiveness of the proposed method is found to be consistent with the qualitative assessment of the denoised images.

  20. Comparing the performance of different ultrasonic image enhancement techniques for speckle noise reduction in ultrasound images: a preference study

    Science.gov (United States)

    Rana, Md. Shohel; Sarker, Kaushik; Bhuiyan, Touhid; Hassan, Md. Maruf

    2017-06-01

    Diagnostic ultrasound (US) is an important tool in today's sophisticated medical diagnostics. Nearly every medical discipline benefits from this relatively inexpensive method that provides a view of the inner organs of the human body without exposing the patient to any harmful radiation. Medical diagnostic images are usually corrupted by noise during their acquisition, and most of this noise is speckle noise. To address this problem, instead of the widely used adaptive filters, Non-Local Means based filters have been used to de-noise the images. Ultrasound images of several regions of the human body, such as abdomen, ortho, liver, kidney, breast, and prostate, have been used, and a comparative analysis was applied to evaluate the output. The images were taken from a Siemens SONOLINE G60 S system, and the outputs were compared using metrics such as SNR, RMSE, PSNR, IMGQ, and SSIM. The significance and compared results are shown in tabular format.
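
    Non-Local Means filtering of this kind is available off the shelf. A Python sketch using scikit-image (the parameters are illustrative defaults, not the study's settings):

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        img = np.random.rand(128, 128)               # stand-in for an ultrasound frame
        sigma = float(np.mean(estimate_sigma(img)))  # rough noise level estimate
        den = denoise_nl_means(img, patch_size=5, patch_distance=6,
                               h=0.8 * sigma, fast_mode=True)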

  1. Iterative image-domain decomposition for dual-energy CT.

    Science.gov (United States)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei

    2014-04-01

    Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability for material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of the image signal-to-noise ratio, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. The proposed algorithm is formulated as least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels are given small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied to the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method's performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied to the decomposed images, and an existing algorithm with a similar formulation to the proposed method but ...
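
    The penalized weighted least-squares form described above, minimize (Ax - b)^T W (Ax - b) + beta*||Dx||^2, reduces to the linear system (A^T W A + beta D^T D) x = A^T W b, which a conjugate gradient solver handles directly. A generic 1D Python toy version (A, W, D, and beta are illustrative stand-ins, not the paper's operators):

        import numpy as np
        from scipy.sparse import diags, eye
        from scipy.sparse.linalg import cg

        n = 200
        A = eye(n, format="csr")                    # identity stand-in for the forward model
        W = diags(np.full(n, 2.0))                  # inverse-variance penalty weights
        D = diags([np.full(n - 1, -1.0), np.full(n - 1, 1.0)],
                  [0, 1], shape=(n - 1, n))         # first-difference smoothness operator
        beta = 5.0
        b = np.sin(np.linspace(0, 3, n)) + 0.1 * np.random.randn(n)

        H = (A.T @ W @ A + beta * D.T @ D).tocsr()  # normal-equation matrix
        rhs = A.T @ (W @ b)
        x, info = cg(H, rhs)                        # info == 0 signals convergence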

  2. Biomedical image understanding methods and applications

    CERN Document Server

    Lim, Joo-Hwee; Xiong, Wei

    2015-01-01

    A comprehensive guide to understanding and interpreting digital images in medical and functional applications. Biomedical Image Understanding focuses on image understanding and semantic interpretation, with clear introductions to related concepts, in-depth theoretical analysis, and detailed descriptions of important biomedical applications. It covers image processing, image filtering, enhancement, de-noising, restoration, and reconstruction; image segmentation and feature extraction; registration; and clustering, pattern classification, and data fusion. With contributions from experts in the field.

  3. A wavelet relational fuzzy C-means algorithm for 2D gel image segmentation.

    Science.gov (United States)

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A B

    2013-01-01

    One of the most famous algorithms in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. This algorithm has been used in many applications such as data analysis, pattern recognition, and image segmentation. It has the advantage of producing high quality segmentation compared to the other available algorithms. Many modifications have been made to the algorithm to improve its segmentation quality. The segmentation algorithm proposed in this paper is based on the Fuzzy C-Means algorithm, adding the relational fuzzy notion and the wavelet transform to it so as to enhance its performance, especially in the area of 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. The experimental results of comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) with the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms. This investigation shows that denoising the 2D gel image before segmentation can improve (in most cases) the quality of the segmentation.
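
    The plain FCM iteration at the core of these variants alternates membership and centroid updates. A Python sketch of standard FCM on toy 2D points (no relational or wavelet extension; m is the usual fuzzifier):

        import numpy as np

        def fcm(X, c=3, m=2.0, n_iter=50, eps=1e-9):
            rng = np.random.default_rng(0)
            u = rng.random((c, len(X)))
            u /= u.sum(axis=0)                      # random initial memberships
            for _ in range(n_iter):
                um = u ** m
                centers = um @ X / um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + eps
                u = 1.0 / d ** (2 / (m - 1))        # membership update
                u /= u.sum(axis=0)
            return centers, u

        X = np.vstack([np.random.randn(100, 2) + off
                       for off in ([0, 0], [4, 4], [0, 5])])
        centers, u = fcm(X)
        labels = u.argmax(axis=0)                   # defuzzified segmentation labels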

  4. A Wavelet Relational Fuzzy C-Means Algorithm for 2D Gel Image Segmentation

    Directory of Open Access Journals (Sweden)

    Shaheera Rashwan

    2013-01-01

    Full Text Available One of the most famous algorithms that appeared in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. This algorithm has been used in many applications such as data analysis, pattern recognition, and image segmentation. It has the advantages of producing high quality segmentation compared to the other available algorithms. Many modifications have been made to the algorithm to improve its segmentation quality. The proposed segmentation algorithm in this paper is based on the Fuzzy C-Means algorithm adding the relational fuzzy notion and the wavelet transform to it so as to enhance its performance especially in the area of 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. The experimental results of comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms. This investigation proves that denoising the 2D gel image before segmentation can improve (in most of the cases) the quality of the segmentation.

  5. An Image Processing Algorithm Based On FMAT

    Science.gov (United States)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  6. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Chien-ming Chou

    2014-01-01

    Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
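
    A minimal sketch of the procedure as described, assuming PyWavelets for the denoising step: the soft-thresholded wavelet reconstruction serves as the smooth seasonal mean, percentage coefficients come from the calibration residuals, and a simulated series is generated by resampling them. The wavelet, decomposition level, and threshold rule are illustrative choices, not necessarily those of the study.

        import numpy as np
        import pywt

        def seasonal_simulation(daily, wavelet="db8", level=4, seed=None):
            rng = np.random.default_rng(seed)
            coeffs = pywt.wavedec(daily, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise scale
            thr = sigma * np.sqrt(2.0 * np.log(len(daily)))    # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                    for c in coeffs[1:]]
            smooth = pywt.waverec(coeffs, wavelet)[: len(daily)]  # seasonal mean
            # percentage coefficients from the calibration residuals
            pct = (daily - smooth) / np.where(smooth == 0.0, 1.0, smooth)
            # perturbation term: resampled coefficient times the seasonal mean
            simulated = smooth * (1.0 + rng.choice(pct, size=len(daily)))
            return smooth, simulated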

  7. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain signals, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated into the wavelet transform. Among the wavelet denoising methods proposed so far, the wavelets of Mallat and Zhong best reconstruct a pure transient signal from a highly corrupted one, but there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method is proposed for noise discrimination that can easily be implemented in a digital system. To demonstrate the potential role of the wavelet denoising method in the nuclear field, the method is applied to the estimation of the steam or feedwater flow rate in the secondary loop, and a configuration of the S/G water level control system is proposed that incorporates the wavelet denoising method for estimating the flow rate at low operating powers.

  8. Optical microscopic imaging based on VRML language

    Science.gov (United States)

    Zhang, Xuedian; Zhang, Zhenyi; Sun, Jun

    2009-11-01

    VRML (Virtual Reality Modeling Language) is a language for building models of the real world, or of imagined worlds, for delivery over the web. It is an international standard defined by ISO for 3D content on the "www"; its MIME type is x-world/x-vrml; and, importantly, it is independent of the operating system. With the arrival of VRML 2.0, the language's ability to describe dynamic behavior improved, and interaction over the internet evolved as well. The use of VRML could bring revolutionary change to confocal microscopy. For example, virtual 3D renderings of different kinds of specimens could be published to the net, and scientists in different countries could examine the same samples on the same microscope at the same time over the internet. Transmitting the original data as a text model has many advantages: faster transfer, less data, more convenient updating, and fewer errors. In the following sections we discuss the basic elements of using VRML in the field of optical microscopic imaging.

  9. Study and analysis of wavelet based image compression techniques ...

    African Journals Online (AJOL)

    This paper presents a comprehensive study, with performance analysis, of very recent wavelet-transform-based image compression techniques. Image compression is one of the necessities for communication. The goals of image compression are to minimize the storage requirement and the communication bandwidth.

  10. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    In this paper, we propose lossless scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images, based on the Integer Wavelet Transform (IWT), together with a distortion-limiting compression technique for the other regions of the image. The main objective of this work is to reject the noisy background and reconstruct ...
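
    As background (an assumption about the building block, not this paper's codec), one level of a lossless integer wavelet step — the integer Haar, or S-transform, expressed via lifting — looks like the sketch below. It maps integers to integers and is exactly invertible, which is what permits lossless coding of the region of interest. Even-length input is assumed.

        import numpy as np

        def s_transform(row):
            # one lifting step of the integer Haar (S) transform
            a = row[0::2].astype(np.int64)
            b = row[1::2].astype(np.int64)
            low = (a + b) >> 1                                 # integer average
            high = a - b                                       # difference
            return low, high

        def inverse_s_transform(low, high):
            a = low + ((high + 1) >> 1)                        # exact inverse
            b = a - high
            out = np.empty(low.size + high.size, dtype=np.int64)
            out[0::2], out[1::2] = a, b
            return out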

  11. An Investigation of Image Fusion Algorithms using a Visual Performance-based Image Evaluation Methodology

    Science.gov (United States)

    2009-02-01

    Stathaki, Tania (2006). Adaptive image fusion using ICA bases. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2...selection strategy. Li, H.; Manjunath, B. S.; and Mitra, S. K. (1995). Multisensor image fusion using the wavelet transform. Graphical Models and Image

  12. New electronic imaging system for Cf-252 based neutron radiography

    International Nuclear Information System (INIS)

    Ito, S.; Mochiki, K.; Matsumoto, T.

    2004-01-01

    We have developed a new imaging camera and a signal processing system for Cf-252 based neutron radiography. The imaging part consists of cascaded image intensifiers and a progressive-scan monochromatic CCD camera (SONY XC-55) with a standard frame rate. The video signal is converted to 12 bits and processed by large-scale field-programmable gate arrays (FPGAs). The signal processing system has three frame accumulation memories: for normal frame images, binary-converted frame images and center-of-gravity frame images. A preliminary experiment was performed using a Cf-252 neutron source at the Atomic Energy Research Laboratory of the Musashi Institute of Technology. (author)
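
    A minimal software sketch (hypothetical; the real system implements this in FPGA hardware) of the three accumulation modes the record lists: summed raw frames, binary-converted frames, and center-of-gravity frames in which each detected neutron event contributes one count at its blob centroid.

        import numpy as np
        from scipy import ndimage

        def accumulate(frames, threshold):
            raw = np.zeros(frames[0].shape, dtype=np.float64)  # normal frame sum
            binary = np.zeros_like(raw)                        # binary frame sum
            cog = np.zeros_like(raw)                           # center-of-gravity sum
            for f in frames:
                raw += f
                mask = f > threshold
                binary += mask
                labels, n = ndimage.label(mask)                # isolate single hits
                centroids = ndimage.center_of_mass(f, labels, range(1, n + 1))
                for y, x in centroids:
                    cog[int(round(y)), int(round(x))] += 1     # one count per event
            return raw, binary, cog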

  13. Model-based satellite image fusion

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Sveinsson, J. R.; Nielsen, Allan Aasbjerg

    2008-01-01

    A method is proposed for pixel-level satellite image fusion derived directly from a model of the imaging sensor. By design, the proposed method is spectrally consistent. It is argued that the proposed method needs regularization, as is the case for any method for this problem. A framework for pixel...

  14. Fibre laser based broadband THz imaging systems

    DEFF Research Database (Denmark)

    Eichhorn, Finn

    imaging techniques. This thesis shows that fiber technology can improve the robustness and flexibility of terahertz imaging systems, both through the use of fiber-optic light sources and through the employment of optical fibers as a light-distribution medium. The main focus is placed on multi-element terahertz

  15. Photons-based medical imaging - Radiology, X-ray tomography, gamma and positrons tomography, optical imaging

    International Nuclear Information System (INIS)

    Fanet, H.; Dinten, J.M.; Moy, J.P.; Rinkel, J.; Buvat, I.; Da Silva, A.; Douek, P.; Peyrin, F.; Frija, G.; Trebossen, R.

    2010-01-01

    This book describes the different principles used in medical imaging. The detection aspects, the processing electronics and algorithms are detailed for the different techniques. This first volume analyses the photons-based techniques (X-rays, gamma rays and visible light). Content: 1 - physical background: radiation-matter interaction, consequences on detection and medical imaging; 2 - detectors for medical imaging; 3 - processing of numerical radiography images for quantization; 4 - X-ray tomography; 5 - positrons emission tomography: principles and applications; 6 - mono-photonic imaging; 7 - optical imaging; Index. (J.S.)

  16. Fusion Segmentation Method Based on Fuzzy Theory for Color Images

    Science.gov (United States)

    Zhao, J.; Huang, G.; Zhang, J.

    2017-09-01

    The image segmentation method based on the two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainty in labeling pixels near the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that the proposed method obtains better segmentation results on both natural scene images and optical remote sensing images than the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
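
    A minimal sketch of the soft-decision idea under illustrative assumptions (sigmoid membership functions and fuzzy AND by minimum; the paper's actual membership model and fusion rule may differ): each channel votes with a graded membership rather than a hard threshold, and the fused membership is defuzzified only at the end.

        import numpy as np

        def fuzzy_segment(img, thresholds, width=10.0):
            """img: H x W x 3 float array; thresholds: one intensity per channel."""
            member = 1.0 / (1.0 + np.exp(-(img - np.asarray(thresholds)) / width))
            fused = member.min(axis=2)                         # fuzzy AND across channels
            return fused > 0.5                                 # defuzzified mask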

  17. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.
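
    A minimal sketch of the simplest reconstruction consistent with using only the axial strain component: under an assumed uniform axial stress, relative Young's modulus varies inversely with axial strain. The record's analytic forward solution is geometry-specific and is not reproduced here; this is a first-order stand-in.

        import numpy as np

        def relative_modulus(axial_displacement, dz):
            strain = np.gradient(axial_displacement, dz, axis=0)  # axial strain field
            strain = np.where(np.abs(strain) < 1e-6, np.nan, strain)
            e_rel = 1.0 / np.abs(strain)                       # uniform-stress model
            return e_rel / np.nanmedian(e_rel)                 # normalized modulus map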

  18. [A novel image processing and analysis system for medical images based on IDL language].

    Science.gov (United States)

    Tang, Min

    2009-08-01

    Medical image processing and analysis systems, which are of great value in medical research and clinical diagnosis, have been a focal field in recent years. Interactive Data Language (IDL) has a vast library of built-in math, statistics, image analysis and information processing routines; it has therefore become an ideal software environment for interactive analysis and visualization of two-dimensional and three-dimensional scientific datasets. A methodology is proposed for designing a novel image processing and analysis system for medical images based on IDL. There are five functional modules in this system: Image Preprocessing, Image Segmentation, Image Reconstruction, Image Measurement and Image Management. Experimental results demonstrate that this system is effective and efficient, and that it has the advantages of extensive applicability, friendly interaction, convenient extension and good portability.

  19. Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image

    Directory of Open Access Journals (Sweden)

    Onnia Vesa

    2002-01-01

    Full Text Available This paper presents a novel approach of fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used in order to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones, from the available binary image candidates, according to a MAP selection rule. Two public domain collections of fingerprint images are used in order to objectively assess the performance of the proposed fingerprint image enhancement approach.
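
    A minimal sketch of the ridge criterion with a convenient substitution: instead of the paper's facet-model fit, the Hessian is approximated with Gaussian derivative filters at a neighborhood scale sigma, and a pixel is a ridge candidate when its largest second directional derivative (the maximum Hessian eigenvalue) is positive. The multi-neighborhood candidates and the MAP selection rule are not reproduced.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ridge_map(img, sigma=2.0):
            img = np.asarray(img, dtype=float)
            fxx = gaussian_filter(img, sigma, order=(0, 2))    # d2/dx2
            fyy = gaussian_filter(img, sigma, order=(2, 0))    # d2/dy2
            fxy = gaussian_filter(img, sigma, order=(1, 1))    # mixed derivative
            tr = fxx + fyy
            det = fxx * fyy - fxy ** 2
            lam_max = tr / 2.0 + np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
            return lam_max > 0                                 # candidate ridge pixels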

  20. Mitigating illumination gradients in a SAR image based on the image data and antenna beam pattern

    Science.gov (United States)

    Doerry, Armin W.

    2013-04-30

    Illumination gradients in a synthetic aperture radar (SAR) image of a target can be mitigated by determining a correction for pixel values associated with the SAR image. This correction is determined based on information indicative of a beam pattern used by a SAR antenna apparatus to illuminate the target, and also based on the pixel values associated with the SAR image. The correction is applied to the pixel values associated with the SAR image to produce corrected pixel values that define a corrected SAR image.
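
    A minimal sketch of the correction step under a hypothetical one-dimensional geometry in which each image column maps to one look angle: the two-way antenna gain implied by the beam pattern is divided out of the pixel values, with a floor so that pixels near the beam edge are not amplified into noise. The mapping from pattern and geometry to per-column gain is assumed to have been done beforehand.

        import numpy as np

        def correct_illumination(sar_img, beam_gain_db, floor_db=-20.0):
            """beam_gain_db: per-column two-way antenna gain (dB)."""
            gain = 10.0 ** (np.maximum(beam_gain_db, floor_db) / 20.0)
            return sar_img / gain[None, :]                     # flatten the gradient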