WorldWideScience

Sample records for level noisy images

  1. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors either from external data or from the noisy image itself. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe with simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.
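
    The orthogonal-dictionary formulation suggests a compact sketch: project noisy patches onto an orthonormal basis and suppress the coefficients that noise dominates. The sketch below uses a PCA basis learned from the patches themselves and a hard threshold tied to the noise level; the basis choice, the threshold rule and the constant k are illustrative assumptions, not the authors' external/internal guided learning procedure.

```python
import numpy as np

def denoise_patches_orthobasis(patches, sigma, k=2.7):
    """Denoise flattened image patches with an orthonormal dictionary
    learned from the patches themselves (a PCA basis), by hard-thresholding
    transform coefficients. patches: (n, d) array; sigma: noise std."""
    mean = patches.mean(axis=0)
    X = patches - mean
    # Eigenvectors of the patch covariance form an orthogonal dictionary.
    _, D = np.linalg.eigh(X.T @ X / len(X))
    coeffs = X @ D                              # project onto the dictionary
    coeffs[np.abs(coeffs) < k * sigma] = 0.0    # kill noise-dominated coefficients
    return coeffs @ D.T + mean                  # reconstruct denoised patches
```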

  2. Estimation of object motion parameters from noisy images.

    Science.gov (United States)

    Broida, T J; Chellappa, R

    1986-01-01

    An approach is presented for the estimation of object motion parameters based on a sequence of noisy images. The problem considered is that of a rigid body undergoing unknown rotational and translational motion. The measurement data consist of a sequence of noisy image coordinates of two or more object correspondence points. By modeling the object dynamics as a function of time, estimates of the model parameters (including motion parameters) can be extracted from the data using recursive and/or batch techniques. This permits a desired degree of smoothing to be achieved through the use of an arbitrarily large number of images. Some assumptions regarding object structure are presently made. Results are presented for a recursive estimation procedure: the case considered here is that of a sequence of one-dimensional images of a two-dimensional object. Thus, the object moves in one transverse dimension and in depth, preserving the fundamental ambiguity of the central projection image model (loss of depth information). An iterated extended Kalman filter is used for the recursive solution. Noise levels of 5-10 percent of the object image size are used. Approximate Cramer-Rao lower bounds are derived for the model parameter estimates as a function of object trajectory and noise level. This approach may be of use in situations where it is difficult to resolve large numbers of object match points, but relatively long sequences of images (10 to 20 or more) are available.
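
    The recursive solution rests on the iterated extended Kalman filter, which relinearizes the nonlinear measurement model about each improved estimate. Below is a generic IEKF measurement update, a minimal sketch assuming the user supplies the nonlinear projection h, its Jacobian H_jac and the noise covariances; the paper's specific state vector of translational and rotational motion parameters is not reproduced here.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """One iterated extended Kalman filter measurement update.
    x, P: prior state mean/covariance; z: noisy image measurement;
    h: nonlinear measurement function (e.g. central projection);
    H_jac: its Jacobian; R: measurement noise covariance."""
    x_i = x.copy()
    for _ in range(n_iter):                 # relinearize about the latest estimate
        H = H_jac(x_i)
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_new
```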

  3. Fuzzy Logic Based Edge Detection in Smooth and Noisy Clinical Images.

    Directory of Open Access Journals (Sweden)

    Izhar Haq

    Edge detection has beneficial applications in fields such as machine vision, pattern recognition and biomedical imaging. Edge detection highlights high-frequency components in the image, and it is a challenging task that becomes more arduous for noisy images. This study focuses on fuzzy logic based edge detection in smooth and noisy clinical images. The proposed method (in noisy images) employs a 3 × 3 mask guided by a fuzzy rule set. Moreover, in the case of smooth clinical images, an extra contrast-adjustment mask is integrated with the edge detection mask to intensify the smooth images. The developed method was tested on noise-free, smooth and noisy images, and the results were compared with established edge detection techniques such as Sobel, Prewitt, Laplacian of Gaussian (LOG), Roberts and Canny. When the developed edge detection technique was applied to a smooth clinical image of size 270 × 290 pixels with 24 dB 'salt and pepper' noise, it detected very few (22) false edge pixels, compared with Sobel (1931), Prewitt (2741), LOG (3102), Roberts (1451) and Canny (1045) false edge pixels. It is therefore evident that the developed method offers an improved solution to the edge detection problem in smooth and noisy clinical images.
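
    As a rough illustration of a mask-based fuzzy edge detector, the toy sketch below maps 3 × 3 neighborhood differences through a linear fuzzy membership function and thresholds the membership. The rule set, the membership shape and the thresholds lo/hi are invented for illustration and are not the paper's.

```python
import numpy as np

def fuzzy_edge(img, lo=10.0, hi=40.0):
    """Toy fuzzy-rule edge detector: gradients from a 3x3 neighborhood
    are mapped through a linear fuzzy membership and a pixel is declared
    an edge when its membership exceeds 0.5. Borders wrap (np.roll)."""
    g = img.astype(float)
    gx = np.abs(np.roll(g, -1, 1) - np.roll(g, 1, 1))   # horizontal difference
    gy = np.abs(np.roll(g, -1, 0) - np.roll(g, 1, 0))   # vertical difference
    grad = np.maximum(gx, gy)
    mu = np.clip((grad - lo) / (hi - lo), 0.0, 1.0)     # fuzzy membership in [0, 1]
    return mu > 0.5
```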

  4. Learning from Weak and Noisy Labels for Semantic Segmentation

    KAUST Repository

    Lu, Zhiwu

    2016-04-08

    A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy.
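
    The core idea, separating a sparse label-error term from smoothly propagated superpixel labels via L1 regularisation, can be sketched with a single soft-thresholding step. In the toy code below the propagation matrix A, the weight lam and the flagging rule are illustrative assumptions; the paper solves a more elaborate L1 model with an intermediate labelling variable.

```python
import numpy as np

def detect_noisy_labels(Y, A, lam=0.3, n_prop=10):
    """Flag noisy superpixel labels as a sparse residual (a minimal sketch,
    not the paper's exact L1 model). Y: (n, c) one-hot noisy labels;
    A: (n, n) row-normalized superpixel affinity matrix."""
    F = Y.copy()
    for _ in range(n_prop):              # smooth labels over the affinity graph
        F = A @ F
    R = Y - F                            # residual between given and propagated labels
    E = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # soft threshold -> sparse error
    return np.abs(E).sum(axis=1) > 0     # rows with nonzero sparse error are flagged
```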

  5. Learning from Weak and Noisy Labels for Semantic Segmentation

    KAUST Repository

    Lu, Zhiwu; Fu, Zhenyong; Xiang, Tao; Han, Peng; Wang, Liwei; Gao, Xin

    2016-01-01

    A weakly supervised semantic segmentation (WSSS) method aims to learn a segmentation model from weak (image-level) as opposed to strong (pixel-level) labels. By avoiding the tedious pixel-level annotation process, it can exploit the unlimited supply of user-tagged images from media-sharing sites such as Flickr for large scale applications. However, these ‘free’ tags/labels are often noisy and few existing works address the problem of learning with both weak and noisy labels. In this work, we cast the WSSS problem into a label noise reduction problem. Specifically, after segmenting each image into a set of superpixels, the weak and potentially noisy image-level labels are propagated to the superpixel level resulting in highly noisy labels; the key to semantic segmentation is thus to identify and correct the superpixel noisy labels. To this end, a novel L1-optimisation based sparse learning model is formulated to directly and explicitly detect noisy labels. To solve the L1-optimisation problem, we further develop an efficient learning algorithm by introducing an intermediate labelling variable. Extensive experiments on three benchmark datasets show that our method yields state-of-the-art results given noise-free labels, whilst significantly outperforming the existing methods when the weak labels are also noisy.

  6. Efficient Filtering of Noisy Fingerprint Images

    Directory of Open Access Journals (Sweden)

    Maria Liliana Costin

    2016-01-01

    Fingerprint identification is an important field within the wide domain of biometrics, with many applications in areas such as judicial work, mobile phones, access systems and airports. Many elaborate algorithms for fingerprint identification exist, but none of them can guarantee that the results of identification are always 100% accurate. The first step in a fingerprint image analysis process is pre-processing or filtering; if the result of this step is of poor quality, the subsequent identification process can fail. A major difficulty arises in fingerprint identification when the images to be identified against a fingerprint image database are corrupted by different types of noise. The objectives of the paper are: the successful filtering of noisy digital fingerprint images, a novel, more robust algorithm for identifying the best filtering algorithm, and the classification and ranking of the images. The choice of the best filtered images from a set of 9 algorithms is made with a dual method combining a fuzzy model and an aggregation model. We propose in this paper a set of 9 novel filters for processing digital images, using the following methods: quartiles, medians, averages, thresholds and histogram equalization, applied over the whole image or locally on small areas. Finally, statistics reveal the classification and ranking of the best algorithms.
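
    Several of the nine filters are rank-order statistics computed over small windows. A minimal sketch of such a sliding-window quartile/median filter is given below; the window size and the percentile mapping are illustrative choices, not the paper's exact designs.

```python
import numpy as np

def local_rank_filter(img, half=1, stat="median"):
    """Sliding-window rank filter of the kind listed above (median or
    quartiles), applied over a (2*half+1)^2 neighborhood with edge padding."""
    pad = np.pad(img.astype(float), half, mode="edge")
    out = np.empty(img.shape, dtype=float)
    q = {"q1": 25, "median": 50, "q3": 75}[stat]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2*half + 1, j:j + 2*half + 1]
            out[i, j] = np.percentile(win, q)
    return out
```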

  7. A Novel Image Encryption Scheme Based on Clifford Attractor and Noisy Logistic Map for Secure Transferring Images in Navy

    Directory of Open Access Journals (Sweden)

    Mohadeseh Kanafchian

    2017-04-01

    In this paper, we first give a brief introduction to chaotic image encryption and then investigate some important properties and behaviour of the logistic map. For the logistic map, an aperiodic trajectory or random-like fluctuation cannot be obtained for some choices of the initial condition; therefore, a noisy logistic map with an additive system noise is introduced. The proposed scheme is based on the extended map of the Clifford strange attractor, where each dimension has a specific role in the encryption process: two dimensions are used for pixel permutation and the third dimension is used for pixel diffusion. In order to optimize the Clifford encryption system, we enlarge the key space by using the noisy logistic map, and a novel encryption scheme based on the Clifford attractor and the noisy logistic map for secure image transfer is proposed. The algorithm consists of two parts: the noisy logistic map shuffles the pixel positions and pixel values, and the Clifford system then generates the new pixel positions and values. To illustrate the efficiency of the proposed scheme, various types of security analysis are performed. It can be concluded that the proposed image encryption system is a suitable choice for practical applications.
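
    The noisy logistic map portion of the scheme can be sketched as a keystream generator: iterate x ← r·x(1−x) plus a small additive noise term and quantize each state to a byte for pixel-value diffusion. All constants below (r, the noise amplitude eps, the seed) are illustrative assumptions, and the Clifford-attractor permutation stage is omitted.

```python
import numpy as np

def noisy_logistic_keystream(n, x0=0.3141, r=3.99, eps=1e-5, seed=1):
    """Keystream from a logistic map with small additive system noise
    (a sketch of the idea; parameters are illustrative, not the paper's)."""
    rng = np.random.default_rng(seed)
    x, ks = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x) + eps * rng.uniform(-1, 1)
        x = min(max(x, 1e-12), 1 - 1e-12)   # keep the orbit inside (0, 1)
        ks[i] = int(x * 256) % 256          # quantize each state to a byte
    return ks

# diffusion: XOR the flattened uint8 image pixels with the keystream, e.g.
# cipher = image.flatten() ^ noisy_logistic_keystream(image.size)
```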

  8. Using the generalized Radon transform for detection of curves in noisy images

    DEFF Research Database (Denmark)

    Toft, Peter Aundal

    1996-01-01

    In this paper the discrete generalized Radon transform will be investigated as a tool for detection of curves in noisy digital images. The discrete generalized Radon transform maps an image into a parameter domain, where curves following a specific parameterized curve form will correspond to a peak...
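
    A minimal sketch of the discrete generalized Radon transform: for each candidate parameter vector, sum the image intensities along the corresponding parameterized curve, so that a curve actually present in the image produces a peak in the parameter domain. The curve family and the parameter grid below are user-supplied assumptions.

```python
import numpy as np

def generalized_radon(img, curve, thetas):
    """Discrete generalized Radon transform (a minimal sketch): for each
    parameter vector theta, sum image intensities along y = curve(x, theta).
    Peaks in the output flag curves present in img."""
    H, W = img.shape
    out = np.zeros(len(thetas))
    xs = np.arange(W)
    for k, theta in enumerate(thetas):
        ys = np.rint(curve(xs, theta)).astype(int)
        ok = (ys >= 0) & (ys < H)            # keep samples inside the image
        out[k] = img[ys[ok], xs[ok]].sum()
    return out

# example: detect parabolas y = a*x**2 + b*x + c over a coarse grid
# thetas = [(a, b, c) for a in ... for b in ... for c in ...]
# scores = generalized_radon(img, lambda x, t: t[0]*x**2 + t[1]*x + t[2], thetas)
```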

  9. Accurate estimation of motion blur parameters in noisy remote sensing image

    Science.gov (United States)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects on the ground is one of the most common causes of remote sensing image degradation, and it seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the heavy noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  10. Enhancement of noisy EDX HRSTEM spectrum-images by combination of filtering and PCA.

    Science.gov (United States)

    Potapov, Pavel; Longo, Paolo; Okunishi, Eiji

    2017-05-01

    STEM spectrum-imaging with EDX signal collection is considered with a view to extracting the maximum information from very noisy data. It is emphasized that spectrum-images with a weak EDX signal often suffer from information loss in the course of PCA treatment; the loss occurs when the level of random noise exceeds a certain threshold. Weighted PCA, though potentially helpful in isolating meaningful variations from noise, might provoke a complete loss of information in the situation of a weak EDX signal. Filtering datasets prior to PCA can improve the situation and recover the lost information. In particular, Gaussian kernel filters are found to be efficient. A new filter useful in the case of sparse atomic-resolution EDX spectrum-images is suggested.
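
    The recipe "filter the dataset prior to PCA" can be sketched as spatial Gaussian smoothing of the spectrum-image followed by a truncated PCA reconstruction. The component count and kernel width below are illustrative; the paper's sparse-data filter and weighting choices are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_pca_denoise(cube, n_comp=5, sigma=1.0):
    """Pre-filter a noisy (ny, nx, n_energy) spectrum-image with a spatial
    Gaussian kernel, then truncate its PCA expansion (a sketch of the
    described recipe; n_comp and sigma must be tuned to the data)."""
    ny, nx, ne = cube.shape
    smoothed = gaussian_filter(cube, sigma=(sigma, sigma, 0))  # spatial axes only
    X = smoothed.reshape(ny * nx, ne)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_comp:] = 0.0                     # keep only the leading components
    return ((U * s) @ Vt + mean).reshape(ny, nx, ne)
```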

  11. Shape adaptive, robust iris feature extraction from noisy iris images.

    Science.gov (United States)

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step; yet depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of a shape adaptive wavelet transform and a shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds via a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, decreasing the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape adaptive Gabor-wavelet technique improves the recognition accuracy.

  12. Machine printed text and handwriting identification in noisy document images.

    Science.gov (United States)

    Zheng, Yefeng; Li, Huiping; Doermann, David

    2004-03-01

    In this paper, we address the problem of identifying text in noisy document images. We focus especially on segmenting and distinguishing between handwriting and machine printed text because: 1) handwriting in a document often indicates corrections, additions, or other supplemental information that should be treated differently from the main content; and 2) the segmentation and recognition techniques required for machine printed and handwritten text are significantly different. A novel aspect of our approach is that we treat noise as a separate class and model noise based on selected features. Trained Fisher classifiers are used to identify machine printed text and handwriting from noise, and we further exploit context to refine the classification. A Markov Random Field (MRF) based approach is used to model the geometrical structure of the printed text, handwriting, and noise to rectify misclassifications. Experimental results show that our approach is robust and can significantly improve page segmentation in noisy document collections.

  13. Algebraically approximate and noisy realization of discrete-time systems and digital images

    CERN Document Server

    Hasegawa, Yasumichi

    2009-01-01

    This monograph deals with approximation and noise cancellation of dynamical systems that include linear and nonlinear input/output relationships. It also deals with approximation and noise cancellation of two-dimensional arrays. It will be of special interest to researchers, engineers and graduate students who have specialized in filtering theory, system theory and digital images. The monograph is composed of two parts: Part I and Part II deal with approximation and noise cancellation of dynamical systems and digital images, respectively. From noiseless or noisy data, reduction will be

  14. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  15. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    Science.gov (United States)

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in single-image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes these known tools to propose a unified methodology toward the more complicated problem of superresolution restoration, in which an improved-resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of ML with the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
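
    For intuition, the ML part of such a framework reduces to least-squares over the stack of measured frames y_k = A_k x + n_k, where each A_k composes warp, blur and decimation. The sketch below is a plain steepest-descent solver under that assumption, with user-supplied forward/adjoint operator pairs; it omits the MAP prior, the POCS constraints and the paper's hybrid scheme.

```python
import numpy as np

def ml_superres(y_list, ops, x0, step=0.1, n_iter=50):
    """Steepest-descent ML restoration sketch. y_list: low-resolution
    frames; ops: list of (A, AT) callables, the forward model (warp +
    blur + decimation) and its adjoint for each frame; x0: initial
    high-resolution guess. A generic least-squares solver, not the
    paper's full ML/MAP/POCS methodology."""
    x = x0.copy()
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, (A, AT) in zip(y_list, ops):
            grad += AT(A(x) - y)      # gradient of 0.5 * ||A x - y||^2
        x -= step * grad
    return x
```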

  16. Smoothing of, and parameter estimation from, noisy biophysical recordings.

    Directory of Open Access Journals (Sweden)

    Quentin J M Huys

    2009-05-01

    Biophysically detailed models of single cells are difficult to fit to real data. Recent advances in imaging techniques allow simultaneous access to various intracellular variables, and these data can be used to significantly facilitate the modelling task. These data, however, are noisy, and current approaches to building biophysically detailed models are not designed to deal with this. We extend previous techniques to take the noisy nature of the measurements into account. Sequential Monte Carlo ("particle filtering") methods, in combination with a detailed biophysical description of a cell, are used for principled, model-based smoothing of noisy recording data. We also provide an alternative formulation of smoothing where the neural nonlinearities are estimated in a non-parametric manner. Biophysically important parameters of detailed models (such as channel densities, intercompartmental conductances, input resistances, and observation noise) are inferred automatically from noisy data via expectation-maximization. Overall, we find that model-based smoothing is a powerful, robust technique for smoothing noisy biophysical data and for inference of biophysical parameters in the face of recording noise.
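
    A bootstrap particle filter, the simplest sequential Monte Carlo scheme, conveys the smoothing idea: propagate a particle cloud through the dynamics, weight by the observation likelihood, resample. The sketch below assumes a scalar state with user-supplied transition f and observation h; the paper's detailed biophysical cell model and the EM parameter estimation are not included.

```python
import numpy as np

def bootstrap_particle_filter(obs, f, h, q_std, r_std, n=1000, seed=0):
    """Sequential Monte Carlo filtering of a noisy 1-D recording (a generic
    bootstrap filter, not the paper's full biophysical model).
    f: state transition function; h: observation function (both vectorized)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                  # initial particle cloud
    means = []
    for z in obs:
        x = f(x) + rng.normal(0.0, q_std, n)     # propagate particles
        w = np.exp(-0.5 * ((z - h(x)) / r_std) ** 2)
        w /= w.sum()                             # Gaussian likelihood weights
        means.append(np.sum(w * x))              # filtered state estimate
        x = x[rng.choice(n, n, p=w)]             # multinomial resampling
    return np.array(means)
```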

  17. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Min Beom Lee

    2017-12-01

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), the mobile iris challenge evaluation (MICHE) database, and the institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  18. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    Science.gov (United States)

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  19. Weighting of field heights for sharpness and noisiness

    Science.gov (United States)

    Keelan, Brian W.; Jin, Elaine W.

    2009-01-01

    Weighting of field heights is important when a single numerical value must be calculated to characterize an attribute's overall impact on perceived image quality. In this paper we report an observer study to derive the weighting of field heights for sharpness and noisiness. One hundred forty images were selected to represent a typical consumer photo space distribution. Fifty-three sample points were taken per image, representing field heights of 0, 14, 32, 42, 51, 58, 71, 76, 86 and 100%. Six observers participated in this study. The field weights derived in this report include both the effect of area versus field height (a purely objective, geometric factor) and the effect of the spatial distribution of image content that draws attention to, or masks, each of these image structure attributes. The results show that, relative to the geometrical area weights, sharpness weights were skewed to lower field heights, because sharpness-critical subject matter was often positioned relatively near the center of an image. Conversely, because noise can be masked by signal, noisiness-critical content (such as blue skies, skin tones, walls, etc.) tended to occur farther from the center of an image, causing the weights to be skewed to higher field heights.

  20. Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities.

    Science.gov (United States)

    Rodriguez, Jose A; Xu, Rui; Chen, Chien-Chun; Zou, Yunfei; Miao, Jianwei

    2013-04-01

    Coherent diffraction imaging (CDI) is a high-resolution lensless microscopy technique that has been applied to image a wide range of specimens using synchrotron radiation, X-ray free-electron lasers, high harmonic generation, soft X-ray lasers and electrons. Despite recent rapid advances, it remains a challenge to reconstruct fine features in weakly scattering objects such as biological specimens from noisy data. Here an effective iterative algorithm, termed oversampling smoothness (OSS), for phase retrieval of noisy diffraction intensities is presented. OSS exploits the correlation information among the pixels or voxels in the region outside of a support in real space. By properly applying spatial frequency filters to the pixels or voxels outside the support at different stages of the iterative process (i.e. a smoothness constraint), OSS finds a balance between the hybrid input-output (HIO) and error reduction (ER) algorithms to search for a global minimum in solution space, while reducing the oscillations in the reconstruction. Both numerical simulations with Poisson noise and experimental data from a biological cell indicate that OSS consistently outperforms the HIO, ER-HIO and noise robust (NR)-HIO algorithms at all noise levels in terms of accuracy and consistency of the reconstructions. It is expected that OSS will find application in the rapidly growing CDI field, as well as other disciplines where phase retrieval from noisy Fourier magnitudes is needed. The MATLAB (The MathWorks Inc., Natick, MA, USA) source code of the OSS algorithm is freely available from http://www.physics.ucla.edu/research/imaging.
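
    One OSS iteration can be sketched as an HIO update followed by low-pass filtering of the density outside the support. In the code below the Gaussian frequency filter, its width alpha and the feedback parameter beta are written from the abstract's description and standard HIO conventions; consult the released MATLAB source for the authors' exact schedule of filters over the stages of the iteration.

```python
import numpy as np

def oss_iteration(rho, mag, support, alpha, beta=0.9):
    """One OSS-style step (a condensed sketch): impose the measured Fourier
    magnitudes, apply the HIO real-space constraint, then smooth only the
    region outside the support. rho: current density; mag: measured
    magnitudes; support: boolean mask; alpha: filter width (cycles/pixel)."""
    F = np.fft.fft2(rho)
    F = mag * np.exp(1j * np.angle(F))       # impose measured magnitudes
    rho_p = np.fft.ifft2(F).real
    new = np.where(support, rho_p, rho - beta * rho_p)   # HIO update
    # smoothness constraint: low-pass filter the density outside the support
    ky, kx = np.meshgrid(np.fft.fftfreq(rho.shape[0]),
                         np.fft.fftfreq(rho.shape[1]), indexing="ij")
    W = np.exp(-0.5 * (kx**2 + ky**2) / alpha**2)
    smooth = np.fft.ifft2(np.fft.fft2(new) * W).real
    return np.where(support, new, smooth)
```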

  1. Active learning for noisy oracle via density power divergence.

    Science.gov (United States)

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework built on robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even in the presence of noisy labels within the data. Accordingly, we develop query-selection measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods.
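
    For reference, the density power divergence of Basu et al., which such robust measures minimize, between a data density g and a model density f is

    $$
    d_\beta(g, f) = \int \left\{ f(z)^{1+\beta} - \left(1 + \tfrac{1}{\beta}\right) g(z)\, f(z)^{\beta} + \tfrac{1}{\beta}\, g(z)^{1+\beta} \right\} dz, \qquad \beta > 0.
    $$

    It approaches the Kullback-Leibler divergence as β → 0, while larger β downweights outliers, which is what confers robustness to noisy labels.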

  2. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    Energy Technology Data Exchange (ETDEWEB)

    Wisotzky, E. [Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin (Germany); German Cancer Research Center (DKFZ), Heidelberg (Germany); Fast, M.F.; Nill, S. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; Oelfke, U. [The Royal Marsden NHS Foundation Trust, London (United Kingdom). Joint Dept. of Physics; German Cancer Research Center (DKFZ), Heidelberg (Germany)

    2015-09-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio by between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour, one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.
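
    Generic template matching of the kind used here can be sketched with normalized cross-correlation: slide the marker template over the kV frame and keep the location with the highest correlation score. The exhaustive search below is a readable but slow sketch, not the authors' implementation; real-time systems restrict the search to a window around the last detection, which is also where speed limits between frames naturally act.

```python
import numpy as np

def ncc_match(frame, template):
    """Locate a marker template in a noisy kV frame by normalized
    cross-correlation (NCC). Returns the top-left corner of the best
    match and its score in [-1, 1]."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best, pos = -np.inf, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            win = frame[i:i+th, j:j+tw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            score = (w * t).mean()          # NCC score for this offset
            if score > best:
                best, pos = score, (i, j)
    return pos, best
```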

  3. Automated marker tracking using noisy X-ray images degraded by the treatment beam

    International Nuclear Information System (INIS)

    Wisotzky, E.; Fast, M.F.; Nill, S.

    2015-01-01

    This study demonstrates the feasibility of automated marker tracking for the real-time detection of intrafractional target motion using noisy kilovoltage (kV) X-ray images degraded by the megavoltage (MV) treatment beam. The authors previously introduced the in-line imaging geometry, in which the flat-panel detector (FPD) is mounted directly underneath the treatment head of the linear accelerator. They found that the 121 kVp image quality was severely compromised by the 6 MV beam passing through the FPD at the same time. Specific MV-induced artefacts present a considerable challenge for automated marker detection algorithms. For this study, the authors developed a new imaging geometry by re-positioning the FPD and the X-ray tube. This improved the contrast-to-noise ratio by between 40% and 72% at the 1.2 mAs/image exposure setting. The increase in image quality clearly facilitates the quick and stable detection of motion with the aid of a template matching algorithm. The setup was tested with an anthropomorphic lung phantom (including an artificial lung tumour). In the tumour, one or three Calypso® beacons were embedded to achieve better contrast during MV radiation. For a single beacon, image acquisition and automated marker detection typically took around 76±6 ms. The success rate was found to be highly dependent on imaging dose and gantry angle. To eliminate possible false detections, the authors implemented a training phase prior to treatment beam irradiation and also introduced speed limits for motion between subsequent images.

  4. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

    Food image recognition is a key enabler for many smart home applications, such as the smart kitchen and the smart personal nutrition log. In order to improve living experience and life quality, smart home systems collect valuable insights into users' preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is also a major concern, since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario where only limited and noisy data are available, we first propose a superpixel based Linear Distance Coding (LDC) framework in which distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset where only 12 training images are available per category, our framework shows superior performance in both accuracy and robustness. In addition, to better model deformable food part distributions, we extend LDC's feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared to other low-level and mid-level approaches in the literature.

  5. Application of morphological associative memories and Fourier descriptors for classification of noisy subsurface signatures

    Science.gov (United States)

    Ortiz, Jorge L.; Parsiani, Hamed; Tolstoy, Leonid

    2004-02-01

    This paper presents a method for the recognition of noisy subsurface images using Morphological Associative Memories (MAM). MAM are a type of associative memory built on a new kind of neural network based on the algebraic system known as a semi-ring. The operations performed in this algebraic system are highly nonlinear, providing additional strength compared with other transformations. Morphological associative memories provide robust performance with noisy inputs. Two representations of morphological associative memories, called the M and W matrices, are used: the M associative memory provides robust association for input patterns corrupted by dilative random noise, while the W associative memory performs robust recognition for patterns corrupted by erosive random noise. The robust performance of MAM is used in combination with Fourier descriptors for the recognition of underground objects in Ground Penetrating Radar (GPR) images. Multiple 2-D GPR images of a site were made available by the NASA-SSC center. The buried objects in these images appear in the form of hyperbolas, which result from radar backscatter from the artifacts or objects. The Fourier descriptors of prototype hyperbola-like shapes and non-hyperbola shapes in the subsurface images are used to make these shapes scale-, shift- and rotation-invariant. Typical hyperbola-like and non-hyperbola shapes are used to compute the morphological associative memories. The trained MAMs are used to process other noisy images to detect the presence of these underground objects. The outputs from the MAM on the noisy patterns may equal the training prototypes, providing positive identification of the artifacts. The results are images with recognized hyperbolas that indicate the presence of buried artifacts. A model has been developed in MATLAB and results are presented.
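
    In the standard Ritter-Sussner formulation, the two memories are built from morphological (min-plus/max-plus) outer products of the training pairs and recalled with the dual products. A compact numpy sketch under that formulation is below; the Fourier-descriptor encoding of the patterns is assumed to happen beforehand, and nothing here reproduces the paper's MATLAB model.

```python
import numpy as np

def mam_train(X, Y):
    """Build the M and W morphological associative memories from column
    patterns X[:, k] -> Y[:, k]: M_ij = max_k (y_i^k - x_j^k),
    W_ij = min_k (y_i^k - x_j^k)."""
    diffs = Y[:, None, :] - X[None, :, :]       # y_i^k - x_j^k for all i, j, k
    return diffs.max(axis=2), diffs.min(axis=2)  # (M, W)

def mam_recall_M(M, x):
    """Recall with M (robust to dilative noise): y_i = min_j (M_ij + x_j)."""
    return (M + x[None, :]).min(axis=1)

def mam_recall_W(W, x):
    """Recall with W (robust to erosive noise): y_i = max_j (W_ij + x_j)."""
    return (W + x[None, :]).max(axis=1)
```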

  6. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation that introduces moment competition and weakly supervised information into the energy functional construction. Unlike region-based level set methods that use force competition, moment competition is adopted to drive the contour evolution. A so-called three-point labeling scheme is proposed to manually label three independent points (the weakly supervised information) on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, a moment can be constructed and incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and the weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods with respect to initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  7. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    Science.gov (United States)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images, which are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning: the dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existent details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
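
    Zero Component Analysis here refers to ZCA whitening of the training data. A standard formulation is sketched below; the regularizer eps and the exact place of whitening in the authors' training pipeline are assumptions on my part.

```python
import numpy as np

def zca_whiten(patches, eps=1e-5):
    """ZCA (zero component analysis) whitening of flattened training
    patches: decorrelate the data while staying close to the original
    pixel space (the symmetric whitening transform)."""
    X = patches - patches.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    w, V = np.linalg.eigh(cov)
    W_zca = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T   # symmetric whitening matrix
    return X @ W_zca
```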

  8. Quantitative myocardial perfusion PET parametric imaging at the voxel-level

    International Nuclear Information System (INIS)

    Mohy-ud-Din, Hassan; Rahmim, Arman; Lodge, Martin A

    2015-01-01

    Quantitative myocardial perfusion (MP) PET has the potential to enhance detection of the early stages of atherosclerosis or microvascular dysfunction, characterization of the flow-limiting effects of coronary artery disease (CAD), and identification of balanced reduction of flow due to multivessel stenosis. We aim to enable quantitative MP-PET at the individual voxel level, which has the potential to allow enhanced visualization and quantification of myocardial blood flow (MBF) and flow reserve (MFR) as computed from uptake parametric images. This framework is especially challenging for the 82Rb radiotracer: the short half-life enables fast serial imaging and high patient throughput, yet the acquired dynamic PET images suffer from high noise levels, introducing large variability in uptake parametric images and, therefore, in the estimates of MBF and MFR. Robust estimation requires substantial post-smoothing of noisy data, degrading valuable functional information of physiological and pathological importance. We present a feasible and robust approach to generate parametric images at the voxel level that substantially reduces noise without significant loss of spatial resolution. The proposed methodology, denoted physiological clustering, makes use of the functional similarity of voxels to penalize deviation of voxel kinetics from physiological partners. The results were validated using extensive simulations (with transmural and non-transmural perfusion defects) and clinical studies. Compared to post-smoothing, physiological clustering exhibited enhanced quantitative noise-versus-bias performance as well as superior recovery of perfusion defects (as quantified by CNR) with minimal increase in bias. Overall, parametric images obtained from the proposed methodology were robust in the presence of the high noise levels manifested in the voxel time-activity curves.

  9. Speech Emotion Recognition Based on Power Normalized Cepstral Coefficients in Noisy Conditions

    Directory of Open Access Journals (Sweden)

    M. Bashirpour

    2016-09-01

    Automatic recognition of speech emotional states in noisy conditions has become an important research topic in the emotional speech recognition area in recent years. This paper considers the recognition of emotional states via speech in real environments. For this task, we employ power normalized cepstral coefficients (PNCC) in a speech emotion recognition system. We investigate their performance in emotion recognition using clean and noisy speech materials and compare it with the performance of the well-known MFCC, LPCC, RASTA-PLP, and TEMFCC features. Speech samples are extracted from the Berlin emotional speech database (Emo-DB) and the Persian emotional speech database (Persian ESD), which are corrupted with 4 different noise types under various SNR levels. The experiments are conducted in clean-train/noisy-test scenarios to simulate practical conditions with noise sources. Simulation results show that higher recognition rates are achieved for PNCC than for the conventional features under noisy conditions.

  10. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    Science.gov (United States)

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

    Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance details and the layered structures of a human retina image, we propose a collaborative shock filtering scheme for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and the denoised image is then sharpened by a shock-type filtering for edge and detail enhancement. For dim OCT images, a gamma transformation is first used to bring the images into a proper gray-level range, improving image contrast for the detection of tiny lesions. The proposed method, which integrates image smoothing and sharpening, obtains better visual results in experiments.
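
    The gamma transformation used for dim OCT images is a one-line pointwise enhancement; a sketch with an illustrative gamma < 1 (which brightens dark tissue) is given below. The collaborative and shock filtering stages are not reproduced here.

```python
import numpy as np

def gamma_enhance(img, gamma=0.6):
    """Gamma transformation to lift a dim OCT image into a workable gray
    range before denoising/sharpening. gamma < 1 brightens; the value
    here is illustrative, not from the paper."""
    x = img.astype(float)
    x = (x - x.min()) / max(np.ptp(x), 1e-12)   # normalize to [0, 1]
    return x ** gamma
```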

  11. Analysis of gene expression levels in individual bacterial cells without image segmentation

    International Nuclear Information System (INIS)

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J.

    2012-01-01

    Highlights: ► We present a method for extracting gene expression data from images of bacterial cells. ► The method does not employ cell segmentation and does not require high magnification. ► Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. ► We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.

  12. Security with noisy data (Extended abstract of invited talk)

    NARCIS (Netherlands)

    Skoric, B.; Böhme, R.; Fong, P.W.L.; Safavi-Naini, R.

    2010-01-01

    An overview was given of security applications where noisy data plays a substantial role. Secure Sketches and Fuzzy Extractors were discussed at tutorial level, and two simple Fuzzy Extractor constructions were shown. One of the latest developments was presented: quantum-readout PUFs.

  13. Image denoising by exploring external and internal correlations.

    Science.gov (United States)

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme that explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and from web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph based optimization method to improve patch matching accuracy in external denoising; the internal denoising is frequency truncation on the internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements; for example, it achieves a >2 dB gain compared with BM3D over a wide range of noise levels.

  14. The Noisiness of Low Frequency Bands of Noise

    Science.gov (United States)

    Lawton, B. W.

    1975-01-01

    The relative noisiness of low-frequency 1/3-octave bands of noise was examined. The frequency range investigated was bounded by the bands centered at 25 and 200 Hz, with intensities ranging from 50 to 95 dB SPL. Thirty-two subjects used a method-of-adjustment technique, producing comparison band intensities as noisy as 100 and 200 Hz standard bands at 60 and 72 dB. The work resulted in contours of equal noisiness for 1/3-octave bands, ranging in intensity from approximately 58 to 86 dB SPL. These contours were compared with the standard equal noisiness contours; in the region of overlap, between 50 and 200 Hz, the agreement was good.

  15. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can be modulated in various ways. It is well known that coupling in a population can lower the variability of the entire network: the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual process, even when it is coupled to a noisier one. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula, assuming weak noise and coupling, for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher-dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.

  16. Analysis of gene expression levels in individual bacterial cells without image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, In Hae; Son, Minjun [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States); Hagen, Stephen J., E-mail: sjhagen@ufl.edu [Physics Department, University of Florida, P.O. Box 118440, Gainesville, FL 32611-8440 (United States)

    2012-05-11

    Highlights: ► We present a method for extracting gene expression data from images of bacterial cells. ► The method does not employ cell segmentation and does not require high magnification. ► Fluorescence and phase contrast images of the cells are correlated through the physics of phase contrast. ► We demonstrate the method by characterizing noisy expression of comX in Streptococcus mutans. -- Abstract: Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However, segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly.

  17. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We propose a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extract a number of raw patches from the given noisy image and take the smallest eigenvalue of the covariance matrix of the raw patches as a preliminary estimate of the noise level. Next, the final estimate is obtained directly with a nonlinear mapping (rectification) function trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed algorithm reliably infers the noise level and performs robustly over a wide range of image contents and noise levels, offering a good compromise between speed and accuracy.
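
    The first step of the scheme is directly implementable: under a patch low-rank assumption, the smallest eigenvalue of the covariance of randomly sampled raw patches approximates the noise variance. The sketch below stops there; the trained nonlinear rectification of the second step is not reproduced, and the patch size and sample count are illustrative.

```python
import numpy as np

def pca_noise_level(img, patch=7, n_patches=5000, seed=0):
    """Preliminary PCA-based noise level estimate: the smallest eigenvalue
    of the covariance matrix of random raw patches approximates the noise
    variance. Returns an estimate of the noise standard deviation."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    ii = rng.integers(0, H - patch, n_patches)
    jj = rng.integers(0, W - patch, n_patches)
    P = np.stack([img[i:i+patch, j:j+patch].ravel()
                  for i, j in zip(ii, jj)]).astype(float)
    cov = np.cov(P, rowvar=False)
    return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))  # smallest eigenvalue
```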

  18. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam

    2015-12-15

    It is known that global games with noisy sharing of information do not admit a certain type of threshold policy [1]. Motivated by this result, we investigate the existence of threshold-type policies in global games with noisy sharing of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it must be a solution to a fixed-point equation. We then show that, for a sufficiently noisy environment, the functional fixed-point equation leads to a contraction mapping, and hence its iterations converge to a unique continuous threshold policy.

  19. The Noisiness of Low-Frequency One-Third Octave Bands of Noise. M.S. Thesis - Southampton Univ.

    Science.gov (United States)

    Lawton, B. W.

    1975-01-01

    This study examined the relative noisiness of low-frequency one-third octave bands of noise bounded by the bands centered at 25 Hz and 200 Hz, with intensities ranging from 50 dB sound pressure level (SPL) to 95 dB SPL. The thirty-two subjects used a method-of-adjustment technique, producing comparison-band intensities as noisy as standard bands centered at 100 Hz and 200 Hz with intensities of 60 dB SPL and 72 dB SPL. Four contours of equal noisiness were developed for one-third octave bands, extending down to 25 Hz and ranging in intensity from approximately 58 dB SPL to 86 dB SPL. These curves were compared with the contours of equal noisiness of Kryter and Pearsons. In the region of overlap (between 50 Hz and 200 Hz) the agreement was good.

  20. Noise Measurement and Frequency Analysis of Commercially Available Noisy Toys

    Directory of Open Access Journals (Sweden)

    Shohreh Jalaie

    2005-06-01

    Objective: Noise measurement and frequency analysis of commercially available noisy toys were the main purposes of this study. Materials and Methods: 181 noisy toys commonly found in toy stores in different zones of Tehran were selected and categorized into 10 groups. Noise measurements were made at 2, 25, and 50 cm from the toys in dBA. The noisiest toy of each group was frequency analyzed in octave bands. Results: The highest and the lowest intensity levels belonged to the gun (mean = 112 dBA, range 100-127 dBA) and to the rattle-box (mean = 84 dBA, range 74-95 dBA), respectively. Noise intensity levels decreased significantly with increasing distance, except for two toys. Noise frequency analysis indicated energy within the effective hearing frequencies, with most of the toys' energy in the middle- and high-frequency region. Conclusion: As the intensity levels of the toys are considerable, mostly more than 90 dBA, and their energy lies in the middle- and high-frequency region, noisy toys should be considered a potential cause of hearing impairment.

  1. Noisy-or classifier

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2006-01-01

    Vol. 21, No. 3 (2006), pp. 381-389 ISSN 0884-8173 R&D Projects: GA ČR(CZ) GA201/04/0393 Institutional research plan: CEZ:AV0Z10750506 Keywords: automatic classification * probabilistic models * EM algorithm * noisy-or model Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.429, year: 2006
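
    For reference, the noisy-or model underlying this classifier gives the probability of an effect Y given binary parents X_1, ..., X_n as

    $$
    P(Y = 1 \mid x_1, \ldots, x_n) \;=\; 1 - (1 - p_0) \prod_{i \,:\, x_i = 1} (1 - p_i),
    $$

    where p_i is the probability that an active parent i alone triggers Y and p_0 is a leak probability; in a classifier of this kind the p_i are typically fitted with the EM algorithm mentioned among the keywords.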

  2. Robust speaker recognition in noisy environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book discusses speaker recognition methods that deal with realistic, variable noisy environments: authentication systems that are robust to noisy background environments, function in real time, and can be incorporated in mobile devices. The book focuses on different approaches to enhance the accuracy of speaker recognition in the presence of varying background environments. The authors examine: (a) feature compensation using multiple background models, (b) feature mapping using data-driven stochastic models, (c) design of a supervector-based GMM-SVM framework for robust speaker recognition, (d) total variability modeling (i-vectors) in a discriminative framework, and (e) a boosting method to fuse evidence from multiple SVM models.

  3. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    Science.gov (United States)

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations are manually checked by curators; the others are electronically inferred. Although quality-control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, how to identify noisy annotations is an important but seldom studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA applies sparse representation on the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of sparse representation coefficients to measure the semantic similarity between genes. It then preliminarily predicts noisy annotations of a gene based on aggregated votes from the semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived over different periods, weights entries of the association matrix via the estimated ratios, and propagates weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and the aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Codes and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA .
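
    The first step of the pipeline, using sparse representation coefficients as gene-gene semantic similarities, can be sketched as follows. This is a simplified stand-in using an off-the-shelf Lasso solver on a toy association matrix, not the authors' released code (which is at the URL above).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = (rng.random((200, 30)) < 0.1).astype(float)  # toy term-by-gene association matrix

def sparse_similarity(A, g, alpha=0.01):
    """Represent gene g's annotation profile as a sparse combination of the
    other genes' profiles; the coefficients act as semantic similarities."""
    y = A[:, g]
    X = np.delete(A, g, axis=1)
    coef = Lasso(alpha=alpha, positive=True, max_iter=10000).fit(X, y).coef_
    return np.insert(coef, g, 0.0)  # similarity of g to every gene (0 to itself)

sim = sparse_similarity(A, g=0)
print("top neighbours of gene 0:", np.argsort(sim)[::-1][:5])
```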

  4. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz; Shamma, Jeff S.

    2014-01-01

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  5. Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images

    Science.gov (United States)

    Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.

    2012-02-01

    Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate multi-level A/V tree representations of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily designed around arc skeleton computation followed by a tree-construction-based topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.

  6. Classification of cryo electron microscopy images, noisy tomographic images recorded with unknown projection directions, by simultaneously estimating reconstructions and application to an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22

    Science.gov (United States)

    Lee, Junghoon; Zheng, Yili; Yin, Zhye; Doerschuk, Peter C.; Johnson, John E.

    2010-08-01

    Cryo electron microscopy is frequently used on biological specimens that show a mixture of different types of object. Because the electron beam rapidly destroys the specimen, the beam current is minimized which leads to noisy images (SNR substantially less than 1) and only one projection image per object (with an unknown projection direction) is collected. For situations where the objects can reasonably be described as coming from a finite set of classes, an approach based on joint maximum likelihood estimation of the reconstruction of each class and then use of the reconstructions to label the class of each image is described and demonstrated on two challenging problems: an assembly mutant of Cowpea Chlorotic Mottle Virus and portals of the bacteriophage P22.

  7. Global games with noisy sharing of information

    KAUST Repository

    Touri, Behrouz

    2014-12-15

    We provide a framework for the study of global games with noisy sharing of information. In contrast to the previous works where it is shown that an intuitive threshold policy is an equilibrium for such games, we show that noisy sharing of information leads to non-existence of such an equilibrium. We also investigate the group best-response dynamics of two groups of agents sharing the same information to threshold policies based on each group's observation and show the convergence of such dynamics.

  8. Simulation of noisy dynamical system by Deep Learning

    Science.gov (United States)

    Yeo, Kyongmin

    2017-11-01

    Deep learning has attracted huge attention due to its powerful representation capability. However, most studies on deep learning have focused on visual analytics or language modeling, and the capability of deep learning for modeling dynamical systems is not well understood. In this study, we use a recurrent neural network to model noisy nonlinear dynamical systems. In particular, we use a long short-term memory (LSTM) network, which constructs an internal nonlinear dynamical system. We propose a cross-entropy loss with spatial ridge regularization to learn a non-stationary conditional probability distribution from a noisy nonlinear dynamical system. A Monte Carlo procedure to perform time-marching simulations using the LSTM is presented. The behavior of the LSTM is studied using a noisy, forced Van der Pol oscillator and the Ikeda equation.
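
    For readers who want to reproduce the test bed rather than the network itself, a noisy, forced Van der Pol oscillator is easy to simulate with the Euler-Maruyama scheme; the parameter values below are illustrative assumptions, not those of the study.

```python
import numpy as np

def noisy_van_der_pol(mu=2.0, A=1.2, omega=0.6, sigma=0.3,
                      dt=1e-3, n_steps=50000, seed=0):
    """Euler-Maruyama simulation of a forced Van der Pol oscillator with additive noise:
       x'' - mu*(1 - x^2)*x' + x = A*cos(omega*t) + sigma*dW/dt"""
    rng = np.random.default_rng(seed)
    x, v = np.empty(n_steps), np.empty(n_steps)
    x[0], v[0] = 0.1, 0.0
    for k in range(n_steps - 1):
        t = k * dt
        x[k+1] = x[k] + v[k] * dt
        v[k+1] = v[k] + (mu * (1 - x[k]**2) * v[k] - x[k] + A * np.cos(omega * t)) * dt \
                 + sigma * np.sqrt(dt) * rng.standard_normal()
    return x, v

x, v = noisy_van_der_pol()
print(x[-5:])
```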

  9. Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2016-12-01

    Full Text Available In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image. Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, which is realistic with the Poisson noise model obtained by a CMOS or CCD device in a digital camera.
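
    The core idea, reading the noise variance off the smallest eigenvalue of the patch covariance matrix, can be condensed into a few lines. The sketch below omits the iterative low-variance patch selection that the article analyzes, so it is only reliable on images whose patches live in a low-dimensional subspace.

```python
import numpy as np

def estimate_noise_std(img, patch=7):
    """Estimate additive Gaussian noise sigma from the smallest eigenvalue of the
    covariance matrix of overlapping patches (simplified: no iterative selection)."""
    H, W = img.shape
    rows = []
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            rows.append(img[i:i+patch, j:j+patch].ravel())
    X = np.asarray(rows)
    cov = np.cov(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)       # ascending order
    return np.sqrt(max(eigvals[0], 0.0))    # smallest eigenvalue ~ noise variance

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth synthetic image
noisy = clean + rng.normal(0, 10, clean.shape)
print(estimate_noise_std(noisy))   # should be roughly 10
```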

  10. Image Denoising Using Singular Value Difference in the Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Min Wang

    2018-01-01

    Full Text Available Singular value (SV) difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three directions of the wavelet transform, and the noise variance of a new image is estimated using the diagonal part. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.
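
    A simplified sketch of the pipeline follows, assuming the pywt package. The learned, noise-dependent SV-difference model is replaced here by a single hand-picked constant delta, so this illustrates the structure (DWT, SV shrinkage of the detail subbands, inverse DWT) rather than the paper's exact estimator.

```python
import numpy as np
import pywt

def svd_shrink(band, delta):
    """Subtract an (assumed known) singular-value difference delta, flooring at zero."""
    U, s, Vt = np.linalg.svd(band, full_matrices=False)
    return (U * np.maximum(s - delta, 0.0)) @ Vt

def denoise(img, delta=50.0, wavelet='db4'):
    """Single-level 2-D DWT; the three detail subbands are denoised by SV shrinkage."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    cH, cV, cD = (svd_shrink(b, delta) for b in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0, 15, clean.shape)
print(np.std(denoise(noisy) - clean), "vs", np.std(noisy - clean))
```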

  11. Higher-Order Statistics for the Detection of Small Objects in a Noisy Background Application on Sonar Imaging

    Directory of Open Access Journals (Sweden)

    M. Amate

    2007-01-01

    Full Text Available An original algorithm for the detection of small objects in a noisy background is proposed. Its application to underwater object detection by sonar imaging is addressed. This new method is based on the use of higher-order statistics (HOS) that are locally estimated on the images. The proposed algorithm is divided into two steps. In the first step, HOS (skewness and kurtosis) are estimated locally using a square sliding computation window. Small deterministic objects have different statistical properties from the background; they are thus highlighted. The influence of the signal-to-noise ratio (SNR) on the results is studied in the case of Gaussian noise. Mathematical expressions of the estimators and of the expected performances are derived and experimentally confirmed. In the second step, the results are focused by a matched filter using a theoretical model. This enables the precise localization of the regions of interest. The proposed method generalizes to other statistical distributions, and we derive the theoretical expressions of the HOS estimators in the case of a Weibull distribution, both when only noise is present and when a small deterministic object is present within the filtering window. This enables the application of the proposed technique to the processing of synthetic aperture sonar data containing underwater mines whose echoes have to be detected and located. Results on real data sets are presented and quantitatively evaluated using receiver operating characteristic (ROC) curves.
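
    The first step, local estimation of skewness and kurtosis over a square sliding window, can be sketched directly with scipy; the window size and object amplitude below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def local_hos(img, win=9):
    """Slide a win x win window over the image and estimate skewness and kurtosis
    locally; small deterministic objects perturb the local statistics of the noise."""
    H, W = img.shape
    s = np.zeros((H - win + 1, W - win + 1))
    k = np.zeros_like(s)
    for i in range(s.shape[0]):
        for j in range(s.shape[1]):
            block = img[i:i+win, j:j+win].ravel()
            s[i, j] = skew(block)
            k[i, j] = kurtosis(block)   # excess kurtosis: ~0 for Gaussian noise
    return s, k

rng = np.random.default_rng(2)
img = rng.normal(0, 1, (64, 64))
img[30:33, 30:33] += 4.0            # small bright object buried in noise
s, k = local_hos(img)
print("max |kurtosis| at:", np.unravel_index(np.abs(k).argmax(), k.shape))
```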

  12. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    Science.gov (United States)

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in the visual cortex during development. Copyright © 2017 the authors.
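
    The two spiking-noise descriptors mentioned here, trial-to-trial count variability and inter-spike-interval irregularity, are conventionally quantified by the Fano factor and the coefficient of variation. A minimal sketch on synthetic Poisson-like data (not the study's recordings):

```python
import numpy as np

def spiking_noise_measures(spike_counts, isis):
    """Two standard descriptors of noisy spiking:
    - Fano factor: trial-to-trial variability of spike counts (1 for a Poisson process)
    - CV of inter-spike intervals: spiking irregularity (1 for a Poisson process)"""
    fano = np.var(spike_counts, ddof=1) / np.mean(spike_counts)
    cv = np.std(isis, ddof=1) / np.mean(isis)
    return fano, cv

rng = np.random.default_rng(3)
counts = rng.poisson(20, size=100)         # hypothetical spike counts over 100 trials
isis = rng.exponential(0.05, size=2000)    # hypothetical inter-spike intervals (s)
print(spiking_noise_measures(counts, isis))  # both should be near 1
```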

  13. Noisy quantum game

    International Nuclear Information System (INIS)

    Chen Jingling; Kwek, L.C.; Oh, C.H.

    2002-01-01

    In a recent paper [D. A. Meyer, Phys. Rev. Lett. 82, 1052 (1999)], it has been shown that a classical zero-sum strategic game can become a winning quantum game for the player with a quantum device. Nevertheless, it is well known that quantum systems easily decohere in noisy environments. In this paper, we show that if the handicapped player with classical means can delay his action for a sufficiently long time, the quantum version reverts to the classical zero-sum game under decoherence

  14. Gravel Image Segmentation in Noisy Background Based on Partial Entropy Method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Because of the wide variation in gray levels and particle dimensions, the presence of many small gravel objects in the background, and corruption of the image by noise, it is difficult to segment gravel objects. In this paper, we develop a partial entropy method and use it to segment gravel objects successfully. We present the entropy principles and their calculation methods. Moreover, we use the minimum-entropy-error criterion to select a segmentation threshold automatically. We introduce a filtering method based on mathematical morphology. Segmentation experiments performed with different window dimensions on a group of gravel images demonstrate that this method has a high segmentation rate and low noise sensitivity.
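
    As a point of comparison for the entropy criterion, here is a sketch of classical maximum-entropy (Kapur-style) threshold selection from a gray-level histogram; the paper's partial-entropy variant differs in detail, so treat this only as the general flavor of entropy-based thresholding.

```python
import numpy as np

def entropy_threshold(img):
    """Pick the threshold that maximizes the summed entropies of the two classes
    induced by the split of the gray-level histogram (Kapur-style criterion)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(80, 10, 2000), rng.normal(170, 12, 2000)]).clip(0, 255)
print(entropy_threshold(img))   # should land between the two modes
```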

  15. Authentication over Noisy Channels

    OpenAIRE

    Lai, Lifeng; Gamal, Hesham El; Poor, H. Vincent

    2008-01-01

    In this work, message authentication over noisy channels is studied. The model developed in this paper is the authentication theory counterpart of Wyner's wiretap channel model. Two types of opponent attacks, namely impersonation attacks and substitution attacks, are investigated for both single message and multiple message authentication scenarios. For each scenario, information theoretic lower and upper bounds on the opponent's success probability are derived. Remarkably, in both scenarios,...

  16. On covariance structure in noisy, big data

    Science.gov (United States)

    Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.

    2013-09-01

    Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.

  17. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for denoising images corrupted by additive white Gaussian noise (AWGN) using a quadratic gray-level function. Meanwhile, a quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding, and in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than the standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, the enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  18. Noisy time-dependent spectra

    International Nuclear Information System (INIS)

    Shore, B.W.; Eberly, J.H.

    1983-01-01

    The definition of a time-dependent spectrum registered by an idealized spectrometer responding to a time-varying electromagnetic field as proposed by Eberly and Wodkiewicz and subsequently applied to the spectrum of laser-induced fluorescence by Eberly, Kunasz, and Wodkiewicz is here extended to allow a stochastically fluctuating (interruption model) environment: we provide an algorithm for numerical determination of the time-dependent fluorescence spectrum of an atom subject to excitation by an intense noisy laser and interruptive relaxation

  19. Development of wireless intercom for work of excessive noisy places

    International Nuclear Information System (INIS)

    Shiba, Kazuo; Yamashita, Shinichi; Fujita, Tsuneaki; Yamazaki, Katsuyoshi; Sakai, Manabu; Nakanishi, Tomokazu.

    1996-01-01

    Nuclear power stations are often excessively noisy working environments, where conversation and verbal communication are hampered to the extreme. We have developed a small wireless intercom for this and other extremely noisy environments. In the first step of this study, we studied work environment noise and vibration. Results formed the basis of intercom system development. In addition, we have examined the possibilities of optical and microwave intercom systems. (author)

  20. Do Quiet Areas Afford Greater Health-Related Quality of Life than Noisy Areas?

    Directory of Open Access Journals (Sweden)

    Kim N. Dirks

    2013-03-01

    Full Text Available People typically choose to live in quiet areas in order to safeguard their health and wellbeing. However, the benefits of living in quiet areas are relatively understudied compared to the burdens associated with living in noisy areas. Additionally, research is increasingly focusing on the relationship between the human response to noise and measures of health and wellbeing, complementing traditional dose-response approaches, and further elucidating the impact of noise and health by incorporating human factors as mediators and moderators. To further explore the benefits of living in quiet areas, we compared the results of health-related quality of life (HRQOL questionnaire datasets collected from households in localities differentiated by their soundscapes and population density: noisy city, quiet city, quiet rural, and noisy rural. The dose-response relationships between noise annoyance and HRQOL measures indicated an inverse relationship between the two. Additionally, quiet areas were found to have higher mean HRQOL domain scores than noisy areas. This research further supports the protection of quiet locales and ongoing noise abatement in noisy areas.

  1. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in the L²-norm are derived. A number of numerical examples are provided to show the computational performance of the method, with the regularization parameters selected by different strategies.

  2. Evaluation of the autoregression time-series model for analysis of a noisy signal

    International Nuclear Information System (INIS)

    Allen, J.W.

    1977-01-01

    The autoregression (AR) time-series model of a continuous noisy signal was statistically evaluated to determine quantitatively the uncertainties of the model order, the model parameters, and the model's power spectral density (PSD). The result of such a statistical evaluation enables an experimenter to decide whether an AR model can adequately represent a continuous noisy signal and be consistent with the signal's frequency spectrum, and whether it can be used for on-line monitoring. Although evaluations of other types of signals have been reported in the literature, no direct reference has been found to the AR model's uncertainties for continuous noisy signals; yet the evaluation is necessary to decide the usefulness of AR models of typical reactor signals (e.g., neutron detector output or thermocouple output) and the potential of AR models for on-line monitoring applications. AR and other time-series models for noisy data representation are being investigated by others, since such models require fewer parameters than the traditional PSD model. For this study, the AR model was selected for its simplicity and conduciveness to uncertainty analysis, and controlled laboratory bench signals were used for continuous noisy data. (author)

  3. Multiple Equilibria in Noisy Rational Expectations Economies

    DEFF Research Database (Denmark)

    Palvolgyi, Domotor; Venter, Gyuri

    This paper studies equilibrium uniqueness in standard noisy rational expectations economies with asymmetric or differential information a la Grossman and Stiglitz (1980) and Hellwig (1980). We show that the standard linear equilibrium of Grossman and Stiglitz (1980) is the unique equilibrium...

  4. Generalizations of the noisy-or model

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2015-01-01

    Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf

  5. Continuous Variables Quantum Information in Noisy Environments

    DEFF Research Database (Denmark)

    Berni, Adriano

    safe from the detrimental effects of noise and losses. In the present work we investigate continuous variables Gaussian quantum information in noisy environments, studying the effects of various noise sources in the cases of a quantum metrological task, an error correction scheme and discord...

  6. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppress noise in an image is conducted by applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k is implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimate of each noisy pixel is obtained by local averaging. The essential...
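
    The described filter is straightforward to prototype. A minimal sketch, assuming a reflect-padded k×k window and replacement of IQR outliers by the mean of the inlier neighbours:

```python
import numpy as np

def iqr_filter(img, k=3):
    """Treat pixels outside the interquartile range of their k x k neighbourhood as
    noisy and replace them with the local mean of the inlier neighbours."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = img.astype(float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i+k, j:j+k].ravel()
            q1, q3 = np.percentile(block, [25, 75])
            if not (q1 <= img[i, j] <= q3):
                inliers = block[(block >= q1) & (block <= q3)]
                out[i, j] = inliers.mean()
    return out

rng = np.random.default_rng(5)
img = np.full((32, 32), 100.0)
mask = rng.random(img.shape) < 0.05
img[mask] = rng.choice([0.0, 255.0], size=mask.sum())   # salt-and-pepper noise
print(np.abs(iqr_filter(img) - 100.0).max())            # impulses removed
```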

  7. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities for segments, an assumption that is implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.

  8. Behavioral changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish.

    Science.gov (United States)

    Neo, Yik Yaw; Parie, Lisa; Bakker, Frederique; Snelderwaard, Peter; Tudorache, Christian; Schaaf, Marcel; Slabbekoorn, Hans

    2015-01-01

    Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioral patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behavior of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds at relatively low sound levels and adjust their behavior to them. Relatively high sound levels were disturbing at least at the onset, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are unable to avoid noisy areas or just not bothered. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behavior in any captive condition.

  9. Detection of electrophysiology catheters in noisy fluoroscopy images.

    Science.gov (United States)

    Franken, Erik; Rongen, Peter; van Almsick, Markus; ter Haar Romeny, Bart

    2006-01-01

    Cardiac catheter ablation is a minimally invasive medical procedure to treat patients with heart rhythm disorders. It is useful to know the positions of the catheters and electrodes during the intervention, e.g., for the automation of cardiac mapping. Our goal is therefore to develop a robust image analysis method that can detect the catheters in X-ray fluoroscopy images. Our method uses steerable tensor voting in combination with a catheter-specific multi-step extraction algorithm. The evaluation on clinical fluoroscopy images shows that the extraction of the catheter tip in particular is successful and that the use of tensor voting accounts for a large increase in performance.

  10. Eavesdropping on the Boström-Felbinger Communication Protocol in Noisy Quantum Channel

    OpenAIRE

    Cai, Qing-yu

    2004-01-01

    We show an eavesdropping scheme on the Boström-Felbinger communication protocol (called the ping-pong protocol) [Phys. Rev. Lett. 89, 187902 (2002)] in an ideal quantum channel. A measurement attack can be used to perfectly eavesdrop on Alice's information instead of a most general quantum operation attack. In a noisy quantum channel, direct communication is forbidden. We present a quantum key distribution protocol based on the ping-pong protocol, which can be used in a low-noise quantum channel.

  11. Solving for the capacity of a noisy lossy bosonic channel via the master equation

    International Nuclear Information System (INIS)

    Qin Tao; Zhao Meisheng; Zhang Yongde

    2006-01-01

    We discuss the noisy lossy bosonic channel by exploiting master equations. The capacity of the noisy lossy bosonic channel and the criterion for the optimal capacities are derived. Consequently, we verify that master equations can be a tool to study bosonic channels

  12. Multiwavelength Absolute Phase Retrieval from Noisy Diffractive Patterns: Wavelength Multiplexing Algorithm

    Directory of Open Access Journals (Sweden)

    Vladimir Katkovnik

    2018-05-01

    Full Text Available We study the problem of multiwavelength absolute phase retrieval from noisy diffraction patterns. The system is lensless, with multiwavelength coherent input light beams and random phase masks applied for wavefront modulation. The light beams are formed by light sources radiating all wavelengths simultaneously. A sensor equipped with a Color Filter Array (CFA) is used for spectral measurement registration. The developed algorithm, targeting optimal phase retrieval from noisy observations, is based on the maximum-likelihood technique. The algorithm is specified for Poissonian and Gaussian noise distributions. One of the key elements of the algorithm is an original sparse modeling of the multiwavelength complex-valued wavefronts based on complex-domain block-matching 3D filtering. The presented numerical experiments are restricted to noisy Poissonian observations. They demonstrate that the developed algorithm leads to effective solutions explicitly using the sparsity for noise suppression and enabling accurate reconstruction of absolute phase of high dynamic range.

  13. Global optimization based on noisy evaluations: An empirical study of two statistical approaches

    International Nuclear Information System (INIS)

    Vazquez, Emmanuel; Villemonteix, Julien; Sidorkiewicz, Maryan; Walter, Eric

    2008-01-01

    The optimization of the output of complex computer codes often has to be achieved with a small budget of evaluations. Algorithms dedicated to such problems have been developed and compared, such as the Expected Improvement (EI) algorithm and the Informational Approach to Global Optimization (IAGO). However, the influence of noisy evaluation results on the outcome of these comparisons has often been neglected, despite its frequent appearance in industrial problems. In this paper, empirical convergence rates for EI and IAGO are compared when an additive noise corrupts the result of an evaluation. IAGO appears more efficient than EI and various modifications of EI designed to deal with noisy evaluations. Keywords: global optimization; computer simulations; kriging; Gaussian process; noisy evaluations.
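
    For reference, the EI criterion itself has a closed form under a Gaussian posterior. A minimal sketch for a minimization problem, with made-up posterior means and standard deviations standing in for a kriging model:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - F, 0)] with F ~ N(mu, sigma^2).
    Closed form: (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical Gaussian-process posterior over 5 candidate points
mu = np.array([0.9, 0.4, 0.7, 1.1, 0.5])
sigma = np.array([0.05, 0.30, 0.10, 0.02, 0.25])
print(expected_improvement(mu, sigma, f_best=0.6).argmax())  # point to evaluate next
```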

  14. Varying ultrasound power level to distinguish surgical instruments and tissue.

    Science.gov (United States)

    Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E

    2018-03-01

    We investigate a new framework for surgical instrument detection based on power-varying ultrasound images with simple and efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identified the instrument with an estimated optimal power level and by comparing pixel values across images acquired at varying transducer power levels. The proposed framework exploits the physics of the ultrasound imaging system by varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image guidance is motivated by our observations that ultrasound imaging at different power levels exhibits different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus can be utilized together with images from normal transducer power levels to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments in phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images by power adjustments.

  15. Noisy Oscillations in the Actin Cytoskeleton of Chemotactic Amoeba

    Science.gov (United States)

    Negrete, Jose; Pumir, Alain; Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Beta, Carsten; Bodenschatz, Eberhard

    2016-09-01

    Biological systems with their complex biochemical networks are known to be intrinsically noisy. Here we investigate the dynamics of actin polymerization of amoeboid cells, which are close to the onset of oscillations. We show that the large phenotypic variability in the polymerization dynamics can be accurately captured by a generic nonlinear oscillator model in the presence of noise. We determine the relative role of the noise with a single dimensionless, experimentally accessible parameter, thus providing a quantitative description of the variability in a population of cells. Our approach, which rests on a generic description of a system close to a Hopf bifurcation and includes the effect of noise, can characterize the dynamics of a large class of noisy systems close to an oscillatory instability.

  16. Auditory Modeling for Noisy Speech Recognition.

    Science.gov (United States)

    2000-01-01

    multiple platforms including PCs, workstations, and DSPs. A prototype version of the SOS process was tested on the Japanese Hiragana language with good... judgment among linguists. American English has 48 phonetic sounds in the ARPABET representation. Hiragana, the Japanese phonetic language, has only 20... "Japanese Hiragana," H.L. Pfister, FL 95, 1995. "State Recognition for Noisy Dynamic Systems," H.L. Pfister, Tech 2005, Chicago, 1995. "Experiences

  17. Blind Compressed Image Watermarking for Noisy Communication Channels

    Science.gov (United States)

    2015-10-26

    Lenna test image [11] for our simulations, and gradient projection for sparse reconstruction (GPSR) [12] to solve the convex optimization problem... E. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE... Images - Requirements and Guidelines," ITU-T Recommendation T.81, 1992. [6] M. Gkizeli, D. Pados, and M. Medley, "Optimal signature design for

  18. Penguins and their noisy world

    Directory of Open Access Journals (Sweden)

    Thierry Aubin

    2004-06-01

    Full Text Available Penguins identify their mate or chick by an acoustic signal, the display call. This identification is realized in a particularly constraining environment: the noisy world of a colony of thousands of birds. To fully understand how birds solve this problem of communication, we have done observations, acoustic analysis, propagation and playback experiments with 6 species of penguins studied in the field. According to our results, it appears that penguins use a particularly efficient ''anti-confusion'' and ''anti-noise'' coding system, allowing a quick identification and localization of individuals on the move in a noisy crowd.

  19. Toward a unified view of radiological imaging systems. Part II: Noisy images

    International Nuclear Information System (INIS)

    Wagner, R.F.

    1977-01-01

    ''The imaging process is fundamentally a sampling process.'' This philosophy of Otto Schade, utilizing the concepts of sample number and sampling aperture, is applied to a systems analysis of radiographic imaging, including some aspects of vision. It leads to a simple modification of the Rose statistical model; this results in excellent fits to the Blackwell data on the detectability of disks as a function of contrast and size. It gives a straightforward prescription for calculating a signal-to-noise ratio, which is applicable to the detection of low-contrast detail in screen-film imaging, including the effects of magnification. The model lies between the optimistic extreme of the Rose model and the pessimistic extreme of the Morgan model. For high-contrast detail, the rules for the evaluation of noiseless images are recovered

  20. A variational ensemble scheme for noisy image data assimilation

    Science.gov (United States)

    Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne

    2014-05-01

    Data assimilation techniques aim at recovering a system's state-variable trajectory, denoted X, along time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple dynamics and noisy measurements of the system, fulfill a twofold objective. On the one hand, they provide a denoising - or reconstruction - procedure of the data through a given model framework, and on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess:

        J(\eta(x)) = \frac{1}{2}\|X_b(x) - X(t_0,x)\|_B^2 + \frac{1}{2}\int_{t_0}^{t_f}\|H(X(t,x)) - Y(t,x)\|_R^2\,dt \qquad (1)

    where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as:

        B \equiv \langle (X_b - \langle X_b \rangle)(X_b - \langle X_b \rangle)^T \rangle \qquad (2)

    Thus, it works in an off-line smoothing mode rather than on the fly like sequential filters. Such a resulting ensemble variational data assimilation technique corresponds to a relatively new family of methods [1,2,3]. It presents two main advantages: first, it no longer requires the construction of the adjoint of the dynamics tangent linear operator, which is a considerable advantage with respect to the method's implementation, and second, it enables the handling of a flow
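
    A discrete version of objective (1) is easy to write down. The sketch below assumes precomputed inverse covariances B⁻¹ and R⁻¹ and a generic observation operator H; it evaluates the cost only, leaving out the minimization and the ensemble machinery.

```python
import numpy as np

def cost_4dvar(X, Y, Xb, B_inv, R_inv, H, dt):
    """Discrete version of objective (1): background misfit at t0 plus
    time-integrated observation misfit (H maps state to observation space)."""
    db = Xb - X[0]
    J = 0.5 * db @ B_inv @ db
    for Xt, Yt in zip(X, Y):
        d = H(Xt) - Yt
        J += 0.5 * (d @ R_inv @ d) * dt
    return J

# Tiny hypothetical example: 2-D state observed directly, 10 time steps
n, T, dt = 2, 10, 0.1
rng = np.random.default_rng(6)
X = [rng.normal(size=n) for _ in range(T)]       # candidate trajectory
Y = [x + rng.normal(0, 0.1, n) for x in X]       # noisy observations
print(cost_4dvar(X, Y, Xb=np.zeros(n), B_inv=np.eye(n),
                 R_inv=np.eye(n) / 0.01, H=lambda x: x, dt=dt))
```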

  1. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noise, especially granular noise with different shapes and sizes in both the luminance and chrominance channels. In the chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method clearly outperforms some other representative denoising methods in terms of both objective measures and visual evaluation.
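
    The common filtering framework described here, excluding neighbours that deviate strongly from the window median and then averaging the rest, can be sketched for a single grayscale channel as follows; the window size and deviation threshold tau are illustrative assumptions.

```python
import numpy as np

def median_guided_average(img, k=3, tau=30.0):
    """For each pixel, drop neighbours deviating from the window median by more
    than tau, then restore the pixel as the mean of the remaining neighbours."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i+k, j:j+k].ravel()
            med = np.median(block)
            kept = block[np.abs(block - med) <= tau]  # never empty: median itself stays
            out[i, j] = kept.mean()
    return out

rng = np.random.default_rng(7)
img = np.full((16, 16), 128.0) + rng.normal(0, 5, (16, 16))
img[8, 8] = 255.0                       # impulse
print(img[8, 8], "->", median_guided_average(img)[8, 8])
```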

  2. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.

    Science.gov (United States)

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T

    2013-05-14

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.
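
    Prediction (i) can be illustrated with a toy noisy-channel posterior in which the likelihood of an observed sentence decays geometrically with its edit distance from the intended one; the prior values and noise rate below are invented for the example.

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance over word sequences."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(m + 1), np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i-1, j] + 1, d[i, j-1] + 1,
                          d[i-1, j-1] + (a[i-1] != b[j-1]))
    return d[m, n]

def posterior(observed, candidates, priors, noise_rate=0.1):
    """P(intended | observed) ~ prior(intended) * noise_rate^edits: interpretations
    close in edits to the observed string and a priori plausible win out."""
    scores = np.array([p * noise_rate ** edit_distance(observed.split(), c.split())
                       for c, p in zip(candidates, priors)])
    return scores / scores.sum()

obs = "the mother gave the candle the daughter"      # implausible literal reading
cands = [obs, "the mother gave the candle to the daughter"]
print(posterior(obs, cands, priors=[0.01, 0.99]))    # toy priors favour the edit
```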

  3. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for reconstructing images with high contrast and precision under noisy conditions from a small number of projections. However, in practice, these methods are not widely used due to the high computational cost of their implementation. Nowadays, technology provides the possibility to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms are capable of reconstructing images from undersampled sets of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems.
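
    The flavor of such iterative schemes can be shown with the classical Kaczmarz/ART update, here on a tiny dense random system standing in for the (sparse, GPU-resident) projection matrix of a real scanner; this is a generic sketch, not the authors' algorithm.

```python
import numpy as np

def kaczmarz(A, b, n_iter=50):
    """Kaczmarz/ART iteration: sweep over the projection equations, correcting the
    image estimate row by row; well suited to undersampled, noisy systems."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(11)
x_true = rng.random(64)                 # flattened 8x8 "image"
A = rng.normal(size=(40, 64))           # underdetermined system: 40 noisy "projections"
b = A @ x_true + rng.normal(0, 0.01, 40)
x_hat = kaczmarz(A, b)
print(np.linalg.norm(A @ x_hat - b))    # residual should be small
```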

  4. Behavioural changes in response to sound exposure and no spatial avoidance of noisy conditions in captive zebrafish

    Directory of Open Access Journals (Sweden)

    Yik Yaw (Errol) Neo

    2015-02-01

    Full Text Available Auditory sensitivity in fish serves various important functions, but also makes fish susceptible to noise pollution. Human-generated sounds may affect behavioural patterns of fish, both in natural conditions and in captivity. Fish are often kept for consumption in aquaculture, on display in zoos and hobby aquaria, and for medical sciences in research facilities, but little is known about the impact of ambient sounds in fish tanks. In this study, we conducted two indoor exposure experiments with zebrafish (Danio rerio). The first experiment demonstrated that exposure to moderate sound levels (112 dB re 1 μPa) can affect the swimming behaviour of fish by changing group cohesion, swimming speed and swimming height. Effects were brief for both continuous and intermittent noise treatments. In the second experiment, fish could influence exposure to higher sound levels by swimming freely between an artificially noisy fish tank (120-140 dB re 1 μPa) and another with ambient noise levels (89 dB re 1 μPa). Despite initial startle responses, and a brief period in which many individuals in the noisy tank dived down to the bottom, there was no spatial avoidance or noise-dependent tank preference at all. The frequent exchange rate of about 60 fish passages per hour between tanks was not affected by continuous or intermittent exposures. In conclusion, small groups of captive zebrafish were able to detect sounds at relatively low sound levels and adjust their behaviour to them. Relatively high sound levels were disturbing at least at the onset, but did not lead to spatial avoidance. Further research is needed to show whether zebrafish are unable to avoid noisy areas or just not bothered. Quantitatively, these data are not directly applicable to other fish species or other fish tanks, but they do indicate that sound exposure may affect fish behaviour in any captive condition.

  5. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the electron beam and the frozen hydrated biological sample, which is damaged when exposed to radiation for a long exposure time. This sensitivity to the electron beam led specialists to acquire specimen projection images at very low exposure times, which results in a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure times. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at different exposure times (0.5 s, 0.2 s, 0.1 s and 1 s, i.e., with different SNR values) and equipped with gold beads to assist in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: soft and hard wavelet thresholding; the bilateral filter, a non-linear technique able to preserve edges; and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we made sure to use the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3. For the bilateral filter, many tests were done in order to determine the proper filter parameters, represented by the size of the filter, the range parameter and the
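
    The wavelet-thresholding branch of the comparison is simple to reproduce with the pywt package; the sketch below uses the "sym8" wavelet at level 3 as in the text, combined with the standard universal threshold and a median-based noise estimate, which are assumptions standing in for the authors' unstated parameter choices.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='sym8', level=3, mode='soft'):
    """Soft/hard thresholding of detail coefficients with the universal threshold
    sigma * sqrt(2 * log N); sigma estimated from the finest diagonal subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode=mode) for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(8)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0, 20, clean.shape)
out = wavelet_denoise(noisy)
print(np.std(out - clean), "vs", np.std(noisy - clean))
```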

  6. Least-squares methods for identifying biochemical regulatory networks from noisy measurements

    Directory of Open Access Journals (Sweden)

    Heslop-Harrison Pat

    2007-01-01

    Full Text Available Abstract Background We consider the problem of identifying the dynamic interactions in biochemical networks from noisy experimental data. Typically, approaches for solving this problem make use of an estimation algorithm such as the well-known linear Least-Squares (LS) estimation technique. We demonstrate that when time-series measurements are corrupted by white noise and/or drift noise, more accurate and reliable identification of network interactions can be achieved by employing an estimation algorithm known as Constrained Total Least Squares (CTLS). The Total Least Squares (TLS) technique is a generalised least-squares method for solving an overdetermined set of equations whose coefficients are noisy. CTLS is a natural extension of TLS to the case where the noise components of the coefficients are correlated, as is usually the case with time-series measurements of concentrations and expression profiles in gene networks. Results The superior performance of the CTLS method in identifying network interactions is demonstrated on three examples: a genetic network containing four genes, a network describing p53 activity and mdm2 messenger RNA interactions, and a recently proposed kinetic model for interleukin-6 (IL-6) and interleukin-12b (IL-12b) messenger RNA expression as a function of ATF3 and NF-κB promoter binding. For the first example, CTLS significantly reduces the errors in the estimation of the Jacobian for the gene network. For the second, CTLS reduces the errors from measurements that are corrupted by white noise and the effect of neglected kinetics. For the third, it allows the correct identification, from noisy data, of the negative regulation of IL-6 and IL-12b by ATF3. Conclusion The significant improvements in performance demonstrated by the CTLS method under the wide range of conditions tested here, including different levels and types of measurement noise and different numbers of data points, suggest that its application will enable
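
    The plain TLS step that CTLS generalizes has a classical SVD solution. A minimal sketch, without the correlated-noise constraints that distinguish CTLS:

```python
import numpy as np

def tls(A, b):
    """Classical Total Least Squares for A x ~ b when both A and b are noisy:
    take the SVD of [A | b] and read the solution off the right singular vector
    associated with the smallest singular value."""
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.hstack([A, b.reshape(-1, 1)]))
    v = Vt[-1]
    return -v[:n] / v[n]

rng = np.random.default_rng(9)
x_true = np.array([1.5, -0.7])
A = rng.normal(size=(100, 2))
b = A @ x_true
A_noisy = A + rng.normal(0, 0.05, A.shape)   # noise on the coefficients too
b_noisy = b + rng.normal(0, 0.05, b.shape)
print("TLS:", tls(A_noisy, b_noisy),
      " LS:", np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])
```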

  7. Cryptography from noisy storage.

    Science.gov (United States)

    Wehner, Stephanie; Schaffner, Christian; Terhal, Barbara M

    2008-06-06

    We show how to implement cryptographic primitives based on the realistic assumption that quantum storage of qubits is noisy. We thereby consider individual-storage attacks; i.e., the dishonest party attempts to store each incoming qubit separately. Our model is similar to the model of bounded-quantum storage; however, we consider an explicit noise model inspired by present-day technology. To illustrate the power of this new model, we show that a protocol for oblivious transfer is secure for any amount of quantum-storage noise, as long as honest players can perform perfect quantum operations. Our model also allows us to show the security of protocols that cope with noise in the operations of the honest players and achieve more advanced tasks such as secure identification.

  8. Subspace learning from image gradient orientations

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data is typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities fails very often to estimate reliably the

  9. Estimating the number of sources in a noisy convolutive mixture using BIC

    DEFF Research Database (Denmark)

    Olsson, Rasmus Kongsgaard; Hansen, Lars Kai

    2004-01-01

    The number of source signals in a noisy convolutive mixture is determined based on the exact log-likelihoods of the candidate models. In (Olsson and Hansen, 2004), a novel probabilistic blind source separator was introduced that is based solely on the time-varying second-order statistics of the sources.

  10. Threshold policy for global games with noisy information sharing

    KAUST Repository

    Mahdavifar, Hessam; Beirami, Ahmad; Touri, Behrouz; Shamma, Jeff S.

    2015-01-01

    of information and show that such equilibrium strategies exist and are unique if the sharing of information happens over a sufficiently noisy environment. To show this result, we establish that if a threshold function is an equilibrium strategy, then it will be a

  11. Use of global context for handling noisy names in discussion texts of a homeopathy discussion forum

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-03-01

    Full Text Available The task of identifying named entities from the discussion texts in Web forums faces the challenge of noisy names. As the names are often misspelled or abbreviated, the conventional techniques have failed to detect the noisy names properly. In this paper we propose a global context based framework for handling the noisy names. The framework is tested on a named entity recognition system designed to identify the names from the discussion texts in a homeopathy diagnosis discussion forum. The proposed global context-based framework is found to be effective in improving the accuracy of the named entity recognition system.

  12. Data and Network Science for Noisy Heterogeneous Systems

    Science.gov (United States)

    Rider, Andrew Kent

    2013-01-01

    Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…

  13. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate-scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.

  14. Jump Variation Estimation with Noisy High Frequency Financial Data via Wavelets

    Directory of Open Access Journals (Sweden)

    Xin Zhang

    2016-08-01

    Full Text Available This paper develops a method to improve the estimation of jump variation using high-frequency data in the presence of market microstructure noise. Accurate estimation of jump variation is in high demand, as it is an important component of volatility in finance for portfolio allocation, derivative pricing and risk management. The method is a two-step procedure with detection and estimation. In Step 1, we detect the jump locations by performing a wavelet transformation on the observed noisy price processes. Since wavelet coefficients are significantly larger at the jump locations than elsewhere, we calibrate the wavelet coefficients through a threshold and declare jump points if the absolute wavelet coefficients exceed the threshold. In Step 2, we estimate the jump variation by averaging the noisy price processes on each side of a declared jump point and then taking the difference between the two averages. Specifically, for each jump location detected in Step 1, we compute two averages from the observed noisy price processes, one before the detected jump location and one after it, and take their difference to estimate the jump variation. Theoretically, we show that the two-step procedure based on average realized volatility processes can achieve a convergence rate close to $O_P(n^{-4/9})$, which is better than the convergence rate $O_P(n^{-1/4})$ for the procedure based on the original noisy process, where $n$ is the sample size. Numerically, the method based on average realized volatility processes indeed performs better than that based on the price processes. Empirically, we study the distribution of jump variation using Dow Jones Industrial Average stocks and compare the results using the original price process and the average realized volatility processes.
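
    The two-step detect-then-average procedure can be caricatured on a synthetic noisy price path; the window length, the robust threshold, and the clustering rule below are illustrative assumptions rather than the paper's calibrated choices.

```python
import numpy as np

def detect_jumps(prices, win=50, c=4.0):
    """Step 1: Haar-like detail coefficients (difference of adjacent block means)
    flag candidate jump locations against a robust MAD-based threshold.
    Step 2: the jump size is the average price over the window after the jump
    minus the average over the window before it (averaging tames the noise)."""
    n = len(prices)
    coef = np.array([prices[t:t+win].mean() - prices[t-win:t].mean()
                     for t in range(win, n - win)])
    thr = c * np.median(np.abs(coef - np.median(coef))) / 0.6745
    cand = np.where(np.abs(coef) > thr)[0]
    jumps = {}
    while cand.size:
        group = cand[cand < cand[0] + 2 * win]           # one cluster per jump
        t = group[np.argmax(np.abs(coef[group]))] + win  # strongest point of cluster
        jumps[t] = prices[t:t+win].mean() - prices[t-win:t].mean()
        cand = cand[cand >= cand[0] + 2 * win]
    return jumps

rng = np.random.default_rng(10)
p = np.cumsum(rng.normal(0, 0.01, 2000)) + rng.normal(0, 0.005, 2000)  # noisy prices
p[1200:] += 1.0                                                        # one true jump
print(detect_jumps(p))   # expect one detection near index 1200, size near 1.0
```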

  15. Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform.

    Science.gov (United States)

    Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan

    2016-01-01

    Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for the optimal polynomial parameters with which the PCFT achieves the most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method performs better in component extraction from noisy multicomponent signals and provides more time-frequency detail about the analyzed signal than conventional methods.
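
    The core mechanism can be illustrated by dechirping with a candidate polynomial phase and measuring how strongly the FFT energy concentrates. The sketch below is a hedged illustration, not the paper's implementation: a crude grid search stands in for the particle swarm optimisation, and the quadratic chirp and grid range are invented for the example.

```python
import numpy as np

def dechirp_concentration(signal, t, poly_coeffs):
    """Energy concentration after demodulating a candidate polynomial chirp.

    Multiplying by exp(-2j*pi*p(t)) turns a component whose phase follows
    p(t) into a near-DC tone, so its energy piles up in a few FFT bins.
    """
    dechirped = signal * np.exp(-2j * np.pi * np.polyval(poly_coeffs, t))
    return np.max(np.abs(np.fft.fft(dechirped)))

# Component with quadratic polynomial phase p(t) = 40 t^2 + 10 t, in noise.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * np.polyval([40.0, 10.0, 0.0], t))
x = x + 0.5 * np.random.randn(t.size)

# Grid search over the quadratic coefficient, standing in for the PSO.
grid = np.linspace(0.0, 80.0, 81)
best_a = max(grid, key=lambda a: dechirp_concentration(x, t, [a, 10.0, 0.0]))
print("estimated quadratic coefficient:", best_a)   # expect ~40
```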

  16. Security of modified Ping-Pong protocol in noisy and lossy channel.

    Science.gov (United States)

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-05-12

    The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.

  17. Prediction of Intelligibility of Noisy and Time-Frequency Weighted Speech based on Mutual Information Between Amplitude Envelopes

    DEFF Research Database (Denmark)

    Jensen, Jesper; Taal, C.H.

    2013-01-01

    The proposed predictor estimates the amount of Shannon information the critical-band amplitude envelopes of the noisy/processed signal convey about the corresponding clean signal envelopes. The resulting intelligibility predictor turns out to be a simple function of the correlation between noisy/processed and clean amplitude envelopes. The proposed...
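
    As a rough single-band illustration of the idea (the published predictor operates per critical band and is not reproduced here), the following numpy sketch correlates frame-RMS amplitude envelopes of a clean and a noisy signal; the frame length and the synthetic "speech" signal are assumptions.

```python
import numpy as np

def envelope(x, frame=256):
    """Crude amplitude envelope: RMS energy in consecutive frames."""
    n = len(x) // frame
    return np.sqrt(np.mean(x[:n * frame].reshape(n, frame) ** 2, axis=1))

def envelope_correlation(clean, noisy, frame=256):
    """Correlation between clean and noisy/processed amplitude envelopes,
    a crude single-band stand-in for the critical-band predictor."""
    e_c, e_n = envelope(clean, frame), envelope(noisy, frame)
    e_c = e_c - e_c.mean()
    e_n = e_n - e_n.mean()
    return np.sum(e_c * e_n) / (np.linalg.norm(e_c) * np.linalg.norm(e_n))

rng = np.random.default_rng(1)
# Slowly modulated placeholder "speech" signal.
speech = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 16000)) * rng.normal(1, 0.1, 16000)
for noise_level in (0.1, 1.0, 10.0):
    noisy = speech + noise_level * rng.normal(0, 1, speech.size)
    print(noise_level, envelope_correlation(speech, noisy))
```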

  18. Limiting hazardous noise exposure from noisy toys: simple, sticky solutions.

    Science.gov (United States)

    Weinreich, Heather M; Jabbour, Noel; Levine, Samuel; Yueh, Bevan

    2013-09-01

    To assess noise levels of toys from the Sight & Hearing Association (SHA) 2010 Noisy Toys List and evaluate the change in noise of these toys after covering the speakers with tape or glue. One-group pretest-posttest design. Toys from the SHA 2010 list (n = 18) were tested at distances of 0 and 25 cm from the sound source in a soundproof booth using a digital sound-level meter, and the dBA level of sound produced by each toy was recorded. Toys with speakers (n = 16) were tested before and after altering the speakers with plastic packing tape or nontoxic glue. Mean noise level for non-taped toys at 0 and 25 cm was 107.6 dBA (SD ± 8.5) and 82.5 dBA (SD ± 8.8), respectively. With tape, there was a statistically significant decrease in noise level at 0 and 25 cm, to 84.2 dBA and 68.2 dBA, respectively; there was no significant difference between tape and glue. Overall, altering a toy can significantly decrease the sound a child experiences when playing with it. However, some toys, even after alteration, still produce sound levels that may be considered dangerous. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  19. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
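
    The record's Bayesian MEM code is not shown here; as a hedged stand-in, the sketch below applies Richardson-Lucy iteration, another PSF-driven iterative de-blurring scheme, to illustrate how a measured point-spread function enters such a restoration. The toy image, PSF width and iteration count are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Iterative PSF-based de-blurring (Richardson-Lucy), a Bayesian-motivated
    scheme that, like MEM, needs only the measured point-spread function."""
    psf = psf / psf.sum()
    psf_T = psf[::-1, ::-1]                       # flipped PSF
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode='same')
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_T, mode='same')
    return estimate

# Toy example: a square blurred by a Gaussian PSF plus a little noise.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[28:36, 28:36] = 1.0
g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf = np.outer(g, g)
blurred = fftconvolve(truth, psf / psf.sum(), mode='same')
blurred = np.clip(blurred + 0.01 * rng.normal(size=blurred.shape), 0, None)
restored = richardson_lucy(blurred, psf)
```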

  20. Robust histogram-based image retrieval

    Czech Academy of Sciences Publication Activity Database

    Höschl, Cyril; Flusser, Jan

    2016-01-01

    Roč. 69, č. 1 (2016), s. 72-81 ISSN 0167-8655 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : Image retrieval * Noisy image * Histogram * Convolution * Moments * Invariants Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.995, year: 2016 http://library.utia.cas.cz/separaty/2015/ZOI/hoschl-0452147.pdf

  1. COMPARISON OF ULTRASOUND IMAGE FILTERING METHODS BY MEANS OF MULTIVARIABLE KURTOSIS

    Directory of Open Access Journals (Sweden)

    Mariusz Nieniewski

    2017-06-01

    Full Text Available Comparison of the quality of despeckled US medical images is complicated because there is no image of a human body that would be free of speckles and could serve as a reference. A number of image metrics are currently used for comparison of filtering methods; however, they do not satisfactorily represent the visual quality of images and the medical expert's satisfaction with images. This paper proposes an innovative use of relative multivariate kurtosis for the evaluation of the most important edges in an image. Multivariate kurtosis allows one to introduce an order among the filtered images and can be used as one of the metrics for image quality evaluation. At present there is no method which would jointly consider individual metrics. Furthermore, these metrics are typically defined by comparing the noisy original and filtered images, which is incorrect since the noisy original cannot serve as a gold standard. In contrast to this, the proposed kurtosis is an absolute measure, calculated independently of any reference image, and it agrees with the medical expert's satisfaction to a large extent. The paper presents a numerical procedure for calculating kurtosis and describes the results of such calculations for a computer-generated noisy image, images of a general purpose phantom and a cyst phantom, as well as real-life images of the thyroid and carotid artery obtained with a SonixTouch ultrasound machine. 16 different methods of image despeckling are compared via kurtosis. The paper shows that visually more satisfactory despeckling results are associated with higher kurtosis, and to a certain degree kurtosis can be used as a single metric for evaluation of image quality.
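
    A hedged numpy sketch of a multivariate kurtosis computation in the spirit of the record (the paper's exact definition of relative multivariate kurtosis may differ): Mardia's kurtosis of the image gradient vectors, normalised by its Gaussian expectation d(d+2). The random test image is only a placeholder for a real despeckled ultrasound frame.

```python
import numpy as np

def relative_mardia_kurtosis(X):
    """Mardia's multivariate kurtosis of the rows of X, divided by its
    Gaussian expectation d(d+2), so that ~1 indicates near-Gaussian data."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(Xc, rowvar=False, bias=True))
    m2 = np.einsum('ij,jk,ik->i', Xc, S_inv, Xc)  # squared Mahalanobis distances
    return np.mean(m2 ** 2) / (d * (d + 2))

# Example on the gradient vectors of a (despeckled) image.
img = np.random.rand(128, 128)                    # placeholder image
gy, gx = np.gradient(img)
print(relative_mardia_kurtosis(np.column_stack([gx.ravel(), gy.ravel()])))
```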

  2. Image restoration and processing methods

    International Nuclear Information System (INIS)

    Daniell, G.J.

    1984-01-01

    This review will stress the importance of using image restoration techniques that deal with incomplete, inconsistent, and noisy data and do not introduce spurious features into the processed image. No single image is equally suitable for both the resolution of detail and the accurate measurement of intensities. A good general purpose technique is the maximum entropy method and the basis and use of this will be explained. (orig.)

  3. Entanglement-assisted quantum parameter estimation from a noisy qubit pair: A Fisher information analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chapeau-Blondeau, François, E-mail: chapeau@univ-angers.fr

    2017-04-25

    The benefit of entanglement in quantum parameter estimation in the presence of noise or decoherence is investigated, with the quantum Fisher information used to assess the performance. When an input probe experiences any (noisy) transformation introducing the parameter dependence, the performance is always maximized by a pure probe. As a generic estimation task, for estimating the phase of a unitary transformation on a qubit affected by depolarizing noise, the optimal separable probe and its performance are characterized as a function of the level of noise. By entangling qubits in pairs, enhancements of performance over that of the optimal separable probe are quantified, in various settings of the entangled pair. In particular, in the presence of the noise, enhancement over the performance of the one-qubit optimal probe can always be obtained with a second entangled qubit although never interacting with the process to be estimated. Also, enhancement over the performance of the two-qubit optimal separable probe can always be achieved by a two-qubit entangled probe, either partially or maximally entangled depending on the level of the depolarizing noise. - Highlights: • Quantum parameter estimation from a noisy qubit pair is investigated. • The quantum Fisher information is used to assess the ultimate best performance. • Theoretical expressions are established and analyzed for the Fisher information. • Enhanced performances are quantified with various entanglements of the pair. • Enhancement is shown even with one entangled qubit noninteracting with the process.
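
    The quantum Fisher information for this generic task can be checked numerically. The sketch below is an illustration, not the paper's derivation: it evaluates F_Q from the eigendecomposition of the state for a |+> probe, a phase rotation, and a depolarizing channel of strength p, for which F_Q = (1-p)^2 is the known single-qubit result.

```python
import numpy as np

def qfi(rho, drho, tol=1e-12):
    """Quantum Fisher information via the symmetric logarithmic derivative,
    evaluated in the eigenbasis of rho: F = sum 2|<i|drho|j>|^2/(l_i+l_j)."""
    lam, V = np.linalg.eigh(rho)
    d = V.conj().T @ drho @ V                 # drho in the eigenbasis of rho
    F = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:
                F += 2.0 * abs(d[i, j]) ** 2 / s
    return F

def depolarized_phase_state(theta, p):
    """|+> probe with phase theta encoded, then depolarizing noise of strength p."""
    psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    return (1 - p) * rho + p * np.eye(2) / 2

theta, p, eps = 0.3, 0.2, 1e-6
drho = (depolarized_phase_state(theta + eps, p)
        - depolarized_phase_state(theta - eps, p)) / (2 * eps)
print(qfi(depolarized_phase_state(theta, p), drho))   # ~ (1-p)^2 = 0.64
```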

  4. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of the graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the eigenvectors used in the traditional EGL method, in our method the eigenvectors are adaptively selected throughout the denoising procedure. In detail, a rough image is first built with eigenvectors from the noisy image, where the eigenvectors are selected using a deviation estimate of the clean image. Subsequently, a guided image is restored as a weighted average of the noisy and rough images. In this operation, the averaging coefficient is adaptively obtained to set the deviation of the guided image approximately equal to that of the clean image. Finally, the denoised image is achieved by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen under an error control tied to the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to efficiently solve the above group-sparse model. The experiments show that our method not only improves the practicality of EGL methods by reducing dependence on parameter settings, but also outperforms some well-developed denoising methods, especially for noise with large deviations.

  5. Sentence comprehension in aphasia: A noisy channel approach

    Directory of Open Access Journals (Sweden)

    Michael Walsh Dickey

    2014-04-01

    Full Text Available Probabilistic accounts of language understanding assume that comprehension involves determining the probability of an intended message m given an input utterance u, P(m|u) (e.g., Gibson et al., 2013a; Levy et al., 2009). One challenge is that communication occurs within a noisy channel; i.e., the comprehender's representation of u may have been distorted, e.g., by a typo or by impairment associated with aphasia. Bayes' rule provides a model of how comprehenders can combine the prior probability of m, P(m), with the probability that m would have been distorted to u, P(u|m), to calculate the probability of m given u: P(m|u) ∝ P(m)P(u|m). This formalism can capture the observation that people with aphasia (PWA) rely more on semantics than syntax during comprehension (e.g., Caramazza & Zurif, 1976): given the high probability that their representation of the input is unreliable, they weigh message likelihood more heavily. Gibson et al. (2013a) showed that unimpaired adults are sensitive to P(m) and P(u|m): they more often chose interpretations that increased message plausibility or involved distortions requiring fewer changes, and/or deletions instead of insertions (see Figure 1a for examples). Gibson et al. (2013b) found PWA were also sensitive to both P(m) and P(u|m) in an act-out task, but relied more heavily than unimpaired controls on P(m). This shows group-level optimization towards the less noisy (semantic) channel in PWA. The current experiment (8 PWA; 7 age-matched controls) investigated noisy channel optimization at the level of individual PWA. It also included active/passive items with a weaker plausibility manipulation to test whether P(m) is higher for implausible than impossible strings. The task was forced-choice sentence-picture matching (Figure 1b). Experimental sentences crossed active versus passive (A-P) structures with plausibility (Set 1) or impossibility (Set 2), and prepositional-object versus double-object structures (PO-DO; Set 3) with
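
    A toy numerical rendering of the noisy-channel formalism (the messages, prior values and geometric noise model are invented for illustration): the posterior P(m|u) ∝ P(m)P(u|m) trades off the plausibility of a message against the number of edits needed to distort it into the observed utterance.

```python
import numpy as np

def edit_distance(a, b):
    """Levenshtein distance between token sequences a and b."""
    m, n = len(a), len(b)
    D = np.zeros((m + 1, n + 1), dtype=int)
    D[:, 0] = np.arange(m + 1)
    D[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i, j] = min(D[i - 1, j] + 1,                      # deletion
                          D[i, j - 1] + 1,                      # insertion
                          D[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return D[m, n]

def posterior(utterance, messages, priors, noise_rate=0.1):
    """P(m|u) proportional to P(m) * P(u|m), with P(u|m) decaying geometrically
    in the number of edits separating m from u."""
    scores = np.array([p * noise_rate ** edit_distance(m.split(), utterance.split())
                       for m, p in zip(messages, priors)])
    return scores / scores.sum()

# An implausible surface form competes with a plausible near-neighbour.
messages = ["the dog bit the man", "the man bit the dog"]
priors = [0.9, 0.1]                      # plausibility prior P(m)
print(posterior("the man bit the dog", messages, priors))
```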

  6. Iterative estimation of the background in noisy spectroscopic data

    International Nuclear Information System (INIS)

    Zhu, M.H.; Liu, L.G.; Cheng, Y.S.; Dong, T.K.; You, Z.; Xu, A.A.

    2009-01-01

    In this paper, we present an iterative filtering method to estimate the background of noisy spectroscopic data. The proposed method avoids the calculation of the average full width at half maximum (FWHM) of the whole spectrum and the peak regions, and it can estimate the background efficiently, especially for spectroscopic data with the Compton continuum.
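
    A hedged sketch of one common iterative-filtering background estimator (not necessarily the authors' exact filter): each pass clips the spectrum to the minimum of itself and a local mean, which progressively flattens peaks while preserving the slowly varying continuum. The window width, iteration count and synthetic spectrum are assumptions.

```python
import numpy as np

def iterative_background(y, half_width=10, n_iter=50):
    """Iterative filtering estimate of a smooth background under peaks.

    Each pass replaces every point by the minimum of its current value and
    the local mean, so peaks are clipped away while the slowly varying
    continuum (e.g. a Compton continuum) is retained."""
    bg = y.astype(float).copy()
    kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    for _ in range(n_iter):
        smoothed = np.convolve(bg, kernel, mode='same')
        bg = np.minimum(bg, smoothed)
    return bg

# Synthetic spectrum: decaying continuum plus Gaussian peaks plus counts noise.
x = np.arange(1024)
spectrum = 200.0 * np.exp(-x / 400.0)
for c in (200, 450, 700):
    spectrum += 120.0 * np.exp(-0.5 * ((x - c) / 6.0) ** 2)
spectrum += np.random.poisson(10, x.size)
background = iterative_background(spectrum)
net = spectrum - background   # peaks with the continuum removed
```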

  7. Security bound of continuous-variable quantum key distribution with noisy coherent states and channel

    International Nuclear Information System (INIS)

    Shen Yong; Yang Jian; Guo Hong

    2009-01-01

    Security of a continuous-variable quantum key distribution protocol based on noisy coherent states and a noisy channel is analysed. Assuming that the noise of the coherent states is induced by Fred, a neutral party relative to the others, we prove that the prepare-and-measure scheme (P&M) and the entanglement-based scheme (E-B) are equivalent. Then, we show that this protocol is secure against Gaussian collective attacks even if the channel is lossy and noisy, and, further, a lower bound on the secure key rate is derived.

  8. Security bound of continuous-variable quantum key distribution with noisy coherent states and channel

    Energy Technology Data Exchange (ETDEWEB)

    Shen Yong; Yang Jian; Guo Hong, E-mail: hongguo@pku.edu.c [CREAM Group, State Key Laboratory of Advanced Optical Communication Systems and Networks (Peking University) and Institute of Quantum Electronics, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871 (China)

    2009-12-14

    Security of a continuous-variable quantum key distribution protocol based on noisy coherent states and a noisy channel is analysed. Assuming that the noise of the coherent states is induced by Fred, a neutral party relative to the others, we prove that the prepare-and-measure scheme (P&M) and the entanglement-based scheme (E-B) are equivalent. Then, we show that this protocol is secure against Gaussian collective attacks even if the channel is lossy and noisy, and, further, a lower bound on the secure key rate is derived.

  9. A Noisy-Channel Approach to Question Answering

    Science.gov (United States)

    2003-01-01

    question “When did Elvis Presley die?” To do this, we build a noisy channel model that makes explicit how answer sentence parse trees are mapped into… In Figure 1, the algorithm above generates the following training example: Q: When did Elvis Presley die? SA: Presley died PP PP in A_DATE, and… engine as a potential candidate for finding the answer to the question “When did Elvis Presley die?” In this case, we don’t know what the answer is

  10. Self-imaging of partially coherent light in graded-index media.

    Science.gov (United States)

    Ponomarenko, Sergey A

    2015-02-15

    We demonstrate that partially coherent light beams of arbitrary intensity and spectral degree of coherence profiles can self-image in linear graded-index media. The results may be applicable to imaging with noisy spatial or temporal light sources.

  11. Noisy mean field game model for malware propagation in opportunistic networks

    KAUST Repository

    Tembine, Hamidou; Vilanova, Pedro; Debbah, Mérouane

    2012-01-01

    nodes is examined with a noisy mean field limit and compared to a deterministic one. The stochastic nature of the wireless environment makes stochastic approaches more realistic for such types of networks. By introducing control strategies, we show

  12. Multivariate statistical analysis for x-ray photoelectron spectroscopy spectral imaging: Effect of image acquisition time

    International Nuclear Information System (INIS)

    Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.

    2004-01-01

    The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal-to-noise, improved image contrast and intensity uniformity, and improved spatial resolution, which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images.
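
    A minimal sketch of the kind of multivariate analysis described, using plain PCA via the SVD (the methods in the paper, such as factor-analysis-style decompositions, are more elaborate): the cube is unfolded to pixels-by-energies, factorised, and the leading components refolded into per-pixel score maps. The random cube is a placeholder.

```python
import numpy as np

def pca_spectral_image(cube, n_components=4):
    """Unfold a spectral image cube (rows x cols x energies), factorise it by
    SVD, and refold the leading components into per-pixel score maps.
    Aggregating all pixels recovers signal-to-noise that no single noisy
    pixel spectrum has on its own."""
    h, w, e = cube.shape
    X = cube.reshape(h * w, e).astype(float)
    X -= X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    maps = (U[:, :n_components] * S[:n_components]).reshape(h, w, n_components)
    return maps, Vt[:n_components]   # score maps and component "spectra"

# Placeholder cube with Poisson counting noise.
cube = np.random.poisson(50, (32, 32, 100)).astype(float)
maps, spectra = pca_spectral_image(cube)
```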

  13. Noisy Spins and the Richardson-Gaudin Model

    Science.gov (United States)

    Rowlands, Daniel A.; Lamacraft, Austen

    2018-03-01

    We study a system of spins (qubits) coupled to a common noisy environment, each precessing at its own frequency. The correlated noise experienced by the spins implies long-lived correlations that relax only due to the differing frequencies. We use a mapping to a non-Hermitian integrable Richardson-Gaudin model to find the exact spectrum of the quantum master equation in the high-temperature limit and, hence, determine the decay rate. Our solution can be used to evaluate the effect of inhomogeneous splittings on a system of qubits coupled to a common bath.

  14. Image restoration technique using median filter combined with decision tree algorithm

    International Nuclear Information System (INIS)

    Sethu, D.; Assadi, H.M.; Hasson, F.N.; Hasson, N.N.

    2007-01-01

    Images are usually corrupted during transmission, principally due to interference in the channel used for transmission. Images can also be impaired by the addition of various forms of noise; salt-and-pepper noise is a common form of such impairment. Salt-and-pepper noise can be caused by errors in data transmission, malfunctioning pixel elements in camera sensors, and timing errors in the digitization process. During the filtering of a noisy image, important features such as edges, lines and other fine details embedded in the image tend to blur because of the filtering operation. The enhancement of noisy data, however, is a very critical process because the sharpening operation can significantly increase the noise. In this respect, contrast enhancement is often necessary in order to highlight details that have been blurred. In the proposed approach we aim to develop an image processing technique that meets the twin requirements of high quality and high speed. Furthermore, we prevent noise accretion during the sharpening of image details, and compare the images restored by the proposed method with those from other kinds of filters. (author)
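
    A hedged sketch of the decision idea for salt-and-pepper noise (a simple decision rule, not the paper's decision-tree algorithm): only pixels at the extreme values are replaced by the local median, so uncorrupted detail passes through untouched and edges stay sharper than under blanket median filtering.

```python
import numpy as np
from scipy.ndimage import median_filter

def decision_based_median(img, low=0, high=255):
    """Replace only pixels flagged as salt (high) or pepper (low) by the
    local 3x3 median; all other pixels are left unchanged."""
    med = median_filter(img, size=3)
    noisy_mask = (img == low) | (img == high)
    out = img.copy()
    out[noisy_mask] = med[noisy_mask]
    return out

# Toy image with ~10% salt-and-pepper corruption.
img = np.random.randint(1, 255, (64, 64)).astype(np.uint8)
mask = np.random.rand(64, 64) < 0.1
img[mask] = np.where(np.random.rand(int(mask.sum())) < 0.5, 0, 255)
cleaned = decision_based_median(img)
```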

  15. Three methods to distill multipartite entanglement over bipartite noisy channels

    International Nuclear Information System (INIS)

    Lee, Soojoon; Park, Jungjoon

    2008-01-01

    We first assume that there are only bipartite noisy qubit channels in a given multipartite system, and present three methods to distill the general Greenberger-Horne-Zeilinger state. By investigating the methods, we show that multipartite entanglement distillation via bipartite entanglement distillation achieves a higher yield than previous multipartite entanglement distillation schemes

  16. A Steady-State Genetic Algorithm with Resampling for Noisy Inventory Control

    NARCIS (Netherlands)

    Prestwich, S.; Tarim, S.A.; Rossi, R.; Hnich, B.

    2008-01-01

    Noisy fitness functions occur in many practical applications of evolutionary computation. A standard technique for solving these problems is fitness resampling, but this may be inefficient or require a large population; combined with elitism it may overvalue chromosomes or reduce genetic diversity.

  17. Advanced topics in control and estimation of state-multiplicative noisy systems

    CERN Document Server

    Gershon, Eli

    2013-01-01

    Advanced Topics in Control and Estimation of State-Multiplicative Noisy Systems begins with an introduction and extensive literature survey. The text proceeds to cover solutions of measurement-feedback control and state problems and the formulation of the Bounded Real Lemma for both continuous- and discrete-time systems. The continuous-time reduced-order and stochastic-tracking control problems for delayed systems are then treated. Ideas of nonlinear stability are introduced for infinite-horizon systems, again, in both the continuous- and discrete-time cases. The reader is introduced to six practical examples of noisy state-multiplicative control and filtering associated with various fields of control engineering. The book is rounded out by a three-part appendix containing stochastic tools necessary for a proper appreciation of the text: a basic introduction to nonlinear stochastic differential equations and aspects of switched systems and peak-to-peak optimal control and filtering. Advanced Topics in Contr...

  18. A method for extracting chaotic signal from noisy environment

    International Nuclear Information System (INIS)

    Shang, L.-J.; Shyu, K.-K.

    2009-01-01

    In this paper, we propose an approach for extracting a chaotic signal from a noisy environment, where the chaotic signal has been contaminated by white Gaussian noise. The traditional type of independent component analysis (ICA) is capable of separating mixed signals and retrieving them independently; however, the separated signal shows an unreal amplitude. The results of this study show that with our method the real chaotic signal can be effectively recovered.
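
    The amplitude indeterminacy the record mentions is easy to see with an off-the-shelf ICA. The sketch below mixes a logistic-map "chaotic" source with white Gaussian noise and separates it with scikit-learn's FastICA; the source, mixing weights and map parameter are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Chaotic source: logistic map; observed through two noisy channels.
n = 5000
s = np.empty(n)
s[0] = 0.4
for k in range(n - 1):
    s[k + 1] = 3.99 * s[k] * (1.0 - s[k])
noise = np.random.randn(n)
X = np.column_stack([s + 0.5 * noise, s + 0.2 * noise])  # two mixtures

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)   # separated components

# As the record notes, ICA recovers each waveform only up to scale and sign,
# so a rescaling step (e.g. regressing a component onto an observed channel)
# is needed to restore a physically meaningful amplitude.
```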

  19. Noisiness of the Surfaces on Low-Speed Roads

    Directory of Open Access Journals (Sweden)

    Wladyslaw Gardziejczyk

    2016-03-01

    Full Text Available Traffic noise is a particular threat to the environment in the vicinity of roads. The level of the noise is influenced by traffic density and traffic composition, as well as vehicle speed and the type of surface. The article presents the results of studies on tire/road noise from passing vehicles at speeds of 40-80 kph, carried out using the statistical pass-by method (SPB) on seven surfaces with different characteristics. It has been shown that increasing the speed from 40 kph to 50 kph contributes to an increase in the maximum A-weighted sound pressure level of about 3 dB, regardless of the type of surface. For larger speed differences (30-40 kph), the increase in noise levels reaches about 10 dB; at higher speeds, this increase is slightly lower. In this article, special attention is paid to the noisiness of surfaces made of porous asphalt concrete (PAC), BBTM (thin asphalt layer), and stone mastic asphalt (SMA) with maximum aggregate sizes of 8 mm and 5 mm. It has also been shown that surfaces of porous asphalt concrete, within two years after commissioning, significantly contribute to a reduction of the maximum level of noise in streets and roads with lower speeds of passing cars. The reduction of the maximum A-weighted sound pressure level of a statistical car traveling at 60 kph reaches values of up to about 6 dB, as compared with the SMA11. As the road ages, air voids in the low-noise surface become clogged and the acoustic properties of the road degrade to a level similar to standard asphalt.

  20. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most common. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure, tracking lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  1. Fractional Diffusion in Gaussian Noisy Environment

    Directory of Open Access Journals (Sweden)

    Guannan Hu

    2015-03-01

    Full Text Available We study the fractional diffusion in a Gaussian noisy environment as described by fractional order stochastic heat equations of the following form: \(D_t^{(\alpha)} u(t,x) = \textit{B}u + u\cdot\dot W^H\), where \(D_t^{(\alpha)}\) is the Caputo fractional derivative of order \(\alpha\in(0,1)\) with respect to the time variable \(t\), \(\textit{B}\) is a second order elliptic operator with respect to the space variable \(x\in\mathbb{R}^d\), and \(\dot W^H\) is a time homogeneous fractional Gaussian noise of Hurst parameter \(H=(H_1,\cdots,H_d)\). We obtain conditions satisfied by \(\alpha\) and \(H\) so that the square integrable solution \(u\) exists uniquely.

  2. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  3. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data

  4. Quantum communication in noisy environments

    International Nuclear Information System (INIS)

    Aschauer, H.

    2004-01-01

    In this thesis, we investigate how protocols in quantum communication theory are influenced by noise. Specifically, we take into account noise during the transmission of quantum information and noise during the processing of quantum information. We describe three novel quantum communication protocols which can be accomplished efficiently in a noisy environment: (1) Factorization of Eve: We show that it is possible to disentangle transmitted qubits a posteriori from the quantum channel's degrees of freedom. (2) Cluster state purification: We give multi-partite entanglement purification protocols for a large class of entangled quantum states. (3) Entanglement purification protocols from quantum codes: We describe a constructive method to create bipartite entanglement purification protocols from quantum error correcting codes, and investigate the properties of these protocols, which can be operated in two different modes, related to quantum communication and quantum computation protocols, respectively

  5. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. Traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that differ greatly from the background. The shape-based interpolation method transforms a pixel location into a parameter related to the object shape, and the interpolation is performed on that parameter. This process achieves a better interpolation, but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using the polygon vertices as references. Binary images representing the shape of the object were first generated via image segmentation of the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the polygon vertices as references. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
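
    A compact sketch of the binary shape-based step, using signed distance maps rather than the paper's polygon construction (a well-known variant of shape-based interpolation): the distance maps of two segmented slices are blended and re-thresholded to produce an intermediate slice.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slices(mask_a, mask_b, t=0.5):
    """Shape-based interpolation of an intermediate binary slice: blend the
    signed distance maps of the two slices, then threshold at zero.
    (A distance-map variant, not the polygon-based scheme of the paper.)"""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# Two segmented slices of a shifted, growing square.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[25:50, 25:50] = True
mid = interpolate_slices(a, b, t=0.5)
```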

  6. FALSE DETERMINATIONS OF CHAOS IN SHORT NOISY TIME SERIES. (R828745)

    Science.gov (United States)

    A method (NEMG) proposed in 1992 for diagnosing chaos in noisy time series with 50 or fewer observations entails fitting the time series with an empirical function which predicts an observation in the series from previous observations, and then estimating the rate of divergenc...

  7. Optimal resampling for the noisy OneMax problem

    OpenAIRE

    Liu, Jialin; Fairbank, Michael; Pérez-Liébana, Diego; Lucas, Simon M.

    2016-01-01

    The OneMax problem is a standard benchmark optimisation problem for a binary search space. Recent work on applying a Bandit-Based Random Mutation Hill-Climbing algorithm to the noisy OneMax Problem showed that it is important to choose a good value for the resampling number to make a careful trade-off between taking more samples in order to reduce noise, and taking fewer samples to reduce the total computational cost. This paper extends that observation, by deriving an analytical expression f...

  8. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration quantization errors. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
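
    A hedged sketch of one common semisoft (firm) thresholding rule of the kind being fused; the thresholds t1 < t2 are tuning parameters, and the paper's criteria for choosing the rule and threshold per subband are not reproduced here.

```python
import numpy as np

def semisoft_threshold(w, t1, t2):
    """Semisoft (firm) shrinkage: kills coefficients below t1, keeps those
    above t2 unchanged, and interpolates linearly in between -- a compromise
    between hard and soft thresholding."""
    out = np.where(np.abs(w) <= t1, 0.0, w)
    mid = (np.abs(w) > t1) & (np.abs(w) <= t2)
    out = np.where(mid, np.sign(w) * t2 * (np.abs(w) - t1) / (t2 - t1), out)
    return out

# Example on synthetic wavelet detail coefficients.
w = np.random.laplace(0, 1, 10)
print(semisoft_threshold(w, t1=0.5, t2=1.5))
```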

  9. Population coding in sparsely connected networks of noisy neurons

    OpenAIRE

    Tripp, Bryan P.; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...

  10. The robustness of two tomography reconstructing techniques with heavily noisy dynamical experimental data from a high speed gamma-ray tomograph

    International Nuclear Information System (INIS)

    Vasconcelos, Geovane Vitor; Melo, Silvio de Barros; Dantas, Carlos Costa; Moreira, Icaro Malta; Johansen, Geira; Maad, Rachid

    2013-01-01

    The PSIRT (Particle Systems Iterative Reconstructive Technique) is, like the ART method, an iterative tomographic reconstruction technique, recommended for reconstructing the catalytic density distribution in the FCC-type riser used in oil refining. PSIRT is based upon computer graphics' particle systems, where the reconstructing material is initially represented as composed of particles subject to a force field emanating from the beams, whose intensities are parameterized by the differences between the experimental readings of a given beam trajectory and the values corresponding to the current amount of particles landed in this trajectory. A dynamical process is set up in which the beams' fields of attracting forces compete for the particles. At the end, once equilibrium is established, the particles are replaced by the corresponding regions of pixels. The High Speed Gamma-ray Tomograph is a 5-source fan-beam device with a 17-detector deck per source, capable of producing up to a thousand complete sinograms per second. Around 70,000 experimental sinograms from this tomograph were produced, simulating the movement of gas bubbles at different angular speeds immersed in oil within the vessel, through the use of a two-hole polypropylene phantom. The sinogram frames were acquired with several different detector integration times. This article studies and compares the robustness of the ART and PSIRT methods in this heavily noisy scenario, where the noise comes not only from limitations in the dynamical sampling, but also from the underlying apparatus that produces the counting in the tomograph. Visual inspection of the resulting images suggests that PSIRT is a more robust method than ART for noisy data, since it almost never presents globally scattered noise. (author)

  11. Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery

    Directory of Open Access Journals (Sweden)

    Lingjun Liu

    2017-01-01

    Full Text Available This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. As iterations increase, IST usually over-smooths the solution and converges prematurely. To add back more details, the BAIST method backtracks to the previous noisy image using L2 norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous one. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. Also, BAIST adopts a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that our algorithm outperforms the original IST method and several excellent CS techniques.
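
    For reference, the baseline IST iteration that BAIST modifies can be sketched in a few lines; the backtracking step and the adaptive nonlocal regularisation are not reproduced, and the toy sensing problem is an assumption.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Plain IST for min 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on
    the data term followed by shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Toy CS problem: recover a sparse vector from m < n random measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true + 0.01 * rng.normal(0, 1, m)
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```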

  12. Indicators to assess the environmental performances of an innovative subway station : example of Noisy-Champs

    Science.gov (United States)

    Schertzer, D. J. M.; Charbonnier, L.; Versini, P. A.; Tchiguirinskaia, I.

    2017-12-01

    Noisy-Champs is a train station located in Noisy-le-Grand and Champs-sur-Marne, in the Paris urban area (France). Integrated into the Grand Paris Express project (a huge development project to modernise the transport network around Paris), this station is going to be radically transformed and become a major hub. Designed by the architectural office Duthilleul, the new Noisy-Champs station aspires to be an example of an innovative and sustainable infrastructure. Its architectural precepts are indeed meant to improve its environmental performance, especially with respect to storm water management, water consumption and users' thermal and hygrometric comfort. In order to assess and monitor these performances, objectives and associated indicators have been developed, adapted to a specific infrastructure such as a public transport station. Analyses of pre-existing comfort simulations, blueprints and regulatory documents have led to the identification of the main issues for the Noisy-Champs station, focusing on its resilience to extreme events like droughts, heatwaves and heavy rainfall. Both objectives and indicators have been proposed by studying the space-time variabilities of physical fluxes (heat, pollutants, radiation, wind and water) and passenger flows, and their interactions. Each indicator is linked to an environmental performance and has been determined after consultation with the different stakeholders involved in the rebuilding of the station. The result is a monitoring program to assess the environmental performances of the station, composed of both the indicators grid and their related objectives, and a measurement program detailing the nature and location of sensors and the frequency of measurements.

  13. Controlling transfer of quantum correlations among bi-partitions of a composite quantum system by combining different noisy environments

    International Nuclear Information System (INIS)

    Zhang Xiu-Xing; Li Fu-Li

    2011-01-01

    The correlation dynamics are investigated for various bi-partitions of a composite quantum system consisting of two qubits and two independent and non-identical noisy environments. The two qubits have no direct interaction with each other and locally interact with their environments. Classical and quantum correlations including the entanglement are initially prepared only between the two qubits. We find that contrary to the identical noisy environment case, the quantum correlation transfer direction can be controlled by combining different noisy environments. The amplitude-damping environment determines whether there exists the entanglement transfer among bi-partitions of the system. When one qubit is coupled to an amplitude-damping environment and the other one to a bit-flip one, we find a very interesting result that all the quantum and the classical correlations, and even the entanglement, originally existing between the qubits, can be completely transferred without any loss to the qubit coupled to the bit-flip environment and the amplitude-damping environment. We also notice that it is possible to distinguish the quantum correlation from the classical correlation and the entanglement by combining different noisy environments. (general)

  14. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  15. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.
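
    A hedged 1-D sketch of the idea; the 4h reconstruction factor follows the semi-classical signal analysis literature, and the constants, choice of h, and discretisation are assumptions rather than the paper's code. The noisy signal acts as the potential of a discretised Schrödinger operator, and the signal is re-synthesised from the squared eigenfunctions of its negative (discrete) spectrum.

```python
import numpy as np

def schrodinger_denoise(y, dx, h=0.5):
    """SCSA-style sketch: use the (non-negative) noisy signal as the potential
    of H = -h^2 d^2/dx^2 - y(x), then rebuild the signal as
    4h * sum_n kappa_n * psi_n(x)^2 over the negative eigenvalues -kappa_n^2."""
    n = len(y)
    off = np.full(n - 1, 1.0)
    lap = (np.diag(off, -1) - 2.0 * np.eye(n) + np.diag(off, 1)) / dx ** 2
    H = -h ** 2 * lap - np.diag(y)
    lam, vec = np.linalg.eigh(H)
    neg = lam < 0
    kappa = np.sqrt(-lam[neg])
    psi = vec[:, neg] / np.sqrt(dx)        # grid-normalised eigenfunctions
    return 4.0 * h * (psi ** 2) @ kappa

# Noisy non-negative pulse (sech^2 is the classical test potential).
x = np.linspace(-10, 10, 256)
clean = 2.0 / np.cosh(x) ** 2
noisy = clean + 0.05 * np.random.randn(x.size)
denoised = schrodinger_denoise(noisy - noisy.min(), dx=x[1] - x[0], h=0.7)
```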

  16. Effect of weak measurement on entanglement distribution over noisy channels.

    Science.gov (United States)

    Wang, Xin-Wen; Yu, Sixia; Zhang, Deng-Yu; Oh, C H

    2016-03-03

    Being able to implement effective entanglement distribution in noisy environments is a key step towards practical quantum communication, and long-term efforts have been made on its development. Recently, it has been found that the null-result weak measurement (NRWM) can be used to probabilistically enhance the entanglement of a single copy of an amplitude-damped entangled state. This paper investigates remote distribution of bipartite and multipartite entangled states in the amplitude-damping environment by combining NRWMs and entanglement distillation protocols (EDPs). We show that the NRWM has no positive effect on the distribution of bipartite maximally entangled states and multipartite Greenberger-Horne-Zeilinger states, although it is able to increase the amount of entanglement of each source state (noisy entangled state) of the EDPs with a certain probability. However, we find that the NRWM does contribute to remote distribution of multipartite W states. We demonstrate that the NRWM can not only reduce the fidelity thresholds for distillability of decohered W states, but also raise the distillation efficiencies of W states. Our results suggest a new idea for quantifying the ability of a local filtering operation to protect entanglement from decoherence.

  17. Using the value of Lin's concordance correlation coefficient as a criterion for efficient estimation of areas of leaves of eelgrass from noisy digital images.

    Science.gov (United States)

    Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar

    2014-01-01

    Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery could produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage inflicted by drag forces, or humidity content induce noise effects, reducing precision. Available techniques for dealing with noise caused by humidity content on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selection of an interval of tolerance of similarity for efficient measurements requires extended computational routines with tied statistical inferences, making concomitant tasks complicated and time consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed at dealing with any sort of noise modifying images of eelgrass leaves. Moreover, this selection criterion relies on only a single statistic: the maximum value of the Concordance Correlation Coefficient for reproducibility of observed areas of leaves through proxies obtained from digital images. Available data reveal that the present method delivers simplified, consistent estimations of areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal
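
    Lin's coefficient itself is straightforward to compute. In the sketch below, `tolerances`, `true_areas` and `estimate_areas` are hypothetical placeholders for the calibration loop the record describes, not names from the paper.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between observations x
    (e.g. true leaf areas) and proxies y (e.g. areas from digital images)."""
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical calibration loop: pick the similarity tolerance whose
# image-derived areas best reproduce the observed areas.
# best_tol = max(tolerances,
#                key=lambda tol: lins_ccc(true_areas, estimate_areas(tol)))
```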

  18. Quantum simulations with noisy quantum computers

    Science.gov (United States)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  19. Bayesian image restoration, using configurations

    OpenAIRE

    Thorarinsdottir, Thordis

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...

  20. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to best reconstruct the heat source fields. The influence of both the size and the level of a localised heat source is discussed. The results are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium alloy plate. Heat sources are generated with an electric heating patch glued to the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
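
    The Laplacian-from-noisy-fields step can be sketched directly with SciPy's Gaussian filtering, which combines smoothing and differentiation in one convolution; the value of sigma and the heat-equation bookkeeping are left as assumptions, and this is not the authors' optimised filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_dog(T, sigma):
    """Laplacian of a noisy 2-D field estimated by convolution with second
    derivatives of a Gaussian (one second derivative per axis, summed)."""
    return (gaussian_filter(T, sigma, order=(2, 0))
            + gaussian_filter(T, sigma, order=(0, 2)))

# For an isotropic material the heat source is s = rho*c*dT/dt - k*lap(T);
# dT/dt would come from finite differences between successive infrared
# frames, and sigma would be tuned on synthetic noisy fields as in the paper.
T = np.random.rand(128, 128)        # placeholder temperature frame
lap_T = laplacian_dog(T, sigma=4.0)
```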

  1. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals

    Directory of Open Access Journals (Sweden)

    Nathan Gold

    2018-01-01

    Full Text Available Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  2. An Interactive Procedure to Preserve the Desired Edges during the Image Processing of Noise Reduction

    Directory of Open Access Journals (Sweden)

    Lin-Tsang Lee

    2010-01-01

    Full Text Available The paper proposes a new four-stage procedure to preserve the desired edges during noise-reduction image processing. A denoised image is obtained from a noisy image in the first stage of the procedure. In the second stage, an edge map is obtained with the Canny edge detector to find the edges of the object contours. Manual modification of the edge map in the third stage is optional, to capture all the desired edges of the object contours. In the final stage, a new method called the Edge Preserved Inhomogeneous Diffusion Equation (EPIDE) is used to smooth the noisy image, or the image denoised in the first stage, while preserving edges. The Optical Character Recognition (OCR) results in the experiments show that the proposed procedure has the best recognition result because of its edge-preservation capability.
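
    A hedged stand-in for the final stage (generic edge-stopped diffusion, not the authors' EPIDE): diffusion is switched off across pixels marked in the (possibly hand-edited) edge map, so region interiors are smoothed while contours survive. The step size, iteration count and toy edge map are assumptions.

```python
import numpy as np

def edge_preserved_diffusion(img, edge_map, n_iter=50, dt=0.2):
    """Explicit heat diffusion in which edge pixels exchange no intensity
    with their neighbours (conductance 0 on edges, 1 elsewhere)."""
    u = img.astype(float).copy()
    g = 1.0 - edge_map.astype(float)
    for _ in range(n_iter):
        # Differences to the four neighbours.
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Block flux into or out of edge pixels on both sides.
        flux = g * (dn * np.roll(g, -1, 0) + ds * np.roll(g, 1, 0)
                    + de * np.roll(g, -1, 1) + dw * np.roll(g, 1, 1))
        u += dt * flux
    return u

# Toy usage: a vertical contour protected by a stand-in Canny edge map.
img = np.random.rand(64, 64)
edges = np.zeros((64, 64), bool)
edges[:, 32] = True
smooth = edge_preserved_diffusion(img, edges)
```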

  3. A virtual speaker in noisy classroom conditions: supporting or disrupting children's listening comprehension?

    Science.gov (United States)

    Nirme, Jens; Haake, Magnus; Lyberg Åhlander, Viveka; Brännström, Jonas; Sahlén, Birgitta

    2018-04-05

    Seeing a speaker's face facilitates speech recognition, particularly under noisy conditions. Evidence for how it might affect comprehension of the content of the speech is more sparse. We investigated how children's listening comprehension is affected by multi-talker babble noise, with or without presentation of a digitally animated virtual speaker, and whether successful comprehension is related to performance on a test of executive functioning. We performed a mixed-design experiment with 55 (34 female) participants (8- to 9-year-olds), recruited from Swedish elementary schools. The children were presented with four different narratives, each in one of four conditions: audio-only presentation in a quiet setting, audio-only presentation in noisy setting, audio-visual presentation in a quiet setting, and audio-visual presentation in a noisy setting. After each narrative, the children answered questions on the content and rated their perceived listening effort. Finally, they performed a test of executive functioning. We found significantly fewer correct answers to explicit content questions after listening in noise. This negative effect was only mitigated to a marginally significant degree by audio-visual presentation. Strong executive function only predicted more correct answers in quiet settings. Altogether, our results are inconclusive regarding how seeing a virtual speaker affects listening comprehension. We discuss how methodological adjustments, including modifications to our virtual speaker, can be used to discriminate between possible explanations to our results and contribute to understanding the listening conditions children face in a typical classroom.

  4. An enhanced approach for biomedical image restoration using image fusion techniques

    Science.gov (United States)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use wavelet transformation to remove Gaussian noise from biomedical images, a Positron Emission Tomography (PET) image and a Radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from the above denoising techniques using an image-addition method. Quantitative performance metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean square error (MSE) are then computed, since these statistical measurements help in the assessment of fidelity and image quality. The results showed that our approach can be applied to biomedical images in several color spaces.

  5. Electron microscopy at reduced levels of irradiation

    International Nuclear Information System (INIS)

    Kuo, I.A.M.

    1975-05-01

    Specimen damage by electron radiation is one of the factors that limits high resolution electron microscopy of biological specimens. A method was developed to record images of periodic objects at a reduced electron exposure in order to preserve high resolution structural detail. The resulting image would tend to be a statistically noisy one, as the electron exposure is reduced to lower and lower values. Reconstruction of a statistically defined image from such data is possible by spatial averaging of the electron signals from a large number of identical unit cells. (U.S.)

  6. Generation of Werner states and preservation of entanglement in a noisy environment

    Energy Technology Data Exchange (ETDEWEB)

    Jakobczyk, Lech [Institute of Theoretical Physics, University of Wroclaw, Pl. M. Borna 9, 50-204 Wroclaw (Poland)]. E-mail: ljak@ift.uni.wroc.pl; Jamroz, Anna [Institute of Theoretical Physics, University of Wroclaw, Pl. M. Borna 9, 50-204 Wroclaw (Poland)

    2005-12-05

    We study the influence of noisy environment on the evolution of two-atomic system in the presence of collective damping. Generation of Werner states as asymptotic stationary states of evolution is described. We also show that for some initial states the amount of entanglement is preserved during the evolution.

  7. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  8. Multifaceted Effects of Noisy Galvanic Vestibular Stimulation on Manual Tracking Behavior in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Soojin eLee

    2015-02-01

    Full Text Available Parkinson’s disease (PD) is a neurodegenerative movement disorder that is characterized clinically by slowness of movement, rigidity, tremor, postural instability, and often cognitive impairments. Recent studies have demonstrated altered cortico-basal ganglia rhythms in PD, which raises the possibility of a role for non-invasive stimulation therapies such as noisy galvanic vestibular stimulation (GVS). We applied noisy GVS to 12 mild-moderately affected PD subjects (Hoehn & Yahr 1.5-2.5) off medication while they performed a sinusoidal visuomotor joystick tracking task, which alternated between 2 task conditions depending on whether the displayed cursor position underestimated the actual error by 30% (‘Better’) or overestimated by 200% (‘Worse’). Either sham or subthreshold, noisy GVS (0.1-10 Hz, 1/f-type power spectrum) was applied in pseudorandom order. We used exploratory (Linear Discriminant Analysis with bootstrapping) and confirmatory (robust multivariate linear regression) methods to determine if the presence of GVS significantly affected our ability to predict cursor position based on target variables. Variables related to displayed error were robustly seen to discriminate GVS in all subjects particularly in the Worse condition. If we considered higher frequency components of the cursor trajectory as noise, the signal-to-noise ratio of cursor trajectory was significantly increased during the GVS stimulation. The results suggest that noisy GVS influenced motor performance of the PD subjects, and we speculate that they were elicited through a combination of mechanisms: enhanced cingulate activity resulting in modulation of frontal midline theta rhythms, improved signal processing in neuromotor system via stochastic facilitation and/or enhanced vigor known to be deficient in PD subjects. Further work is required to determine if GVS has a selective effect on corrective submovements that could not be detected by the current analyses.

  9. Multifaceted effects of noisy galvanic vestibular stimulation on manual tracking behavior in Parkinson’s disease

    Science.gov (United States)

    Lee, Soojin; Kim, Diana J.; Svenkeson, Daniel; Parras, Gabriel; Oishi, Meeko Mitsuko K.; McKeown, Martin J.

    2015-01-01

    Parkinson’s disease (PD) is a neurodegenerative movement disorder that is characterized clinically by slowness of movement, rigidity, tremor, postural instability, and often cognitive impairments. Recent studies have demonstrated altered cortico-basal ganglia rhythms in PD, which raises the possibility of a role for non-invasive stimulation therapies such as noisy galvanic vestibular stimulation (GVS). We applied noisy GVS to 12 mild-moderately affected PD subjects (Hoehn and Yahr 1.5–2.5) off medication while they performed a sinusoidal visuomotor joystick tracking task, which alternated between 2 task conditions depending on whether the displayed cursor position underestimated the actual error by 30% (‘Better’) or overestimated by 200% (‘Worse’). Either sham or subthreshold, noisy GVS (0.1–10 Hz, 1/f-type power spectrum) was applied in pseudorandom order. We used exploratory (linear discriminant analysis with bootstrapping) and confirmatory (robust multivariate linear regression) methods to determine if the presence of GVS significantly affected our ability to predict cursor position based on target variables. Variables related to displayed error were robustly seen to discriminate GVS in all subjects particularly in the Worse condition. If we considered higher frequency components of the cursor trajectory as “noise,” the signal-to-noise ratio of cursor trajectory was significantly increased during the GVS stimulation. The results suggest that noisy GVS influenced motor performance of the PD subjects, and we speculate that they were elicited through a combination of mechanisms: enhanced cingulate activity resulting in modulation of frontal midline theta rhythms, improved signal processing in neuromotor system via stochastic facilitation and/or enhanced “vigor” known to be deficient in PD subjects. Further work is required to determine if GVS has a selective effect on corrective submovements that could not be detected by the current analyses.

  10. SuperPixel based mid-level image description for image recognition

    NARCIS (Netherlands)

    Tasli, H.E.; Sicre, R.; Gevers, T.

    2015-01-01

    This study proposes a mid-level feature descriptor and aims to validate improvement on image classification and retrieval tasks. In this paper, we propose a method to explore the conventional feature extraction techniques in the image classification pipeline from a different perspective where

  11. Semantics by levels: An example for an image language

    International Nuclear Information System (INIS)

    Fasciano, M.; Levialdi, S.; Tortora, G.

    1984-01-01

    Ambiguities in formal language constructs may decrease both the understanding and the coding efficiency of a program. Within an image language, two semantic levels have been detected, corresponding to the lower level (pixel-based) and to the higher level (image-based). Denotational semantics has been used to define both levels within PIXAL (an image language) in order to enable the reader to visualize a concrete application of the semantic levels and their implications in a programming environment. This paper presents the semantics of different levels of conceptualization in the abstract formal description of an image language. The disambiguation of the meaning of special-purpose constructs that imply either the elementary (pixel) level or the higher image (array) level is naturally obtained by means of such semantic clauses. Perhaps non-von Neumann architectures on which hierarchical computations may be performed could also benefit from the use of semantic clauses to make explicit the different levels at which such computations are executed.

  12. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To this end, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and to enhance the low contrast of echocardiograph images; after applying these techniques, heart boundaries and valve movement can be legibly detected by traditional edge detection methods.

  13. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection, to reduce heavy noise and obtain better image quality. To this end, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and to enhance the low contrast of echocardiograph images; after applying these techniques, heart boundaries and valve movement can be legibly detected by traditional edge detection methods.

  14. Noisy transcription factor NF-κB oscillations stabilize and sensitize cytokine signaling in space

    DEFF Research Database (Denmark)

    Gangstad, S.W.; Feldager, C.W.; Juul, Jeppe Søgaard

    2013-01-01

    NF-κB is a major transcription factor mediating inflammatory response. In response to a pro-inflammatory stimulus, it exhibits a characteristic response - a pulse followed by noisy oscillations in concentrations of considerably smaller amplitude. NF-κB is an important mediator of cellular...... amplitude has not been addressed. We use a cellular automaton model to address these issues in the context of spatially distributed communicating cells. We find that noisy secondary oscillations stabilize concentric wave patterns, thus improving signal quality. Furthermore, both lower secondary amplitude...... as well as noise in the oscillation period might be working against chronic inflammation, the state of self-sustained and stimulus-independent excitations. Our findings suggest that the characteristic irregular secondary oscillations of lower amplitude are not accidental. On the contrary, they might have...

  15. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    Existing firearm identification algorithms suffer from several inherent difficulties, including the need for physical interpretation and time-consuming processing. The aim of this study is therefore to propose a robust firearm identification algorithm based on extracting a set of informative features from a segmented region of interest (ROI) in simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least square estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. The study found that the proposed algorithm performs the identification task on noisy images with noise levels as high as 70% while maintaining a firearm identification accuracy rate of over 90%.
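
    As a rough illustration of the first stages, the sketch below applies Laplacian sharpening and then Otsu's method, one common clustering-based threshold selection, before cropping a square ROI. The ROI half-size and the thresholding details are assumptions, not the authors' exact pipeline.

    ```python
    # Rough sketch of sharpening, thresholding and ROI cropping; Otsu's method
    # stands in for the paper's clustering-based threshold selection, and the
    # ROI half-size (64 px) is an arbitrary assumption.
    import numpy as np
    import cv2

    def sharpen_and_crop_roi(gray, half=64):
        lap = cv2.Laplacian(gray, cv2.CV_64F)
        sharp = np.clip(gray.astype(np.float64) - lap, 0, 255).astype(np.uint8)
        _, mask = cv2.threshold(sharp, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        ys, xs = np.nonzero(mask)                   # foreground pixels
        cy = int(np.clip(ys.mean(), half, gray.shape[0] - half))
        cx = int(np.clip(xs.mean(), half, gray.shape[1] - half))
        return sharp[cy - half:cy + half, cx - half:cx + half]
    ```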

  16. Hybrid of Fuzzy Logic and Random Walker Method for Medical Image Segmentation

    OpenAIRE

    Jasdeep Kaur; Manish Mahajan

    2015-01-01

    Image segmentation is the procedure of partitioning an image into various segments, reforming it into something more meaningful and easier to analyze. In real-world applications, images are noisy and there may be measurement errors as well. These factors affect the quality of segmentation, which is of major concern in medical fields, where decisions about patients’ treatment are based on information extracted from radiological images. Several algorithms and technique...

  17. Noisy: Identification of problematic columns in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Grünewald Stefan

    2008-06-01

    Full Text Available Abstract Motivation Sequence-based methods for phylogenetic reconstruction from (nucleic acid) sequence data are notoriously plagued by two effects: homoplasies and alignment errors. Large evolutionary distances imply a large number of homoplastic sites. As most protein-coding genes show dramatic variations in substitution rates that are not uncorrelated across the sequence, this often leads to a patchwork pattern of (i) phylogenetically informative and (ii) effectively randomized regions. In highly variable regions, furthermore, alignment errors accumulate, resulting in sometimes misleading signals in phylogenetic reconstruction. Results We present here a method that, based on assessing the distribution of character states along a cyclic ordering of the taxa, allows the identification of phylogenetically uninformative homoplastic sites in a multiple sequence alignment. Removal of these sites appears to improve the performance of phylogenetic reconstruction algorithms as measured by various indices of "tree quality". In particular, we obtain more stable trees due to the exclusion of phylogenetically incompatible sites that most likely represent strongly randomized characters. Software The computer program noisy implements this approach. It can be employed to improve phylogenetic reconstruction capability with quite a considerable success rate whenever (1) the average bootstrap support obtained from the original alignment is low, and (2) there are sufficiently many taxa in the data set – at least, say, 12 to 15 taxa. The software can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/noisy/.

  18. Noisy non-transitive quantum games

    International Nuclear Information System (INIS)

    Ramzan, M; Khan, Salman; Khan, M Khalid

    2010-01-01

    We study the effect of quantum noise in 3 x 3 entangled quantum games. By taking into account different noisy quantum channels, we analyze how a two-player, three-strategy Rock-Scissor-Paper game is influenced by the quantum noise. We consider the winning non-transitive strategies R, S and P such that R beats S, S beats P and P beats R. The game behaves as a noiseless game for the maximum value of the quantum noise. It is seen that Alice's payoff is heavily influenced by the depolarizing noise as compared to the amplitude damping noise. A depolarizing channel causes a monotonic decrease in players' payoffs as we increase the amount of quantum noise. In the case of the amplitude damping channel, Alice's payoff function reaches its minimum for α = 0.5 and is symmetrical. This means that larger values of quantum noise influence the game weakly. On the other hand, the phase damping channel does not influence the game. Furthermore, the Nash equilibrium and non-transitive character of the game are not affected under the influence of quantum noise.

  19. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    Science.gov (United States)

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, and this causes errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When tested on real cryo-ET experimental data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter processing compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
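
    The selected parameter set can be approximated with a general-purpose wavelet library. The sketch below uses PyWavelets with a biorthogonal spline wavelet ('bior2.2') as a stand-in for the spline-based DDWT, which PyWavelets does not provide directly: three-level decomposition, finest level zeroed out, and a per-subband universal soft threshold.

    ```python
    # Sketch of the selected shrinkage scheme with PyWavelets; 'bior2.2' is a
    # spline biorthogonal wavelet used as a stand-in for the spline-based DDWT.
    import numpy as np
    import pywt

    def modified_wavelet_shrink(img, wavelet='bior2.2', level=3):
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        out = [coeffs[0]]                           # keep the approximation
        for i, details in enumerate(coeffs[1:], start=1):
            lvl = level - i + 1                     # coeffs[1] is the coarsest
            if lvl == 1:
                # zero out the finest decomposition level
                out.append(tuple(np.zeros_like(d) for d in details))
            else:
                # subband-dependent soft threshold (universal rule per subband)
                shrunk = []
                for d in details:
                    sigma = np.median(np.abs(d)) / 0.6745
                    t = sigma * np.sqrt(2.0 * np.log(d.size))
                    shrunk.append(pywt.threshold(d, t, mode='soft'))
                out.append(tuple(shrunk))
        rec = pywt.waverec2(out, wavelet)
        return rec[:img.shape[0], :img.shape[1]]    # crop padding, if any
    ```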

  20. Security Analysis of Measurement-Device-Independent Quantum Key Distribution in Collective-Rotation Noisy Environment

    Science.gov (United States)

    Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian

    2018-01-01

    Noise is a problem that communication channels cannot avoid. It is thus beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information theory methods are used to analyze the security of the protocol. The maximum amount of information that Eve can eavesdrop is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. The MDI-QKD protocol is therefore secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results under eavesdropping is 16%. Moreover, the probability that the relay outputs successful results is higher with eavesdropping than without it. The paper validates that the MDI-QKD protocol has good robustness.

  1. Improving the precision of noisy oscillators

    Science.gov (United States)

    Moehlis, Jeff

    2014-04-01

    We consider how the period of an oscillator is affected by white noise, with special attention given to the cases of additive noise and parameter fluctuations. Our treatment is based upon the concepts of isochrons, which extend the notion of the phase of a stable periodic orbit to the basin of attraction of the periodic orbit, and phase response curves, which can be used to understand the geometry of isochrons near the periodic orbit. This includes a derivation of the leading-order effect of noise on the statistics of an oscillator’s period. Several examples are considered in detail, which illustrate the use and validity of the theory, and demonstrate how to improve a noisy oscillator’s precision by appropriately tuning system parameters or operating away from a bifurcation point. It is also shown that appropriately timed impulsive kicks can give further improvements to oscillator precision.

  2. Multi-objective optimization with estimation of distribution algorithm in a noisy environment.

    Science.gov (United States)

    Shim, Vui Ann; Tan, Kay Chen; Chia, Jun Yong; Al Mamun, Abdullah

    2013-01-01

    Many real-world optimization problems are subject to uncertainties that may be characterized by the presence of noise in the objective functions. The estimation of distribution algorithm (EDA), which models the global distribution of the population for searching tasks, is one of the evolutionary computation techniques that deal with noisy information. This paper studies the potential of EDAs, particularly an EDA based on restricted Boltzmann machines, for handling multi-objective optimization problems in a noisy environment. Noise is introduced to the objective functions in the form of a Gaussian distribution. In order to reduce the detrimental effect of noise, a likelihood correction feature is proposed to tune the marginal probability distribution of each decision variable. The EDA is subsequently hybridized with a particle swarm optimization algorithm in a discrete domain to improve its search ability. The effectiveness of the proposed algorithm is examined via eight benchmark instances with different characteristics and shapes of the Pareto optimal front. The scalability, hybridization, and computational time are rigorously studied. Comparative studies show that the proposed approach outperforms other state-of-the-art algorithms.

  3. Image improvement using radial basis functions (Mejoramiento de imágenes usando funciones de base radial)

    Directory of Open Access Journals (Sweden)

    Jaime Alberto Echeverri Arias

    2009-07-01

    Full Text Available The removal of impulsive noise is a classic problem in nonlinear processing for image enhancement, and globally supported radial basis functions are useful for tackling it. This work presents an interpolation technique that efficiently reduces impulsive noise in images by means of an interpolant obtained from radial basis functions, developed within a research project focused on building a retrieval system for images of Amazonian aquatic resources. The technique first tags the noisy pixels of the image and then, through interpolation, generates a reconstruction value for each tagged pixel using its neighbors. The results obtained are comparable to, and often better than, those of previously published and recognized techniques. According to the analysis of the results, the technique can be applied to images with high noise rates while maintaining a low reconstruction error for the "noisy" pixels as well as good visual quality.
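
    A minimal sketch of the tag-and-interpolate idea using SciPy's RBFInterpolator is shown below; the impulse-pixel test (value 0 or 255) and the 5x5 window are simplifying assumptions, not the published detector.

    ```python
    # Sketch of tag-and-interpolate impulse-noise removal with radial basis
    # functions (SciPy >= 1.7). The impulse test and window size are crude
    # assumptions for illustration only.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def rbf_despeckle(img, win=5):
        out = img.astype(float).copy()
        noisy = (img == 0) | (img == 255)           # crude impulse tagging
        h, w = img.shape
        r = win // 2
        for y, x in zip(*np.nonzero(noisy)):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            patch = img[y0:y1, x0:x1]
            clean = ~((patch == 0) | (patch == 255))
            if clean.sum() < 3:                     # not enough support points
                continue
            pts = np.argwhere(clean) + [y0, x0]     # global coordinates
            rbf = RBFInterpolator(pts, patch[clean].astype(float),
                                  kernel='thin_plate_spline')
            out[y, x] = rbf([[y, x]])[0]
        return np.clip(out, 0, 255).astype(np.uint8)
    ```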

  4. Associative memory for online learning in noisy environments using self-organizing incremental neural network.

    Science.gov (United States)

    Sudo, Akihito; Sato, Akihiro; Hasegawa, Osamu

    2009-06-01

    Associative memory operating in a real environment must perform well in online incremental learning and be robust to noisy data because noisy associative patterns are presented sequentially in a real environment. We propose a novel associative memory that satisfies these requirements. Using the proposed method, new associative pairs that are presented sequentially can be learned accurately without forgetting previously learned patterns. The memory size of the proposed method increases adaptively with learning patterns. Therefore, it suffers neither redundancy nor insufficiency of memory size, even in an environment in which the maximum number of associative pairs to be presented is unknown before learning. Noisy inputs in real environments are classifiable into two types: noise-added original patterns and faultily presented random patterns. The proposed method deals with two types of noise. To our knowledge, no conventional associative memory addresses noise of both types. The proposed associative memory performs as a bidirectional one-to-many or many-to-one associative memory and deals not only with bipolar data, but also with real-valued data. Results demonstrate that the proposed method's features are important for application to an intelligent robot operating in a real environment. The originality of our work consists of two points: employing a growing self-organizing network for an associative memory, and discussing what features are necessary for an associative memory for an intelligent robot and proposing an associative memory that satisfies those requirements.

  5. Statistical and heuristic image noise extraction (SHINE): a new method for processing Poisson noise in scintigraphic images

    International Nuclear Information System (INIS)

    Hannequin, Pascal; Mas, Jacky

    2002-01-01

    Poisson noise is one of the factors degrading scintigraphic images, especially at low count levels, due to the statistical nature of photon detection. We have developed an original procedure, named statistical and heuristic image noise extraction (SHINE), to reduce the Poisson noise contained in scintigraphic images while preserving the resolution, the contrast and the texture. The SHINE procedure consists of dividing the image into 4 x 4 blocks and performing a correspondence analysis on these blocks. Each block is then reconstructed using its own significant factors, which are selected using an original statistical variance test. The SHINE procedure has been validated using a line numerical phantom and a real phantom with hot and cold spots. The reference images are the noise-free simulated images for the numerical phantom and an extremely high-count image for the real phantom. The SHINE procedure has then been applied to the Jaszczak phantom and clinical data including planar bone scintigraphy, planar Sestamibi scintigraphy and Tl-201 myocardial SPECT. The SHINE procedure reduces the mean normalized error between the noisy images and the corresponding reference images. This reduction is constant and does not change with the count level. The SNR in a SHINE-processed image is close to that of the corresponding raw image with twice the number of counts. The visual results with the Jaszczak phantom SPECT have shown that SHINE preserves the contrast and the resolution of the slices well. Clinical examples have shown no visual difference between the SHINE images and the corresponding raw images obtained with twice the acquisition duration. SHINE is an entirely automatic procedure which enables halving the acquisition time or the injected dose in scintigraphic acquisitions. It can be applied to all scintigraphic images, including PET data, and to all low-count photon images.
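
    The correspondence-analysis and variance-test machinery of SHINE is not reproduced in the record, but the block-factorization idea can be sketched with an ordinary principal-component (SVD) projection of 4 x 4 blocks; the fixed number of retained factors below is only a stand-in for SHINE's statistical selection.

    ```python
    # Crude analogue of the SHINE block factorization: project 4x4 blocks onto
    # their leading principal components. SHINE uses correspondence analysis
    # plus a statistical variance test per block; the fixed 'keep' count here
    # is only a stand-in for that test.
    import numpy as np

    def block_factor_denoise(img, block=4, keep=4):
        h, w = (s - s % block for s in img.shape)
        blocks = (img[:h, :w].astype(float)
                  .reshape(h // block, block, w // block, block)
                  .swapaxes(1, 2).reshape(-1, block * block))
        mean = blocks.mean(axis=0)
        u, s, vt = np.linalg.svd(blocks - mean, full_matrices=False)
        approx = (u[:, :keep] * s[:keep]) @ vt[:keep] + mean
        return (approx.reshape(h // block, w // block, block, block)
                      .swapaxes(1, 2).reshape(h, w))
    ```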

  6. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. Retinal image quality must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for the preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy areas from the overall image, and it comprises coarse and fine segmentation stages. The standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to validate our preprocessing technique. The experimental results show the validity of the proposed technique.

  7. Algorithms of image processing in nuclear medicine

    International Nuclear Information System (INIS)

    Oliveira, V.A.

    1990-01-01

    The problem of image restoration from noisy measurements as encountered in Nuclear Medicine is considered. A new approach for treating the measurements wherein they are represented by a spatial noncausal interaction model prior to maximum entropy restoration is given. This model describes the statistical dependence among the image values and their neighbourhood. The particular application of the algorithms presented here relates to gamma ray imaging systems, and is aimed at improving the resolution-noise suppression product. Results for actual gamma camera data are presented and compared with more conventional techniques. (author)

  8. Correction for non-rigid movement artefacts in calcium imaging using local-global optical flow and PCA-based templates

    DEFF Research Database (Denmark)

    Brazhe, A.; Fordsmann, J.; Lauritzen, M.

    2017-01-01

    correction of calcium timelapse imaging data is accurate, can represent non-rigid image distortions, is robust to noisy data, and allows fast registration of large videos. The implementation is open source and programmed in Python, which provides easy access and merging into downstream image

  9. Problem of identifying an object in a robotics scene from an imprecise verbal description

    Energy Technology Data Exchange (ETDEWEB)

    Farreny, H; Prade, H

    1983-01-01

    The authors investigate the problem of relating imprecise and incomplete verbal descriptions to noisy high-level features supplied by an image analyzer. Particular attention is paid to pattern-matching problems. The proposed approach allows the direct processing of human-like descriptions. Moreover, the imprecision due to the use of natural language expressions or to the noisiness of the image analyzer is taken into account. 26 references.

  10. Capturing spike variability in noisy Izhikevich neurons using point process generalized linear models

    DEFF Research Database (Denmark)

    Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.

    2018-01-01

    current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven...... by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured...... are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input

  11. Change detection for synthetic aperture radar images based on pattern and intensity distinctiveness analysis

    Science.gov (United States)

    Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang

    2018-04-01

    Synthetic aperture radar (SAR) imaging is independent of atmospheric conditions, which makes it an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, saliency detection based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze pixels in the changed-region candidates, classifying these pixels into changed or unchanged classes to obtain the final change map. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
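
    A stripped-down version of the pipeline (log-ratio difference image followed by k-means labeling, with the saliency step omitted) might look like the following sketch; the offset eps guards against log of zero and is an assumption.

    ```python
    # Stripped-down sketch: log-ratio difference image + k-means labelling.
    # The saliency analysis that makes the published method robust to small
    # noisy regions is omitted here.
    import numpy as np
    from sklearn.cluster import KMeans

    def sar_change_map(img1, img2, eps=1.0):
        di = np.abs(np.log((img1.astype(float) + eps) /
                           (img2.astype(float) + eps)))
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(di.reshape(-1, 1))
        flat = di.reshape(-1)
        # call the cluster with the larger mean difference "changed" (label 1)
        if flat[labels == 0].mean() > flat[labels == 1].mean():
            labels = 1 - labels
        return labels.reshape(di.shape).astype(np.uint8)
    ```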

  12. Cerebral Metabolic Rate of Oxygen (CMRO2) Mapping by Combining Quantitative Susceptibility Mapping (QSM) and Quantitative Blood Oxygenation Level-Dependent Imaging (qBOLD).

    Science.gov (United States)

    Cho, Junghun; Kee, Youngwook; Spincemaille, Pascal; Nguyen, Thanh D; Zhang, Jingwei; Gupta, Ajay; Zhang, Shun; Wang, Yi

    2018-03-07

    To map the cerebral metabolic rate of oxygen (CMRO2) by estimating the oxygen extraction fraction (OEF) from gradient echo imaging (GRE) using the phase and magnitude of the GRE data. 3D multi-echo gradient echo imaging and perfusion imaging with arterial spin labeling were performed in 11 healthy subjects. CMRO2 and OEF maps were reconstructed by joint quantitative susceptibility mapping (QSM) to process GRE phases and quantitative blood oxygen level-dependent (qBOLD) modeling to process GRE magnitudes. Comparisons with QSM and qBOLD alone were performed using ROI analysis, paired t-tests, and Bland-Altman plots. The average CMRO2 value in cortical gray matter across subjects was 140.4 ± 14.9, 134.1 ± 12.5, and 184.6 ± 17.9 μmol/100 g/min, with corresponding OEFs of 30.9 ± 3.4%, 30.0 ± 1.8%, and 40.9 ± 2.4% for methods based on QSM, qBOLD, and QSM+qBOLD, respectively. QSM+qBOLD provided the highest CMRO2 contrast between gray and white matter, more uniform OEF than QSM, and less noisy OEF than qBOLD. Quantitative CMRO2 mapping that fits the entire complex GRE data is feasible by combining QSM analysis of phase and qBOLD analysis of magnitude. © 2018 International Society for Magnetic Resonance in Medicine.

  13. Evaluation of Parallel and Fan-Beam Data Acquisition Geometries and Strategies for Myocardial SPECT Imaging

    Science.gov (United States)

    Qi, Yujin; Tsui, B. M. W.; Gilland, K. L.; Frey, E. C.; Gullberg, G. T.

    2004-06-01

    This study evaluates myocardial SPECT images obtained from parallel-hole (PH) and fan-beam (FB) collimator geometries using both circular-orbit (CO) and noncircular-orbit (NCO) acquisitions. A newly developed 4-D NURBS-based cardiac-torso (NCAT) phantom was used to simulate the 99mTc-sestamibi uptake in a human torso with myocardial defects in the left ventricular (LV) wall. Two phantoms were generated to simulate patients with thick and thin body builds. Projection data including the effects of attenuation, collimator-detector response and scatter were generated using SIMSET Monte Carlo simulations. A large number of photon histories were generated such that the projection data were close to noise free. Poisson noise fluctuations were then added to simulate the count densities found in clinical data. Noise-free and noisy projection data were reconstructed using the iterative OS-EM reconstruction algorithm with attenuation compensation. The reconstructed images from noisy projection data show that the noise levels are lower for the FB than for the PH collimator due to the increase in detected counts. The NCO acquisition method provides slightly better resolution and a small improvement in defect contrast as compared to the CO acquisition method in noise-free reconstructed images. Despite lower projection counts, the NCO shows the same noise level as the CO in the attenuation-corrected reconstructed images. The results from the channelized Hotelling observer (CHO) study show that the FB collimator is superior to the PH collimator in myocardial defect detection, but the NCO shows no statistically significant difference from the CO for either the PH or FB collimator. In conclusion, our results indicate that data acquisition using the NCO makes a very small improvement in resolution over the CO for myocardial SPECT imaging. This small improvement does not make a significant difference in myocardial defect detection. However, an FB collimator provides better defect detection than a PH collimator.

  14. Multiscale Vision Model Highlights Spontaneous Glial Calcium Waves Recorded by 2-Photon Imaging in Brain Tissue

    DEFF Research Database (Denmark)

    Brazhe, Alexey; Mathiesen, Claus; Lauritzen, Martin

    2013-01-01

    Intercellular glial calcium waves constitute a signaling pathway which can be visualized by fluorescence imaging of cytosolic Ca2+ changes. However, there is a lack of procedures for sensitive and reliable detection of calcium waves in noisy multiphoton imaging data. Here we extend multiscale...

  15. Modeling evolution of crosstalk in noisy signal transduction networks

    Science.gov (United States)

    Tareen, Ammar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-02-01

    Signal transduction networks can form highly interconnected systems within cells due to crosstalk between constituent pathways. To better understand the evolutionary design principles underlying such networks, we study the evolution of crosstalk for two parallel signaling pathways that arise via gene duplication. We use a sequence-based evolutionary algorithm and evolve the network based on two physically motivated fitness functions related to information transmission. We find that one fitness function leads to a high degree of crosstalk while the other leads to pathway specificity. Our results offer insights on the relationship between network architecture and information transmission for noisy biomolecular networks.

  16. Communication in a noisy environment: Perception of one's own voice and speech enhancement

    Science.gov (United States)

    Le Cocq, Cecile

    Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. As a consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. In fact this communication problem is a double one: first, the hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect, which is produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, physiological noises at low frequencies are better perceived; second, the perception of one's own voice is modified. In order to better understand this phenomenon, the literature results are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave which excites the buccal cavity does not directly excite the external ear or the rest of the body. The measurement of the hearing threshold with the ear open and occluded was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the internal ear. The speech intelligibility from others is altered by both the high sound levels of noisy industrial environments and the speech signal attenuation due to hearing protectors.

  17. County-Level Population Economic Status and Medicare Imaging Resource Consumption.

    Science.gov (United States)

    Rosenkrantz, Andrew B; Hughes, Danny R; Prabhakar, Anand M; Duszak, Richard

    2017-06-01

    The aim of this study was to assess relationships between county-level variation in Medicare beneficiary imaging resource consumption and measures of population economic status. The 2013 CMS Geographic Variation Public Use File was used to identify county-level per capita Medicare fee-for-service imaging utilization and nationally standardized costs to the Medicare program. The County Health Rankings public data set was used to identify county-level measures of population economic status. Regional variation was assessed, and multivariate regressions were performed. Imaging events per 1,000 Medicare beneficiaries varied 1.8-fold (range, 2,723-4,843) at the state level and 5.3-fold (range, 1,228-6,455) at the county level. Per capita nationally standardized imaging costs to Medicare varied 4.2-fold (range, $84-$353) at the state level and 14.1-fold (range, $33-$471) at the county level. Within individual states, county-level utilization varied on average 2.0-fold (range, 1.1- to 3.1-fold), and costs varied 2.8-fold (range, 1.1- to 6.4-fold). For both large urban populations and small rural states, Medicare imaging resource consumption was heterogeneously variable at the county level. Adjusting for county-level gender, ethnicity, rural status, and population density, countywide unemployment rates showed strong independent positive associations with Medicare imaging events (β = 26.96) and costs (β = 4.37), whereas uninsured rates showed strong independent positive associations with Medicare imaging costs (β = 2.68). Medicare imaging utilization and costs both vary far more at the county than at the state level. Unfavorable measures of county-level population economic status in the non-Medicare population are independently associated with greater Medicare imaging resource consumption. Future efforts to optimize Medicare imaging use should consider the influence of local indigenous socioeconomic factors outside the scope of traditional beneficiary-focused policy.

  18. Noisy non-transitive quantum games

    Energy Technology Data Exchange (ETDEWEB)

    Ramzan, M; Khan, Salman; Khan, M Khalid, E-mail: mramzan@phys.qau.edu.p [Department of Physics Quaid-i-Azam University, Islamabad 45320 (Pakistan)

    2010-07-02

    We study the effect of quantum noise in 3 x 3 entangled quantum games. By taking into account different noisy quantum channels, we analyze how a two-player, three-strategy Rock-Scissor-Paper game is influenced by the quantum noise. We consider the winning non-transitive strategies R, S and P such that R beats S, S beats P and P beats R. The game behaves as a noiseless game for the maximum value of the quantum noise. It is seen that Alice's payoff is heavily influenced by the depolarizing noise as compared to the amplitude damping noise. A depolarizing channel causes a monotonic decrease in players' payoffs as we increase the amount of quantum noise. In the case of the amplitude damping channel, Alice's payoff function reaches its minimum for α = 0.5 and is symmetrical. This means that larger values of quantum noise influence the game weakly. On the other hand, the phase damping channel does not influence the game. Furthermore, the Nash equilibrium and non-transitive character of the game are not affected under the influence of quantum noise.

  19. New variational image decomposition model for simultaneously denoising and segmenting optical coherence tomography images

    International Nuclear Information System (INIS)

    Duan, Jinming; Bai, Li; Tench, Christopher; Gottlob, Irene; Proudlock, Frank

    2015-01-01

    Optical coherence tomography (OCT) imaging plays an important role in clinical diagnosis and monitoring of diseases of the human retina. Automated analysis of optical coherence tomography images is a challenging task as the images are inherently noisy. In this paper, a novel variational image decomposition model is proposed to decompose an OCT image into three components: the first component is the original image but with the noise completely removed; the second contains the set of edges representing the retinal layer boundaries present in the image; and the third is an image of noise, or in image decomposition terms, the texture, or oscillatory patterns of the original image. In addition, a fast Fourier transform based split Bregman algorithm is developed to improve computational efficiency of solving the proposed model. Extensive experiments are conducted on both synthesised and real OCT images to demonstrate that the proposed model outperforms the state-of-the-art speckle noise reduction methods and leads to accurate retinal layer segmentation. (paper)

  20. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-tonoise ratio in the envelope domain (SNRenv), which was demonstrated...... to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating...

  1. Signal detection by active, noisy hair bundles

    Science.gov (United States)

    O'Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.

    2018-05-01

    Vertebrate ears employ hair bundles to transduce mechanical movements into electrical signals, but their performance is limited by noise. Hair bundles are substantially more sensitive to periodic stimulation when they are mechanically active, however, than when they are passive. We developed a model of active hair-bundle mechanics that predicts the conditions under which a bundle is most sensitive to periodic stimulation. The model relies only on the existence of mechanotransduction channels and an active adaptation mechanism that recloses the channels. For a frequency-detuned stimulus, a noisy hair bundle's phase-locked response and degree of entrainment as well as its detection bandwidth are maximized when the bundle exhibits low-amplitude spontaneous oscillations. The phase-locked response and entrainment of a bundle are predicted to peak as functions of the noise level. We confirmed several of these predictions experimentally by periodically forcing hair bundles held near the onset of self-oscillation. A hair bundle's active process amplifies the stimulus preferentially over the noise, allowing the bundle to detect periodic forces less than 1 pN in amplitude. Moreover, the addition of noise can improve a bundle's ability to detect the stimulus. Although mechanical activity has not yet been observed in mammalian hair bundles, a related model predicts that active but quiescent bundles can oscillate spontaneously when they are loaded by a sufficiently massive object such as the tectorial membrane. Overall, this work indicates that auditory systems rely on active elements, composed of hair cells and their mechanical environment, that operate on the brink of self-oscillation.

  2. Fast parallel algorithm for CT image reconstruction.

    Science.gov (United States)

    Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2012-01-01

    In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections. Their use may be important in portable scanners for their functionality in emergency situations. However, in practice, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).
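
    PETSc itself is beyond the scope of a short example, but the structure of a basic iterative reconstruction loop can be sketched with a generic Landweber iteration on a small dense system matrix; this is an illustrative stand-in, not the parallel implementation analyzed in the paper.

    ```python
    # Illustrative stand-in for an iterative reconstruction loop: Landweber
    # iterations x <- x + t * A^T (b - A x) on a small dense system matrix A
    # (rows = ray sums, columns = pixels). Not the PETSc implementation.
    import numpy as np

    def landweber(A, b, n_iter=100, relax=None):
        if relax is None:
            relax = 1.0 / np.linalg.norm(A, 2) ** 2  # step below 2/||A||^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += relax * A.T @ (b - A @ x)
            x = np.clip(x, 0.0, None)                # attenuation is non-negative
        return x
    ```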

  3. Methodology to estimate the relative pressure field from noisy experimental velocity data

    International Nuclear Information System (INIS)

    Bolin, C D; Raguin, L G

    2008-01-01

    The determination of intravascular pressure fields is important to the characterization of cardiovascular pathology. We present a two-stage method that solves the inverse problem of estimating the relative pressure field from noisy velocity fields measured by phase contrast magnetic resonance imaging (PC-MRI) on an irregular domain with limited spatial resolution, and includes a filter for the experimental noise. For the pressure calculation, the Poisson pressure equation is solved by embedding the irregular flow domain into a regular domain. To lessen the propagation of the noise inherent to the velocity measurements, three filters (a median filter and two physics-based filters) are evaluated using a 2-D Couette flow. The two physics-based filters outperform the median filter for the estimation of the relative pressure field at realistic signal-to-noise ratios (SNR = 5 to 30). The most accurate pressure field results from a filter that applies three constraints simultaneously in a least-squares sense: consistency between the measured and filtered velocity fields, a divergence-free condition, and an additional smoothness condition. This filter leads to a 5-fold gain in accuracy for the estimated relative pressure field compared to no noise filtering, in conditions consistent with PC-MRI of the carotid artery: SNR = 5, 20 x 20 discretized flow domain (25 x 25 computational domain).
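
    As a small illustration of the pressure-calculation stage, a Jacobi solver for the pressure Poisson equation on a rectangular grid is sketched below; the homogeneous Dirichlet boundary and the source term f (which in the application would be derived from the filtered velocity field) are simplifying assumptions.

    ```python
    # Jacobi sketch of the pressure Poisson solve, lap(p) = f, on a regular
    # grid with homogeneous Dirichlet boundaries. In the application f is
    # built from the (filtered) velocity field; here it is just an input.
    import numpy as np

    def solve_poisson(f, h=1.0, n_iter=5000):
        p = np.zeros_like(f, dtype=float)
        for _ in range(n_iter):
            p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                    p[1:-1, 2:] + p[1:-1, :-2] -
                                    h * h * f[1:-1, 1:-1])
        return p
    ```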

  4. On-line transmission electron microscopic image analysis of chromatin texture for differentiation of thyroid gland tumors.

    Science.gov (United States)

    Kriete, A; Schäffer, R; Harms, H; Aus, H M

    1987-06-01

    Nuclei of the cells from the thyroid gland were analyzed in a transmission electron microscope by direct TV scanning and on-line image processing. The method uses the advantages of a visual-perception model to detect structures in noisy and low-contrast images. The features analyzed include area, a form factor and texture parameters from the second derivative stage. Three tumor-free thyroid tissues, three follicular adenomas, three follicular carcinomas and three papillary carcinomas were studied. The computer-aided cytophotometric method showed that the most significant differences were the statistics of the chromatin texture features of homogeneity and regularity. These findings document the possibility of an automated differentiation of tumors at the ultrastructural level.

  5. Medical image segmentation using genetic algorithms.

    Science.gov (United States)

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy, with a multitude of local optima. Not only does the genetic algorithmic framework prove effective in escaping local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
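
    A toy example of the GA framework applied to segmentation is sketched below: a population of candidate thresholds is evolved against Otsu's between-class variance criterion. Real medical-image GA segmenters evolve far richer chromosomes (contours, cluster centres), so this is only a schematic; the population size and mutation scale are arbitrary.

    ```python
    # Toy GA for a single segmentation threshold, scored by Otsu's
    # between-class variance. Population size and mutation scale are
    # arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(img, t):
        fg, bg = img[img > t], img[img <= t]
        if fg.size == 0 or bg.size == 0:
            return 0.0
        w1, w2 = fg.size / img.size, bg.size / img.size
        return w1 * w2 * (fg.mean() - bg.mean()) ** 2

    def ga_threshold(img, pop=20, gens=40, mut=8.0):
        thetas = rng.uniform(img.min(), img.max(), size=pop)
        for _ in range(gens):
            scores = np.array([fitness(img, t) for t in thetas])
            parents = thetas[np.argsort(scores)[-pop // 2:]]      # selection
            children = (rng.choice(parents, size=pop - parents.size) +
                        rng.normal(0.0, mut, size=pop - parents.size))  # mutation
            thetas = np.concatenate([parents, children])
        return max(thetas, key=lambda t: fitness(img, t))
    ```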

  6. Binary-space-partitioned images for resolving image-based visibility.

    Science.gov (United States)

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.

  7. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal to noise ratio, structural similarity index matrix, Bhattacharyya coefficient and mean absolute difference from synthetic and real MR images demonstrate the superior performance of the proposed method over other state-of-the-art methods.
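
    The core maximum likelihood step can be sketched as follows: given Rician-distributed magnitudes from a (restricted) neighborhood and a known noise level sigma, the underlying signal is found by numerically maximizing the Rice log-likelihood. This is a generic ML estimator under the stated assumptions, not the authors' full pipeline with the nonlocal-means reference image.

    ```python
    # Generic ML estimate of the underlying signal A from Rician magnitudes
    # in a (restricted) neighbourhood, with known noise level sigma -- only
    # the core likelihood step, not the authors' full method. i0e is the
    # scaled Bessel I0, so that log I0(z) = log(i0e(z)) + z stays finite.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.special import i0e

    def rician_ml(samples, sigma):
        m = np.asarray(samples, float)

        def neg_loglik(a):
            z = m * a / sigma ** 2
            return -np.sum(np.log(i0e(z)) + z
                           - (m ** 2 + a ** 2) / (2.0 * sigma ** 2))

        res = minimize_scalar(neg_loglik, bounds=(0.0, m.max() + 5.0 * sigma),
                              method='bounded')
        return res.x
    ```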

  8. Model-free prediction of noisy chaotic time series by deep learning

    OpenAIRE

    Yeo, Kyongmin

    2017-01-01

    We present a deep neural network for a model-free prediction of a chaotic dynamical system from noisy observations. The proposed deep learning model aims to predict the conditional probability distribution of a state variable. The Long Short-Term Memory network (LSTM) is employed to model the nonlinear dynamics and a softmax layer is used to approximate a probability distribution. The LSTM model is trained by minimizing a regularized cross-entropy function. The LSTM model is validated against...

  9. Blind CT image quality assessment via deep learning strategy: initial study

    Science.gov (United States)

    Li, Sui; He, Ji; Wang, Yongbo; Liao, Yuting; Zeng, Dong; Bian, Zhaoying; Ma, Jianhua

    2018-03-01

    Computed tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise; two major sources intrinsically cause noise in CT data, i.e., X-ray photon statistics and the electronic noise background. It is therefore necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human observer studies, which are impractical in the clinic because they are complex and time-consuming. In this paper, we present blind CT image quality assessment via a deep learning strategy. A database of 1500 CT images is constructed, containing 300 high-quality images and 1200 corresponding noisy images; specifically, the high-quality images were used to simulate the corresponding noisy images at four different doses. The images are then scored by experienced radiologists on the following attributes: image noise, artifacts, edge and structure, overall image quality, and tumor size and boundary estimation, using a five-point scale. We trained a network to learn the nonlinear map from CT images to subjective evaluation scores; the pre-trained model is then loaded to yield a predicted score for a test image. To demonstrate the performance of the deep learning network in IQA, the correlation coefficients Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are utilized. The experimental results demonstrate that the presented deep-learning-based IQA strategy can be used for CT image quality assessment.
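
    Both agreement measures are available in SciPy; a minimal sketch for comparing predicted and subjective scores:

    ```python
    # Agreement between network predictions and radiologist scores, using
    # the two correlation measures named in the record.
    from scipy.stats import pearsonr, spearmanr

    def iqa_agreement(predicted, subjective):
        plcc, _ = pearsonr(predicted, subjective)
        srocc, _ = spearmanr(predicted, subjective)
        return plcc, srocc
    ```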

  10. An upper bound for codes for the noisy two-access binary adder channel

    NARCIS (Netherlands)

    Tilborg, van H.C.A.

    1986-01-01

    Using earlier methods a combinatorial upper bound is derived for $|C| \cdot |D|$, where $(C, D)$ is a $\delta$-decodable code pair for the noisy two-access binary adder channel. Asymptotically, this bound reduces to $R_1 = R_2 \leq \frac{3}{2} + e \log_2 e - \left(\frac{1}{2} + e\right) \log_2(1 + 2e) = \frac{1}{2} - e + \ldots$

  11. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach which is known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time-consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  12. Minimum decoherence cat-like states in Gaussian noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Serafini, A [Dipartimento di Fisica 'E R Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy); De Siena, S [Dipartimento di Fisica 'E R Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy); Illuminati, F [Dipartimento di Fisica 'E R Caianiello', Universita di Salerno, INFM UdR Salerno, INFN Sezione Napoli, G C Salerno, Via S Allende, 84081 Baronissi, SA (Italy); Paris, M G A [ISIS 'A Sorbelli', I-41026 Pavullo nel Frignano, MO (Italy)

    2004-06-01

    We address the evolution of cat-like states in general Gaussian noisy channels, by considering superpositions of coherent and squeezed coherent states coupled to an arbitrarily squeezed bath. The phase space dynamics is solved and decoherence is studied, keeping track of the purity of the evolving state. The influence of the choice of the state and channel parameters on purity is discussed and optimal working regimes that minimize the decoherence rate are determined. In particular, we show that squeezing the bath to protect a non-squeezed cat state against decoherence is equivalent to orthogonally squeezing the initial cat state while letting the bath be phase insensitive.

  13. Minimum decoherence cat-like states in Gaussian noisy channels

    International Nuclear Information System (INIS)

    Serafini, A; De Siena, S; Illuminati, F; Paris, M G A

    2004-01-01

    We address the evolution of cat-like states in general Gaussian noisy channels, by considering superpositions of coherent and squeezed coherent states coupled to an arbitrarily squeezed bath. The phase space dynamics is solved and decoherence is studied, keeping track of the purity of the evolving state. The influence of the choice of the state and channel parameters on purity is discussed and optimal working regimes that minimize the decoherence rate are determined. In particular, we show that squeezing the bath to protect a non-squeezed cat state against decoherence is equivalent to orthogonally squeezing the initial cat state while letting the bath be phase insensitive

  14. Dictionary-enhanced imaging cytometry

    Science.gov (United States)

    Orth, Antony; Schaak, Diane; Schonbrun, Ethan

    2017-02-01

    State-of-the-art high-throughput microscopes are now capable of recording image data at a phenomenal rate, imaging entire microscope slides in minutes. In this paper we investigate how a large image set can be used to perform automated cell classification and denoising. To this end, we acquire an image library consisting of over one quarter-million white blood cell (WBC) nuclei together with CD15/CD16 protein expression for each cell. We show that the WBC nucleus images alone can be used to replicate CD expression-based gating, even in the presence of significant imaging noise. We also demonstrate that accurate estimates of white blood cell images can be recovered from extremely noisy images by comparing with a reference dictionary. This has implications for dose-limited imaging when samples belong to a highly restricted class such as a well-studied cell type. Furthermore, large image libraries may endow microscopes with capabilities beyond their hardware specifications in terms of sensitivity and resolution. We call for researchers to crowd source large image libraries of common cell lines to explore this possibility.
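
    The dictionary-comparison idea above can be illustrated with a minimal nearest-neighbour sketch in Python; the real pipeline (a quarter-million-image library, the noise model, normalization) is of course richer.

        import numpy as np

        def dictionary_denoise(noisy, library):
            """Return the reference image closest (in L2) to the noisy observation."""
            # library: (N, H, W) stack of clean images of the restricted class
            errors = ((library.astype(float) - noisy.astype(float)) ** 2).sum(axis=(1, 2))
            return library[np.argmin(errors)]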

  15. Numeric ultrasonic image processing method: application to non-destructive testing of stainless austenitic steel welds

    International Nuclear Information System (INIS)

    Corneloup, G.

    1988-09-01

    A literature survey of the means used to improve the ultrasonic inspection of heterogeneous materials such as austenitic stainless steel welds showed, from a first analysis, that assembling the signals into a (space, time) image offers an original solution to defect detection in highly noisy environments. A numeric grey-level ultrasonic image processing detection method is proposed, based on the search for a certain determinism in the way the ultrasonic image evolves in space and time in the presence of a defect: the first criterion studies the horizontal stability of the gradients in the image, and the second takes into account the time-transient nature of the defect echo. A very significant rise in the signal-to-noise ratio is demonstrated for weld inspections containing defects (real and artificial), with the help of a computerized ultrasonic image processing/management system developed for this application [fr]

  16. Particle tracking from image sequences of complex plasma crystals

    International Nuclear Information System (INIS)

    Hadziavdic, Vedad; Melandsoe, Frank; Hanssen, Alfred

    2006-01-01

    In order to gather information about the physics of complex plasma crystals from experimental data, particles have to be tracked through a sequence of images. An application of the Kalman filter for that purpose is presented, using a one-dimensional approximation of the particle dynamics as a model for the filter. It is shown that the Kalman filter is capable of tracking dust particles even with high levels of measurement noise. An inherent part of the Kalman filter, the innovation process, can be used to estimate values of the physical system parameters from the experimental data. The method is shown to be able to estimate the characteristic oscillation frequency from noisy data
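
    A minimal constant-velocity Kalman tracker in the spirit of the description above (Python/NumPy); the one-dimensional dynamics model, noise covariances and drift rate are illustrative assumptions, and the innovation is computed explicitly since the abstract highlights its use for parameter estimation.

        import numpy as np

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition
        H = np.array([[1.0, 0.0]])                # only position is observed
        Q = 0.01 * np.eye(2)                      # process noise covariance
        R = np.array([[4.0]])                     # measurement noise covariance

        x, P = np.array([0.0, 0.0]), np.eye(2)    # initial state and covariance

        rng = np.random.default_rng(0)
        truth = 0.5 * np.arange(50)               # particle drifting at 0.5 px/frame
        measurements = truth + 2.0 * rng.standard_normal(50)

        for z in measurements:
            x = F @ x                             # predict
            P = F @ P @ F.T + Q
            innovation = z - H @ x                # the part the model did not predict;
                                                  # its statistics carry physical info
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
            x = x + (K @ innovation).ravel()
            P = (np.eye(2) - K @ H) @ P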

  17. Encryption of QR code and grayscale image in interference-based scheme with high quality retrieval and silhouette problem removal

    Science.gov (United States)

    Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen

    2016-09-01

    In optical interference-based encryption (IBE) schemes, currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk free decrypted images. In this paper, we shall show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal has totally eliminated the silhouette problem existing in the previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.

  18. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan

    2012-08-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea in this method is to interpret a signal as a potential of a Schrodinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in the discrete case, considering noisy signals. © 2012 IEEE.

  19. Mathematical properties of a semi-classical signal analysis method: Noisy signal case

    KAUST Repository

    Liu, Dayan; Laleg-Kirati, Taous-Meriem

    2012-01-01

    Recently, a new signal analysis method based on a semi-classical approach has been proposed [1]. The main idea in this method is to interpret a signal as a potential of a Schrodinger operator and then to use the discrete spectrum of this operator to analyze the signal. In this paper, we are interested in a mathematical analysis of this method in the discrete case, considering noisy signals. © 2012 IEEE.

  20. Computationally efficient algorithms for statistical image processing : implementation in R

    NARCIS (Netherlands)

    Langovoy, M.; Wittich, O.

    2010-01-01

    In the series of our earlier papers on the subject, we proposed a novel statistical hypothesis testing method for detection of objects in noisy images. The method uses results from percolation theory and random graph theory. We developed algorithms that allowed to detect objects of unknown shapes in

  1. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
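
    The binary core of the method, signed distance transform, interpolation, threshold at zero, can be sketched in a few lines of Python/SciPy (the grey-level lifting extension proposed in the paper is not shown):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def signed_distance(binary):
            # positive inside the object, negative outside (the paper's convention)
            binary = binary.astype(bool)
            return distance_transform_edt(binary) - distance_transform_edt(~binary)

        def shape_based_interpolate(slice_a, slice_b, t=0.5):
            """Interpolate two binary slices at fractional position t in [0, 1]."""
            d = (1 - t) * signed_distance(slice_a) + t * signed_distance(slice_b)
            return d >= 0          # threshold the interpolated distance map at zero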

  2. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
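
    As a simplified stand-in for the robust trend estimation described above, the sketch below fits a plane by iteratively reweighted least squares with Tukey's biweight, so features and outliers are automatically downweighted; the paper uses a robust local regression rather than this global planar fit.

        import numpy as np

        def robust_plane_level(image, n_iter=10):
            """Remove a planar trend while downweighting features/outliers (IRLS)."""
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
            z = image.ravel().astype(float)
            wts = np.ones_like(z)
            for _ in range(n_iter):
                sw = np.sqrt(wts)[:, None]
                coef, *_ = np.linalg.lstsq(A * sw, z * sw.ravel(), rcond=None)
                resid = z - A @ coef
                s = 1.4826 * np.median(np.abs(resid)) + 1e-12      # robust scale (MAD)
                u = resid / (4.685 * s)
                wts = np.where(np.abs(u) < 1, (1 - u ** 2) ** 2, 0.0)  # Tukey biweight
            return (z - A @ coef).reshape(h, w)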

  3. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communications systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to provide an assessment of the ability of a group of imaging professionals to determine whether an image has been compressed. To undertake this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that have undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  4. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    Science.gov (United States)

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  5. A Nash-game approach to joint image restoration and segmentation

    OpenAIRE

    Kallel , Moez; Aboulaich , Rajae; Habbal , Abderrahmane; Moakher , Maher

    2014-01-01

    We propose a game theory approach to simultaneously restore and segment noisy images. We define two players: one is restoration, with the image intensity as strategy, and the other is segmentation with contours as strategy. Cost functions are the classical relevant ones for restoration and segmentation, respectively. The two players play a static game with complete information, and we consider as solution to the game the so-called Nash Equilibrium. For the computation ...

  6. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    Science.gov (United States)

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  7. Security of modified Ping-Pong protocol in noisy and lossy channel

    OpenAIRE

    Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    The “Ping-Pong” (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove ...

  8. Collective fluctuations in networks of noisy components

    International Nuclear Information System (INIS)

    Masuda, Naoki; Kawamura, Yoji; Kori, Hiroshi

    2010-01-01

    Collective dynamics result from interactions among noisy dynamical components. Examples include heartbeats, circadian rhythms and various pattern formations. Because of noise in each component, collective dynamics inevitably involve fluctuations, which may crucially affect the functioning of the system. However, the relation between the fluctuations in isolated individual components and those in collective dynamics is not clear. Here, we study a linear dynamical system of networked components subjected to independent Gaussian noise and analytically show that the connectivity of networks determines the intensity of fluctuations in the collective dynamics. Remarkably, in general directed networks including scale-free networks, the fluctuations decrease more slowly with system size than the standard law stated by the central limit theorem. They even remain finite for a large system size when global directionality of the network exists. Moreover, such non-trivial behavior appears even in undirected networks when nonlinear dynamical systems are considered. We demonstrate it with a coupled oscillator system.

  9. Improved automatic filtering methodology for an optimal pharmacokinetic modelling of DCE-MR images of the prostate

    Energy Technology Data Exchange (ETDEWEB)

    Vazquez Martinez, V.; Bosch Roig, I.; Sanz Requena, R.

    2016-07-01

    In Dynamic Contrast-Enhanced Magnetic Resonance (DCE-MR) studies with high temporal resolution, images are quite noisy due to the difficult balance between temporal and spatial resolution. For this reason, the temporal curves extracted from the images present remarkable noise levels, which affects both the pharmacokinetic parameters calculated from the curves by least-squares fitting and the arterial phase (a useful marker in tumour diagnosis which appears in curves with high arterial contribution). In order to overcome these limitations, an automatic filtering method was previously developed by our group. In this work, an advanced automatic filtering methodology is presented to further improve noise reduction of the temporal curves, in order to obtain more accurate kinetic parameters and a proper modelling of the arterial phase. (Author)

  10. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
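
    A one-dimensional toy variant of the two-stage idea (Python with PyWavelets): a pilot soft-threshold denoising estimates the first-order parent-to-child model, and a scalar Kalman update then combines the parent prediction with the noisy child coefficients. The wavelet, level count and test signal are assumptions; the paper's 2-D, full state-space formulation is richer.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 512)
        clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 2 * t))
        noisy = clean + 0.3 * rng.standard_normal(t.size)

        # Stage 1: pilot denoising by soft thresholding, to estimate the model.
        cA2, cD2, cD1 = pywt.wavedec(noisy, 'db4', level=2)
        sigma = np.median(np.abs(cD1)) / 0.6745            # noise std estimate
        thr = sigma * np.sqrt(2 * np.log(noisy.size))
        pD2 = pywt.threshold(cD2, thr, 'soft')
        pD1 = pywt.threshold(cD1, thr, 'soft')

        # First-order Markov (parent -> child) model fitted on pilot coefficients.
        parent = np.repeat(pD2, 2)[:pD1.size]              # upsample level 2 to level 1
        a = parent @ pD1 / (parent @ parent + 1e-12)       # state transition scalar
        q = np.var(pD1 - a * parent)                       # process noise variance

        # Stage 2: scalar Kalman update of each noisy level-1 coefficient.
        pred = a * parent                                  # prior mean from the parent
        K = q / (q + sigma ** 2)                           # Kalman gain
        estD1 = pred + K * (cD1 - pred)                    # posterior estimate

        denoised = pywt.waverec([cA2, pD2, estD1], 'db4')[:noisy.size]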

  11. PIRPLE: a penalized-likelihood framework for incorporation of prior images in CT reconstruction

    International Nuclear Information System (INIS)

    Stayman, J Webster; Dang, Hao; Ding, Yifu; Siewerdsen, Jeffrey H

    2013-01-01

    Over the course of diagnosis and treatment, it is common for a number of imaging studies to be acquired. Such imaging sequences can provide substantial patient-specific prior knowledge about the anatomy that can be incorporated into a prior-image-based tomographic reconstruction for improved image quality and better dose utilization. We present a general methodology using a model-based reconstruction approach including formulations of the measurement noise that also integrates prior images. This penalized-likelihood technique adopts a sparsity enforcing penalty that incorporates prior information yet allows for change between the current reconstruction and the prior image. Moreover, since prior images are generally not registered with the current image volume, we present a modified model-based approach that seeks a joint registration of the prior image in addition to the reconstruction of projection data. We demonstrate that the combined prior-image- and model-based technique outperforms methods that ignore the prior data or lack a noise model. Moreover, we demonstrate the importance of registration for prior-image-based reconstruction methods and show that the prior-image-registered penalized-likelihood estimation (PIRPLE) approach can maintain a high level of image quality in the presence of noisy and undersampled projection data. (paper)

  12. Diagnostic reference levels in medical imaging

    International Nuclear Information System (INIS)

    Rosenstein, M.

    2001-01-01

    The paper proposes additional advice to national or local authorities and the clinical community on the application of diagnostic reference levels as a practical tool to manage radiation doses to patients in diagnostic radiology and nuclear medicine. A survey was made of the various approaches that have been taken by authoritative bodies to establish diagnostic reference levels for medical imaging tasks. There are a variety of ways to implement the idea of diagnostic reference levels, depending on the medical imaging task of interest, the national or local state of practice and the national or local preferences for technical implementation. The existing International Commission on Radiological Protection (ICRP) guidance is reviewed, the survey information is summarized, a set of unifying principles is espoused and a statement of additional advice that has been proposed to ICRP Committee 3 is presented. The proposed advice would meet a need for a unifying set of principles to provide a framework for diagnostic reference levels but would allow flexibility in their selection and use. While some illustrative examples are given, the proposed advice does not specify the specific quantities to be used, the numerical values to be set for the quantities or the technical details of how national or local authorities should implement diagnostic reference levels. (author)

  13. A Fast Algorithm for Image Super-Resolution from Blurred Observations

    Directory of Open Access Journals (Sweden)

    Ng Michael K

    2006-01-01

    We study the problem of reconstruction of a high-resolution image from several blurred low-resolution image frames. The image frames consist of blurred, decimated, and noisy versions of a high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, a high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore high-resolution images in the aperiodic boundary condition. Computer simulations are given to illustrate the effectiveness of the proposed approach.
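
    The FFT observation above can be illustrated by a single-frame Tikhonov-regularized deconvolution: with periodic boundaries the blur is diagonal in the Fourier domain, so restoration needs only one FFT pair. This is a minimal sketch, not the paper's multi-frame MAP/MRF estimator.

        import numpy as np

        def fft_restore(blurred, psf, lam=1e-2):
            """Tikhonov-regularized deconvolution; periodic boundaries make the
            blur diagonal in the Fourier domain, so restoration costs a single
            FFT pair. psf: same shape as the image, centered."""
            H = np.fft.fft2(np.fft.ifftshift(psf))
            Y = np.fft.fft2(blurred)
            X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
            return np.real(np.fft.ifft2(X))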

  14. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Comparing with currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  15. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Comparing with currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
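
    A minimal 2-D sketch of the reconstruction model above, an energy with a data fidelity term plus total variation, minimized by forward-backward splitting (gradient step on the fidelity, TV proximal step). It uses scikit-image's radon/iradon as stand-ins for the cone-beam projector (the filter_name argument assumes a recent scikit-image), and the step size and TV weight are empirical assumptions; the paper's GPU multigrid implementation is not reproduced.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.restoration import denoise_tv_chambolle
        from skimage.transform import iradon, radon, rescale

        x_true = rescale(shepp_logan_phantom(), 0.5)         # 200 x 200 phantom
        n = x_true.shape[0]
        Y, X = np.ogrid[:n, :n]
        circle = (X - n / 2 + 0.5) ** 2 + (Y - n / 2 + 0.5) ** 2 <= (n / 2) ** 2

        theta = np.linspace(0.0, 180.0, 30, endpoint=False)  # only 30 projections
        sino = radon(x_true, theta=theta)
        sino += np.random.default_rng(0).standard_normal(sino.shape)  # noisy data

        x, step = np.zeros_like(x_true), 2e-3                # step chosen empirically
        for _ in range(100):
            grad = iradon(radon(x, theta=theta) - sino, theta=theta,
                          filter_name=None)                  # unfiltered backprojection
            x = denoise_tv_chambolle(x - step * grad, weight=0.02)  # TV proximal step
            x = np.clip(x, 0, None) * circle                 # positivity, stay in circle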

  16. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds allow one to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case. The latter shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises whose mean value and variance functions are polynomial time-varying. Thanks to the design parameter analysis, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system, and numerical simulations illustrate the accuracy and the robustness with respect to corrupting noises.
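
    For orientation, a simpler (non-Jacobi) numerical scheme for the same quantity: the Grünwald-Letnikov approximation of a fractional derivative of sampled data. This is a stand-in to make the notion concrete, not the differentiator derived in the paper.

        import numpy as np
        from scipy.special import binom

        def gl_fractional_derivative(f, alpha, h):
            """Grünwald-Letnikov estimate of the order-alpha derivative of the
            samples f taken with uniform step h."""
            f = np.asarray(f, dtype=float)
            k = np.arange(len(f))
            w = (-1.0) ** k * binom(alpha, k)             # GL binomial weights
            d = np.array([w[:m + 1] @ f[m::-1] for m in range(len(f))])
            return d / h ** alpha

        t = np.linspace(0, 1, 200)
        # Half-derivative of t**2 at t=1 is Gamma(3)/Gamma(2.5), about 1.5045:
        print(gl_fractional_derivative(t ** 2, 0.5, t[1] - t[0])[-1])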

  17. Improving the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states using a noiseless amplifier

    International Nuclear Information System (INIS)

    Wang, Tianyi; Yu, Song; Zhang, Yi-Chen; Gu, Wanyi; Guo, Hong

    2014-01-01

    By employing a nondeterministic noiseless linear amplifier, we propose to increase the maximum transmission distance of continuous-variable quantum key distribution with noisy coherent states. With the covariance matrix transformation, the expression of secret key rate under reverse reconciliation is derived against collective entangling cloner attacks. We show that the noiseless linear amplifier can compensate the detrimental effect of the preparation noise with an enhancement of the maximum transmission distance and the noise resistance. - Highlights: • Noiseless amplifier is applied in noisy coherent state quantum key distribution. • Negative effect of preparation noise is compensated by noiseless amplification. • Maximum transmission distance and noise resistance are both enhanced

  18. Image matching in Bayer raw domain to de-noise low-light still images, optimized for real-time implementation

    Science.gov (United States)

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-03-01

    Temporal accumulation of images is a well-known approach to improve the signal-to-noise ratio of still images taken in low light conditions. However, the complexity of known algorithms often leads to high hardware resource usage, increased memory bandwidth and computational complexity, making their practical use impossible. In our research we attempt to solve this problem with an implementation of a practical spatial-temporal de-noising algorithm, based on image accumulation. Image matching and spatial-temporal filtering were performed in Bayer RAW data space, which allowed us to benefit from predictable sensor noise characteristics, thus permitting a range of algorithmic optimizations. The proposed algorithm accurately compensates for global and local motion and efficiently removes different kinds of noise in noisy images taken in low light conditions. In our algorithm we were able to perform global and local motion compensation in Bayer RAW data space, while preserving the resolution and effectively improving the signal-to-noise ratio of moving objects as well as the non-stationary background. The proposed algorithm is suitable for implementation in commercial grade FPGAs and capable of processing 16 MP images at capture rate (10 frames per second). The main challenge for matching between still images is the compromise between the quality of the motion prediction and the complexity of the algorithm and the required memory bandwidth. Still images taken in a burst sequence must be aligned to compensate for background motion and foreground object movements in the scene. High resolution still images coupled with significant time between successive frames can produce large displacements between images, which creates additional difficulty for image matching algorithms. In photo applications it is very important that the noise is efficiently removed in both the static and non-static background as well as in moving objects, maintaining the resolution of the image. In our proposed

  19. Segmentation of neuroanatomy in magnetic resonance images

    Science.gov (United States)

    Simmons, Andrew; Arridge, Simon R.; Barker, G. J.; Tofts, Paul S.

    1992-06-01

    Segmentation in neurological magnetic resonance imaging (MRI) is necessary for feature extraction, volume measurement and for the three-dimensional display of neuroanatomy. Automated and semi-automated methods offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they give. We have used dual echo multi-slice spin-echo data sets which take advantage of the intrinsically multispectral nature of MRI. As a pre-processing step, an RF non-uniformity correction is applied and, if the data are noisy, the images are smoothed using a non-isotropic blurring method. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge-focusing has been used to significantly simplify edge images and thus allow simple postprocessing to pick out the brain contour in each slice of the data set. Edge-focusing is a technique which locates significant edges using a high degree of smoothing at a coarse level and tracks these edges to a fine level where the edges can be determined with high positional accuracy. Both 2-D and 3-D edge-detection methods have been compared. Once isolated, the brain is further processed to identify CSF, and, depending upon the MR pulse sequence used, the brain itself may be sub-divided into gray matter and white matter using semi-automatic contrast enhancement and clustering methods.

  20. Wavelet based Image Registration Technique for Matching Dental x-rays

    OpenAIRE

    P. Ramprasad; H. C. Nagaraj; M. K. Parasuram

    2008-01-01

    Image registration plays an important role in the diagnosis of dental pathologies such as dental caries, alveolar bone loss and periapical lesions, etc. This paper presents a new wavelet based algorithm for registering noisy and poor contrast dental x-rays. The proposed algorithm has two stages. The first stage is a preprocessing stage that removes noise from the x-ray images; a Gaussian filter has been used. The second stage is a geometric transformation stage. The proposed work uses two l...

  1. Topological quantum computing with a very noisy network and local error rates approaching one percent.

    Science.gov (United States)

    Nickerson, Naomi H; Li, Ying; Benjamin, Simon C

    2013-01-01

    A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.

  2. Gaussian Error Correction of Quantum States in a Correlated Noisy Channel

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Berni, Adriano; Madsen, Lars Skovgaard

    2013-01-01

    Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels are exhibiting pronounced non-Markovian behavior, the proposed protocol will have immediate implications in improving the performance of various quantum information protocols.

  3. Continuous-variable protocol for oblivious transfer in the noisy-storage model

    DEFF Research Database (Denmark)

    Furrer, Fabian; Gehring, Tobias; Schaffner, Christian

    2018-01-01

    for oblivious transfer for optical continuous-variable systems, and prove its security in the noisy-storage model. This model allows us to establish security by sending more quantum signals than an attacker can reliably store during the protocol. The security proof is based on uncertainty relations which we derive for continuous-variable systems, that differ from the ones used in quantum key distribution. We experimentally demonstrate in a proof-of-principle experiment the proposed oblivious transfer protocol for various channel losses by using entangled two-mode squeezed states measured with balanced

  4. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  5. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    Science.gov (United States)

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    Quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) has previously been developed for noise removal in SEM images. Noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is not widely utilized, though it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. To find wide application of the CHS method in microscopy, its features, which until now have not been well clarified, are evaluated correctly. Based on the results of this evaluation, the cursor width (CW), which is the sole processing parameter of CHS, is determined more properly using the standard deviation of noise Nσ. In addition, the disadvantage that CHS cannot remove noise with excessively large amplitude is remedied by a certain postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.
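
    The basic hysteresis-smoothing principle, the output stays fixed until the input leaves a dead band of half-width CW around it, can be sketched in one dimension as below; CW would be set from the noise standard deviation Nσ as discussed above. This is the elementary form, not the full complex-hysteresis variant.

        import numpy as np

        def hysteresis_smooth(signal, cw):
            """1-D hysteresis smoothing: the output stays fixed until the input
            leaves a dead band of half-width cw, so small noise excursions vanish
            while large signal transitions are followed at full resolution."""
            out = np.empty(len(signal))
            y = float(signal[0])
            for i, x in enumerate(signal):
                if x > y + cw:            # input escaped above the band
                    y = x - cw
                elif x < y - cw:          # input escaped below the band
                    y = x + cw
                out[i] = y
            return out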

  6. Complex Lyapunov exponents from short and noisy sets of data. Application to stability analysis of BWRs

    International Nuclear Information System (INIS)

    Verdu, G.; Ginestar, D.; Bovea, M.D.; Jimenez, P.; Pena, J.; Munoz-Cobo, J.L.

    1997-01-01

    Dynamics reconstruction techniques have been applied to systems such as BWRs, whose signals contain a large amount of noise. The success of this methodology was limited due to the noise in the signals. Recently, new techniques have been introduced for short and noisy data sets, based on a global fit of the signal by means of orthonormal polynomials. In this paper, we revisit these ideas in order to adapt them for the analysis of neutronic power signals to characterize the stability regime of BWR reactors. To check the performance of the methodology, we have analyzed simulated noisy signals, observing that the method works well even with a large amount of noise. Also, we have analyzed experimental signals from the Ringhals 1 BWR. In this case, the reconstructed phase space for the system is not very good. A modal decomposition treatment for the signals is proposed, producing signals with better behaviour. (author)

  7. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni; Nobile, Fabio; Tempone, Raul

    2015-01-01

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability

  8. SENTINEL-2 LEVEL 1 PRODUCTS AND IMAGE PROCESSING PERFORMANCES

    Directory of Open Access Journals (Sweden)

    S. J. Baillarin

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on UTM/WGS84 reference frame

  9. SENTINEL-2 Level 1 Products and Image Processing Performances

    Science.gov (United States)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on UTM/WGS84 reference frame. The

  10. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Science.gov (United States)

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
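
    A minimal non-adaptive compressive ghost imaging sketch (Python, scikit-learn's Lasso as the sparse solver, a 2-D DCT as the sparsifying basis): random binary illumination patterns give bucket measurements, and the scene is recovered from fewer measurements than pixels. The wavelet-tree adaptivity that is the paper's contribution is not shown.

        import numpy as np
        from scipy.fft import dct
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n = 16                                             # tiny n x n scene
        scene = np.zeros((n, n)); scene[4:9, 6:12] = 1.0

        # Bucket measurements with random binary illumination patterns.
        m = 120                                            # fewer measurements than pixels
        patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)
        y = patterns @ scene.ravel()
        y += 0.01 * y.std() * rng.standard_normal(m)       # noisy bucket signal

        # Sparsifying 2-D DCT basis: vec(image) = kron(D.T, D.T) @ vec(coeffs).
        D = dct(np.eye(n), axis=0, norm='ortho')           # 1-D DCT matrix
        B = np.kron(D.T, D.T)

        lasso = Lasso(alpha=1e-3, max_iter=50_000)
        lasso.fit(patterns @ B, y)                         # sparsest coefficient vector
        recon = (B @ lasso.coef_).reshape(n, n)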

  11. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  12. Images crossing borders: image and workflow sharing on multiple levels.

    Science.gov (United States)

    Ross, Peeter; Pohjonen, Hanna

    2011-04-01

    Digitalisation of medical data makes it possible to share images and workflows between related parties. In addition to linear data flow where healthcare professionals or patients are the information carriers, a new type of matrix of many-to-many connections is emerging. Implementation of shared workflow brings challenges of interoperability and legal clarity. Sharing images or workflows can be implemented on different levels with different challenges: inside the organisation, between organisations, across country borders, or between healthcare institutions and citizens. Interoperability issues vary according to the level of sharing and are either technical or semantic, including language. Legal uncertainty increases when crossing national borders. Teleradiology is regulated by multiple European Union (EU) directives and legal documents, which makes interpretation of the legal system complex. To achieve wider use of eHealth and teleradiology several strategic documents were published recently by the EU. Despite EU activities, responsibility for organising, providing and funding healthcare systems remains with the Member States. Therefore, the implementation of new solutions requires strong co-operation between radiologists, societies of radiology, healthcare administrators, politicians and relevant EU authorities. The aim of this article is to describe different dimensions of image and workflow sharing and to analyse legal acts concerning teleradiology in the EU.

  13. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Application of Data Mining and Knowledge Discovery Techniques to Enhance Binary Target Detection and Decision-Making for Compromised Visual Images

    National Research Council Canada - National Science Library

    Repperger, D. W; Phillips, C. A; Schrider, C. D; Smith, E. A

    2004-01-01

    In an effort to improve decision-making on the identity of unknown objects appearing in visual images when the surrounding environment may be noisy and cluttered, a highly sensitive target detection...

  15. Directional Joint Bilateral Filter for Depth Images

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2014-06-01

    Depth maps taken by the low cost Kinect sensor are often noisy and incomplete. Thus, post-processing for obtaining reliable depth maps is necessary for advanced image and video applications such as object recognition and multi-view rendering. In this paper, we propose adaptive directional filters that fill the holes and suppress the noise in depth maps. Specifically, novel filters whose window shapes are adaptively adjusted based on the edge direction of the color image are presented. Experimental results show that our method yields higher quality filtered depth maps than other existing methods, especially at the edge boundaries.
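
    For reference, a naive (isotropic, unoptimized) joint bilateral filter in Python: range weights come from the registered color/intensity guide so depth edges follow color edges, and zero-depth holes are excluded from the weighted average. The paper's directional, edge-adaptive windows are a refinement of this baseline.

        import numpy as np

        def joint_bilateral_depth(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
            """Joint bilateral filter: spatial Gaussian weights times range weights
            computed on the guide image; zero depth marks missing samples."""
            h, w = depth.shape
            out = np.zeros((h, w))
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
            dpad = np.pad(depth.astype(float), radius, mode='edge')
            gpad = np.pad(guide.astype(float), radius, mode='edge')
            for i in range(h):
                for j in range(w):
                    dwin = dpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    gwin = gpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    rng_w = np.exp(-(gwin - guide[i, j]) ** 2 / (2 * sigma_r ** 2))
                    wts = spatial * rng_w * (dwin > 0)     # skip missing depth samples
                    s = wts.sum()
                    out[i, j] = (wts * dwin).sum() / s if s > 0 else 0.0
            return out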

  16. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    In the present paper, we consider some matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising in the restoration of blurred and noisy images. Applying the well known Tikhonov regularization procedure leads to a Sylvester matrix equation depending on the Tikhonov regularization parameter. We apply the matrix versions of the well known Krylov subspace methods, namely the least squares (LSQR) and the conjugate gradient (CG) methods, to get approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.
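
    A vectorized sketch of the Tikhonov-plus-Krylov idea using SciPy's CG on the regularized normal equations; a Gaussian blur stands in for the blurring operator (assumed self-adjoint under its symmetric boundary handling). The paper's global (matrix) Krylov iterations on the Sylvester equation are the structured analogue of this.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.sparse.linalg import LinearOperator, cg

        shape = (64, 64)
        n = shape[0] * shape[1]
        blur = lambda v: gaussian_filter(v.reshape(shape), sigma=2.0).ravel()  # operator A

        rng = np.random.default_rng(0)
        x_true = np.zeros(shape); x_true[20:44, 20:44] = 1.0
        b = blur(x_true.ravel()) + 0.01 * rng.standard_normal(n)   # blurred, noisy image

        lam = 1e-2                                                 # Tikhonov parameter
        # Regularized normal equations: (A^T A + lam I) x = A^T b, with A^T = A here.
        normal_op = LinearOperator((n, n), matvec=lambda v: blur(blur(v)) + lam * v)
        x_restored, info = cg(normal_op, blur(b), maxiter=200)
        restored = x_restored.reshape(shape)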

  17. Utilizing functional near-infrared spectroscopy for prediction of cognitive workload in noisy work environments.

    Science.gov (United States)

    Gabbard, Ryan; Fendley, Mary; Dar, Irfaan A; Warren, Rik; Kashou, Nasser H

    2017-10-01

    Occupational noise frequently occurs in the work environment in military intelligence, surveillance, and reconnaissance operations. It impacts cognitive performance by acting as a stressor, potentially interfering with the analysts' decision-making process. We investigated the effects of different noise stimuli on analysts' performance and workload in anomaly detection by simulating a noisy work environment. We utilized functional near-infrared spectroscopy (fNIRS) to quantify oxy-hemoglobin (HbO) and deoxy-hemoglobin concentration changes in the prefrontal cortex (PFC), as well as behavioral measures including eye tracking, reaction time, and accuracy rate. We hypothesized that noisy environments would have a negative effect on participants' anomaly detection performance due to the increase in workload, which would be reflected by an increase in PFC activity. We found that HbO for some of the channels analyzed was significantly different across noise types ([Formula: see text]). Our results also indicated that HbO activation in the PFC was greater for short-intermittent noise stimuli than for long-intermittent noise. These approaches, using fNIRS in conjunction with an understanding of the impact on human analysts in anomaly detection, could potentially lead to better performance by optimizing work environments.

  18. Improving the fidelity of teleportation through noisy channels using weak measurement

    Energy Technology Data Exchange (ETDEWEB)

    Pramanik, T., E-mail: tanu.pram99@bose.res.in; Majumdar, A.S., E-mail: archan@bose.res.in

    2013-12-13

    We employ the technique of weak measurement in order to enable preservation of teleportation fidelity for two-qubit noisy channels. We consider one or both qubits of a maximally entangled state to undergo amplitude damping, and show that the application of weak measurement and a subsequent reverse operation could lead to a fidelity greater than 2/3 for any value of the decoherence parameter. The success probability of the protocol decreases with the strength of weak measurement, and is lower when both the qubits are affected by decoherence. Finally, our protocol is shown to work for the Werner state too.

  19. Can we use PCA to detect small signals in noisy data?

    Science.gov (United States)

    Spiegelberg, Jakob; Rusz, Ján

    2017-01-01

    Principal component analysis (PCA) is among the most commonly applied dimension reduction techniques suitable for denoising data. Focusing on its limitations in detecting low-variance signals in noisy data, we discuss how statistical and systematic errors occur in PCA-reconstructed data as a function of the size of the data set, which extends the work of Lichtert and Verbeeck (2013) [16]. Particular attention is directed towards the estimation of the bias introduced by PCA and its influence on experiment design. Aiming at the denoising of large matrices, nullspace-based denoising (NBD) is introduced.
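
    The PCA denoising the record scrutinizes is, at its core, a truncated SVD; the bias it discusses comes from the choice of the cut-off k, since weak signal components below the noise floor are discarded along with the noise. A minimal sketch:

```python
import numpy as np

def pca_denoise(data, k):
    """Keep the k leading principal components of a (samples x features)
    matrix. Components below the cut are discarded, which removes noise
    but also any low-variance signal hiding among them."""
    mean = data.mean(axis=0)
    U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
    s[k:] = 0.0  # discard low-variance components
    return (U * s) @ Vt + mean
```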

  20. Stochastic bounded consensus of second-order multi-agent systems in noisy environment

    International Nuclear Information System (INIS)

    Ren Hong-Wei; Deng Fei-Qi

    2017-01-01

    This paper investigates the stochastic bounded consensus of leader-following second-order multi-agent systems in a noisy environment. It is assumed that each agent receives the information of its neighbors corrupted by noise and time delays. Based on graph theory, stochastic tools, and the Lyapunov function method, we derive sufficient conditions under which the systems reach stochastic bounded consensus in mean square with the designed protocol. Finally, a numerical simulation illustrates the effectiveness of the proposed algorithms.

  1. An analysis dictionary learning algorithm under a noisy data model with orthogonality constraint.

    Science.gov (United States)

    Zhang, Ye; Yu, Tenglong; Wang, Wenwu

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.
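
    The abstract does not spell out the update rules, but the role of the orthogonality constraint can be illustrated with a generic alternating scheme (emphatically not the authors' algorithm): hard-threshold the analysis coefficients, then refit the dictionary by the orthogonal Procrustes solution, which is always orthonormal and therefore can never collapse to the trivial null matrix:

```python
import numpy as np

def orthonormal_analysis_dict(Y, n_iters=50, keep=0.5):
    """Alternate between sparse coding Z = threshold(Omega @ Y) and the
    Procrustes update Omega = U V^T from the SVD of Z Y^T, which solves
    min ||Omega Y - Z||_F subject to Omega being orthonormal."""
    d = Y.shape[0]
    rng = np.random.default_rng(0)
    Omega = np.linalg.qr(rng.standard_normal((d, d)))[0]  # orthonormal init
    for _ in range(n_iters):
        Z = Omega @ Y
        thr = np.quantile(np.abs(Z), 1.0 - keep)  # keep the largest entries
        Z[np.abs(Z) < thr] = 0.0
        U, _, Vt = np.linalg.svd(Z @ Y.T)
        Omega = U @ Vt  # orthogonal Procrustes update
    return Omega
```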

  2. An Analysis Dictionary Learning Algorithm under a Noisy Data Model with Orthogonality Constraint

    Directory of Open Access Journals (Sweden)

    Ye Zhang

    2014-01-01

    Two common problems are often encountered in analysis dictionary learning (ADL) algorithms. The first one is that the original clean signals for learning the dictionary are assumed to be known, which otherwise need to be estimated from noisy measurements. This, however, renders a computationally slow optimization process and potentially unreliable estimation (if the noise level is high), as represented by the Analysis K-SVD (AK-SVD) algorithm. The other problem is the trivial solution to the dictionary, for example, the null dictionary matrix that may be given by a dictionary learning algorithm, as discussed in the learning overcomplete sparsifying transform (LOST) algorithm. Here we propose a novel optimization model and an iterative algorithm to learn the analysis dictionary, where we directly employ the observed data to compute the approximate analysis sparse representation of the original signals (leading to a fast optimization procedure) and enforce an orthogonality constraint on the optimization criterion to avoid the trivial solutions. Experiments demonstrate the competitive performance of the proposed algorithm as compared with three baselines, namely, the AK-SVD, LOST, and NAAOLA algorithms.

  3. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, the LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm that simultaneously uses edge, region, and 2D histogram information in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented on NVIDIA graphics processing units to take full advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable the detection of objects with and without edges, and the 2D histogram information makes the method effective in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  4. Image processing of small protein-crystals in electron microscopy

    International Nuclear Information System (INIS)

    Feinberg, D.A.

    1978-11-01

    This electron microscope study was undertaken to determine whether high-resolution reconstructed images could be obtained from statistically noisy micrographs by the superposition of several small areas of images of well-ordered crystals of biological macromolecules. Methods of rotational and translational alignment which use Fourier-space data were demonstrated to be superior to methods which use real-space image data. After alignment, the addition of the diffraction patterns of four small areas did not produce higher resolution because of unexpected image distortion effects. A method was developed to determine the location of the distortion origin and the coefficients of spiral distortion and pincushion/barrel distortion, in order to allow future correction of distortions in electron microscope images of large-area crystals.

  5. Trading in markets with noisy information: an evolutionary analysis

    Science.gov (United States)

    Bloembergen, Daan; Hennes, Daniel; McBurney, Peter; Tuyls, Karl

    2015-07-01

    We analyse the value of information in a stock market where information can be noisy and costly, using techniques from empirical game theory. Previous work has shown that the value of information follows a J-curve, where averagely informed traders perform below market average, and only insiders prevail. Here we show that both noise and cost can change this picture, in several cases leading to opposite results where insiders perform below market average, and averagely informed traders prevail. Moreover, we investigate the effect of random explorative actions on the market dynamics, showing how these lead to a mix of traders being sustained in equilibrium. These results provide insight into the complexity of real marketplaces, and show under which conditions a broad mix of different trading strategies might be sustainable.

  6. Quantum steganography with noisy quantum channels

    International Nuclear Information System (INIS)

    Shaw, Bilal A.; Brun, Todd A.

    2011-01-01

    Steganography is the technique of hiding secret information by embedding it in a seemingly "innocent" message. We present protocols for hiding quantum information by disguising it as noise in a codeword of a quantum error-correcting code. The sender (Alice) swaps quantum information into the codeword and applies a random choice of unitary operation, drawing on a secret random key she shares with the receiver (Bob). Using the key, Bob can retrieve the information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We consider two types of protocols: one in which the hidden quantum information is stored locally in the codeword, and another in which it is embedded in the space of error syndromes. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for specific protocols and examples of error channels. We consider both the case where there is no actual noise in the channel (so that all errors in the codeword result from the deliberate actions of Alice), and the case where the channel is noisy and not controlled by Alice and Bob.

  7. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions have been suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist; most, however, are only applicable to particular choices of potential functions. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the potential function used. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total-variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
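
    As a concrete (if simplified) instance of the energy being minimized, the sketch below runs plain gradient descent on a quadratic fidelity term plus a Huber-regularized gradient penalty; the spectral step-length machinery that the paper evaluates is replaced by a fixed step, and all parameter values are illustrative:

```python
import numpy as np

def huber_grad(t, delta):
    """Derivative of the Huber potential: quadratic near 0, linear in the tails."""
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

def denoise_huber(f, lam=0.5, delta=0.05, step=0.2, iters=200):
    """Gradient descent on E(u) = 0.5||u - f||^2 + lam * sum rho_delta(grad u)."""
    u = f.copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])  # forward differences
        dy = np.diff(u, axis=0, append=u[-1:, :])
        gx, gy = huber_grad(dx, delta), huber_grad(dy, delta)
        # divergence (backward differences) of the Huber flux
        div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```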

  8. Medical Image Denoising Using Mixed Transforms

    Directory of Open Access Journals (Sweden)

    Jaleel Sadoon Jameel

    2018-02-01

    In this paper, a mixed-transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method consists of WT and MWT in cascade form to enhance the denoising performance of image processing. Practically, the first step is to add noise to magnetic resonance imaging (MRI) or computed tomography (CT) images for the sake of testing. The noisy image is processed by WT to obtain four sub-bands, and each sub-band is treated individually using MWT before the soft/hard denoising stage. Simulation results show that the peak signal-to-noise ratio (PSNR) is improved significantly and the characteristic features are well preserved by employing the mixed transform of WT and MWT, due to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) is decreased accordingly compared to other available methods.
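
    The multiwavelet stage is not available in common Python libraries, but the wavelet half of such a cascade (decompose, soft-threshold the detail sub-bands, reconstruct) can be sketched with PyWavelets; the wavelet name, level and universal threshold are conventional choices, not the paper's:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(img, wavelet="db4", level=2, sigma=None):
    """Soft-threshold the detail sub-bands of a 2-D DWT (the WT stage only;
    the paper cascades a multiwavelet transform after this)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    if sigma is None:
        # Robust noise estimate from the finest diagonal sub-band
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```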

  9. Dragonfly : an implementation of the expand–maximize–compress algorithm for single-particle imaging

    OpenAIRE

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N. Duane

    2016-01-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map...

  10. Vector entropy imaging theory with application to computerized tomography

    International Nuclear Information System (INIS)

    Wang Yuanmei; Cheng Jianping; Heng, Pheng Ann

    2002-01-01

    Medical imaging theory for X-ray CT and PET is based on image reconstruction from projections. In this paper a novel vector entropy imaging theory under the framework of multiple-criteria decision making is presented. We also study the most frequently used image reconstruction methods, namely least squares, maximum entropy, and filtered back-projection, under the framework of single-performance-criterion optimization. Finally, we introduce some of the results obtained by various reconstruction algorithms using computer-generated noisy projection data from the Hoffman phantom and real CT scanner data. Comparison of the reconstructed images indicates that the vector entropy method gives the best results in terms of error (difference between the original phantom data and the reconstruction), smoothness (suppression of noise) and grey-value resolution, and is free of ghost images. (author)

  11. Pitch Tracking and Voiced/Unvoiced Detection in Noisy Environment using Optimal Sequence Estimation

    OpenAIRE

    Wasserblat, Moshe; Gainza, Mikel; Dorran, David; Domb, Yuval

    2008-01-01

    This paper addresses the problem of pitch tracking and voiced/unvoiced detection in noisy speech environments. An algorithm is presented which uses a number of variable thresholds to track the pitch contour with minimal error. This is achieved by modeling the pitch tracking problem in such a way that allows the use of optimal estimation methods, such as MLSE. The performance of the algorithm is evaluated using the Keele pitch detection database with realistic background noise. Results show best perf...

  12. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    Science.gov (United States)

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were estimated separately from the respiratory-gated-only and cardiac-gated-only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of the quantitative accuracy of the estimated MVF. Methods 3 and 4 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be

  13. A Hybrid Method of medical Image Restoration with Gaussian and Impulsive Noise; Un Metodo Hibrido de Restauracion de Images Medidas con Ruido Gausino e Impulsivo

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, M. G.; Vidal, V.; Verdu, G.; Mayo, P.; Rodenas, F.

    2011-07-01

    The removal of noise to restore noisy images is currently an important issue; for example, medical images obtained by X-ray computed tomography using a small number of projections present noise of different types. In this paper we analyze and evaluate two techniques, each of which separately behaves efficiently for the removal of Gaussian and impulsive noise, respectively; combined into a hybrid approach, they achieve very good quality with respect to most different types of noise.
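
    The abstract names the noise types but not the two techniques; a generic version of such a hybrid, a median filter for the impulsive component followed by Gaussian smoothing for the residual Gaussian component, takes a few lines with SciPy (filter sizes are illustrative):

```python
from scipy import ndimage

def hybrid_restore(noisy, median_size=3, gauss_sigma=1.0):
    """Median filtering first removes salt-and-pepper outliers;
    Gaussian filtering then attenuates the remaining Gaussian noise."""
    despiked = ndimage.median_filter(noisy, size=median_size)
    return ndimage.gaussian_filter(despiked, sigma=gauss_sigma)
```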

  14. Performance of unbalanced QPSK in the presence of noisy reference and crosstalk

    Science.gov (United States)

    Divsalar, D.; Yuen, J. H.

    1979-01-01

    The problem of transmitting two telemetry data streams having different rates and different powers using unbalanced quadriphase-shift keying (UQPSK) signaling is considered. It is noted that the presence of a noisy carrier phase reference causes a degradation in detection performance in coherent communication systems, and that imperfect carrier synchronization not only attenuates the main demodulated signal voltage in UQPSK but also produces interchannel interference (crosstalk) which degrades the performance still further. Exact analytical expressions for the symbol error probability of UQPSK in the presence of a noisy phase reference are derived.

  15. On robust signal reconstruction in noisy filter banks

    CERN Document Server

    Vikalo, H; Hassibi, B; Kailath, T; 10.1016/j.sigpro.2004.08.011

    2005-01-01

    We study the design of synthesis filters in noisy filter bank systems from an H∞ estimation point of view. The H∞ approach is most promising in situations where the statistical properties of the disturbances (arising from quantization, compression, etc.) in each subband of the filter bank are unknown or too difficult to model and analyze. For the important special case of unitary analysis polyphase matrices, we obtain an explicit expression for the minimum achievable disturbance attenuation. For arbitrary analysis polyphase matrices, standard state-space H∞ techniques can be employed to obtain numerical solutions. When the synthesis filters are restricted to being FIR, as is often the case in practice, the design can be cast as a finite-dimensional semi-definite program. In this case, we can effectively exploit the inherent non-uniqueness of the H∞ solution to optimize for an additional criterion. By optimizing for average performance in addition to th...

  16. The effect of base image window level selection on the dimensional measurement accuracy of resultant three-dimensional image displays

    International Nuclear Information System (INIS)

    Kurmis, A.P.; Hearn, T.C.; Reynolds, K.J.

    2003-01-01

    Purpose: The aim of this study was to determine the effect of base image window level selection on direct linear measurement of knee structures displayed using new magnetic resonance (MR)-based three-dimensional reconstructed computer imaging techniques. Methods: A prospective comparative study was performed using a series of three-dimensional knee images generated from conventional MR imaging (MRI) sections. Thirty distinct anatomical structural features were identified within the image series, and repeated measurements of these were compared at 10 different window grey-scale levels. Results: Statistical analysis demonstrated an excellent raw correlation between measurements and suggested no significant difference between measurements made at each of the 10 window level settings (P>0.05). Conclusions: The findings of this study suggest that, unlike conventional MR or CT applications, grey-scale window level selection at the time of imaging does not significantly affect the visual quality of resultant three-dimensional reconstructed images and hence the accuracy of subsequent direct linear measurement. The diagnostic potential of clinical progression from routine two-dimensional to advanced three-dimensional reconstructed imaging techniques may therefore be less likely to be degraded by inappropriate MR technician image windowing during the capture of image series.

  17. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove the impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method to find the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, employing an optimal suppression factor for perfect clustering of the given data set. To evaluate the robustness of the proposed approach in a noisy environment, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
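
    The three-stage pipeline can be sketched end to end, with the caveats that a scalar median filter replaces the paper's vector median and a textbook fuzzy c-means replaces the enhanced suppressed variant:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

def fcm(values, n_clusters=3, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D intensities."""
    centers = np.linspace(values.min(), values.max(), n_clusters)
    for _ in range(iters):
        d = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)  # fuzzy memberships
        um = u ** m
        centers = (um * values[:, None]).sum(0) / um.sum(0)
    return u.argmax(axis=1), centers

# Toy slice standing in for an MR image
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 0.7; img[24:40, 24:40] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

img = median_filter(img, size=3)                    # impulse-noise removal
coarse = img > threshold_otsu(img)                  # coarse segmentation
labels, _ = fcm(img[coarse].ravel(), n_clusters=2)  # refine tissue classes
```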

  18. Assessment of perfusion by dynamic contrast-enhanced imaging using a deconvolution approach based on regression and singular value decomposition.

    Science.gov (United States)

    Koh, T S; Wu, X Y; Cheong, L H; Lim, C C T

    2004-12-01

    The assessment of tissue perfusion by dynamic contrast-enhanced (DCE) imaging involves a deconvolution process. For the analysis of DCE imaging data, we implemented a regression approach to select appropriate regularization parameters for deconvolution using the standard and generalized singular value decomposition methods. Monte Carlo simulation experiments were carried out to study the performance and to compare with other existing methods used for deconvolution analysis of DCE imaging data. The present approach is found to be robust and reliable at the levels of noise commonly encountered in DCE imaging, and for different models of the underlying tissue vasculature. The advantages of the present method, as compared with previous methods, include its computational efficiency, its ability to achieve adequate regularization and reproduce less noisy solutions, and the fact that it does not require prior knowledge of the noise condition. The proposed method is applied to actual patient study cases with brain tumors and ischemic stroke, to illustrate its applicability as a clinical tool for diagnosis and assessment of treatment response.
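
    The deconvolution itself is usually set up as the inversion of a lower-triangular convolution matrix built from the arterial input function (AIF). The sketch below uses a plain truncated-SVD inversion, with a fixed relative threshold standing in for the regularization parameter that the paper instead selects by regression:

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, rel_thresh=0.1):
    """Recover the (flow-scaled) tissue residue function from the
    convolution C = dt * A @ R by truncated-SVD inversion of the
    lower-triangular AIF matrix A."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)  # truncation
    return Vt.T @ (s_inv * (U.T @ tissue))
```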

  19. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, the information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which can speed up the segmentation and present satisfactory results in cell image processing.

  20. Bifurcation analysis of a noisy vibro-impact oscillator with two kinds of fractional derivative elements

    Science.gov (United States)

    Yang, YongGe; Xu, Wei; Yang, Guidong

    2018-04-01

    To the best of the authors' knowledge, little work has addressed noisy vibro-impact oscillators with fractional derivatives. Stochastic bifurcations of a vibro-impact oscillator with two kinds of fractional derivative elements driven by Gaussian white noise excitation are explored in this paper. We obtain analytical approximate solutions with the help of a non-smooth transformation and the stochastic averaging method. The numerical results from Monte Carlo simulation of the original system are regarded as the benchmark to verify the accuracy of the developed method. The results demonstrate that the proposed method has a satisfactory level of accuracy. We also discuss the stochastic bifurcation phenomena induced by the fractional coefficients and fractional derivative orders. The important and interesting conclusion of this paper is that the effect of the first fractional derivative order on the system is exactly contrary to that of the second fractional derivative order.

  1. Extortion under uncertainty: Zero-determinant strategies in noisy games

    Science.gov (United States)

    Hao, Dong; Rong, Zhihai; Zhou, Tao

    2015-05-01

    Repeated game theory has been one of the most prevailing tools for understanding the long-running relationships which are the foundation of human society. Recent works have revealed a new set of "zero-determinant" (ZD) strategies, an important advance in repeated games. A ZD strategy player can exert unilateral control over the two players' payoffs. In particular, he can deterministically set the opponent's payoff, or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of the payoffs. One limitation of the original ZD strategy, however, is that it does not capture the notion of robustness when the game is subjected to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games and find that ZD strategies are highly robust against errors. We further derive the pinning strategy under noise, by which the ZD strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff control ability declines as the noise strength increases. Due to the uncertainty caused by noise, the ZD strategy player cannot ensure that his payoff is permanently higher than the opponent's, which implies that dominant extortions do not exist even under low noise. We show, however, that the ZD strategy player can still establish a novel kind of extortion, named contingent extortion, where any increase of his own payoff always exceeds that of the opponent's by a fixed percentage; the conditions under which contingent extortions can be realized become more stringent as the noise grows stronger.

  2. Graph state generation with noisy mirror-inverting spin chains

    International Nuclear Information System (INIS)

    Clark, Stephen R; Klein, Alexander; Bruderer, Martin; Jaksch, Dieter

    2007-01-01

    We investigate the influence of noise on a graph state generation scheme which exploits a mirror-inverting spin chain. Within this scheme the spin chain is used repeatedly as an entanglement bus (EB) to create multi-partite entanglement. The noise model we consider comprises each spin of this EB being exposed to independent local noise which degrades the capabilities of the EB. Here we concentrate on quantifying its performance as a single-qubit channel and as a mediator of a two-qubit entangling gate, since these are basic operations necessary for graph state generation using the EB. In particular, for the single-qubit case we numerically calculate the average channel fidelity and whether the channel becomes entanglement breaking, i.e. expunges any entanglement the transferred qubit may have with other external qubits. We find that neither local decay nor dephasing noise causes entanglement breaking. This is in contrast to local thermal and depolarizing noise, where we determine a critical length and critical noise coupling, respectively, at which entanglement breaking occurs. The critical noise coupling for local depolarizing noise is found to exhibit a power-law dependence on the chain length. For two qubits we similarly compute the average gate fidelity and whether the ability of this gate to create entanglement is maintained. The concatenation of these noisy gates for the construction of a five-qubit linear cluster state and a Greenberger-Horne-Zeilinger state indicates that the level of noise that can be tolerated for graph state generation is tightly constrained.

  3. Effect of blood glucose level on 18F-FDG PET/CT imaging

    International Nuclear Information System (INIS)

    Tan Haibo; Lin Xiangtong; Guan Yihui; Zhao Jun; Zuo Chuantao; Hua Fengchun; Tang Wenying

    2008-01-01

    Objective: The aim of this study was to investigate the effect of blood glucose level on the image quality of 18F-fluorodeoxyglucose (FDG) PET/CT imaging. Methods: Eighty patients referred to the authors' department for routine whole-body 18F-FDG PET/CT check-up were recruited into this study. The patients were classified into 9 groups according to their blood glucose level, from a normal group upwards; image quality scores, image noise and liver standardized uptake values (SUVavg and SUVmax) on different slices were compared. SPSS 12.0 was used to analyse the data. Results: (1) There were significant differences among the 9 groups in image quality scores and image noises (all P<0.05). (2) Blood glucose level correlated with liver SUVavg and SUVmax (0.60 and 0.33, P<0.05). Conclusions: The higher the blood glucose level, the worse the image quality. When the blood glucose level is greater than or equal to 12.0 mmol/L, the image quality degrades significantly. (authors)

  4. Frequency-Zooming ARMA Modeling for Analysis of Noisy String Instrument Tones

    Directory of Open Access Journals (Sweden)

    Paulo A. A. Esquef

    2003-09-01

    This paper addresses model-based analysis of string instrument sounds. In particular, it reviews the application of autoregressive (AR) modeling to sound analysis/synthesis purposes. Moreover, a frequency-zooming autoregressive moving average (FZ-ARMA) modeling scheme is described. The performance of the FZ-ARMA method in modeling the modal behavior of isolated groups of resonance frequencies is evaluated for both synthetic and real string instrument tones immersed in background noise. We demonstrate that FZ-ARMA modeling is a robust tool for estimating the decay time and frequency of partials of noisy tones. Finally, we discuss the use of the method in the synthesis of string instrument sounds.

  5. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    Science.gov (United States)

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both against a native PT and against the PT preceded by a denoising stage.

  6. Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images

    Science.gov (United States)

    Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan

    2017-08-01

    Image feature extraction is a technique for identifying the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney in ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to remove speckle noise, ensuring that the image preserves the pixels in the region of interest (ROI) for further extraction; a Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background, where a mask is created to eliminate unwanted intensity values. Statistics-based texture feature methods are used, namely the intensity histogram (IH), the gray-level co-occurrence matrix (GLCM) and the gray-level run-length matrix (GLRLM). These methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicate that three features (contrast, difference variance and inverse difference moment normalized) from the GLCM are not statistically significant, suggesting that these three features describe the characteristics of a healthy kidney regardless of ultrasound image quality.
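
    The GLCM features named above have close analogues in scikit-image (its 'homogeneity' is the inverse difference moment; the normalized IDM and difference-variance variants used in the paper are not built in):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in older versions

def glcm_features(roi_uint8):
    """Average a few GLCM texture properties over two directions."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(0)
roi = (rng.random((32, 32)) * 255).astype(np.uint8)  # toy stand-in for a kidney ROI
print(glcm_features(roi))
```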

  7. Optimal Power Constrained Distributed Detection over a Noisy Multiaccess Channel

    Directory of Open Access Journals (Sweden)

    Zhiwen Hu

    2015-01-01

    The problem of optimal power constrained distributed detection over a noisy multiaccess channel (MAC) is addressed. Under local power constraints, we define the transformation function for each sensor to realize the mapping from local decision to transmitted waveform. Deflection coefficient maximization (DCM) is used to optimize the performance of the power-constrained fusion system. Using optimality conditions, we derive the closed-form solution to the considered problem. Monte Carlo simulations are carried out to evaluate the performance of the proposed method. Simulation results show that the proposed method can significantly improve the detection performance of fusion systems with low signal-to-noise ratio (SNR). We also show that the proposed method has robust detection performance over a broad SNR region.

  8. Bounds on the dynamics of sink populations with noisy immigration.

    Science.gov (United States)

    Eager, Eric Alan; Guiver, Chris; Hodgson, Dave; Rebarber, Richard; Stott, Iain; Townley, Stuart

    2014-03-01

    Sink populations are doomed to decline to extinction in the absence of immigration. The dynamics of sink populations are not easily modelled using the standard framework of per capita rates of immigration, because numbers of immigrants are determined by extrinsic sources (for example, source populations, or population managers). Here we appeal to a systems and control framework to place upper and lower bounds on both the transient and future dynamics of sink populations that are subject to noisy immigration. Immigration has a number of interpretations and can fit a wide variety of models found in the literature. We apply the results to case studies derived from published models for Chinook salmon (Oncorhynchus tshawytscha) and blowout penstemon (Penstemon haydenii).

  9. Information jet: Handling noisy big data from weakly disconnected network

    Science.gov (United States)

    Aurongzeb, Deeder

    Sudden aggregation (an information jet) of large amounts of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security threat attacks, online sales channels, etc. Clustering of information jets based on time series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that context-based pre-clustering can eliminate soft networks, or networks of information, which is critical to minimizing the time needed to compute results from noisy big data. We show the difference between stochastic gradient boosting and time-series graph clustering. For disconnected, higher-dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.

  10. Reorganization of auditory map and pitch discrimination in adult rats chronically exposed to low-level ambient noise

    Directory of Open Access Journals (Sweden)

    Weimin Zheng

    2012-09-01

    Behavioral adaptation to a changing environment is critical for an animal's survival. How well the brain can modify its functional properties based on experience essentially defines the limits of behavioral adaptation. In adult animals the extent to which experience shapes brain function has not been fully explored. Moreover, the perceptual consequences of experience-induced changes in the brains of adults remain unknown. Here we show that the tonotopic map in the primary auditory cortex of adult rats living with low-level ambient noise underwent a dramatic reorganization. Behaviorally, chronic noise exposure impaired fine, but not coarse, pitch discrimination. When tested in a noisy environment, the noise-exposed rats performed as well as in a quiet environment, whereas the control rats performed poorly. This suggests that noise-exposed animals had adapted to living in a noisy environment. Behavioral pattern analyses revealed that stress or distraction engendered by the noisy background could not account for the poor performance of the control rats in a noisy environment. A reorganized auditory map may therefore have served as the neural substrate for the consistent performance of the noise-exposed rats in a noisy environment.

  11. Processing of noisy magnetotelluric time series from Koyna-Warna seismic region, India: a systematic approach

    Directory of Open Access Journals (Sweden)

    Ujjal K. Borah

    2015-06-01

    Broadband magnetotelluric (MT) data were acquired in a rolling-array pattern in the Koyna-Warna (Maharashtra, India) seismic zone during the 2012-14 field campaigns. The main objective of this study is to identify the thickness of the Deccan trap in and around the Koyna-Warna seismic zone and to delineate the electrical nature of the sub-basalt. At many places the MT data were contaminated with high-tension power-line noise from the Koyna hydroelectric power project. So, in the present study an attempt has been made to tackle the problem of 50 Hz noise, its harmonics, and other cultural noise using the commercially available processing software MAPROS. A remote reference site was run during the entire field period to counter the cultural noise problem. This study is based on the Fast Fourier Transform (FFT) and mainly focuses on the behaviour of different processing parameters, their interrelations, and the influence of different processing methods on improving the S/N ratio of noisy data. Our study suggests that no single processing approach can give desirable transfer functions; however, a combination of different processing approaches may be adopted when processing culturally affected noisy data.

  12. An Approach to Improve the Quality of Infrared Images of Vein-Patterns

    Directory of Open Access Journals (Sweden)

    Chih-Lung Lin

    2011-12-01

    This study develops an approach to improve the quality of infrared (IR) images of vein patterns, which usually suffer from noise, low contrast, low brightness and small objects of interest, and thus require preprocessing to improve their quality. The main characteristics of the proposed approach are that no prior knowledge about the IR image is necessary and no parameters must be preset. Two main goals are pursued: impulse noise reduction and adaptive contrast enhancement. In our study, a fast median-based filter (FMBF) is developed as the noise reduction method. It is based on an IR imaging mechanism to detect the noisy pixels and on a modified median-based filter to remove them from IR images. FMBF has the advantage of a low computational load. In addition, FMBF retains reasonably good edge and texture information as the size of the filter window increases. Most importantly, the peak signal-to-noise ratio (PSNR) achieved by FMBF is higher than that achieved by the median filter. A hybrid cumulative histogram equalization (HCHE) is proposed for adaptive contrast enhancement. HCHE can automatically generate a hybrid cumulative histogram (HCH) based on two different pieces of information about the image histogram, and it improves the enhancement of hot objects rather than the background. The experimental results demonstrate that the proposed approach is feasible as an effective, adaptive process for enhancing the quality of IR vein-pattern images.
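
    The detect-then-replace idea behind FMBF (flag only the pixels that look impulsive, leave everything else untouched) can be sketched as follows; the actual FMBF detection rule is not given in the abstract, so a simple deviation-from-local-median test stands in for it:

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_and_replace(img, size=3, thresh=30):
    """Replace only pixels deviating strongly from the local median,
    so edges and texture elsewhere are preserved."""
    med = median_filter(img, size=size)
    impulsive = np.abs(img.astype(float) - med.astype(float)) > thresh
    out = img.copy()
    out[impulsive] = med[impulsive]
    return out
```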

  13. Fluid-fluid level on MR image: significance in Musculoskeletal diseases

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hye Won; Lee, Kyung Won [Seoul National University, Seoul (Korea, Republic of). Coll. of Medicine; Song, Chi Sung [Seoul City Boramae Hospital, Seoul (Korea, Republic of); Han, Sang Wook; Kang, Heung Sik [Seoul National University, Seoul (Korea, Republic of). Coll. of Medicine

    1998-01-01

    To evaluate the frequency, number and signal intensity of fluid-fluid levels in musculoskeletal diseases on MR images, and to determine the usefulness of this information for the differentiation of musculoskeletal diseases. MR images revealed fluid-fluid levels in the following diseases: giant cell tumor (6), telangiectatic osteosarcoma (4), aneurysmal bone cyst (3), synovial sarcoma (3), chondroblastoma (2), soft tissue tuberculous abscess (2), hematoma (2), hemangioma (1), neurilemmoma (1), metastasis (1), malignant fibrous histiocytoma (1), bursitis (1), pyogenic abscess (1), and epidermoid inclusion cyst (1). Fourteen benign tumors, ten malignant tumors, three abscesses, and the epidermoid inclusion cyst showed only one fluid-fluid level in a unilocular cyst. On T1-weighted images the signal intensities of the fluid varied, but on T2-weighted images the superior layers were in most cases more hyperintense than the inferior layers. Because fluid-fluid levels are a nonspecific finding, it is difficult to diagnose each disease specifically according to the number of fluid-fluid levels or the signal intensity of the fluid. In spite of this nonspecificity, fluid-fluid levels were frequently seen in giant cell tumor, telangiectatic osteosarcoma, aneurysmal bone cyst, and synovial sarcoma. Nontumorous diseases such as abscesses and hematomas also demonstrated this finding. (author). 11 refs., 1 tab., 4 figs.

  14. Fluid-fluid level on MR image: significance in Musculoskeletal diseases

    International Nuclear Information System (INIS)

    Chung, Hye Won; Lee, Kyung Won; Han, Sang Wook; Kang, Heung Sik

    1998-01-01

    To evaluate the frequency, number and signal intensity of fluid-fluid levels in musculoskeletal diseases on MR images, and to determine the usefulness of this information for the differentiation of musculoskeletal diseases. MR images revealed fluid-fluid levels in the following diseases: giant cell tumor (6), telangiectatic osteosarcoma (4), aneurysmal bone cyst (3), synovial sarcoma (3), chondroblastoma (2), soft tissue tuberculous abscess (2), hematoma (2), hemangioma (1), neurilemmoma (1), metastasis (1), malignant fibrous histiocytoma (1), bursitis (1), pyogenic abscess (1), and epidermoid inclusion cyst (1). Fourteen benign tumors, ten malignant tumors, three abscesses, and the epidermoid inclusion cyst showed only one fluid-fluid level in a unilocular cyst. On T1-weighted images the signal intensities of the fluid varied, but on T2-weighted images the superior layers were in most cases more hyperintense than the inferior layers. Because fluid-fluid levels are a nonspecific finding, it is difficult to diagnose each disease specifically according to the number of fluid-fluid levels or the signal intensity of the fluid. In spite of this nonspecificity, fluid-fluid levels were frequently seen in giant cell tumor, telangiectatic osteosarcoma, aneurysmal bone cyst, and synovial sarcoma. Nontumorous diseases such as abscesses and hematomas also demonstrated this finding. (author). 11 refs., 1 tab., 4 figs.

  15. Tile-Level Annotation of Satellite Images Using Multi-Level Max-Margin Discriminative Random Field

    Directory of Open Access Journals (Sweden)

    Hong Sun

    2013-05-01

    This paper proposes a multi-level max-margin discriminative analysis (M3DA) framework, which takes both coarse and fine semantics into consideration, for the annotation of high-resolution satellite images. In order to generate more discriminative topic-level features, the M3DA uses the maximum entropy discrimination latent Dirichlet allocation (MedLDA) model. Moreover, to improve the spatial coherence of visual words neglected by M3DA, a conditional random field (CRF) is employed to optimize the soft label field composed of multiple label posteriors. The M3DA framework enables one to combine word-level features (generated by support vector machines) and topic-level features (generated by MedLDA) via the bag-of-words representation. The experimental results on high-resolution satellite images demonstrate that the proposed method can not only obtain a suitable semantic interpretation, but also improve annotation performance by taking into account multi-level semantics and contextual information.

  16. Image quality enhancement in low-light-level ghost imaging using modified compressive sensing method

    Science.gov (United States)

    Shi, Xiaohui; Huang, Xianwei; Nan, Suqin; Li, Hengxing; Bai, Yanfeng; Fu, Xiquan

    2018-04-01

    Detector noise has a significantly negative impact on ghost imaging at low light levels, especially for existing recovery algorithms. Based on the characteristics of additive detector noise, a method named modified compressive sensing ghost imaging is proposed to reduce the background imposed by the randomly distributed detector noise in the signal path. Experimental results show that, with an appropriate choice of threshold value, the modified compressive sensing ghost imaging algorithm can dramatically enhance the contrast-to-noise ratio of the object reconstruction compared with traditional ghost imaging and compressive sensing ghost imaging methods. The relationship between the contrast-to-noise ratio of the reconstructed image and the intensity ratio (namely, the ratio of average signal intensity to average noise intensity) for the three reconstruction algorithms is also discussed. This noise-suppression imaging technique will have important applications in remote sensing and security.

  17. Patterned brain stimulation, what a framework with rhythmic and noisy components might tell us about recovery maximization

    Directory of Open Access Journals (Sweden)

    Sein Schmidt

    2013-06-01

    Brain stimulation is having a remarkable impact on clinical neurology. Brain stimulation can modulate neuronal activity in functionally segregated, circumscribed regions of the human brain. Polarity-, frequency- and noise-specific stimulation can induce specific manipulations of neural activity. In contrast to neocortical stimulation, deep-brain stimulation has become a tool that can dramatically improve the impact clinicians can have on movement disorders. Neocortical brain stimulation, by contrast, is proving to be remarkably susceptible to intrinsic brain states. Although evidence is accumulating that brain stimulation can facilitate recovery processes in patients with cerebral stroke, the high variability of results impedes successful clinical implementation. Interestingly, recent data in healthy subjects suggest that brain-state-dependent patterned stimulation might help resolve some of the intrinsic variability found in previous studies. In parallel, other studies suggest that noisy, stochastic-resonance-like processes are a non-negligible component in NBS studies. The hypothesis developed in this manuscript is that stimulation patterning with noisy and oscillatory components will help patients recover from stroke-related deficits more reliably. To address this hypothesis we focus on two factors common to both neural computation (intrinsic variables) and brain stimulation (extrinsic variables): noise and oscillation. We review diverse theoretical and experimental evidence demonstrating that subject- and function-specific brain states are associated with specific oscillatory activity patterns. These states are transient and can be maintained by noisy processes. The resulting control procedures can resemble homeostatic or stochastic-resonance processes. In this context we try to raise awareness of inter-individual differences and of the use of individualized stimulation to maximize the recovery of stroke patients.

  18. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations, and examples of the performance of the procedure are given.

  19. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. In the meantime, the information obtained by preprocessing the cell images with the Otsu algorithm is used to initialize the level set function in the model, which can speed up the segmentation and present satisfactory results in cell image processing.

  20. An aperiodic phenomenon of the unscented Kalman filter in filtering noisy chaotic signals

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A non-periodic oscillatory behavior of the unscented Kalman filter (UKF), when used to filter noise-contaminated chaotic signals, is reported. We show both theoretically and experimentally that the gain of the UKF may neither converge nor diverge, but instead oscillate aperiodically. More precisely, when a nonlinear system is periodic, the Kalman gain and error covariance of the UKF converge to zero. However, when the system being considered is chaotic, the Kalman gain either converges to a fixed point with a magnitude larger than zero or oscillates aperiodically.

  1. A quantification of the hazards of fitting sums of exponentials to noisy data

    International Nuclear Information System (INIS)

    Bromage, G.E.

    1983-06-01

    The ill-conditioned nature of sums-of-exponentials analyses is confirmed and quantified, using synthetic noisy data. In particular, the magnification of errors is plotted for various two-exponential models, to illustrate its dependence on the ratio of decay constants, and on the ratios of amplitudes of the contributing terms. On moving from two- to three-exponential models, the condition deteriorates badly. It is also shown that the use of 'direct' Prony-type analyses (rather than general iterative nonlinear optimisation) merely aggravates the condition. (author)
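
    The ill-conditioning is easy to reproduce numerically: fit a two-exponential model to noisy synthetic data and watch the parameter uncertainties blow up as the two decay constants approach each other (all values below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 200)
for k2 in (3.0, 1.5, 1.1):  # ratio of decay constants k2/k1
    y = two_exp(t, 1.0, 1.0, 1.0, k2) + 0.01 * rng.standard_normal(t.size)
    popt, pcov = curve_fit(two_exp, t, y, p0=(0.8, 0.8, 1.2, k2 * 1.2),
                           maxfev=20000)
    rel_err = np.sqrt(np.diag(pcov)) / np.abs(popt)
    print(f"k2/k1 = {k2:.1f}: relative parameter errors {rel_err.round(2)}")
```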

  2. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Alireza Ghasemi, S. [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, IR-Zanjan (Iran, Islamic Republic of); Roy, Shantanu [Computational and Systems Biology, Biozentrum, University of Basel, CH-4056 Basel (Switzerland)

    2015-01-21

    Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations, such as global minimum searches or characterizations of chemical reactions, require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows obtaining significant curvature information of noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.

  3. Stabilized quasi-Newton optimization of noisy potential energy surfaces

    International Nuclear Information System (INIS)

    Schaefer, Bastian; Goedecker, Stefan; Alireza Ghasemi, S.; Roy, Shantanu

    2015-01-01

    Optimizations of atomic positions are among the most commonly performed tasks in electronic structure calculations. Many simulations, such as global minimum searches or characterizations of chemical reactions, require performing hundreds or thousands of minimizations or saddle computations. To automate these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows one to obtain significant curvature information from noisy potential energy surfaces. We use this technique to construct both a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle-finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle-finding approach are superior to comparable existing methods.

  4. Technical Note: Correcting for signal attenuation from noisy proxy data in climate reconstructions

    KAUST Repository

    Ammann, C. M.

    2010-04-20

    Regression-based climate reconstructions scale one or more noisy proxy records against a (generally) short instrumental data series. Based on that relationship, the indirect information is then used to estimate that particular measure of climate back in time. A well-calibrated proxy record(s), if stationary in its relationship to the target, should faithfully preserve the mean amplitude of the climatic variable. However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise. This issue is known as "Measurement Error" (Fuller, 1987; Carroll et al., 2006). Climate proxies derived from tree-rings, ice cores, lake sediments, etc., are inherently noisy and thus all regression-based reconstructions could suffer from this problem. Some recent applications attempt to ward off amplitude attenuation, but implementations are often complex (Lee et al., 2008) or require additional information, e.g. from climate models (Hegerl et al., 2006, 2007). Here we explain the cause of the problem and propose an easy, generally applicable, data-driven strategy to effectively correct for attenuation (Fuller, 1987; Carroll et al., 2006), even at annual resolution. The impact is illustrated in the context of a Northern Hemisphere mean temperature reconstruction. An inescapable trade-off for achieving an unbiased reconstruction is an increase in variance, but for many climate applications the change in mean is a core interest.
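
    The correction itself is compact: an ordinary least-squares slope is attenuated by the reliability ratio of the noisy predictor, so dividing by that ratio undoes the bias. Below is a minimal method-of-moments sketch in the spirit of Fuller (1987), on synthetic data with an assumed known proxy noise variance; all names are illustrative.

        # Sketch: undoing regression attenuation caused by a noisy predictor.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 150
        climate = rng.normal(0.0, 1.0, n)                  # true signal
        proxy = climate + rng.normal(0.0, 0.8, n)          # noisy proxy record
        target = 0.9 * climate + rng.normal(0.0, 0.1, n)   # instrumental series

        beta_ols = np.cov(proxy, target)[0, 1] / np.var(proxy, ddof=1)

        # Reliability ratio = signal variance / total variance of the proxy;
        # here the noise variance (0.8**2) is assumed known or estimated.
        reliability = (np.var(proxy, ddof=1) - 0.8 ** 2) / np.var(proxy, ddof=1)
        beta_corrected = beta_ols / reliability   # ~0.9 on average
        # The price of the unbiased slope is a larger variance of the
        # estimate -- the trade-off noted in the abstract.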

  5. Technical Note: Correcting for signal attenuation from noisy proxy data in climate reconstructions

    Directory of Open Access Journals (Sweden)

    C. M. Ammann

    2010-04-01

    Full Text Available Regression-based climate reconstructions scale one or more noisy proxy records against a (generally) short instrumental data series. Based on that relationship, the indirect information is then used to estimate that particular measure of climate back in time. A well-calibrated proxy record(s), if stationary in its relationship to the target, should faithfully preserve the mean amplitude of the climatic variable. However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise. This issue is known as "Measurement Error" (Fuller, 1987; Carroll et al., 2006). Climate proxies derived from tree-rings, ice cores, lake sediments, etc., are inherently noisy and thus all regression-based reconstructions could suffer from this problem. Some recent applications attempt to ward off amplitude attenuation, but implementations are often complex (Lee et al., 2008) or require additional information, e.g. from climate models (Hegerl et al., 2006, 2007). Here we explain the cause of the problem and propose an easy, generally applicable, data-driven strategy to effectively correct for attenuation (Fuller, 1987; Carroll et al., 2006), even at annual resolution. The impact is illustrated in the context of a Northern Hemisphere mean temperature reconstruction. An inescapable trade-off for achieving an unbiased reconstruction is an increase in variance, but for many climate applications the change in mean is a core interest.

  6. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.

  7. Implementation of two-party protocols in the noisy-storage model

    International Nuclear Information System (INIS)

    Wehner, Stephanie; Curty, Marcos; Schaffner, Christian; Lo, Hoi-Kwong

    2010-01-01

    The noisy-storage model allows the implementation of secure two-party protocols under the sole assumption that no large-scale reliable quantum storage is available to the cheating party. No quantum storage is thereby required for the honest parties. Examples of such protocols include bit commitment, oblivious transfer, and secure identification. Here, we provide a guideline for the practical implementation of such protocols. In particular, we analyze security in a practical setting where the honest parties themselves are unable to perform perfect operations and need to deal with practical problems such as errors during transmission and detector inefficiencies. We provide explicit security parameters for two different experimental setups using weak coherent and parametric down-conversion sources. In addition, we analyze a modification of the protocols based on decoy states.

  8. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment

    OpenAIRE

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2012-01-01

    Background In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether "flashlight" guidance influences CPR performance in a simulated noisy setting. Materials and methods We recruited 30 senior medical students with...

  9. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

    Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: Recent years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with changes of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
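
    For a flavor of such a pipeline, the sketch below matches SIFT descriptors between a frame and a reference image and rejects outliers; it uses OpenCV's RANSAC homography as a simplified stand-in for the coherent point drift registration described above (CPD itself is available in third-party packages such as pycpd).

        # Sketch: SIFT matching with ratio test and RANSAC outlier rejection.
        import cv2
        import numpy as np

        def register(frame, reference):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(frame, None)
            k2, d2 = sift.detectAndCompute(reference, None)
            # Lowe's ratio test keeps only distinctive matches
            matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            # RANSAC rejects residual outliers and returns a 3x3 homography
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return cv2.warpPerspective(frame, H, reference.shape[::-1])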

  10. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general... Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.

  11. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and sufficient robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can produce smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can produce smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which makes the proposed model more robust to noise than FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
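
    For reference, plain fuzzy C-means on pixel intensities can be written in a few lines; the sketch below omits both the spatial constraint of FCMS and the level set coupling of the proposed model, and its names are illustrative.

        # Sketch: plain fuzzy C-means (no spatial term, no level set coupling).
        import numpy as np

        def fcm(x, c=3, m=2.0, iters=50, seed=0):
            rng = np.random.default_rng(seed)
            u = rng.random((c, x.size))
            u /= u.sum(axis=0)                  # memberships: columns sum to 1
            for _ in range(iters):
                um = u ** m
                v = (um @ x) / um.sum(axis=1)   # cluster centres
                d = np.abs(x[None, :] - v[:, None]) + 1e-12
                u = d ** (-2.0 / (m - 1.0))
                u /= u.sum(axis=0)              # standard FCM membership update
            return u, v

        # memberships, centres = fcm(image.ravel().astype(float))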

  12. Correlation characteristics of optical coherence tomography images of turbid media with statistically inhomogeneous optical parameters

    International Nuclear Information System (INIS)

    Dolin, Lev S.; Sergeeva, Ekaterina A.; Turchin, Ilya V.

    2012-01-01

    The noisy structure of optical coherence tomography (OCT) images of a turbid medium contains information about spatial variations of its optical parameters. We propose an analytical model of the statistical characteristics of OCT signal fluctuations from a turbid medium with spatially inhomogeneous coefficients of absorption and backscattering. The analytically predicted correlation characteristics of the OCT signal from a spatially inhomogeneous medium are in good agreement with the results of correlation analysis of OCT images of different biological tissues. The proposed model can be efficiently applied for quantitative evaluation of the statistical properties of absorption and backscattering fluctuations based on the correlation characteristics of OCT images.

  13. Nutrients and toxin producing phytoplankton control algal blooms - a spatio-temporal study in a noisy environment.

    Science.gov (United States)

    Sarkar, Ram Rup; Malchow, Horst

    2005-12-01

    A phytoplankton-zooplankton prey-predator model has been investigated for temporal, spatial and spatio-temporal dissipative pattern formation in deterministic and noisy environments, respectively. The overall carrying capacity for the phytoplankton population depends on the nutrient level. The role of nutrient concentrations and toxin-producing phytoplankton in controlling algal blooms is discussed. The local analysis yields a number of stationary and/or oscillatory regimes and their combinations. The spatio-temporal behaviour, modelled by stochastic reaction-diffusion equations, is correspondingly interesting. The present study also reveals that the rate of toxin production by toxin-producing phytoplankton (TPP) plays an important role in controlling oscillations in the plankton system. We also observe that different mortality functions of zooplankton due to TPP have a significant influence in controlling oscillations, coexistence, survival or extinction of the zooplankton population. External noise can enhance the survival and spread of zooplankton that would go extinct in the deterministic system due to a high rate of toxin production.

  14. Indium-111 labeled leukocyte images demonstrating a lung abscess with prominent fluid level

    International Nuclear Information System (INIS)

    Massie, J.D.; Winer-Muram, H.

    1986-01-01

    In-111 labeled leukocyte images show an abscess cavity with a fluid level on 24-hour upright images. Fluid levels, frequently seen on radiographs, are uncommon on nuclear images. This finding demonstrates rapid migration of labeled leukocytes into purulent abscess fluid

  15. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos are in two different visual domains, i.e. black and white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when executed across domain and abstraction-level. To address these challenges, we propose to bridge the image-sketch gap both at the high-level via parts and attributes, as well as at the low-level, via introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned that subsequently enable automatic detection of part-level attributes, and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain-alignment, that exploits both subspace and instance-level cues to better align the domains. Finally (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate effectiveness of the proposed method.

  16. The consequences of multiplexing and limited view angle in coded-aperture imaging

    International Nuclear Information System (INIS)

    Smith, W.E.; Barrett, H.H.; Paxman, R.G.

    1984-01-01

    Coded-aperture imaging (CAI) is a method for reconstructing distributions of radionuclide tracers that offers advantages over ECT and PET; namely, many views can be taken simultaneously without detector motion, and large numbers of photons are utilized since collimators are not required. However, because of this type of data acquisition, the coded image suffers from multiplexing; i.e., more than one object point may be mapped to each detector in the coded image. To investigate the dependence of the reconstruction on multiplexing, the authors reconstruct a simulated two-dimensional circular object from multiplexed one-dimensional coded-image data, then perform the reconstruction from un-multiplexed data. Each of these reconstructions is produced from both noise-free and noisy simulated data. To investigate the dependence on view angle, the authors reconstruct two simulated three-dimensional objects: a spherical phantom, and a series of point-like objects arranged nearly in a plane. Each of these reconstructions is from multiplexed two-dimensional coded-image data, first using two orthogonal views, and then a single viewing direction. The two-dimensional reconstructions demonstrate that, in the noise-free case, the multiplexing of the data does not seriously affect the reconstruction quality, and that, in the noisy-data case, the multiplexing helps, due to the fact that more photons are collected. Also, for point-like objects confined to a near-planar region of space, the authors show that restricted views can give satisfactory results, but that, for a large, three-dimensional object, a more complete viewing geometry is required.

  17. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments

    Science.gov (United States)

    Bass, Ellen J.; Baumgart, Leigh A.; Shepley, Kathryn Klein

    2014-01-01

    Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance. PMID:24847184

  18. The Effect of Information Analysis Automation Display Content on Human Judgment Performance in Noisy Environments.

    Science.gov (United States)

    Bass, Ellen J; Baumgart, Leigh A; Shepley, Kathryn Klein

    2013-03-01

    Displaying both the strategy that information analysis automation employs to make its judgments and variability in the task environment may improve human judgment performance, especially in cases where this variability impacts the judgment performance of the information analysis automation. This work investigated the contribution of providing either information analysis automation strategy information, task environment information, or both, on human judgment performance in a domain where noisy sensor data are used by both the human and the information analysis automation to make judgments. In a simplified air traffic conflict prediction experiment, 32 participants made probability of horizontal conflict judgments under different display content conditions. After being exposed to the information analysis automation, judgment achievement significantly improved for all participants as compared to judgments without any of the automation's information. Participants provided with additional display content pertaining to cue variability in the task environment had significantly higher aided judgment achievement compared to those provided with only the automation's judgment of a probability of conflict. When designing information analysis automation for environments where the automation's judgment achievement is impacted by noisy environmental data, it may be beneficial to show additional task environment information to the human judge in order to improve judgment performance.

  19. RAID: a relation-augmented image descriptor

    KAUST Repository

    Guerrero, Paul; Mitra, Niloy J.; Wonka, Peter

    2016-01-01

    As humans, we regularly interpret scenes based on how objects are related, rather than based on the objects themselves. For example, we see a person riding an object X or a plank bridging two objects. Current methods provide limited support to search for content based on such relations. We present RAID, a relation-augmented image descriptor that supports queries based on inter-region relations. The key idea of our descriptor is to encode region-to-region relations as the spatial distribution of point-to-region relationships between two image regions. RAID allows sketch-based retrieval and requires minimal training data, thus making it suited even for querying uncommon relations. We evaluate the proposed descriptor by querying into large image databases and successfully extract nontrivial images demonstrating complex inter-region relations, which are easily missed or erroneously classified by existing methods. We assess the robustness of RAID on multiple datasets even when the region segmentation is computed automatically or very noisy.

  20. RAID: a relation-augmented image descriptor

    KAUST Repository

    Guerrero, Paul

    2016-07-11

    As humans, we regularly interpret scenes based on how objects are related, rather than based on the objects themselves. For example, we see a person riding an object X or a plank bridging two objects. Current methods provide limited support to search for content based on such relations. We present RAID, a relation-augmented image descriptor that supports queries based on inter-region relations. The key idea of our descriptor is to encode region-to-region relations as the spatial distribution of point-to-region relationships between two image regions. RAID allows sketch-based retrieval and requires minimal training data, thus making it suited even for querying uncommon relations. We evaluate the proposed descriptor by querying into large image databases and successfully extract nontrivial images demonstrating complex inter-region relations, which are easily missed or erroneously classified by existing methods. We assess the robustness of RAID on multiple datasets even when the region segmentation is computed automatically or very noisy.

  1. Noise reduction and image enhancement using a hardware implementation of artificial neural networks

    Science.gov (United States)

    David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal

    1999-03-01

    In this paper, we present a neural based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks present the advantages of processing time reduction in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of this system. Further enhancements are being examined to improve the output of the system.

  2. Minimal Effects of Age and Exposure to a Noisy Environment on Hearing in Alpha9 Nicotinic Receptor Knockout Mice

    Directory of Open Access Journals (Sweden)

    Amanda M. Lauer

    2017-06-01

    Full Text Available Studies have suggested a role of weakened medial olivocochlear (OC efferent feedback in accelerated hearing loss and increased susceptibility to noise. The present study investigated the progression of hearing loss with age and exposure to a noisy environment in medial OC-deficient mice. Alpha9 nicotinic acetylcholine receptor knockout (α9KO and wild types were screened for hearing loss using auditory brainstem responses. α9KO mice housed in a quiet environment did not show increased hearing loss compared to wild types in young adulthood and middle age. Challenging the medial OC system by housing in a noisy environment did not increase hearing loss in α9KO mice compared to wild types. ABR wave 1 amplitudes also did not show differences between α9KO mice and wild types. These data suggest that deficient medial OC feedback does not result in early onset of hearing loss.

  3. A Scent of Lemon—Seller Meets Buyer with a Noisy Quality Observation

    Directory of Open Access Journals (Sweden)

    Jörgen W. Weibull

    2011-03-01

    Full Text Available We consider a market for lemons in which the seller is a monopolistic price setter and the buyer receives a private noisy signal of the product’s quality. We model this as a game and analyze perfect Bayesian equilibrium prices, trading probabilities and gains of trade. In particular, we vary the buyer’s signal precision, from being completely uninformative, as in standard models of lemons markets, to being perfectly informative. We show that high quality units are sold with positive probability even in the limit of uninformative signals, and we identify some discontinuities in the equilibrium predictions at the boundaries of completely uninformative and completely informative signals, respectively.

  4. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    Science.gov (United States)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes

  5. A new approach of recognition of ellipsoidal micro- and nanoparticles on AFM images and determination of their sizes

    International Nuclear Information System (INIS)

    Akhmadeev, Albert A; Kh Salakhov, Myakzyum

    2016-01-01

    In this work we develop an approach for the automatic recognition of ellipsoidal particles in atomic force microscopy (AFM) images and the determination of their sizes, based on image segmentation and surface approximation by ellipsoids. In addition to its comparative simplicity and speed, this method allows us to determine the size of particles whose surfaces are not completely visible in the image. The proposed method showed good results on simulated images, including noisy ones. Using this algorithm, the size distributions of silica particles in experimental AFM images have been determined. (paper)
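
    A much simpler stand-in for the surface-approximation step is to fit each segmented region with the ellipse that has the same second moments; the sketch below does this with scikit-image region properties (an assumption-level illustration, not the authors' algorithm).

        # Sketch: ellipse-based size estimates for particles in an AFM height map.
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops

        def particle_sizes(height_map):
            mask = height_map > threshold_otsu(height_map)
            sizes = []
            for region in regionprops(label(mask)):
                # Axes of the ellipse with the same second moments as the region
                sizes.append((region.major_axis_length, region.minor_axis_length))
            return sizes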

  6. Insight into dynamic genome imaging: Canonical framework identification and high-throughput analysis.

    Science.gov (United States)

    Ronquist, Scott; Meixner, Walter; Rajapakse, Indika; Snyder, John

    2017-07-01

    The human genome is dynamic in structure, complicating researchers' attempts at fully understanding it. Time-series fluorescence in situ hybridization (FISH) imaging has increased our ability to observe genome structure, but due to cell type and experimental variability these data are often noisy and difficult to analyze. Furthermore, computational analysis techniques are needed for homolog discrimination and canonical framework detection in the case of time-series images. In this paper we introduce novel ideas for nucleus imaging analysis, present findings extracted using dynamic genome imaging, and propose an objective algorithm for high-throughput, time-series FISH imaging. While a canonical framework could not be detected beyond statistical significance in the analyzed dataset, a mathematical framework for detection has been outlined, with extension to 3D image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Combining low level features and visual attributes for VHR remote sensing image classification

    Science.gov (United States)

    Zhao, Fumin; Sun, Hao; Liu, Shuai; Zhou, Shilin

    2015-12-01

    Semantic classification of very high resolution (VHR) remote sensing images is of great importance for land use and land cover investigation. A large number of approaches exploiting different kinds of low-level features have been proposed in the literature. Engineers are often frustrated by their conclusions, and a systematic assessment of various low-level features for VHR remote sensing image classification is needed. In this work, we first perform an extensive evaluation of eight features, including HOG, dense SIFT, SSIM, GIST, Geo color, LBP, Texton and Tiny images, for the classification of three publicly available datasets. Second, we propose to transfer ground-level scene attributes to remote sensing images. Third, we combine both low-level features and mid-level visual attributes to further improve the classification performance. Experimental results demonstrate that i) dense SIFT and HOG features are more robust than other features for VHR scene image description, ii) visual attributes compete with a combination of low-level features, and iii) combining multiple features achieves the best performance under different settings.

  8. Sketch of a Noisy Channel Model for the Translation Process

    DEFF Research Database (Denmark)

    Carl, Michael

    The paper develops a Noisy Channel Model for the translation process that is based on actual user activity data. It builds on the monitor model and makes a distinction between early, automatic and late, conscious translation processes: while early priming processes are at the basis of a "literal default rendering" procedure, later conscious processes are triggered by a monitor who interferes when something goes wrong. An attempt is made to explain monitor activities with relevance-theoretic concepts, according to which a translator needs to ensure the similarity of explicatures and implicatures of the source and the target texts. It is suggested that events and parameters in the model need to be measurable and quantifiable in the user activity data so as to trace back monitoring activities in the translation process data.

  9. Recent progress in low-level gamma imaging

    International Nuclear Information System (INIS)

    Mahe, C.; Girones, Ph.; Lamadie, F.; Le Goaller, C.

    2007-01-01

    The CEA's Aladin gamma imaging system has been operated successfully for several years in nuclear plants and during decommissioning projects with additional tools such as gamma spectrometry detectors and dose rate probes. The radiological information supplied by these devices is becoming increasingly useful for establishing robust and optimized decommissioning scenarios. Recent technical improvements allow this gamma imaging system to be operated in low-level applications and with shorter acquisition times suitable for decommissioning projects. The compact portable system can be used in places inaccessible to operators. It is quick and easy to implement, notably for onsite component characterization. Feasibility trials and in situ measurements were recently carried out under low-level conditions, mainly on waste packages and glove boxes for decommissioning projects. This paper describes recent low-level in situ applications. These characterization campaigns mainly concerned gamma emitters with γ energy < 700 keV. In many cases, the localization of hot spots by gamma camera was confirmed by additional measurements such as dose rate mapping and gamma spectrometry measurements. These complementary techniques associated with advanced calculation codes (MCNP, Mercure 6.2, Visiplan and Siren) offer a mobile and compact tool for specific assessment of waste packages and glove boxes. (authors)

  10. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. The ground-truth images were obtained by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained so that, starting from unprocessed input images, the quality of the output image stayed close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of the suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
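
    As a rough sketch of the rCAE idea, the PyTorch module below combines one pooling/upsampling pair with a residual connection so the network learns a correction to the noisy input; layer counts, kernel sizes and channel widths are illustrative, not the paper's configuration.

        # Sketch: a tiny residual convolutional autoencoder for denoising.
        import torch.nn as nn

        class TinyRCAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.encode = nn.Sequential(
                    nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(),
                    nn.MaxPool2d(2),                  # one pooling layer
                    nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
                )
                self.decode = nn.Sequential(
                    nn.Upsample(scale_factor=2),      # matching upsampling layer
                    nn.Conv2d(32, 1, 5, padding=2),
                )

            def forward(self, x):                     # x: (N, 1, H, W), H and W even
                # Residual: the network learns the correction to the noisy input
                return x + self.decode(self.encode(x))

        # Training pairs would be (noisy input, CLAHE-processed ground truth),
        # e.g. with an L2 loss: nn.MSELoss()(model(noisy), clahe_target).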

  11. Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    KAUST Repository

    Migliorati, Giovanni

    2015-08-28

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
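
    The setting is easy to mimic numerically: sample random points from a given measure, evaluate the target with noise, and solve the discrete least-squares problem on a polynomial space. The sketch below (NumPy, uniform measure, Legendre basis, illustrative sizes) is only a toy version of the setting analysed above.

        # Sketch: discrete least squares from noisy evaluations at random points.
        import numpy as np

        rng = np.random.default_rng(3)
        f = lambda x: np.cos(np.pi * x)

        n_samples, degree = 200, 8                     # stability wants n >> dim
        x = rng.uniform(-1.0, 1.0, n_samples)          # uniform sampling measure
        y = f(x) + rng.normal(0.0, 0.05, n_samples)    # noisy pointwise evaluations

        V = np.polynomial.legendre.legvander(x, degree)   # basis matrix
        coef, *_ = np.linalg.lstsq(V, y, rcond=None)

        # Mean-square error w.r.t. the sampling measure, approximated on a grid
        xx = np.linspace(-1.0, 1.0, 2000)
        err = np.sqrt(np.mean((np.polynomial.legendre.legval(xx, coef) - f(xx)) ** 2))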

  12. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  13. Performance evaluation of 2D image registration algorithms with the numeric image registration and comparison platform

    International Nuclear Information System (INIS)

    Gerganov, G.; Kuvandjiev, V.; Dimitrova, I.; Mitev, K.; Kawrakow, I.

    2012-01-01

    The objective of this work is to present the capabilities of the NUMERICS web platform for evaluating the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for the registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques, such as elastic thin-plate spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluating the quality of the image registration process. We demonstrate the performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions such as mutual information, correlation coefficient, and sum of squared differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)

  14. Extracting physics of life at the molecular level: A review of single-molecule data analyses.

    Science.gov (United States)

    Colomb, Warren; Sarkar, Susanta K

    2015-06-01

    Studying individual biomolecules at the single-molecule level has proved very insightful recently. Single-molecule experiments allow us to probe both the equilibrium and nonequilibrium properties as well as make quantitative connections with ensemble experiments and equilibrium thermodynamics. However, it is important to be careful about the analysis of single-molecule data because of the noise present and the lack of theoretical framework for processes far away from equilibrium. Biomolecular motion, whether it is free in solution, on a substrate, or under force, involves thermal fluctuations in varying degrees, which makes the motion noisy. In addition, the noise from the experimental setup makes it even more complex. The details of biologically relevant interactions, conformational dynamics, and activities are hidden in the noisy single-molecule data. As such, extracting biological insights from noisy data is still an active area of research. In this review, we will focus on analyzing both fluorescence-based and force-based single-molecule experiments and gaining biological insights at the single-molecule level. Inherently nonequilibrium nature of biological processes will be highlighted. Simulated trajectories of biomolecular diffusion will be used to compare and validate various analysis techniques. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Experimental test of the strongly nonclassical character of a noisy squeezed single-photon state

    DEFF Research Database (Denmark)

    Jezek, M.; Tipsmark, A.; Dong, R.

    2012-01-01

    We experimentally verify the quantum non-Gaussian character of a conditionally generated noisy squeezed single-photon state with a positive Wigner function. Employing an optimized witness based on probabilities of squeezed vacuum and squeezed single-photon states, we prove that the state cannot be expressed as a mixture of Gaussian states. In our experiment, the non-Gaussian state is generated by conditional subtraction of a single photon from a squeezed vacuum state. The state is probed with a homodyne detector and the witness is determined by averaging a suitable pattern function over the measured...

  16. Numerical modeling of optical coherent transient processes with complex configurations-III: Noisy laser source

    International Nuclear Information System (INIS)

    Chang Tiejun; Tian Mingzhen

    2007-01-01

    A previously developed numerical model based on the Maxwell-Bloch equations was modified to simulate optical coherent transient and spectral hole burning processes with noisy laser sources. Random-walk phase noise was simulated using laser-phase sequences generated numerically according to the normal distribution of the phase shift. The noise model was tested by comparing the simulated spectral hole burning effect with the analytical solution. The noise effects on a few typical optical coherent transient processes were investigated using this numerical tool. Flicker and random-walk frequency noises were considered in the accumulation process.
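
    Generating such a random-walk (Wiener) phase sequence is a one-liner with cumulative sums; in the sketch below the increment variance 2*pi*linewidth*dt, which yields a Lorentzian line of the stated width, is a standard textbook relation, and all parameter names are illustrative.

        # Sketch: a noisy laser field with random-walk phase noise.
        import numpy as np

        def noisy_field(n_steps, dt, omega, linewidth, seed=0):
            rng = np.random.default_rng(seed)
            # Normally distributed phase increments => Wiener phase diffusion
            dphi = rng.normal(0.0, np.sqrt(2.0 * np.pi * linewidth * dt), n_steps)
            phi = np.cumsum(dphi)
            t = np.arange(n_steps) * dt
            return np.exp(1j * (omega * t + phi))   # complex field envelope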

  17. Teleportation is necessary for faithful quantum state transfer through noisy channels of maximal rank

    International Nuclear Information System (INIS)

    Romano, Raffaele; Loock, Peter van

    2010-01-01

    Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the channel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.

  18. Quantum Privacy Amplification and the Security of Quantum Cryptography over Noisy Channels

    International Nuclear Information System (INIS)

    Deutsch, D.; Ekert, A.; Jozsa, R.; Macchiavello, C.; Popescu, S.; Sanpera, A.

    1996-01-01

    Existing quantum cryptographic schemes are not, as they stand, operable in the presence of noise on the quantum communication channel. Although they become operable if they are supplemented by classical privacy-amplification techniques, the resulting schemes are difficult to analyze and have not been proved secure. We introduce the concept of quantum privacy amplification and a cryptographic scheme incorporating it which is provably secure over a noisy channel. The scheme uses an "entanglement purification" procedure which, because it requires only a few quantum controlled-NOT and single-qubit operations, could be implemented using technology that is currently being developed. © 1996 The American Physical Society.

  19. Dynamics of a quantum two-level system under the action of phase-diffusion field

    Energy Technology Data Exchange (ETDEWEB)

    Sobakinskaya, E.A. [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation); Pankratov, A.L., E-mail: alp@ipm.sci-nnov.ru [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation); Vaks, V.L. [Institute for Physics of Microstructures of RAS, Nizhny Novgorod, 603950 (Russian Federation)

    2012-01-09

    We study the behavior of a quantum two-level system interacting with a noisy phase-diffusion field. The dynamics is shown to split into two regimes, determined by the coherence time of the phase-diffusion field. For both regimes we present a model of the quantum system's behavior and discuss possible applications of the obtained effect for spectroscopy. In particular, the obtained analytical formula for the macroscopic polarization demonstrates that the phase-diffusion field does not affect the absorption line shape, which opens up an intriguing possibility of noisy spectroscopy based on broadband sources with a Lorentzian line shape. -- Highlights: ► We study the dynamics of a quantum system interacting with a noisy phase-diffusion field. ► At short times the phase-diffusion field induces polarization in the quantum system. ► At long times the noise leads to polarization decay and heating of the quantum system. ► A simple model of the interaction is derived. ► The application of the described effects to spectroscopy is discussed.

  20. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general... to the specialized soft pattern matching techniques which work better for text. Template-based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.

  1. Model-based failure detection for cylindrical shells from noisy vibration measurements.

    Science.gov (United States)

    Candy, J V; Fisher, K A; Guidry, B L; Chambers, D H

    2014-12-01

    Model-based processing is a theoretically sound methodology to address difficult objectives in complex physical problems involving multi-channel sensor measurement systems. It involves the incorporation of analytical models of both physical phenomenology (complex vibrating structures, noisy operating environment, etc.) and the measurement processes (sensor networks and including noise) into the processor to extract the desired information. In this paper, a model-based methodology is developed to accomplish the task of online failure monitoring of a vibrating cylindrical shell externally excited by controlled excitations. A model-based processor is formulated to monitor system performance and detect potential failure conditions. The objective of this paper is to develop a real-time, model-based monitoring scheme for online diagnostics in a representative structural vibrational system based on controlled experimental data.

  2. Mobile robot trajectory tracking using noisy RSS measurements: an RFID approach.

    Science.gov (United States)

    Miah, M Suruz; Gueaieb, Wail

    2014-03-01

    Most RF beacon-based mobile robot navigation techniques rely on approximating line-of-sight (LOS) distances between the beacons and the robot. This is mostly performed using the robot's received signal strength (RSS) measurements from the beacons. However, an accurate mapping between the RSS measurements and the LOS distance is almost impossible to achieve in reverberant environments. This paper presents a partially-observed feedback controller for a wheeled mobile robot where the feedback signal is in the form of noisy RSS measurements emitted from radio frequency identification (RFID) tags. The proposed controller requires neither an accurate mapping between the LOS distance and the RSS measurements, nor the linearization of the robot model. The controller performance is demonstrated through numerical simulations and real-time experiments. © 2013 Published by ISA. All rights reserved.

  3. AggNet: Deep Learning From Crowds for Mitosis Detection in Breast Cancer Histology Images.

    Science.gov (United States)

    Albarqouni, Shadi; Baur, Christoph; Achilles, Felix; Belagiannis, Vasileios; Demirci, Stefanie; Navab, Nassir

    2016-05-01

    The lack of publicly available ground-truth data has been identified as the major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Though crowdsourcing has enabled annotation of large-scale databases for real-world images, its application for biomedical purposes requires a deeper understanding and hence a more precise definition of the actual annotation task. The fact that expert tasks are being outsourced to non-expert users may lead to noisy annotations introducing disagreement between users. Despite being a valuable resource for learning annotation models from crowdsourcing, conventional machine-learning methods may have difficulties dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN) via an additional crowdsourcing layer (AggNet). Besides, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected from crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How does the choice of annotation and aggregation affect the accuracy? Our experimental setup involved Annot8, a self-implemented web-platform based on the Crowdflower API realizing image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNN learning from crowd annotations and prove the necessity of data aggregation integration.

  4. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  5. Effect of glucose level on brain FDG-PET images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, In Young; Lee, Yong Ki; Ahn, Sung Min [Dept. of Radiological Science, Gachon University, Seongnam (Korea, Republic of)

    2017-06-15

    In addition to tumors, normal tissues, such as the brain and myocardium, can take up {sup 18}F-FDG, and the amount of {sup 18}F-FDG uptake by normal tissues can be altered by the surrounding environment. Therefore, a process is necessary during which the contrast between the tumor and normal tissues can be enhanced. Thus, this study examines the effects of glucose levels on FDG PET images of brain tissue, which features high glucose activity at all times, in small animals. Micro PET scans were performed on fourteen mice after injecting {sup 18}F-FDG. The images were compared in relation to fasting. The findings showed that the mean SUV value was 0.84 higher in fasted mice than in non-fasted mice. During observation, the images from non-fasted mice showed high accumulation in organs other than the brain, with increased surrounding noise. In addition, compared to the non-fasted mice, the fasted mice showed higher early uptake and curve increase. The findings of this study suggest that fasting is important in assessing brain function in brain PET using {sup 18}F-FDG. Additional studies to investigate whether caffeine levels and other preprocessing items have an impact on the acquired images would contribute to reducing radiation exposure in patients.

  6. Effect of glucose level on brain FDG-PET images

    International Nuclear Information System (INIS)

    Kim, In Young; Lee, Yong Ki; Ahn, Sung Min

    2017-01-01

    In addition to tumors, normal tissues, such as the brain and myocardium, can take up 18F-FDG, and the amount of 18F-FDG uptake by normal tissues can be altered by the surrounding environment. Therefore, a process is necessary during which the contrast between the tumor and normal tissues can be enhanced. Thus, this study examines the effects of glucose levels on FDG PET images of brain tissue, which features high glucose activity at all times, in small animals. Micro PET scans were performed on fourteen mice after injecting 18F-FDG. The images were compared in relation to fasting. The findings showed that the mean SUV value was 0.84 higher in fasted mice than in non-fasted mice. During observation, the images from non-fasted mice showed high accumulation in organs other than the brain, with increased surrounding noise. In addition, compared to the non-fasted mice, the fasted mice showed higher early uptake and curve increase. The findings of this study suggest that fasting is important in assessing brain function in brain PET using 18F-FDG. Additional studies to investigate whether caffeine levels and other preprocessing items have an impact on the acquired images would contribute to reducing radiation exposure in patients.

  7. Annotating images by mining image search results.

    Science.gov (United States)

    Wang, Xin-Jing; Zhang, Lei; Li, Xirong; Ma, Wei-Ying

    2008-11-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search results. Some 2.4 million images with their surrounding text are collected from a few photo forums to support this approach. The entire process is formulated in a divide-and-conquer framework where a query keyword is provided along with the uncaptioned image to improve both the effectiveness and efficiency. This is helpful when the collected data set is not dense everywhere. In this sense, our approach contains three steps: 1) the search process to discover visually and semantically similar search results, 2) the mining process to identify salient terms from textual descriptions of the search results, and 3) the annotation rejection process to filter out noisy terms yielded by Step 2. To ensure real-time annotation, two key techniques are leveraged: one is to map the high-dimensional image visual features into hash codes, the other is to implement it as a distributed system, of which the search and mining processes are provided as Web services. As a typical result, the entire process finishes in less than 1 second. Since no training data set is required, our approach enables annotating with unlimited vocabulary and is highly scalable and robust to outliers. Experimental results on both real Web images and a benchmark image data set show the effectiveness and efficiency of the proposed algorithm. It is also worth noting that, although the entire approach is illustrated within the divide-and-conquer framework, a query keyword is not crucial to our current implementation. We provide experimental results to prove this.

  8. Total variation regularization in measurement and image space for PET reconstruction

    KAUST Repository

    Burger, M

    2014-09-18

    © 2014 IOP Publishing Ltd. The aim of this paper is to test and analyse a novel technique for image reconstruction in positron emission tomography, which is based on (total variation) regularization on both the image space and the projection space. We formulate our variational problem considering both total variation penalty terms on the image and on an idealized sinogram to be reconstructed from a given Poisson distributed noisy sinogram. We prove existence, uniqueness and stability results for the proposed model and provide some analytical insight into the structures favoured by joint regularization. For the numerical solution of the corresponding discretized problem we employ the split Bregman algorithm and extensively test the approach in comparison to standard total variation regularization on the image. The numerical results show that an additional penalty on the sinogram performs better on reconstructing images with thin structures.
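
    The paper's joint image/sinogram model is specialized, but the effect of a TV penalty on Poisson-noisy data is easy to reproduce. A minimal sketch, assuming scikit-image is available and using its Chambolle solver in place of the paper's split Bregman algorithm (phantom and weight are illustrative):

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle

        rng = np.random.default_rng(0)
        truth = np.zeros((64, 64)); truth[16:48, 16:48] = 10.0  # piecewise-constant activity
        noisy = rng.poisson(truth).astype(float)                # Poisson counting noise

        recon = denoise_tv_chambolle(noisy, weight=1.5)         # larger weight -> smoother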

  9. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  10. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  11. Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

    OpenAIRE

    Bhargava, Preeti; Spasojevic, Nemanja; Hu, Guoning

    2017-01-01

    In this paper, we describe the Lithium Natural Language Processing (NLP) system - a resource-constrained, high-throughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and acad...

  12. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography

    International Nuclear Information System (INIS)

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-01

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization in our approach, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data. (paper)
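
    A minimal sketch of the patch-dictionary idea, with scikit-learn's dictionary learner and a Gaussian-noise least-squares fit standing in for the paper's Poisson-likelihood PET formulation (patch size, dictionary size and sparsity level below are illustrative):

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

        rng = np.random.default_rng(0)
        image = np.zeros((64, 64)); image[16:48, 16:48] = 1.0
        noisy = image + 0.15 * rng.normal(size=image.shape)

        patches = extract_patches_2d(noisy, (8, 8))        # overlapping 8x8 patches
        flat = patches.reshape(len(patches), -1)
        mean = flat.mean(axis=1, keepdims=True)            # remove per-patch DC

        # learn a 64-atom dictionary, then sparse-code each patch with OMP
        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, max_iter=10,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=4, random_state=0)
        code = dico.fit(flat - mean).transform(flat - mean)
        recon = (code @ dico.components_ + mean).reshape(patches.shape)
        denoised = reconstruct_from_patches_2d(recon, noisy.shape)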

  13. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    Full Text Available We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.
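
    The core synthesis idea, frames of sinusoids with randomly drawn frequencies and phases overlap-added into a noise-like signal, can be sketched in a few lines. All parameters below (sample rate, band, partial count) are illustrative, not the CNSS analysis output:

        import numpy as np

        fs = 16000                                  # sample rate (Hz), assumed
        frame, n_frames, n_partials = 256, 200, 30  # frame length, count, sinusoids per frame
        hop = frame // 2                            # 50% overlap-add
        f_lo, f_hi = 500.0, 3000.0                  # target spectral band of the noise

        window = np.hanning(frame)
        t = np.arange(frame) / fs
        out = np.zeros(hop * (n_frames - 1) + frame)
        rng = np.random.default_rng(0)

        for k in range(n_frames):
            freqs = rng.uniform(f_lo, f_hi, n_partials)       # random frequencies...
            phases = rng.uniform(0.0, 2 * np.pi, n_partials)  # ...and random phases
            grain = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
            out[k * hop:k * hop + frame] += window * grain / n_partials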

  14. Diffusion tensor imaging of the human skeletal muscle: contributions and applications

    International Nuclear Information System (INIS)

    Neji, Radhouene

    2010-01-01

    In this thesis, we present several techniques for the processing of diffusion tensor images. They span a wide range of tasks such as estimation and regularization, clustering and segmentation, as well as registration. The variational framework proposed for recovering a tensor field from noisy diffusion weighted images exploits the fact that diffusion data represent populations of fibers and therefore each tensor can be reconstructed using a weighted combination of tensors lying in its neighborhood. The segmentation approach operates both at the voxel and the fiber tract levels. It is based on the use of Mercer kernels over Gaussian diffusion probabilities to model tensor similarity and spatial interactions, allowing the definition of fiber metrics that combine information from spatial localization and diffusion tensors. Several clustering techniques can be subsequently used to segment tensor fields and fiber tractographies. Moreover, we show how to develop supervised extensions of these algorithms. The registration algorithm uses probability kernels in order to match moving and target images. The deformation consistency is assessed using the distortion induced in the distances between neighboring probabilities. Discrete optimization is used to seek an optimum of the defined objective function. The experimental validation is done over a dataset of manually segmented diffusion images of the lower leg muscle for healthy and diseased subjects. The results of the techniques developed throughout this thesis are promising. (author)

  15. Low level cloud motion vectors from Kalpana-1 visible images

    Indian Academy of Sciences (India)

    In this paper, an attempt has been made to retrieve low-level cloud motion vectors using Kalpana-1 visible (VIS) images every half hour. The VIS channel provides better detection of low-level clouds, which remain obscure in thermal IR ...

  16. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed points, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e., Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a contrast experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification, with an overall classification precision of 94.01% and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.

  17. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio; Sawlan, Zaid A; Scavino, Marco; Tempone, Raul

    2016-01-01

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. Such function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.
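
    A minimal sketch of the inference setup, substituting a brute-force grid posterior for the paper's analytical marginalization and Laplace approximation (the grid, prior and noise level below are assumptions):

        import numpy as np

        def solve_heat(kappa, n=50, steps=200, dt=1e-4):
            """Explicit finite differences for u_t = kappa * u_xx on [0, 1], u = 0 at the ends."""
            dx = 1.0 / (n - 1)
            u = np.sin(np.pi * np.linspace(0.0, 1.0, n))   # known initial condition
            for _ in range(steps):
                u[1:-1] += kappa * dt / dx ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return u

        true_kappa, sigma = 0.7, 0.01
        rng = np.random.default_rng(1)
        data = solve_heat(true_kappa) + sigma * rng.normal(size=50)  # synthetic noisy data

        kappas = np.linspace(0.1, 2.0, 200)                # grid over the diffusivity
        log_post = np.array([
            -0.5 * np.sum((solve_heat(k) - data) ** 2) / sigma ** 2  # Gaussian likelihood
            - 0.5 * np.log(k) ** 2                                   # lognormal prior (up to constants)
            for k in kappas])
        post = np.exp(log_post - log_post.max())
        print("posterior mode:", kappas[post.argmax()])    # close to true_kappa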

  18. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  19. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2015-01-07

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. Such function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.

  20. Bayesian Inference for Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2016-01-06

    In this work we develop a hierarchical Bayesian setting to infer unknown parameters in initial-boundary value problems (IBVPs) for one-dimensional linear parabolic partial differential equations. Noisy boundary data and known initial condition are assumed. We derive the likelihood function associated with the forward problem, given some measurements of the solution field subject to Gaussian noise. Such function is then analytically marginalized using the linearity of the equation. Gaussian priors have been assumed for the time-dependent Dirichlet boundary values. Our approach is applied to synthetic data for the one-dimensional heat equation model, where the thermal diffusivity is the unknown parameter. We show how to infer the thermal diffusivity parameter when its prior distribution is lognormal or modeled by means of a space-dependent stationary lognormal random field. We use the Laplace method to provide approximated Gaussian posterior distributions for the thermal diffusivity. Expected information gains and predictive posterior densities for observable quantities are numerically estimated for different experimental setups.

  1. X-ray imaging and spectro-imaging techniques for investigating the intergalactic medium properties within merging clusters of galaxies

    International Nuclear Information System (INIS)

    Bourdin, Herve

    2004-01-01

    Clusters of galaxies are gravitationally bound matter over-densities which are filled with a hot and ionized gas emitting in X-rays. They form during merging phases of subgroups, so that the gas undergoes shock and mixing processes which perturb its physical properties at hydrostatic equilibrium. In order to map the spatial distributions of the gas emissivity, temperature and entropy as observed by X-ray telescopes, we compared different multi-scale imaging algorithms, and also developed and tested a new multi-scale spectro-imaging algorithm. With this algorithm, the searched parameter is first estimated from a count statistics within different spatial resolution elements, and its space-frequency variations are then coded by Haar wavelet coefficients. The optimal spatial distribution of the parameter is finally restored by thresholding the noisy wavelet transform. (author) [fr
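
    The restoration step, thresholding a noisy Haar wavelet transform, can be sketched with pywt; the threshold here is fixed by hand rather than derived from the count statistics as in the thesis:

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        image = np.zeros((64, 64)); image[20:44, 20:44] = 1.0
        noisy = image + 0.2 * rng.normal(size=image.shape)

        coeffs = pywt.wavedec2(noisy, 'haar', level=3)     # multi-scale Haar transform
        thr = 0.3                                          # fixed by hand in this sketch
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode='hard') for c in detail)
            for detail in coeffs[1:]]
        restored = pywt.waverec2(denoised, 'haar')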

  2. Two-Level Evaluation on Sensor Interoperability of Features in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Ya-Shuo Li

    2012-03-01

    Full Text Available Features used in fingerprint segmentation significantly affect the segmentation performance. Various features exhibit different discriminating abilities on fingerprint images derived from different sensors. One feature which has better discriminating ability on images derived from a certain sensor may not adapt to segment images derived from other sensors. This degrades the segmentation performance. This paper empirically analyzes the sensor interoperability problem of segmentation feature, which refers to the feature’s ability to adapt to the raw fingerprints captured by different sensors. To address this issue, this paper presents a two-level feature evaluation method, including the first level feature evaluation based on segmentation error rate and the second level feature evaluation based on decision tree. The proposed method is performed on a number of fingerprint databases which are obtained from various sensors. Experimental results show that the proposed method can effectively evaluate the sensor interoperability of features, and the features with good evaluation results acquire better segmentation accuracies of images originating from different sensors.

  3. The MISO Wiretap Channel with Noisy Main Channel Estimation in the High Power Regime

    KAUST Repository

    Rezki, Zouheir

    2017-02-07

    We improve upon our previous upper bound on the secrecy capacity of the wiretap channel with multiple transmit antennas and single-antenna receivers, with noisy main channel state information (CSI) at the transmitter (CSI-T). Specifically, we show that if the main CSI error does not scale with the power budget at the transmitter P̅, then the secrecy capacity is bounded above essentially by log log(P̅), yielding a secure degree of freedom (sdof) equal to zero. However, if the main CSI error scales as O(P̅-β), for β ∈ [0,1], then the sdof is equal to β.
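
    Restating the scaling claim as a worked formula (a paraphrase in the paper's notation, not a quotation): for a main-CSI error scaling as O(\bar{P}^{-\beta}), the sdof result says that at high power

        C_s(\bar{P}) \;=\; \beta \,\log \bar{P} \;+\; o(\log \bar{P}),
        \qquad
        \mathrm{sdof} \;=\; \lim_{\bar{P} \to \infty} \frac{C_s(\bar{P})}{\log \bar{P}} \;=\; \beta,
        \qquad \beta \in [0, 1],

    while an error that does not vanish with \bar{P} (the \beta = 0 case) leaves only C_s(\bar{P}) = O(\log \log \bar{P}).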

  4. The MISO Wiretap Channel with Noisy Main Channel Estimation in the High Power Regime

    KAUST Repository

    Rezki, Zouheir; Chaaban, Anas; Alomair, Basel; Alouini, Mohamed-Slim

    2017-01-01

    We improve upon our previous upper bound on the secrecy capacity of the wiretap channel with multiple transmit antennas and single-antenna receivers, with noisy main channel state information (CSI) at the transmitter (CSI-T). Specifically, we show that if the main CSI error does not scale with the power budget at the transmitter P̅, then the secrecy capacity is bounded above essentially by log log(P̅), yielding a secure degree of freedom (sdof) equal to zero. However, if the main CSI error scales as O(P̅-β), for β ∈ [0,1], then the sdof is equal to β.

  5. Victims’ language: (noisy silences) and (grave parodies) to talk (unknowingly) about individuals’ forced disappearance

    Directory of Open Access Journals (Sweden)

    Gabriel Gatti

    2011-07-01

    Full Text Available Based on the results of research carried out between 2005 and 2008 on the social universes constructed in Argentina and Uruguay around the figure of the disappeared detainee, this piece aims to systematize several answers to one of the more complex problems this figure of repression poses: that of the representation of the facts and their consequences. This work focuses not on all possible answers, but on several of the more innovative and creative ones: those betting on talking about the impossibility of talking (the noisy silences), and those betting on forcing language up to its limit (grave parodies).

  6. Chaotic annealing with hypothesis test for function optimization in noisy environments

    International Nuclear Information System (INIS)

    Pan Hui; Wang Ling; Liu Bo

    2008-01-01

    As a special mechanism to avoid being trapped in local minima, the ergodicity property of chaos has been used as a novel searching technique for optimization problems, but there has been no work on chaos for optimization in noisy environments. In this paper, the performance of chaotic annealing (CA) for uncertain function optimization is investigated, and a new hybrid approach (namely CAHT) that combines CA and hypothesis test (HT) is proposed. In CAHT, the merits of CA are applied for good exploration and exploitation of the search space, and solution quality can be identified reliably by the hypothesis test to reduce repeated searches to some extent and to reasonably estimate solution performance. Simulation results and comparisons show that chaos is helpful to improve the performance of simulated annealing for uncertain function optimization, and that CAHT can further improve searching efficiency, quality and robustness.
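
    A minimal sketch of the two ingredients, a logistic-map chaotic search step inside an annealing schedule plus repeated evaluation to cope with noise; the paper's formal hypothesis test is replaced here by a simple sample-mean comparison:

        import numpy as np

        def noisy_f(x, rng):                          # noisy objective: (x - 1)^2 + noise
            return (x - 1.0) ** 2 + 0.1 * rng.normal()

        rng = np.random.default_rng(0)
        z = 0.345                                     # chaotic variable in (0, 1)
        x_best, f_best = 0.0, np.inf
        T, n_eval = 1.0, 10                           # "temperature" and re-evaluations

        for _ in range(500):
            z = 4.0 * z * (1.0 - z)                   # logistic map, ergodic for r = 4
            cand = x_best + T * (2.0 * z - 1.0)       # chaotic perturbation scaled by T
            f_cand = np.mean([noisy_f(cand, rng) for _ in range(n_eval)])
            if f_cand < f_best:                       # greedy acceptance for brevity
                x_best, f_best = cand, f_cand
            T *= 0.99                                 # annealing schedule

        print("best x:", x_best)                      # should approach the optimum x = 1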

  7. A Novel Approach in Text-Independent Speaker Recognition in Noisy Environment

    Directory of Open Access Journals (Sweden)

    Nona Heydari Esfahani

    2014-10-01

    Full Text Available In this paper, robust text-independent speaker recognition is taken into consideration. The proposed method operates on manually silence-removed utterances that are segmented into smaller speech units containing a few phones and at least one vowel. The segments are the basic units for long-term feature extraction. Sub-band entropy is directly extracted from each segment. A robust vowel detection method is then applied to each segment to separate a high-energy vowel that is used as the unit for pitch frequency and formant extraction. By applying a clustering technique, the extracted short-term features, namely MFCC coefficients, are combined with the long-term features. Experiments using an MLP classifier show that the average speaker recognition accuracy is 97.33% for clean speech and 61.33% in a noisy environment at -2 dB SNR, which shows improvement compared to other conventional methods.

  8. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section
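
    A minimal sketch of the discrepancy principle itself, with Tikhonov regularization and bisection standing in for the paper's TV/ADMM machinery and Newton refinement (the operator, noise level and bracket are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma = 100, 0.05
        i = np.arange(n)
        A = np.exp(-0.5 * (i[:, None] - i[None, :]) ** 2)   # Gaussian blur operator
        A /= A.sum(axis=1, keepdims=True)
        x_true = (np.abs(i - n / 2) < 10).astype(float)
        b = A @ x_true + sigma * rng.normal(size=n)

        def residual(lam):
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)  # Tikhonov solution
            return np.linalg.norm(A @ x - b)

        target = np.sqrt(n) * sigma              # Morozov: residual = expected noise norm
        lo, hi = 1e-8, 1.0                       # residual grows monotonically with lam
        for _ in range(60):                      # bisection in log space
            mid = np.sqrt(lo * hi)
            lo, hi = (mid, hi) if residual(mid) < target else (lo, mid)
        print("chosen lambda:", np.sqrt(lo * hi))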

  9. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    Science.gov (United States)

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly up to 50 and 100%, respectively, with increasing blending level of reconstruction. Also, ASIR has proven to modify the NPS curve shape. The MTF of ASIR reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP, in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.

  10. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  11. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  12. Remaining useful life prediction based on noisy condition monitoring signals using constrained Kalman filter

    International Nuclear Information System (INIS)

    Son, Junbo; Zhou, Shiyu; Sankavaram, Chaitanya; Du, Xinyu; Zhang, Yilu

    2016-01-01

    In this paper, a statistical prognostic method to predict the remaining useful life (RUL) of individual units based on noisy condition monitoring signals is proposed. The prediction accuracy of existing data-driven prognostic methods depends on the capability of accurately modeling the evolution of condition monitoring (CM) signals. Therefore, it is inevitable that the RUL prediction accuracy depends on the amount of random noise in CM signals. When signals are contaminated by a large amount of random noise, RUL prediction even becomes infeasible in some cases. To mitigate this issue, a robust RUL prediction method based on a constrained Kalman filter is proposed. The proposed method models the CM signals subject to a set of inequality constraints so that satisfactory prediction accuracy can be achieved regardless of the noise level of signal evolution. The advantageous features of the proposed RUL prediction method are demonstrated by both a numerical study and a case study with real-world data from automotive lead-acid batteries. - Highlights: • A computationally efficient constrained Kalman filter is proposed. • Proposed filter is integrated into an online failure prognosis framework. • A set of proper constraints significantly improves the failure prediction accuracy. • Promising results are reported in the application of battery failure prognosis.
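
    A minimal sketch of a projection-type constrained Kalman filter on a scalar degradation signal; the state model, the non-negativity constraint on the drift, and the failure threshold are illustrative, not the paper's:

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])        # state: [level, drift]
        H = np.array([[1.0, 0.0]])                    # we observe the noisy level
        Q = np.diag([1e-4, 1e-5])                     # process noise
        R = np.array([[0.04]])                        # measurement noise

        rng = np.random.default_rng(0)
        truth = 0.02 * np.arange(1, 101)              # true degradation path
        ys = truth + 0.2 * rng.normal(size=truth.size)

        x, P = np.array([0.0, 0.01]), np.eye(2)
        for y in ys:
            x, P = F @ x, F @ P @ F.T + Q             # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ (y - H @ x)).ravel()         # update
            P = (np.eye(2) - K @ H) @ P
            x[1] = max(x[1], 0.0)                     # project: drift must be >= 0

        rul = (3.0 - x[0]) / max(x[1], 1e-6)          # steps to an assumed threshold of 3.0
        print("estimated RUL:", rul)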

  13. Direct Reconstruction of CT-based Attenuation Correction Images for PET with Cluster-Based Penalties

    Science.gov (United States)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Asma, Evren; Kinahan, Paul E.

    2015-01-01

    Extremely low-dose CT acquisitions for the purpose of PET attenuation correction will have a high level of noise and biasing artifacts due to factors such as photon starvation. This work explores a priori knowledge appropriate for CT iterative image reconstruction for PET attenuation correction. We investigate the maximum a posteriori (MAP) framework with cluster-based, multinomial priors for the direct reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction was modeled as a Poisson log-likelihood with prior terms consisting of quadratic (Q) and mixture (M) distributions. The attenuation map is assumed to have values in 4 clusters: air+background, lung, soft tissue, and bone. Under this assumption, the mixture prior was a probability density function consisting of one exponential and three Gaussian distributions. The relative proportion of each cluster was jointly estimated during each voxel update of the direct iterative coordinate descent (dICD) method. Noise-free data were generated from the NCAT phantom and Poisson noise was added. Reconstruction with FBP (ramp filter) was performed on the noise-free (ground truth) and noisy data. For the noisy data, dICD reconstruction was performed with combinations of different prior strength parameters (β and γ) for the Q- and M-penalties. The combined quadratic and mixture penalties reduce the RMSE by 18.7% compared to post-smoothed iterative reconstruction, and by only 0.7% compared to the quadratic penalty alone. For direct PET attenuation map reconstruction from ultra-low dose CT acquisitions, the combination of quadratic and mixture priors offers regularization of both variance and bias and is a potential method to derive attenuation maps with negligible patient dose. However, the small improvement in quantitative accuracy relative to the substantial increase in algorithm complexity does not currently justify the use of mixture-based PET attenuation priors for reconstruction of CT

  14. Task-driven image acquisition and reconstruction in cone-beam CT

    International Nuclear Information System (INIS)

    Gang, Grace J; Stayman, J Webster; Siewerdsen, Jeffrey H; Ehtiati, Tina

    2015-01-01

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the

  15. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation, which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.

  16. Bit-level plane image encryption based on coupled map lattice with time-varying delay

    Science.gov (United States)

    Lv, Xiupin; Liao, Xiaofeng; Yang, Bo

    2018-04-01

    Most of the existing image encryption algorithms have two basic properties: confusion and diffusion in a pixel-level plane based on various chaotic systems. Actually, permutation in a pixel-level plane cannot change the statistical characteristics of an image, and many of the existing color image encryption schemes utilize the same method to encrypt the R, G and B components, which means that the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly with finite precision in computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level-plane encryption of color images to solve the above issues. A spatiotemporal chaotic system with both a much longer period under digitization and much better cryptographic performance is recommended. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level-plane image encryption algorithm greatly reduces the statistical characteristics of an image through the scrambling processing. The R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out and all the experimental results illustrate that the proposed image encryption algorithm is highly secure and, at the same time, demonstrates superior performance.
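
    A minimal sketch of the bit-level-plane scrambling step, with a keyed pseudorandom permutation standing in for the keystream that the paper derives from the delayed coupled map lattice:

        import numpy as np

        def scramble_bitplanes(img_u8, key=1234):
            """Permute all bit-plane bits of a uint8 image with a keyed permutation."""
            rng = np.random.default_rng(key)
            bits = np.unpackbits(img_u8[..., None], axis=-1)   # (H, W, 8) bit planes
            perm = rng.permutation(bits.size)
            shuffled = bits.reshape(-1)[perm].reshape(bits.shape)
            return np.packbits(shuffled, axis=-1)[..., 0], perm

        def unscramble_bitplanes(enc_u8, perm):
            """Invert the permutation and reassemble the bit planes."""
            bits = np.unpackbits(enc_u8[..., None], axis=-1).reshape(-1)
            inv = np.empty_like(perm); inv[perm] = np.arange(perm.size)
            orig = bits[inv].reshape(*enc_u8.shape, 8)
            return np.packbits(orig, axis=-1)[..., 0]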

  17. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD. This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
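
    The MAD baseline that the proposed training-based methods are compared against is compact enough to sketch as commonly implemented (the wavelet choice below is an assumption):

        import numpy as np
        import pywt

        def mad_sigma(image):
            """Estimate the noise sigma from the finest diagonal (HH) wavelet band."""
            _, (_, _, hh) = pywt.dwt2(image, 'db8')
            return np.median(np.abs(hh)) / 0.6745   # MAD -> sigma under Gaussian noise

        rng = np.random.default_rng(0)
        clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
        print(mad_sigma(clean + 0.1 * rng.normal(size=clean.shape)))  # close to 0.1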

  18. Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments

    Energy Technology Data Exchange (ETDEWEB)

    Kurt Derr; Milos Manic

    2009-05-01

    Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is the exploration of an unknown environment with the goal of finding one or more targets at unknown locations using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS (received signal strength) weighting factor to guide robots to locate targets in high-risk environments. The approach was developed and analyzed on multi-robot single- and multiple-target search, and further evaluated for multi-robot, multi-target search in noisy environments. The experimental results demonstrate how the availability of the radio frequency signal can significantly affect the robots' search time to reach a target.
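
    A minimal sketch of PSO-based source seeking with a noisy RSS-like fitness; the paper's adaptive RSS weighting factor is not reproduced, and all constants are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)
        target = np.array([7.0, 3.0])                 # unknown source location

        def rss(pos):                                 # noisy signal strength (higher = closer)
            return -np.linalg.norm(pos - target) + 0.1 * rng.normal()

        n, w, c1, c2 = 12, 0.7, 1.5, 1.5              # robots, inertia, acceleration constants
        x = rng.uniform(0.0, 10.0, (n, 2))            # robot positions in a 10 x 10 arena
        v = np.zeros((n, 2))
        pbest, pbest_val = x.copy(), np.array([rss(p) for p in x])

        for _ in range(200):
            g = pbest[pbest_val.argmax()]             # swarm-best position so far
            r1, r2 = rng.random((n, 1)), rng.random((n, 1))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            vals = np.array([rss(p) for p in x])
            better = vals > pbest_val                 # update personal bests
            pbest[better], pbest_val[better] = x[better], vals[better]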

  19. Simultaneous reconstruction, segmentation, and edge enhancement of relatively piecewise continuous images with intensity-level information

    International Nuclear Information System (INIS)

    Liang, Z.; Jaszczak, R.; Coleman, R.; Johnson, V.

    1991-01-01

    A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that the intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64x64 array was constructed. The phantom had three contiguous regions, each with a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180 degrees, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented. Simultaneously, the edges of the regions were sharpened. Noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.

  20. Population coding in sparsely connected networks of noisy neurons.

    Science.gov (United States)

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  1. Population Coding in Sparsely Connected Networks of Noisy Neurons

    Directory of Open Access Journals (Sweden)

    Bryan Patrick Tripp

    2012-05-01

    Full Text Available This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behaviour. However, population coding theory has often ignored network structure, or assumed discrete, fully-connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we model a sheet of cortical neurons with sparse, primarily local connections, and find that a network with this structure can encode multiple internal state variables with high signal-to-noise ratio. However, in our model, although connection probability varies with the distance between neurons, we find that the connections cannot be instantiated at random according to these probabilities, but must have additional structure if information is to be encoded with high fidelity.

  2. Facilitation of listening comprehension by visual information under noisy listening condition

    Science.gov (United States)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity, at levels of -10 dB and -15 dB pink noise. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (=132 msec) or less, that the image was not helpful for comprehension when the delay was 8 frames (=264 msec) or more, and that in some cases of the largest delay (32 frames), the video image interfered with comprehension.

  3. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    Science.gov (United States)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit (OMP) CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  4. Using Image Gradients to Improve Robustness of Digital Image Correlation to Non-uniform Illumination: Effects of Weighting and Normalization Choices

    KAUST Repository

    Xu, Jiangping

    2015-03-05

    Changes in the light condition affect the solution of intensity-based digital image correlation algorithms. One natural way to decrease the influence of illumination is to consider the gradients of the image rather than the image itself when building the objective function. In this work, a weighted normalized gradient-based algorithm is proposed. This algorithm optimizes the sum-of-squared difference between the weighted normalized gradients of the reference and deformed images. Due to the lower sensitivity of the gradient to illumination variation, this algorithm is more robust and accurate than the intensity-based algorithm in the case of illumination variations. Yet, it comes with a higher sensitivity to noise that can be mitigated by designing the relevant weighting and normalization of the image gradient. Numerical results demonstrate that the proposed algorithm gives better results in cases of linear/non-linear space-based and non-linear gray-value-based illumination variation. The proposed algorithm still performs better than the intensity-based algorithm in the case of illumination variations and noisy data, provided the images are pre-smoothed with a Gaussian low-pass filter, in both numerical and experimental examples.

  5. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  6. Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.

    Science.gov (United States)

    Proença, Hugo

    2010-08-01

    Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions engender acquired noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time in respect to the size of the image, making the procedure suitable for real-time applications.

  7. An efficient multiple exposure image fusion in JPEG domain

    Science.gov (United States)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.

  8. 3D shape recovery from image focus using gray level co-occurrence matrix

    Science.gov (United States)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture, converting the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we propose the Gray Level Co-occurrence Matrix (GLCM) along with its statistical features for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features obtained from the joint probability distribution of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. This algorithm is deeply investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
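
    A minimal sketch of the GLCM focus measure over a focus stack, assuming scikit-image (whose recent versions name the functions graycomatrix/graycoprops); the Gaussian-mixture refinement step of the paper is omitted:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_focus(patch_u8):
            """GLCM contrast of a uint8 patch, used as its focus value."""
            glcm = graycomatrix(patch_u8, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            return graycoprops(glcm, 'contrast')[0, 0]

        def depth_map(stack, patch=16):
            """Depth from focus: per patch, the index of the sharpest image in the stack."""
            h, w = stack[0].shape
            depth = np.zeros((h // patch, w // patch), dtype=int)
            for i in range(0, h - patch + 1, patch):
                for j in range(0, w - patch + 1, patch):
                    focus = [glcm_focus(img[i:i + patch, j:j + patch]) for img in stack]
                    depth[i // patch, j // patch] = int(np.argmax(focus))
            return depth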

  9. RADIANCE DOMAIN COMPOSITING FOR HIGH DYNAMIC RANGE IMAGING

    Directory of Open Access Journals (Sweden)

    M.R. Renu

    2013-02-01

    Full Text Available High dynamic range imaging aims at creating an image with a range of intensity variations larger than the range supported by a camera sensor. Most commonly used methods combine multiple-exposure low dynamic range (LDR) images to obtain the high dynamic range (HDR) image. Available methods typically neglect the noise term while finding appropriate weighting functions to estimate the camera response function as well as the radiance map. We look at the HDR imaging problem in a denoising framework and aim at reconstructing a low-noise radiance map from noisy low dynamic range images, which is tone mapped to get the LDR equivalent of the HDR image. We propose a maximum a posteriori probability (MAP) based reconstruction of the HDR image using a Gibbs prior to model the radiance map, with total variation (TV) as the prior to avoid unnecessary smoothing of the radiance field. To make the computation with the TV prior efficient, we extend the majorize-minimize method of upper bounding the total variation by a quadratic function to our case, which has a nonlinear term arising from the camera response function. A theoretical justification for doing radiance-domain denoising as opposed to image-domain denoising is also provided.

  10. Image accuracy and representational enhancement through low-level, multi-sensor integration techniques

    International Nuclear Information System (INIS)

    Baker, J.E.

    1993-05-01

    Multi-Sensor Integration (MSI) is the combining of data and information from more than one source in order to generate a more reliable and consistent representation of the environment. The need for MSI derives largely from basic ambiguities inherent in our current sensor imaging technologies. These ambiguities exist as long as the mapping from reality to image is not 1-to-1. That is, if different ''realities'' lead to identical images, a single image cannot reveal the particular reality which was the truth. MSI techniques can be divided into three categories based on the relative information content of the original images with that of the desired representation: (1) ''detail enhancement,'' wherein the relative information content of the original images is less rich than the desired representation; (2) ''data enhancement,'' wherein the MSI techniques are concerned with improving the accuracy of the data rather than either increasing or decreasing the level of detail; and (3) ''conceptual enhancement,'' wherein the image contains more detail than is desired, making it difficult to easily recognize objects of interest. In conceptual enhancement one must group pixels corresponding to the same conceptual object and thereby reduce the level of extraneous detail. This research focuses on data and conceptual enhancement algorithms. To be useful in many real-world applications, e.g., autonomous or teleoperated robotics, real-time feedback is critical. But many MSI/image processing algorithms require significant processing time. This is especially true of feature extraction, object isolation, and object recognition algorithms due to their typical reliance on global or large neighborhood information. This research attempts to exploit the speed currently available in state-of-the-art digitizers and highly parallel processing systems by developing MSI algorithms based on pixel rather than global-level features.

  11. Geophysical Imaging of Sea-level Proxies in Beach-Ridge Deposits

    Science.gov (United States)

    Nielsen, L.; Emerich Souza, P.; Meldgaard, A.; Bendixen, M.; Kroon, A.; Clemmensen, L. B.

    2017-12-01

    We show ground-penetrating radar (GPR) reflection data collected over modern and fossil beach deposits from different localities along coastlines in meso-tidal regimes of Greenland and micro-tidal regimes of Denmark. The acquired reflection GPR sections show several similar characteristics but also some differences. A similar characteristic is the presence of downlapping reflections, where the downlap point is interpreted to mark the transition from upper shoreface to beachface deposits and, thus, be a marker of a level close to or at sea-level at the time of deposition. Differences in grain size of the investigated beach ridge system result in different scattering characteristics of the acquired GPR data. These differences call for tailored, careful processing of the GPR data for optimal imaging of internal beach ridge architecture. We outline elements of the GPR data processing of particular importance for optimal imaging. Moreover, we discuss advantages and challenges related to using GPR-based proxies of sea-level as compared to other methods traditionally used for establishment of curves of past sea-level variation.

  12. Image of a head of a law-enforcement body at the micro level (empirical experimentation)

    Directory of Open Access Journals (Sweden)

    D. G. Perednya

    2016-01-01

    Full Text Available The article examines the image of the head of a law-enforcement body. The subjects and objects of the image are described, and the inhomogeneity of the image is clarified. The method of examination at the micro level is briefly described. The focus is on the image formed in the minds of the law-enforcement body's team members who are subordinate to the object of the image. The state of the art is illustrated with the data obtained. The hypothesis of a negative image of the head in the minds of subordinates is disproved, and a contradiction between the image in the collective mind and in the social mind is shown.

  13. Intravital imaging of cardiac function at the single-cell level.

    Science.gov (United States)

    Aguirre, Aaron D; Vinegoni, Claudio; Sebas, Matt; Weissleder, Ralph

    2014-08-05

    Knowledge of cardiomyocyte biology is limited by the lack of methods to interrogate single-cell physiology in vivo. Here we show that contracting myocytes can indeed be imaged with optical microscopy at high temporal and spatial resolution in the beating murine heart, allowing visualization of individual sarcomeres and measurement of the single cardiomyocyte contractile cycle. Collectively, this has been enabled by efficient tissue stabilization, a prospective real-time cardiac gating approach, an image processing algorithm for motion-artifact-free imaging throughout the cardiac cycle, and a fluorescent membrane staining protocol. Quantification of cardiomyocyte contractile function in vivo opens many possibilities for investigating myocardial disease and therapeutic intervention at the cellular level.

  14. Multiengine Speech Processing Using SNR Estimator in Variable Noisy Environments

    Directory of Open Access Journals (Sweden)

    Ahmad R. Abu-El-Quran

    2012-01-01

    Full Text Available We introduce a multiengine speech processing system that can detect the location and type of an audio signal in variable noisy environments. The system detects the location of the audio source using a microphone array; it first examines the audio, determines whether it is speech or nonspeech, and then estimates the signal-to-noise ratio (SNR) using a Discrete-Valued SNR Estimator. Using this SNR value, instead of trying to adapt the speech signal to the speech processing system, we adapt the speech processing system to the surrounding environment of the captured speech signal. In this paper, we introduce the Discrete-Valued SNR Estimator and a multiengine classifier, using either Multiengine Selection or Multiengine Weighted Fusion, and we use SI as an example of the speech processing. The Discrete-Valued SNR Estimator achieves an accuracy of 98.4% in characterizing the environment's SNR. Compared to a conventional single-engine SI system, the improvement in accuracy was as high as 9.0% and 10.0% for Multiengine Selection and Multiengine Weighted Fusion, respectively.

  15. Stochastic perturbations in open chaotic systems: random versus noisy maps.

    Science.gov (United States)

    Bódai, Tamás; Altmann, Eduardo G; Endler, Antonio

    2013-04-01

    We investigate the effects of random perturbations on fully chaotic open systems. Perturbations can be applied to each trajectory independently (white noise) or simultaneously to all trajectories (random map). We compare these two scenarios by generalizing the theory of open chaotic systems and introducing a time-dependent conditionally-map-invariant measure. For the same perturbation strength we show that the escape rate of the random map is always larger than that of the noisy map. In random maps we show that the escape rate κ and dimensions D of the relevant fractal sets often depend nonmonotonically on the intensity of the random perturbation. We discuss the accuracy (bias) and precision (variance) of finite-size estimators of κ and D, and show that the improvement of the precision of the estimations with the number of trajectories N is extremely slow (∝ 1/ln N). We also argue that the finite-size D estimators are typically biased. General theoretical results are combined with analytical calculations and numerical simulations in area-preserving baker maps.

  16. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    Science.gov (United States)

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients, and an initial MRI image of a patient is reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than reconstruction using the DL algorithm from MRI data alone.

  17. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and denoising is an indispensable step in many processing pipelines. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, using a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
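
    A minimal sketch of the overall wavelet-domain MAP shrinkage idea, under simplifying assumptions: a standard DWT (PyWavelets) stands in for the dual-tree complex wavelet transform, and a plain local variance estimate stands in for the generalized-Gamma MAP variance estimate:

        import numpy as np
        import pywt

        def wavelet_shrink_denoise(noisy, wavelet='db4', level=3, sigma_n=0.05):
            # Decompose, shrink each detail band toward zero in proportion to
            # the estimated signal variance, and reconstruct.
            coeffs = pywt.wavedec2(noisy, wavelet, level=level)
            out = [coeffs[0]]
            for detail in coeffs[1:]:
                shrunk = []
                for band in detail:
                    # Crude signal-variance estimate (the paper uses a MAP
                    # estimate with a generalized Gamma prior instead).
                    var_sig = np.maximum(band**2 - sigma_n**2, 0.0)
                    shrunk.append(band * var_sig / (var_sig + sigma_n**2))
                out.append(tuple(shrunk))
            return pywt.waverec2(out, wavelet)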

  18. Utilizing Minkowski functionals for image analysis: a marching square algorithm

    International Nuclear Information System (INIS)

    Mantz, Hubert; Jacobs, Karin; Mecke, Klaus

    2008-01-01

    Comparing noisy experimental image data with statistical models requires a quantitative analysis of grey-scale images beyond mean values and two-point correlations. A real-space image analysis technique is introduced for digitized grey-scale images, based on Minkowski functionals of thresholded patterns. A novel feature of this marching square algorithm is the use of weighted side lengths for pixels, so that boundary lengths are captured accurately. As examples to illustrate the technique we study surface topologies emerging during the dewetting process of thin films and analyse spinodal decomposition as well as turbulent patterns in chemical reaction–diffusion systems. The grey-scale value corresponds to the height of the film or to the concentration of chemicals, respectively. Comparison with analytic calculations in stochastic geometry models reveals a remarkable agreement of the examples with a Gaussian random field. Thus, a statistical test for non-Gaussian features in experimental data becomes possible with this image analysis technique—even for small image sizes. Implementations of the software used for the analysis are offered for download.

  19. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    Science.gov (United States)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive testing of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, this is expected to affect tomato classification. The objective of this study was to determine the minimum light level that affects classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in image capturing boxes and then investigating its effect on image characteristics. The results showed that light intensity affects two variables important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was 30 lx to 140 lx.

  20. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images and will contain the information of both. Fusion images can help observers understand multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods, and the infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method: the fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information of the scenes.

  1. Analysis of gene expression levels in individual bacterial cells without image segmentation.

    Science.gov (United States)

    Kwak, In Hae; Son, Minjun; Hagen, Stephen J

    2012-05-11

    Studies of stochasticity in gene expression typically make use of fluorescent protein reporters, which permit the measurement of expression levels within individual cells by fluorescence microscopy. Analysis of such microscopy images is almost invariably based on a segmentation algorithm, where the image of a cell or cluster is analyzed mathematically to delineate individual cell boundaries. However segmentation can be ineffective for studying bacterial cells or clusters, especially at lower magnification, where outlines of individual cells are poorly resolved. Here we demonstrate an alternative method for analyzing such images without segmentation. The method employs a comparison between the pixel brightness in phase contrast vs fluorescence microscopy images. By fitting the correlation between phase contrast and fluorescence intensity to a physical model, we obtain well-defined estimates for the different levels of gene expression that are present in the cell or cluster. The method reveals the boundaries of the individual cells, even if the source images lack the resolution to show these boundaries clearly. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Automated breast segmentation in ultrasound computer tomography SAFT images

    Science.gov (United States)

    Hopp, T.; You, W.; Zapf, M.; Tan, W. Y.; Gemmeke, H.; Ruiter, N. V.

    2017-03-01

    Ultrasound Computer Tomography (USCT) is a promising new imaging system for breast cancer diagnosis. An essential step before further processing is to remove the water background from the reconstructed images. In this paper we present a fully-automated image segmentation method based on three-dimensional active contours. The active contour method is extended by applying gradient vector flow and encoding the USCT aperture characteristics as additional weighting terms. A surface detection algorithm based on a ray model is developed to initialize the active contour, which is iteratively deformed to capture the breast outline in USCT reflection images. The evaluation with synthetic data showed that the method is able to cope with noisy images, and is not influenced by the position of the breast and the presence of scattering objects within the breast. The proposed method was applied to 14 in-vivo images resulting in an average surface deviation from a manual segmentation of 2.7 mm. We conclude that automated segmentation of USCT reflection images is feasible and produces results comparable to a manual segmentation. By applying the proposed method, reproducible segmentation results can be obtained without manual interaction by an expert.

  3. Contrast enhancement of mail piece images

    Science.gov (United States)

    Shin, Yong-Chul; Sridhar, Ramalingam; Demjanenko, Victor; Palumbo, Paul W.; Hull, Jonathan J.

    1992-08-01

    A new approach to contrast enhancement of mail piece images is presented. The contrast enhancement is used as a preprocessing step in the real-time address block location (RT-ABL) system, which processes a stream of mail piece images and locates destination address blocks. Most mail pieces (classified as letters) show high contrast between background and foreground. As an extreme case, however, seasonal greeting cards often use colored envelopes, which results in reduced contrast. Image quality is measured by an error rate using a linear distributed associative memory (DAM). The DAM is trained to recognize the spectra of three classes of images: those with high, medium, and low OCR error rates. The DAM is not forced to make a classification every time; it is allowed to reject as unknown a presented spectrum that does not closely resemble any stored in the DAM. The DAM was fairly accurate with noisy images but conservative (i.e., it rejected several text images as unknowns) when there were slight background and foreground degradations, without affecting the nondegraded images. This approach provides local enhancement which adapts to local features. In order to simplify the computation of A and σ, a dynamic programming technique is used. Implementation details, performance, and the results on test images are presented in this paper.

  4. Megavoltage imaging with a large-area, flat-panel, amorphous silicon imager

    International Nuclear Information System (INIS)

    Antonuk, Larry E.; Yorkston, John; Huang Weidong; Sandler, Howard; Siewerdsen, Jeffrey H.; El-Mohri, Youcef

    1996-01-01

    Purpose: The creation of the first large-area, amorphous silicon megavoltage imager is reported. The imager is an engineering prototype built to serve as a stepping stone toward the creation of a future clinical prototype. The engineering prototype is described and various images demonstrating its properties are shown including the first reported patient image acquired with such an amorphous silicon imaging device. Specific limitations in the engineering prototype are reviewed and potential advantages of future, more optimized imagers of this type are presented. Methods and Materials: The imager is based on a two-dimensional, pixelated array containing amorphous silicon field-effect transistors and photodiode sensors which are deposited on a thin glass substrate. The array has a 512 × 560-pixel format and a pixel pitch of 450 μm, giving an imaging area of ∼23 × 25 cm². The array is used in conjunction with an overlying metal plate/phosphor screen converter as well as an electronic acquisition system. Images were acquired fluoroscopically using a megavoltage treatment machine. Results: Array and digitized film images of a variety of anthropomorphic phantoms and of a human subject are presented and compared. The information content of the array images generally appears to be at least as great as that of the digitized film images. Conclusion: Despite a variety of severe limitations in the engineering prototype, including many array defects, a relatively slow and noisy acquisition system, and the lack of a means to generate images in a radiographic manner, the prototype nevertheless generated clinically useful information. The general properties of these amorphous silicon arrays, along with the quality of the images provided by the engineering prototype, strongly suggest that such arrays could eventually form the basis of a new imaging technology for radiotherapy localization and verification. The development of a clinically useful prototype offering high

  5. Dependence of accuracy of ESPRIT estimates on signal eigenvalues: the case of a noisy sum of two real exponentials.

    Science.gov (United States)

    Alexandrov, Theodore; Golyandina, Nina; Timofeyev, Alexey

    2009-02-26

    This paper is devoted to the estimation of parameters of a noisy sum of two real exponential functions. Singular Spectrum Analysis is used to extract the signal subspace, and the ESPRIT method, which exploits signal-subspace features, is then applied to obtain estimates of the desired exponential rates. The dependence of estimation quality on the signal eigenvalues is investigated, and a special design to test this relation is elaborated.

  6. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    Science.gov (United States)

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences, and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures, whereas SVD in general requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without

  7. An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data

    Science.gov (United States)

    Ghorbani, Esmaeil; Cha, Young-Jin

    2018-04-01

    Structural and mechanical system identification under dynamic loading has been an important research topic over the last three or four decades. Many Kalman-filtering-based approaches have been developed for linear and nonlinear systems; for example, the unscented Kalman filter has been applied to predict nonlinear systems. However, extensive literature reviews show that the unscented Kalman filter still performs weakly on systems with large degrees of freedom. In this research, a modified unscented Kalman filter is proposed that integrates a cubature Kalman filter to improve identification performance for systems with large degrees of freedom. The novelty of this work lies in conjugating the unscented transform with the cubature integration concept to find a more accurate output from the transformation of the state vector and its related covariance matrix. To evaluate the proposed method, three different numerical models (the single-degree-of-freedom Bouc-Wen model, a linear 3-degrees-of-freedom system, and a 10-degrees-of-freedom system) are investigated. To evaluate the robustness of the proposed method, high levels of noise in the measured response data are considered. The results show that the proposed method is significantly superior to the traditional UKF for noisy measured data in systems with large degrees of freedom.

  8. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    Science.gov (United States)

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in tissue classification. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
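
    A minimal sketch of a local S-curve transform of this kind, assuming a sigmoid centered on the neighborhood mean (win and gain are illustrative, not the paper's tuned values):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_s_curve(img, win=31, gain=6.0):
            # Sigmoid (S-curve) gray-level transform centered on the local mean:
            # pixels above the neighborhood mean are pushed up, pixels below are
            # pushed down, widening the local gray range and the image gradient.
            f = img.astype(float) / 255.0
            mu = uniform_filter(f, size=win)
            out = 1.0 / (1.0 + np.exp(-gain * (f - mu)))
            return np.clip(255.0 * out, 0, 255).astype(np.uint8)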

  9. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries in echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  10. Bayesian image reconstruction for improving detection performance of muon tomography.

    Science.gov (United States)

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  11. A comparative study of deep learning models for medical image classification

    Science.gov (United States)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep Learning (DL) techniques are overtaking traditional neural network approaches for applications with huge datasets and complex functions that demand increased accuracy at lower time complexity. Neuroscience has already exploited DL techniques and thus serves as an inspirational source for researchers exploring machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning, and NLP, moving back and forth among fields and building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructures for distributed computing have all strengthened the case for DL methodologies. This paper compares DL procedures with traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are Diabetic Retinopathy (DR) and computed tomography (CT) emphysema data; diagnosis on both DR and CT data is a difficult task for normal image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining image severity levels, an ANN was applied to the data to get a baseline classification result, which was then compared with the results of DNNs (Deep Neural Networks). The DNNs performed efficiently because their multiple hidden layers increase accuracy, but the problem of vanishing gradients in DNNs motivated the consideration of Convolutional Neural Networks (CNNs) as well. The CNNs were found to provide better outcomes than the other learning models for classification of images. CNNs are

  12. Neutrosophic Hough Transform

    Directory of Open Access Journals (Sweden)

    Ümit Budak

    2017-12-01

    Full Text Available Hough transform (HT is a useful tool for both pattern recognition and image processing communities. In the view of pattern recognition, it can extract unique features for description of various shapes, such as lines, circles, ellipses, and etc. In the view of image processing, a dozen of applications can be handled with HT, such as lane detection for autonomous cars, blood cell detection in microscope images, and so on. As HT is a straight forward shape detector in a given image, its shape detection ability is low in noisy images. To alleviate its weakness on noisy images and improve its shape detection performance, in this paper, we proposed neutrosophic Hough transform (NHT. As it was proved earlier, neutrosophy theory based image processing applications were successful in noisy environments. To this end, the Hough space is initially transferred into the NS domain by calculating the NS membership triples (T, I, and F. An indeterminacy filtering is constructed where the neighborhood information is used in order to remove the indeterminacy in the spatial neighborhood of neutrosophic Hough space. The potential peaks are detected based on thresholding on the neutrosophic Hough space, and these peak locations are then used to detect the lines in the image domain. Extensive experiments on noisy and noise-free images are performed in order to show the efficiency of the proposed NHT algorithm. We also compared our proposed NHT with traditional HT and fuzzy HT methods on variety of images. The obtained results showed the efficiency of the proposed NHT on noisy images.

  13. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The restored CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries, and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed in numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), the Linearized Alternating Direction Method with Adaptive Penalty (LADMAP), and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764

  14. An approach to remove impulse noise from a corrupted image

    International Nuclear Information System (INIS)

    Jin, Cong; Yan, Meng; Jin, Shu-Wei

    2013-01-01

    In this paper, we propose an efficient approach for detecting impulse noise in corrupted images. The method is based on the principle that digital images are usually locally correlated, whereas impulse noise usually takes values near one of the two extremes of the image's gray range, i.e., the maximum or minimum gray value. After a noisy pixel has been detected by the proposed detector, a modified version of the mean filter is used to remove the detected impulse noise. Experimental results show that the implementation of the proposed method is simple and that it has better performance than comparison filters with regard to effective noise suppression and preservation of detail, especially when the noise ratio is very high. (paper)
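
    A minimal sketch of this detect-then-filter idea, assuming a simple salt-and-pepper detector that flags only the exact extreme gray values:

        import numpy as np

        def remove_impulse_noise(img, win=3):
            # Stage 1: flag pixels sitting at the extreme gray values as impulse
            # candidates.  Stage 2: replace each flagged pixel with the mean of
            # the clean (unflagged) pixels in its window.
            out = img.astype(float)
            flagged = (img == img.min()) | (img == img.max())
            r = win // 2
            h, w = img.shape
            for y, x in zip(*np.nonzero(flagged)):
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                patch = img[y0:y1, x0:x1]
                clean = patch[~flagged[y0:y1, x0:x1]]
                if clean.size:              # keep original value if no clean neighbor
                    out[y, x] = clean.mean()
            return out.astype(img.dtype)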

  15. The numerical solution of total variation minimization problems in image processing

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, C.R.; Oman, M.E. [Montana State Univ., Bozeman, MT (United States)]

    1994-12-31

    Consider the minimization of penalized least squares functionals of the form f(u) = (1/2)‖Au − z‖² + α ∫_Ω |∇u| dx. Here A is a bounded linear operator, z represents data, ‖·‖ is a Hilbert space norm, α is a positive parameter, ∫_Ω |∇u| dx represents the total variation (TV) of a function u ∈ BV(Ω), the class of functions of bounded variation on a bounded region Ω, and |·| denotes the Euclidean norm. In image processing, u represents an image which is to be recovered from noisy data z. Certain ''blurring processes'' may be represented by the action of an operator A on the image u.
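
    A minimal sketch for the pure denoising case A = I, minimizing this functional by gradient steps in which the TV term is handled through the quadratic majorize-minimize bound |t| ≤ t²/(2|t₀|) + |t₀|/2 (step size and the smoothing constant eps are illustrative):

        import numpy as np

        def tv_denoise_mm(z, alpha=0.1, step=0.1, iters=200, eps=1e-2):
            # Minimize 0.5*||u - z||^2 + alpha*TV(u) with weights w = 1/|grad u|
            # from the quadratic majorizer of |.| at the current iterate.
            u = z.astype(float).copy()
            for _ in range(iters):
                gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
                gy = np.diff(u, axis=0, append=u[-1:, :])
                w = 1.0 / np.sqrt(gx**2 + gy**2 + eps)      # majorizer weights
                px, py = w * gx, w * gy
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - z) - alpha * div)         # descent on the surrogate
            return u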

  16. Non-local means denoising of dynamic PET images.

    Directory of Open Access Journals (Sweden)

    Joyita Dutta

    Full Text Available Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high

  17. Non-local means denoising of dynamic PET images.

    Science.gov (United States)

    Dutta, Joyita; Leahy, Richard M; Li, Quanzheng

    2013-01-01

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while
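
    For comparison, a plain spatial NLM baseline applied per frame, using scikit-image; the modifications described above (spatiotemporal patches, similarities from the less noisy late frames, spatially varying h) are what the paper adds on top of this:

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def denoise_frame(frame):
            # Estimate the noise level, then run conventional NLM on one frame.
            sigma = float(np.mean(estimate_sigma(frame)))
            return denoise_nl_means(frame, patch_size=5, patch_distance=6,
                                    h=0.8 * sigma, sigma=sigma, fast_mode=True)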

  18. Interaction between High-Level and Low-Level Image Analysis for Semantic Video Object Extraction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2004-06-01

    Full Text Available The task of extracting a semantic video object is split into two subproblems, namely, object segmentation and region segmentation. Object segmentation relies on a priori assumptions, whereas region segmentation is data-driven and can be solved in an automatic manner. These two subproblems are not mutually independent, and they can benefit from interactions with each other. In this paper, a framework for such interaction is formulated. This representation scheme based on region segmentation and semantic segmentation is compatible with the view that image analysis and scene understanding problems can be decomposed into low-level and high-level tasks. Low-level tasks pertain to region-oriented processing, whereas the high-level tasks are closely related to object-level processing. This approach emulates the human visual system: what one “sees” in a scene depends on the scene itself (region segmentation) as well as on the cognitive task (semantic segmentation) at hand. The higher-level segmentation results in a partition corresponding to semantic video objects. Semantic video objects do not usually have invariant physical properties and the definition depends on the application. Hence, the definition incorporates complex domain-specific knowledge and is not easy to generalize. For the specific implementation used in this paper, motion is used as a clue to semantic information. In this framework, an automatic algorithm is presented for computing the semantic partition based on color change detection. The change detection strategy is designed to be immune to sensor noise and local illumination variations. The lower-level segmentation identifies the partition corresponding to perceptually uniform regions. These regions are derived by clustering in an N-dimensional feature space, composed of static as well as dynamic image attributes. We propose an interaction mechanism between the semantic and the region partitions which allows to

  19. Diagnosing and ranking retinopathy disease level using diabetic fundus image recuperation approach.

    Science.gov (United States)

    Somasundaram, K; Rajendran, P Alli

    2015-01-01

    Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detect the presence of exudation in fundus images and identify the true positive ratio of exudate detection, respectively. However, these automatically detected exudates do not bring a more detailed feature selection technique to the system for detection of diabetic retinopathy. To categorize the exudates, the Diabetic Fundus Image Recuperation (DFIR) method, based on a sliding window approach, is developed in this work to select the features of the optic cup in digital retinal fundus images. DFIR feature selection uses a collection of sliding windows with varying range to obtain features based on the histogram value using a Group Sparsity Nonoverlapping Function. Using a support vector model in the second phase, the DFIR method, based on a Spiral Basis Function, effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, ranking efficiency, and feature selection time.

  20. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template... as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. We also introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate

  1. Spatio-temporal Hotelling observer for signal detection from image sequences.

    Science.gov (United States)

    Caucci, Luca; Barrett, Harrison H; Rodriguez, Jeffrey J

    2009-06-22

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.
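
    A minimal sketch of estimating the Hotelling template from training images; each row is an image (or, for the spatio-temporal observer, a whole image sequence) unrolled into a vector, and the ridge term is an added assumption for numerical stability:

        import numpy as np

        def hotelling_template(g_signal, g_noise, reg=1e-6):
            # Hotelling template w = K^{-1} (mean_signal - mean_noise), where K
            # is the average within-class covariance estimated from training data.
            g1 = np.asarray(g_signal, float)
            g0 = np.asarray(g_noise, float)
            dmean = g1.mean(axis=0) - g0.mean(axis=0)
            k = 0.5 * (np.cov(g1, rowvar=False) + np.cov(g0, rowvar=False))
            w = np.linalg.solve(k + reg * np.eye(k.shape[0]), dmean)
            return w  # test statistic for a new image g: t = w @ g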

  2. Purity of Gaussian states: Measurement schemes and time evolution in noisy channels

    International Nuclear Information System (INIS)

    Paris, Matteo G.A.; Illuminati, Fabrizio; Serafini, Alessio; De Siena, Silvio

    2003-01-01

    We present a systematic study of the purity for Gaussian states of single-mode continuous variable systems. We prove the connection of purity to observable quantities for these states, and show that the joint measurement of two conjugate quadratures is necessary and sufficient to determine the purity at any time. The statistical reliability and the range of applicability of the proposed measurement scheme are tested by means of Monte Carlo simulated experiments. We then consider the dynamics of purity in noisy channels. We derive an evolution equation for the purity of general Gaussian states both in thermal and in squeezed thermal baths. We show that purity is maximized at any given time for an initial coherent state evolving in a thermal bath, or for an initial squeezed state evolving in a squeezed thermal bath whose asymptotic squeezing is orthogonal to that of the input state

  3. Delay-enhanced coherence of spiral waves in noisy Hodgkin-Huxley neuronal networks

    International Nuclear Information System (INIS)

    Wang Qingyun; Perc, Matjaz; Duan Zhisheng; Chen Guanrong

    2008-01-01

    We study the spatial dynamics of spiral waves in noisy Hodgkin-Huxley neuronal ensembles evoked by different information transmission delays and network topologies. In classical settings of coherence resonance the intensity of noise is fine-tuned so as to optimize the system's response. Here, we keep the noise intensity constant, and instead, vary the length of information transmission delay amongst coupled neurons. We show that there exists an intermediate transmission delay by which the spiral waves are optimally ordered, hence indicating the existence of delay-enhanced coherence of spatial dynamics in the examined system. Additionally, we examine the robustness of this phenomenon as the diffusive interaction topology changes towards the small-world type, and discover that shortcut links amongst distant neurons hinder the emergence of coherent spiral waves irrespective of transmission delay length. Presented results thus provide insights that could facilitate the understanding of information transmission delay on realistic neuronal networks

  4. ISAR Imaging of Ship Targets Based on an Integrated Cubic Phase Bilinear Autocorrelation Function

    Directory of Open Access Journals (Sweden)

    Jibin Zheng

    2017-03-01

    Full Text Available For inverse synthetic aperture radar (ISAR) imaging of a ship target moving with ocean waves, the image constructed with the standard range-Doppler (RD) technique is blurred, and the range-instantaneous-Doppler (RID) technique has to be used to improve the image quality. In this paper, azimuth echoes in a range cell of the ship target are modeled as noisy multicomponent cubic phase signals (CPSs) after motion compensation, and a RID ISAR imaging algorithm is proposed based on the integrated cubic phase bilinear autocorrelation function (ICPBAF). The ICPBAF is bilinear and based on two-dimensional coherent energy accumulation. Compared to five other estimation algorithms, the ICPBAF achieves higher cross term suppression and anti-noise performance with a reasonable computational cost. Through simulations and analyses with a synthetic model and real radar data, we verify the effectiveness of the ICPBAF and the corresponding RID ISAR imaging algorithm.

  5. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    Science.gov (United States)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.

  6. Reducing surgical levels by paraspinal mapping and diffusion tensor imaging techniques in lumbar spinal stenosis

    OpenAIRE

    Chen, Hua-Biao; Wan, Qi; Xu, Qi-Feng; Chen, Yi; Bai, Bo

    2016-01-01

    Background: Correlating symptoms and physical examination findings with surgical levels based on common imaging results is not reliable. In patients who have no concordance between radiological and clinical symptoms, the surgical levels determined by conventional magnetic resonance imaging (MRI) and neurogenic examination (NE) may lead to a more extensive surgery and significant complications. We aimed to confirm whether the use of diffusion tensor imaging (DTI) and paraspinal mapping (PM

  7. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition.

    Science.gov (United States)

    Stevenson, Ryan A; Nelms, Caitlin E; Baum, Sarah H; Zurkovsky, Lilia; Barense, Morgan D; Newhouse, Paul A; Wallace, Mark T

    2015-01-01

    Over the next 2 decades, a dramatic shift in the demographics of society will take place, with a rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provide striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio. For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as signal-to-noise ratio decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex word-recognition level of speech signals. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i

    Science.gov (United States)

    Patrick, Matthew R.; Swanson, Don; Orr, Tim R.

    2016-01-01

    Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
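
    A minimal sketch of one of the three segmentation cues, composite temperature thresholding, using scikit-image; it assumes lava pixels are present in the frame, and the pixel-row output would still need the rangefinder calibration described above to become an elevation:

        import numpy as np
        from skimage import filters, measure

        def lake_margin_row(thermal_frame):
            # Pixels above an Otsu threshold are taken as lava; the top row of
            # the largest connected component tracks the apparent lake surface
            # line in image coordinates.
            mask = thermal_frame > filters.threshold_otsu(thermal_frame)
            regions = measure.regionprops(measure.label(mask))
            lake = max(regions, key=lambda r: r.area)
            return lake.bbox[0]   # min row of the lake's bounding box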

  9. Time-Reversal MUSIC Imaging with Time-Domain Gating Technique

    Science.gov (United States)

    Choi, Heedong; Ogawa, Yasutaka; Nishimura, Toshihiko; Ohgane, Takeo

    A time-reversal (TR) approach with multiple signal classification (MUSIC) provides super-resolution for detection and localization using multistatic data collected from an array antenna system. The theory of TR-MUSIC assumes that the number of antenna elements is greater than that of scatterers (targets). Furthermore, it requires many sets of frequency-domain data (snapshots) in seriously noisy environments. Unfortunately, these conditions are not practical for real environments due to the restriction of a reasonable antenna structure as well as limited measurement time. We propose an approach that treats both noise reduction and relaxation of the transceiver restriction by using a time-domain gating technique accompanied with the Fourier transform before applying the TR-MUSIC imaging algorithm. Instead of utilizing the conventional multistatic data matrix (MDM), we employ a modified MDM obtained from the gating technique. The resulting imaging functions yield more reliable images with only a few snapshots regardless of the limitation of the antenna arrays.

  10. Edge-preserving image denoising via group coordinate descent on the GPU.

    Science.gov (United States)

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.
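
    A minimal sketch of the majorize-minimize idea behind the 1D pixel updates, assuming a hyperbolic edge-preserving penalty over 4-neighbour differences and periodic boundaries; this is a simultaneous (Jacobi-style) variant written for clarity, not the paper's grouped coordinate-descent GPU kernels.

```python
import numpy as np

def mm_denoise(y, beta=1.0, delta=0.05, n_iters=50):
    """Minimize 0.5||x - y||^2 + beta * sum_d psi(d) over neighbour differences d,
    psi(d) = delta^2 (sqrt(1 + (d/delta)^2) - 1), via quadratic majorization."""
    x = y.copy()
    for _ in range(n_iters):
        num = y.copy()
        den = np.ones_like(y)
        for axis in (0, 1):
            for shift in (1, -1):
                xn = np.roll(x, shift, axis=axis)          # neighbour values (periodic)
                d = x - xn
                w = 1.0 / np.sqrt(1.0 + (d / delta) ** 2)  # majorizer curvature psi'(d)/d
                num += beta * w * xn
                den += beta * w
        x = num / den                                      # closed-form per-pixel update
    return x
```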

  11. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use the variational level set method and transition region extraction techniques to perform the image segmentation task. The proposed scheme proceeds in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy term is added to the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
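
    A minimal sketch of the first step (transition-region extraction from the morphological gradient), assuming a grayscale image; the structuring-element size and the percentile threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def transition_region(img, size=3, pct=90):
    """Morphological gradient = dilation - erosion; its strongest responses
    approximate the transition region around object boundaries."""
    grad = ndimage.grey_dilation(img, size=(size, size)) - \
           ndimage.grey_erosion(img, size=(size, size))
    return grad >= np.percentile(grad, pct)   # binary transition-region mask
```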

  12. Listening level of music through headphones in train car noise environments.

    Science.gov (United States)

    Shimokura, Ryota; Soeta, Yoshiharu

    2012-09-01

    Although portable music devices are useful for passing time on trains, exposure to music through headphones for long periods carries the risk of damaging hearing acuity. The aim of this study is to examine the listening level of music through headphones in the noisy environment of a train car. Eight subjects adjusted the volume to an optimum level (L(music)) in a simulated noisy train car environment. In Experiment I, the effects of noise level (L(train)) and type of train noise (rolling, squealing, impact, and resonance) were examined. Spectral and temporal characteristics were found to differ according to the train noise type. In Experiment II, the effects of L(train) and type of music (five vocal and five instrumental pieces) were examined. Each piece of music had a different pitch strength and spectral centroid, evaluated by φ(1) and W(φ(0)), respectively; both are factors of the autocorrelation function (ACF) of the music. Results showed that L(music) increased as L(train) increased in both experiments, while the type of music greatly influenced L(music). The type of train noise, however, only slightly influenced L(music). L(music) can be estimated using L(train) and the ACF factors φ(1) and W(φ(0)).

  13. An improved image alignment procedure for high-resolution transmission electron microscopy.

    Science.gov (United States)

    Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua

    2010-06-01

    Image alignment is essential for image processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to specimen drift and image shifts; hence image alignment to correct the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined by the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For image series of crystalline specimens with little amorphous material, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment. Copyright 2010 Elsevier Ltd. All rights reserved.
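
    A minimal sketch of displacement estimation by cross-correlation with a low-pass Fourier aperture, assuming two same-size grayscale frames; the aperture radius (in cycles per sample) plays the role of the filter aperture discussed above.

```python
import numpy as np

def aligned_shift(img1, img2, aperture_radius):
    """Estimate the (dy, dx) shift of img2 relative to img1 from the peak
    of the aperture-filtered cross-correlation."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    ky, kx = np.meshgrid(np.fft.fftfreq(img1.shape[0]),
                         np.fft.fftfreq(img1.shape[1]), indexing="ij")
    mask = np.hypot(ky, kx) <= aperture_radius      # keep only low frequencies
    xcorr = np.fft.ifft2(F1 * np.conj(F2) * mask).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # map wrap-around indices to signed shifts
    dy = dy - img1.shape[0] if dy > img1.shape[0] // 2 else dy
    dx = dx - img1.shape[1] if dx > img1.shape[1] // 2 else dx
    return dy, dx
```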

  14. Evaluation of a compartmental model for estimating tumor hypoxia via FMISO dynamic PET imaging

    International Nuclear Information System (INIS)

    Wang Wenli; Nehmeh, Sadek A; O'Donoghue, Joseph; Zanzonico, Pat B; Schmidtlein, C Ross; Lee, Nancy Y; Humm, John L; Georgi, Jens-Christoph; Paulus, Timo; Narayanan, Manoj; Bal, Matthieu

    2009-01-01

    This paper systematically evaluates a pharmacokinetic compartmental model for identifying tumor hypoxia using dynamic positron emission tomography (PET) imaging with 18F-fluoromisonidazole (FMISO). A generic irreversible one-plasma two-tissue compartmental model was used. A dynamic PET image dataset was simulated with three tumor regions (normoxic, hypoxic, and necrotic) embedded in a normal-tissue background, and with an image-based arterial input function. Each voxelized tissue's time activity curve (TAC) was simulated with typical values of kinetic parameters, as deduced from FMISO-PET data from nine head-and-neck cancer patients. The dynamic dataset was first produced without any statistical noise to ensure that the correct kinetic parameters were reproducible. Next, to investigate the stability of kinetic parameter estimation in the presence of noise, 1000 noisy samples of the dynamic dataset were generated, from which 1000 noisy estimates of the kinetic parameters were calculated and used to estimate the sample mean and covariance matrix. A more peaked input function was found to give less variation in the various kinetic parameters, and the variation of the kinetic parameters could also be reduced by two region-of-interest averaging techniques. To further investigate how bias in the arterial input function affects kinetic parameter estimation, a shift error was introduced in the peak amplitude and peak location of the input TAC, and the bias of the various kinetic parameters was calculated. In summary, mathematical phantom studies have been used to determine the statistical accuracy and precision of model-based kinetic analysis, which helps to validate this analysis and provides guidance in planning clinical dynamic FMISO-PET studies.
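
    A minimal sketch of the generic irreversible one-plasma two-tissue model (vascular fraction omitted), assuming a uniform time grid and a sampled plasma input function; the impulse response follows from the rate equations dC1/dt = K1*Cp - (k2+k3)*C1, dC2/dt = k3*C1. Names and initial values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def fmiso_tac(t, K1, k2, k3, cp):
    """Tissue TAC = convolution of the plasma input cp(t) with the impulse
    response (K1/(k2+k3)) * (k3 + k2*exp(-(k2+k3)*t))."""
    dt = t[1] - t[0]                      # uniform time grid assumed
    h = (K1 / (k2 + k3)) * (k3 + k2 * np.exp(-(k2 + k3) * t))
    return np.convolve(cp, h)[: len(t)] * dt

# Hypothetical fit of one voxel TAC, given time samples t, a sampled input cp,
# and a measured noisy_tac:
# popt, pcov = curve_fit(lambda t, K1, k2, k3: fmiso_tac(t, K1, k2, k3, cp),
#                        t, noisy_tac, p0=[0.1, 0.1, 0.01], bounds=(0, np.inf))
```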

  15. Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion.

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, Matthew; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik; Frank, Jonathan H.

    2010-09-01

    Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.

  16. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, but provides a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space is a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with all of the colour maps can be done successfully, even for blurred and noisy images. Also, the size of the segmented abnormality region is reduced compared to the segmentation area without a colour map. The green colour map segmentation produced the smallest percentage of average relative error (10.009%) while the yellow colour map segmentation gave the largest (11.367%).

  17. Combined mixed approach algorithm for in-line phase-contrast x-ray imaging

    International Nuclear Information System (INIS)

    De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto

    2010-01-01

    Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with improved image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). Indeed, PCI is an imaging modality designed to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for in-line phase-contrast x-ray imaging is proposed here. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA) and requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy of the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although slightly less computationally efficient than other mixed-approach algorithms, as two phases have to be retrieved, the results obtained by the CMA on simulated data show that the reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors have also tested the CMA on noisy experimental phase-contrast data obtained from a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers, as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA has shown good efficiency in recovering phase information, also in the presence of noisy data characterized by peak-to-peak signal-to-noise ratios down to a few dBs, showing the possibility to enhance with phase radiography the signal-to-noise ratio for features in the submillimetric scale with respect to the attenuation

  18. On-Line Temperature Estimation for Noisy Thermal Sensors Using a Smoothing Filter-Based Kalman Predictor

    Directory of Open Access Journals (Sweden)

    Xin Li

    2018-02-01

    Full Text Available Dynamic thermal management (DTM) mechanisms utilize embedded thermal sensors to collect fine-grained temperature information for monitoring the real-time thermal behavior of multi-core processors. However, embedded thermal sensors are very susceptible to a variety of sources of noise, including environmental uncertainty and process variation. This causes discrepancies between actual temperatures and those observed by on-chip thermal sensors, which seriously affect the efficiency of DTM. In this paper, a smoothing filter-based Kalman prediction technique is proposed to accurately estimate the temperatures from noisy sensor readings. For the multi-sensor estimation scenario, the spatial correlations among different sensor locations are exploited. On this basis, a multi-sensor synergistic calibration algorithm (known as MSSCA) is proposed to improve the simultaneous prediction accuracy of multiple sensors. Moreover, an infrared imaging-based temperature measurement technique is also proposed to capture the thermal traces of an advanced micro devices (AMD) quad-core processor in real time. The acquired real temperature data are used to evaluate our prediction performance. Simulation shows that the proposed synergistic calibration scheme can reduce the root-mean-square error (RMSE) by 1.2 °C and increase the signal-to-noise ratio (SNR) by 15.8 dB (with a very small average runtime overhead) compared with assuming the thermal sensor readings to be ideal. Additionally, the average false alarm rate (FAR) of the corrected sensor temperature readings can be reduced by 28.6%. These results clearly demonstrate that if our approach is used to perform temperature estimation, the response mechanisms of DTM can be triggered to adjust the voltages, frequencies, and cooling fan speeds at more appropriate times.

  20. Automatic adjustment of display window (gray-level condition) for MR images using neural networks

    International Nuclear Information System (INIS)

    Ohhashi, Akinami; Nambu, Kyojiro.

    1992-01-01

    We have developed a system to automatically adjust the display window width and level (WWL) for MR images using neural networks. There were three main points in the development of our system, as follows: 1) We defined an index for the clarity of a displayed image, called 'EW'. EW is a quantitative measure of the clarity of an image displayed with a certain WWL, and is derived from the difference between the gray levels obtained with a WWL adjusted by a human expert and those obtained with the given WWL. 2) We extracted a group of six features from the gray-level histogram of a displayed image. We designed two neural networks which are able to learn the relationship between these features and the desired output (teaching signal), 'EQ', which is EW normalized to the range 0 to 1.0. Two neural networks were used to share the patterns to be learned: one learns a variety of patterns with less accuracy, and the other learns similar patterns with higher accuracy. Learning was performed using a back-propagation method. As a result, the trained neural networks are able to provide a quantitative measure, 'Q', of the clarity of images displayed with the designated WWL. 3) Using the 'hill climbing' method, we are able to determine the best possible WWL for displaying an image. We have tested this technique on MR brain images. The results show that this system can adjust the WWL comparably to a human expert for the majority of test images. The neural network is effective for the automatic adjustment of the display window for MR images. We are now studying the application of this method to MR images of other regions. (author)

  1. Diagnosing and Ranking Retinopathy Disease Level Using Diabetic Fundus Image Recuperation Approach

    Directory of Open Access Journals (Sweden)

    K. Somasundaram

    2015-01-01

    Full Text Available Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detected the presence of exudation in fundus images and identified the true positive ratio of exudate detection, respectively. These methods, however, did not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, the Diabetic Fundus Image Recuperation (DFIR) method, based on a sliding window approach, is developed in this work to select the features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows with varying range to obtain the features based on the histogram value using a Group Sparsity Nonoverlapping Function. Using a support vector model in the second phase, the DFIR method, based on a Spiral Basis Function, effectively ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practically automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method examines factors such as sensitivity, ranking efficiency, and feature selection time.

  2. The Impact of Frequency Modulation (FM) System Use and Caregiver Training on Young Children with Hearing Impairment in a Noisy Listening Environment

    Science.gov (United States)

    Nguyen, Huong Thi Thien

    2011-01-01

    The two objectives of this single-subject study were to assess how FM system use impacts parent-child interaction in a noisy listening environment, and how parent/caregiver training affects the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parents/caregivers participated. Experiment 1 was…

  3. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term within the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  4. Environmental magneto-gradiometric marine survey in a highly anthropic noisy area

    Directory of Open Access Journals (Sweden)

    Luca Cocchi

    2009-06-01

    Full Text Available We describe a magneto-gradiometric survey performed in the «Mar Piccolo» of Taranto, Italy, in May 2005 for environmental purposes. This region, which is a noisy harbour environment, provides a challenging test for magnetic methods. To reduce spurious noise signals, with both temporal and spatial origins, we used two Geometrics G880 model caesium magnetometers towed in a transverse gradient configuration. We show how, in shallow waters, this gradiometric configuration allows us to distinguish anomalies due to small metallic bodies near the seabed from the induced noise due to the anthropic contribution and geomagnetic field variations. A direct visual inspection confirmed that the peculiarities highlighted in the gradient anomaly map were due to abandoned metallic objects found on the seabed.

  5. Hand Depth Image Denoising and Superresolution via Noise-Aware Dictionaries

    Directory of Open Access Journals (Sweden)

    Huayang Li

    2016-01-01

    Full Text Available This paper proposes a two-stage method for hand depth image denoising and superresolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP)-based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data using neighborhood pixels on which both depth difference and RGB similarity restrictions are imposed. The dictionary learning phase uses NAOMP to train dictionaries which separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual reduction step which effectively weakens the noise term within the residual during its decomposition in terms of atoms. Experimental results demonstrate that the bilateral filtering phase and the NAOMP-based dictionary learning phase jointly denoise both virtual and real depth images effectively.

  6. A neural network image reconstruction technique for electrical impedance tomography

    International Nuclear Information System (INIS)

    Adler, A.; Guardo, R.

    1994-01-01

    Reconstruction of images in electrical impedance tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. This paper presents a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where the measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction.
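
    A minimal sketch of learning a regularized linear inverse from simulated forward-model pairs, assuming X holds measurement vectors (with training-SNR noise added) and V the corresponding conductivity images from finite element simulations; this single linear map mirrors the linear approximation described above, with lam controlling the noise/resolution compromise.

```python
import numpy as np

def train_linear_inverse(X, V, lam=1e-3):
    """X: (n_meas, n_samples) simulated measurements,
    V: (n_pixels, n_samples) target conductivity images.
    Returns W minimizing ||W X - V||^2 + lam ||W||^2 (ridge regression)."""
    G = X @ X.T + lam * np.eye(X.shape[0])
    return V @ X.T @ np.linalg.inv(G)

# Reconstruction at test time: sigma_hat = W @ measurements
```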

  7. Hand Vein Images Enhancement Based on Local Gray-level Information Histogram

    Directory of Open Access Journals (Sweden)

    Jun Wang

    2015-06-01

    Full Text Available Based on histogram equalization theory, this paper presents a novel histogram concept to realize contrast enhancement of hand vein images while avoiding the loss of topological vein structure or the introduction of fake vein information. Firstly, we propose the concept of a gray-level information histogram, the fundamental characteristic of which is that the amplitudes of the components objectively reflect the contribution of the gray levels to the representation of image information. Then, we propose a histogram equalization method that is composed of an automatic histogram separation module and an intensity transformation module. The histogram separation module is a combination of the proposed prompt multiple threshold procedure and an optimum peak signal-to-noise ratio (PSNR) calculation to separate the histogram into small-scale detail; the intensity transformation module enhances the vein images while preserving vein topological structure and gray information for each generated sub-histogram. Experimental results show that the proposed method can achieve extremely good contrast enhancement effect.
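
    For reference, a minimal sketch of plain global histogram equalization on an 8-bit image (the baseline that the per-sub-histogram transformation above refines); numpy only.

```python
import numpy as np

def equalize(img):
    """Map gray levels through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)            # look-up-table remap
```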

  8. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low level light (LLL) images contain rich information on environment details but are easily affected by the weather. In the case of smoke, rain, cloud or fog, much target information will be lost. Infrared imaging, which relies on the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to capture target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can compensate for the deficiencies of each sensor and exploit the advantages of each. First, we present the hardware design of the fusion circuit. Then, through calculation of the recognition probability of the target (one person) and the background image (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, and the person detection probability of the infrared image is clearly higher than that of the LLL image. The detection probability of the fusion image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  9. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change. Analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain standard perceptual and diagnostic image quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Watermark lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for watermark lossless compression in ultrasound medical image watermarking. Tabulated results show the watermark bit reduction and image watermarking with effective tamper detection and lossless recovery.
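
    A minimal sketch of the LZW encoding step applied to a watermark byte string; the dictionary handling follows the textbook algorithm, not the paper's specific implementation.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Return the LZW code sequence for `data` (dictionary seeded with all bytes)."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # extend the current phrase
        else:
            codes.append(dictionary[w])  # emit code for the longest known phrase
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes
```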

  10. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations

    Science.gov (United States)

    Bovy, Jo; Hogg, David W.; Roweis, Sam T.

    2011-06-01

    We generalize the well-known mixtures of Gaussians approach to density estimation and the accompanying Expectation-Maximization technique for finding the maximum likelihood parameters of the mixture to the case where each data point carries an individual d-dimensional uncertainty covariance and has unique missing data properties. This algorithm reconstructs the error-deconvolved or "underlying" distribution function common to all samples, even when the individual data points are samples from different distributions, obtained by convolving the underlying distribution with the heteroskedastic uncertainty distribution of the data point and projecting out the missing data directions. We show how this basic algorithm can be extended with conjugate priors on all of the model parameters and a "split-and-merge" procedure designed to avoid local maxima of the likelihood. We demonstrate the full method by applying it to the problem of inferring the three-dimensional velocity distribution of stars near the Sun from noisy two-dimensional, transverse velocity measurements from the Hipparcos satellite.

  11. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early. Conclusion The

  12. Effects of pedagogical ideology on the perceived loudness and noise levels in preschools.

    Science.gov (United States)

    Jonsdottir, Valdis; Rantala, Leena M; Oskarsson, Gudmundur Kr; Sala, Eeva

    2015-01-01

    High activity noise levels that result in detrimental effects on speech communication have been measured in preschools. To find out if different pedagogical ideologies affect the perceived loudness and levels of noise, a questionnaire study inquiring about the experience of loudness and voice symptoms was carried out in Iceland in eight private preschools, called "Hjalli model", and in six public preschools. Noise levels were also measured in the preschools. Background variables (stress level, age, length of working career, education, smoking, and number of children per teacher) were also analyzed in order to determine how much they contributed toward voice symptoms and the experience of noisiness. Results indicate that pedagogical ideology is a significant factor for predicting noise and its consequences. Teachers in the preschool with tighter pedagogical control of discipline (the "Hjalli model") experienced lower activity noise loudness than teachers in the preschool with a more relaxed control of behavior (public preschool). Lower noise levels were also measured in the "Hjalli model" preschool and fewer "Hjalli model" teachers reported voice symptoms. Public preschool teachers experienced more stress than "Hjalli model" teachers and the stress level was, indeed, the background variable that best explained the voice symptoms and the teacher's perception of a noisy environment. Discipline, structure, and organization in the type of activity predicted the activity noise level better than the number of children in the group. Results indicate that pedagogical ideology is a significant factor for predicting self-reported noise and its consequences.

  13. Identifying FRBR Work-Level Data in MARC Bibliographic Records for Manifestations of Moving Images

    Directory of Open Access Journals (Sweden)

    Lynne Bisko

    2008-12-01

    Full Text Available The library metadata community is dealing with the challenge of implementing the conceptual model, Functional Requirements for Bibliographic Records (FRBR. In response, the Online Audiovisual Catalogers (OLAC created a task force to study the issues related to creating and using FRBR-based work-level records for moving images. This article presents one part of the task force's work: it looks at the feasibility of creating provisional FRBR work-level records for moving images by extracting data from existing manifestation-level bibliographic records. Using a sample of 941 MARC records, a subgroup of the task force conducted a pilot project to look at five characteristics of moving image works. Here they discuss their methodology; analysis; selected results for two elements, original date (year and director name; and conclude with some suggested changes to MARC coding and current cataloging policy.

  14. Patient dose with quality image under diagnostic reference levels

    International Nuclear Information System (INIS)

    Akula, Suresh Kumar; Singh, Gurvinder; Chougule, Arun

    2016-01-01

    There is a need to set diagnostic reference levels (DRLs) for all diagnostic procedures at the local level, for comparison against national values. The review of DRLs should compare local averages with national or reference averages, and a note should be made of any significant variances from these averages and the justification for them. The aim is to survey and assess radiation doses to patients and to reduce redundancy in patient imaging in order to maintain DRLs

  15. Investigations of image fusion

    Science.gov (United States)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, are used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms. So we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D
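
    A minimal sketch of multiscale image fusion with a max-absolute selection rule on wavelet detail coefficients, assuming two co-registered grayscale images of equal size (large enough for the chosen decomposition level) and the PyWavelets package; the region- and feature-guided weighting described above is not reproduced here.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered images: average the approximation band, keep the
    detail coefficient with the larger magnitude at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [0.5 * (ca[0] + cb[0])]                 # approximation band
    for da, db in zip(ca[1:], cb[1:]):              # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```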

  16. Scaling law of diffusivity generated by a noisy telegraph signal with fractal intermittency

    International Nuclear Information System (INIS)

    Paradisi, Paolo; Allegrini, Paolo

    2015-01-01

    In many complex systems the non-linear cooperative dynamics determine the emergence of self-organized, metastable, structures that are associated with a birth–death process of cooperation. This is found to be described by a renewal point process, i.e., a sequence of crucial birth–death events corresponding to transitions among states that are faster than the typical long-life time of the metastable states. Metastable states are highly correlated, but the occurrence of crucial events is typically associated with a fast memory drop, which is the reason for the renewal condition. Consequently, these complex systems display a power-law decay and, thus, a long-range or scale-free behavior, in both time correlations and distribution of inter-event times, i.e., fractal intermittency. The emergence of fractal intermittency is then a signature of complexity. However, the scaling features of complex systems are, in general, affected by the presence of added white or short-term noise. This has been found also for fractal intermittency. In this work, after a brief review on metastability and noise in complex systems, we discuss the emerging paradigm of Temporal Complexity. Then, we propose a model of noisy fractal intermittency, where noise is interpreted as a renewal Poisson process with event rate r_p. We show that the presence of Poisson noise causes the emergence of a normal diffusion scaling in the long-time range of diffusion generated by a telegraph signal driven by noisy fractal intermittency. We analytically derive the scaling law of the long-time normal diffusivity coefficient. We find the surprising result that this long-time normal diffusivity depends not only on the Poisson event rate, but also on the parameters of the complex component of the signal: the power exponent μ of the inter-event time distribution, denoted as complexity index, and the time scale T needed to reach the asymptotic power-law behavior marking the emergence of complexity. In particular
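
    A minimal simulation sketch of the setup described above, assuming Manneville-like power-law waiting times (with complexity index mu > 2, so the mean waiting time is finite) for the fractal renewal component and exponential waiting times for the Poisson noise; it estimates the long-time diffusivity from the mean squared displacement of walkers driven by the resulting noisy telegraph signal. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def powerlaw_wait(n, mu, T):
    """Waiting times with psi(t) proportional to (T + t)**(-mu)."""
    return T * (rng.random(n) ** (-1.0 / (mu - 1.0)) - 1.0)

def estimate_diffusivity(mu=2.2, T=1.0, r_p=0.1, t_max=5e3, n_walkers=500):
    """Long-time D = <x^2>/(2 t) for x(t) driven by a +/-1 telegraph signal
    that flips at both fractal-renewal and Poisson event times."""
    x2 = 0.0
    for _ in range(n_walkers):
        # 20000 events with mean wait T/(mu-2) comfortably cover t_max here
        tc = np.cumsum(powerlaw_wait(20000, mu, T))
        tp = np.cumsum(rng.exponential(1.0 / r_p, int(3 * r_p * t_max) + 10))
        flips = np.sort(np.concatenate([tc[tc < t_max], tp[tp < t_max]]))
        bounds = np.concatenate(([0.0], flips, [t_max]))
        signs = rng.choice([-1.0, 1.0]) * (-1.0) ** np.arange(len(bounds) - 1)
        x2 += np.sum(signs * np.diff(bounds)) ** 2   # final position squared
    return x2 / n_walkers / (2.0 * t_max)
```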

  17. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    Science.gov (United States)

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [11C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
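
    A minimal sketch of the anato-functional joint entropy between an MR image and a parametric PET image, computed from their joint intensity histogram; the bin count is illustrative.

```python
import numpy as np

def joint_entropy(img_a, img_b, bins=64):
    """H(A,B) = -sum p(a,b) log p(a,b) over the joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) treated as 0
    return -np.sum(p * np.log(p))
```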

  18. Multimodal imaging of the human knee down to the cellular level

    Science.gov (United States)

    Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.

    2017-06-01

    Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations on the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aim of the present study is, firstly, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, secondly, multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues on the cellular level. After the visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.

  19. Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging

    KAUST Repository

    Desmal, Abdulla

    2016-03-01

    Electromagnetic imaging is the problem of determining material properties from scattered fields measured away from the domain under investigation. Solving this inverse problem is a challenging task because (i) it is ill-posed, due to the presence of (smoothing) integral operators used in the representation of scattered fields in terms of material properties, and because scattered fields are obtained at a finite set of points through noisy measurements; and (ii) it is nonlinear, simply due to the fact that scattered fields are nonlinear functions of the material properties. The work described in this thesis tackles the ill-posedness of the electromagnetic imaging problem using sparsity-based regularization techniques, which assume that the scatterer(s) occupy only a small fraction of the investigation domain. More specifically, four novel imaging methods are formulated and implemented. (i) The sparsity-regularized Born iterative method iteratively linearizes the nonlinear inverse scattering problem, and each linear problem is regularized using an improved iterative shrinkage algorithm enforcing the sparsity constraint. (ii) The sparsity-regularized nonlinear inexact Newton method calls for the solution of a linear system involving the Frechet derivative matrix of the forward scattering operator at every iteration step. For faster convergence, the solution of this matrix system is regularized under the sparsity constraint and preconditioned by leveling the matrix singular values. (iii) The sparsity-regularized nonlinear Tikhonov method directly solves the nonlinear minimization problem using Landweber iterations, where a thresholding function is applied at every iteration step to enforce the sparsity constraint. (iv) This last scheme is accelerated using a projected steepest descent method when it is applied to three-dimensional investigation domains. Projection replaces the thresholding operation and enforces the sparsity constraint. Numerical experiments, which are carried out using
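
    A minimal sketch of thresholded Landweber iterations (the core of schemes (iii) and (iv) above) for a linearized problem y = Ax with a sparse x, assuming A is given as a matrix; the step size and threshold are illustrative.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, tau=0.01, n_iters=200):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + tau ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * tau)
    return x
```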

  20. Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.

    Science.gov (United States)

    Harikumar, G; Bresler, Y

    1999-01-01

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory- and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  1. Fuzzy C-means classification for corrosion evolution of steel images

    Science.gov (United States)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    An unavoidable problem of metal structures is their exposure to rust degradation during their operational life. Thus, the surfaces need to be assessed in order to avoid potential catastrophes. There is considerable interest in the use of patch repair strategies which minimize the project costs. However, to operate such strategies with confidence in the long useful life of the repair, it is essential that the condition of the existing coatings and the steel substrate can be accurately quantified and classified. This paper describes the application of fuzzy set theory for steel surface classification according to the steel rust time. We propose a semi-automatic technique to obtain image clustering using the Fuzzy C-means (FCM) algorithm, and we analyze two kinds of data to study the classification performance. Firstly, we investigate the use of raw image pixels, without any pre-processing, and of neighborhood pixels. Secondly, we apply Gaussian noise to the images with different standard deviations to study the FCM method's tolerance to Gaussian noise. The noisy images simulate the possible perturbations of the images due to the weather or rust deposits on the steel surfaces during typical on-site acquisition procedures.
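
    A minimal numpy sketch of Fuzzy C-means on pixel gray levels (fuzzifier m = 2), assuming intensities flattened to a 1D feature vector; the cluster count and iteration budget are illustrative.

```python
import numpy as np

def fcm_gray(pixels, n_clusters=3, m=2.0, n_iters=100, eps=1e-9):
    """Return cluster centers and the fuzzy membership matrix U (C x N)."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, pixels.size))
    u /= u.sum(axis=0)                              # memberships sum to 1 per pixel
    for _ in range(n_iters):
        um = u ** m
        centers = um @ pixels / um.sum(axis=1)      # fuzzily weighted means
        d = np.abs(centers[:, None] - pixels[None, :]) + eps
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)                          # normalized inverse-distance update
    return centers, u
```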

  2. A Symmetric Chaos-Based Image Cipher with an Improved Bit-Level Permutation Strategy

    Directory of Open Access Journals (Sweden)

    Chong Fu

    2014-02-01

    Full Text Available Very recently, several chaos-based image ciphers using a bit-level permutation have been suggested and have shown promising results. Due to the diffusion effect introduced in the permutation stage, the workload of the time-consuming diffusion stage is reduced, and hence the performance of the cryptosystem is improved. In this paper, a symmetric chaos-based image cipher with a 3D cat map-based spatial bit-level permutation strategy is proposed. Compared with recently proposed bit-level permutation methods, the diffusion effect of the new method is superior, as the bits are shuffled among different bit-planes rather than within the same bit-plane. Moreover, the diffusion key stream extracted from the hyperchaotic system is related to both the secret key and the plain image, which enhances the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important tests like key space analysis, key sensitivity analysis, plaintext sensitivity analysis, and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.
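
    A minimal sketch of a 2D Arnold cat map permutation on pixel coordinates, a simplified stand-in for the 3D bit-level cat map described above; the iteration count stands in for permutation rounds derived from the key.

```python
import numpy as np

def cat_map_permute(img, rounds=5):
    """Shuffle an N x N image with the Arnold cat map (x, y) -> (x + y, x + 2y) mod N.
    The map matrix [[1, 1], [1, 2]] has determinant 1, so it is a bijection
    and the permutation is invertible."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(rounds):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out
```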

  3. Determining the effect of worker exposure conditions on the risk of hearing loss in noisy industrial workroom using Cox proportional hazard model.

    Science.gov (United States)

    Aliabadi, Mohsen; Fereidan, Mohammad; Farhadian, Maryam; Tajik, Leila

    2015-01-01

    In noisy workrooms, exposure conditions such as noise level, exposure duration, and the use of hearing protection devices are contributory factors to hearing loss. The aim of this study was to determine the effect of exposure conditions on the risk of hearing loss using the Cox model. Seventy workers employed in a press workshop were selected, and their hearing thresholds were studied using an audiometric test. Their noise exposure histories were also analyzed. The results of the Cox model showed that job type, smoking, and the use of protection devices were significant factors for hearing loss. The relative risk of hearing loss in smokers was 1.1 times that of non-smokers. The relative risk of hearing loss in workers with intermittent use of protection devices was 3.3 times that of those who used these devices continuously. The Cox model could analyze the effect of exposure conditions on hearing loss and provides useful information for managers in order to improve hearing conservation programs.
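
    A minimal sketch of fitting a Cox proportional hazards model to exposure data of this kind with the lifelines package; the toy DataFrame columns are hypothetical stand-ins for the study's variables, not its actual data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical worker records: follow-up years, hearing-loss event indicator,
# and exposure covariates (noise level in dB(A), smoking, intermittent HPD use).
df = pd.DataFrame({
    "years":            [8, 12, 5, 20, 15],
    "hearing_loss":     [1, 0, 0, 1, 1],
    "noise_dba":        [92, 85, 88, 95, 90],
    "smoker":           [1, 0, 0, 1, 0],
    "hpd_intermittent": [1, 0, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="hearing_loss")
cph.print_summary()   # hazard ratios per covariate
```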

  4. Depth image enhancement using perceptual texture priors

    Science.gov (United States)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature to classify textures. Upon the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image preserving the surface details. We expect our work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.

  5. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition (OCR) system for the Brahmi script. The image preprocessing technique covered in this paper is thresholding. We also analyze the results obtained by the pixel-level processing algorithms.
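
    A minimal sketch of Otsu's global threshold selection, one standard choice for the thresholding step described above, assuming an 8-bit grayscale scan; numpy only.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level maximizing the between-class variance of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))          # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

# binarized = img > otsu_threshold(img)
```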

  6. OSA Imaging and Applied Optics Congress Support

    Science.gov (United States)

    2017-02-16

    Digest (online) (Optical Society of America, 2016), paper JT3A.41. V. Katkovnik, "Sparse phase retrieval from noisy data: variational formulation and algorithms." A. Wojdyla, G. Gunjala, J. Dong, M. Benk, A. Neureuther, K. Goldberg, and L. Waller, "Off-axis Aberration Estimation in an EUV Microscope Using..." 2016 (Optical Society of America, 2016), paper JT3A.41.

  7. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    International Nuclear Information System (INIS)

    Wang, Cheng; Chan, Qing N.; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H.; Medwell, Paul R.

    2016-01-01

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and measurement of the particulates, however, can be complex when the TEM images are of low contrast, noisy, and have a non-uniform background signal level. The image processing can also be challenging and time consuming when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that must be set to facilitate the automated process are identified, and the sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies associated with the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases are approximately 3, 1, and 10%, respectively, when compared with the manual measurements.

  8. Automated determination of size and morphology information from soot transmission electron microscope (TEM)-generated images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng; Chan, Qing N., E-mail: qing.chan@unsw.edu.au; Zhang, Renlin; Kook, Sanghoon; Hawkes, Evatt R.; Yeoh, Guan H. [UNSW, School of Mechanical and Manufacturing Engineering (Australia); Medwell, Paul R. [The University of Adelaide, Centre for Energy Technology (Australia)

    2016-05-15

    The thermophoretic sampling of particulates from hot media, coupled with transmission electron microscope (TEM) imaging, is a combined approach that is widely used to derive morphological information. The identification and measurement of the particulates, however, can be complex when the TEM images are of low contrast, are noisy, and have a non-uniform background signal level. The image processing can also be challenging and time consuming when the samples collected have large variability in shape and size, or have some degree of overlapping. In this work, a three-stage image processing sequence is presented to facilitate time-efficient automated identification and measurement of particulates from the TEM grids. The proposed processing sequence is first applied to soot samples that were thermophoretically sampled from a laminar non-premixed ethylene-air flame. The parameter values that need to be set to facilitate the automated process are identified, and the sensitivity of the results to these parameters is assessed. The same analysis process is also applied to soot samples that were acquired from an externally irradiated laminar non-premixed ethylene-air flame, which have different geometrical characteristics, to assess the morphological dependence of the proposed image processing sequence. Using the optimized parameter values, statistical assessments of the automated results reveal that the largest discrepancies in the estimated values of primary particle diameter, fractal dimension, and prefactor values of the aggregates for the tested cases are approximately 3, 1, and 10%, respectively, when compared with the manual measurements.

  9. Comparative study between the radiopacity levels of high viscosity and of flowable composite resins, using digital imaging.

    Science.gov (United States)

    Arita, Emiko S; Silveira, Gilson P; Cortes, Arthur R; Brucoli, Henrique C

    2012-01-01

    The development of countless types and brands of high viscosity and flowable composite resins, with different physical and chemical properties applicable to their broad use in dental clinics, calls for further studies regarding their radiopacity levels. The aim of this study was to evaluate the radiopacity levels of high viscosity and flowable composite resins using digital imaging. Ninety-six composite resin discs, 5 mm in diameter and 3 mm thick, were radiographed and analyzed. The image acquisition system used was the Digora® Phosphor Storage System and the images were analyzed with the Digora software for Windows. The exposure conditions were: 70 kVp, 8 mA, and 0.2 s. The focal distance was 40 cm. The image densities were obtained from the pixel values of the materials in the digital image. Most of the high viscosity composite resins presented higher radiopacity levels than the flowable composite resins, with statistically significant differences between the brands and groups analyzed (P < .05). Among the high viscosity composite resins, Tetric®Ceram presented the highest radiopacity levels and Glacier® presented the lowest. Among the flowable composite resins, Tetric®Flow presented the highest radiopacity levels and Wave® presented the lowest.

  10. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time consuming process that is prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty-five thermally ablated lesions created in porcine livers. The estimated areas are compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and inter- and intra-observer variability. The processing time per image is significantly reduced.
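    As an illustration of the seed-plus-level-set idea, the sketch below uses scikit-image's morphological geodesic active contour as a stand-in for the fast marching and level set pair described in the record; the image path, seed coordinates, and iteration counts are all assumptions:

```python
import numpy as np
from skimage import img_as_float, io
from skimage.segmentation import (disk_level_set, inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# Hypothetical single-frame sonoelastogram, loaded as float grayscale.
sono = img_as_float(io.imread("sonoelastogram.png", as_gray=True))

# Edge-stopping map: values drop near strong intensity edges, which is
# where the evolving contour should lock on.
gimage = inverse_gaussian_gradient(sono)

# The user-planted seed at the perceived lesion center becomes a small disk.
seed = (120, 140)                                 # (row, col), hypothetical
init = disk_level_set(sono.shape, center=seed, radius=5)

# Level-set evolution inflates the disk (balloon force) and attaches the
# contour to image edges while the smoothing steps keep it regular.
lesion = morphological_geodesic_active_contour(gimage, 200, init,
                                               smoothing=2, balloon=1)
area_px = int(lesion.sum())                       # lesion area in pixels
```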

  11. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    Science.gov (United States)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking (crowdsourcing) has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produces overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations flagged as viable archaeological sites.

  12. Chaos synchronization in noisy environment using nonlinear filtering and sliding mode control

    Energy Technology Data Exchange (ETDEWEB)

    Behzad, Mehdi [Center of Excellence in Design, Robotics, and Automation (CEDRA), Department of Mechanical Engineering, Sharif University of Technology, Postal Code 11365-9567, Azadi Avenue, Tehran (Iran, Islamic Republic of)], E-mail: m_behzad@sharif.edu; Salarieh, Hassan [Center of Excellence in Design, Robotics, and Automation (CEDRA), Department of Mechanical Engineering, Sharif University of Technology, Postal Code 11365-9567, Azadi Avenue, Tehran (Iran, Islamic Republic of)], E-mail: salarieh@mech.sharif.edu; Alasty, Aria [Center of Excellence in Design, Robotics, and Automation (CEDRA), Department of Mechanical Engineering, Sharif University of Technology, Postal Code 11365-9567, Azadi Avenue, Tehran (Iran, Islamic Republic of)], E-mail: aalasti@sharif.edu

    2008-06-15

    This paper presents an algorithm for synchronizing two different chaotic systems, using a combination of the extended Kalman filter and the sliding mode controller. It is assumed that the drive chaotic system has a random excitation with a stochastically chaotic behavior. Two different cases are considered in this study. First, it is assumed that all state variables of the drive system are available, i.e. complete state measurement, and a sliding mode controller is designed for synchronization. For the second case, it is assumed that the output of the drive system does not contain all the state variables of the drive system, and is also affected by some random noise. By combining the extended Kalman filter and sliding mode control, a synchronizing control law is proposed. As a case study, the presented algorithm is applied to the Lur'e-Genesio chaotic systems as the drive-response dynamic systems. Simulation results show the good performance of the algorithm in synchronizing the chaotic systems in the presence of a noisy environment.
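    For the full-state-measurement case, a toy simulation conveys the structure of such a controller. The sketch below synchronizes two Lorenz systems (a stand-in for the Lur'e-Genesio dynamics of the paper); the gains, noise level, and the simple sliding surface s = e are all assumptions:

```python
import numpy as np

def f(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz vector field, used here in place of the Lur'e-Genesio system.
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps = 1e-3, 20_000
k = 10.0                               # switching gain above the noise bound
rng = np.random.default_rng(0)
xd = np.array([1.0, 1.0, 1.0])         # drive state (randomly excited)
xr = np.array([-5.0, 5.0, 20.0])       # response state

for _ in range(steps):
    noise = rng.normal(0.0, 2.0, 3)    # stochastic excitation of the drive
    e = xr - xd                        # sliding surface s = e
    # Equivalent control cancels the nominal dynamics mismatch; the
    # discontinuous term rejects the bounded random excitation.
    u = f(xd) - f(xr) - k * np.sign(e)
    xd = xd + dt * (f(xd) + noise)
    xr = xr + dt * (f(xr) + u)

print("final synchronization error:", np.linalg.norm(xr - xd))
```

    In the paper's second case the full state is not measured, so an extended Kalman filter would first reconstruct the drive state from the noisy output before a switching law of this kind is applied.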

  13. Chaos synchronization in noisy environment using nonlinear filtering and sliding mode control

    International Nuclear Information System (INIS)

    Behzad, Mehdi; Salarieh, Hassan; Alasty, Aria

    2008-01-01

    This paper presents an algorithm for synchronizing two different chaotic systems, using a combination of the extended Kalman filter and the sliding mode controller. It is assumed that the drive chaotic system has a random excitation with a stochastically chaotic behavior. Two different cases are considered in this study. First, it is assumed that all state variables of the drive system are available, i.e. complete state measurement, and a sliding mode controller is designed for synchronization. For the second case, it is assumed that the output of the drive system does not contain all the state variables of the drive system, and is also affected by some random noise. By combining the extended Kalman filter and sliding mode control, a synchronizing control law is proposed. As a case study, the presented algorithm is applied to the Lur'e-Genesio chaotic systems as the drive-response dynamic systems. Simulation results show the good performance of the algorithm in synchronizing the chaotic systems in the presence of a noisy environment.

  14. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  15. Effects of pedagogical ideology on the perceived loudness and noise levels in preschools

    Science.gov (United States)

    Jonsdottir, Valdis; Rantala, Leena M.; Oskarsson, Gudmundur Kr.; Sala, Eeva

    2015-01-01

    High activity noise levels that result in detrimental effects on speech communication have been measured in preschools. To find out if different pedagogical ideologies affect the perceived loudness and levels of noise, a questionnaire study inquiring about the experience of loudness and voice symptoms was carried out in Iceland in eight private preschools, called “Hjalli model”, and in six public preschools. Noise levels were also measured in the preschools. Background variables (stress level, age, length of working career, education, smoking, and number of children per teacher) were also analyzed in order to determine how much they contributed toward voice symptoms and the experience of noisiness. Results indicate that pedagogical ideology is a significant factor for predicting noise and its consequences. Teachers in the preschool with tighter pedagogical control of discipline (the “Hjalli model”) experienced lower activity noise loudness than teachers in the preschool with a more relaxed control of behavior (public preschool). Lower noise levels were also measured in the “Hjalli model” preschool and fewer “Hjalli model” teachers reported voice symptoms. Public preschool teachers experienced more stress than “Hjalli model” teachers and the stress level was, indeed, the background variable that best explained the voice symptoms and the teacher's perception of a noisy environment. Discipline, structure, and organization in the type of activity predicted the activity noise level better than the number of children in the group. Results indicate that pedagogical ideology is a significant factor for predicting self-reported noise and its consequences. PMID:26356370

  16. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    Science.gov (United States)

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis-generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  17. Advances in low-level color image processing

    CERN Document Server

    Smolka, Bogdan

    2014-01-01

    Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel  ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.

  18. WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT

    International Nuclear Information System (INIS)

    Chen, Y; Wu, S; Qi, H; Xu, Y; Zhou, L

    2016-01-01

    Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) acquire cone-beam CT (CBCT) projections; 2) convert the 3D CBCT projections to fan-beam projections by extracting the central plane projection; 3) convert the fan-beam projections to parallel-beam projections utilizing a sinogram-based or detail-based rebinning algorithm; 4) obtain the HLCC profile by integrating the parallel-beam projection per view and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections containing only one type of artifact (i.e. geometry, scatter, beam hardening, lag and noise artifact) were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifact based on HLCC theory. According to our simulation results, the severity of the different artifact types is found to be in the following order: scatter
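    The record does not define the wave percentage exactly; assuming it is the peak-to-peak variation of the per-view HLCC profile relative to its mean, step 4 reduces to a few lines of NumPy (the sinogram array here is a random stand-in for a rebinned parallel-beam projection set):

```python
import numpy as np

def hlcc_metrics(parallel_sinogram):
    """parallel_sinogram: 2-D array of shape (n_views, n_detector_bins).

    The zeroth-order Helgason-Ludwig condition says the line-integral sum
    over each parallel-beam view is constant; deviations flag artifacts."""
    profile = parallel_sinogram.sum(axis=1)   # HLCC profile, one value/view
    mean = profile.mean()
    wave_percentage = (profile.max() - profile.min()) / mean * 100.0
    return profile, wave_percentage, profile.var()

sino = np.random.rand(360, 512)               # stand-in rebinned projections
_, wave_pct, var = hlcc_metrics(sino)
print(f"wave percentage: {wave_pct:.2f}%, variance: {var:.4g}")
```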

  19. WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y; Wu, S; Qi, H; Xu, Y; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) acquire cone-beam CT (CBCT) projections; 2) convert the 3D CBCT projections to fan-beam projections by extracting the central plane projection; 3) convert the fan-beam projections to parallel-beam projections utilizing a sinogram-based or detail-based rebinning algorithm; 4) obtain the HLCC profile by integrating the parallel-beam projection per view and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections containing only one type of artifact (i.e. geometry, scatter, beam hardening, lag and noise artifact) were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifact based on HLCC theory. According to our simulation results, the severity of the different artifact types is found to be in the following order: scatter

  20. Fractional averaging of repetitive waveforms induced by self-imaging effects

    Science.gov (United States)

    Romero Cortés, Luis; Maram, Reza; Azaña, José

    2015-10-01

    We report the theoretical prediction and experimental observation of an averaging of stochastic events whose result is equivalent to taking the arithmetic mean (or sum) over a rational number of realizations of the process under test, not limited to the integer record of realizations that discrete statistical theory dictates. This concept is enabled by a passive amplification process induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification and the associated averaging process by the fractional rate-division factor.

  1. Estimating 3D Object Parameters from 2D Grey-Level Images

    NARCIS (Netherlands)

    Houkes, Z.

    2000-01-01

    This thesis describes a general framework for parameter estimation, which is suitable for computer vision applications. The approach described combines 3D modelling, animation and estimation tools to determine parameters of objects in a scene from 2D grey-level images. The animation tool predicts

  2. A 256×256 low-light-level CMOS imaging sensor with digital CDS

    Science.gov (United States)

    Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin

    2016-10-01

    In order to achieve high sensitivity in low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. As the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove the noise in a low-light-level CIS. A digital CDS is therefore adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS is implemented in a 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise is 24 LSB (RMS) with digital CDS under dark conditions, which represents a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination levels down to 0.1 lux.
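    Off-chip digital CDS is conceptually just a frame subtraction; a minimal NumPy sketch (with made-up ADC numbers) shows the operation performed on the digitized reset and pixel samples:

```python
import numpy as np

def digital_cds(reset_frame, signal_frame):
    """Subtract the digitized reset level from the digitized pixel level;
    this cancels the kTC reset noise and part of the column fixed-pattern
    noise, at the cost of reading out two frames."""
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)

# Hypothetical 256x256 raw ADC frames: reset samples plus signal charge.
reset = np.random.randint(480, 520, (256, 256), dtype=np.uint16)
pixel = reset + np.random.randint(0, 200, (256, 256), dtype=np.uint16)
image = digital_cds(reset, pixel)
```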

  3. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    International Nuclear Information System (INIS)

    Rybovic, Michala; Halkett, Georgia K.; Banati, Richard B.; Cox, Jennifer

    2008-01-01

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour

  4. Robustness of Input features from Noisy Silhouettes in Human Pose Estimation

    DEFF Research Database (Denmark)

    Gong, Wenjuan; Fihl, Preben; Gonzàlez, Jordi

    2014-01-01

    In this paper, we explore this problem. First, we compare the performance of several image features widely used for human pose estimation and select the one that performs best. Second, the iterative closest point algorithm is introduced for a new quantitative...... of silhouette samples of different noise levels and compare with the selected feature on a public dataset: the HumanEva dataset.

  5. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    Science.gov (United States)

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in reducing the mortality rate. However, in some cases, screening for masses is difficult for radiologists due to variations in contrast, fuzzy edges, and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation for detection using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images with 90.9% and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) from two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The results show that the proposed technique improves diagnosis for early breast cancer detection.

  6. Prediction of myelopathic level in cervical spondylotic myelopathy using diffusion tensor imaging.

    Science.gov (United States)

    Wang, Shu-Qiang; Li, Xiang; Cui, Jiao-Long; Li, Han-Xiong; Luk, Keith D K; Hu, Yong

    2015-06-01

    To investigate the use of a newly designed machine learning-based classifier in the automatic identification of myelopathic levels in cervical spondylotic myelopathy (CSM). In all, 58 normal volunteers and 16 subjects with CSM were recruited for diffusion tensor imaging (DTI) acquisition. The eigenvalues were extracted as the selected features from DTI images. Three classifiers, naive Bayesian, support vector machine, and support tensor machine, and fractional anisotropy (FA) were employed to identify myelopathic levels. The results were compared with clinical level diagnosis results and accuracy, sensitivity, and specificity were calculated to evaluate the performance of the developed classifiers. The accuracy by support tensor machine was the highest (93.62%) among the three classifiers. The support tensor machine also showed excellent capacity to identify true positives (sensitivity: 84.62%) and true negatives (specificity: 97.06%). The accuracy by FA value was the lowest (76%) in all the methods. The classifiers-based method using eigenvalues had a better performance in identifying the levels of CSM than the diagnosis using FA values. The support tensor machine was the best among three classifiers. © 2014 Wiley Periodicals, Inc.

  7. Underwater Image Enhancement by Adaptive Gray World and Differential Gray-Levels Histogram Equalization

    Directory of Open Access Journals (Sweden)

    WONG, S.-L.

    2018-05-01

    Full Text Available Most underwater images tend to be dominated by a single color cast. This paper presents a solution to remove the color cast and improve the contrast in underwater images. However, after removal of the color cast using the Gray World (GW) method, the resultant image is not visually pleasing. Hence, we propose an integrated approach using Adaptive GW (AGW) and Differential Gray-Levels Histogram Equalization (DHE) that operate in parallel. The AGW is applied to remove the color cast while DHE is used to improve the contrast of the underwater image. The chromaticity components of AGW and the intensity components of DHE are combined to form the enhanced image. The results of the proposed method are compared with three existing methods using qualitative and quantitative measures. The proposed method increased the visibility of underwater images and in most cases produces better quantitative scores when compared to the three existing methods.
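    The classical Gray World step is easy to state concretely. The sketch below scales each channel to the global gray mean and then applies plain histogram equalization to the intensity channel as a crude stand-in for the paper's AGW and DHE components (the image path is hypothetical and an RGB image is assumed):

```python
import numpy as np
from skimage import exposure, img_as_float, io

img = img_as_float(io.imread("underwater.png"))   # RGB image assumed

# Gray World: scale each channel so its mean matches the global gray mean,
# removing the dominant (typically blue-green) color cast.
means = img.reshape(-1, 3).mean(axis=0)
gw = np.clip(img * (means.mean() / means), 0.0, 1.0)

# Contrast step: plain histogram equalization of the intensity channel
# only, leaving the chromaticity from the gray-world step untouched.
intensity = gw.mean(axis=2)
eq = exposure.equalize_hist(intensity)
gain = (eq / np.maximum(intensity, 1e-6))[..., None]
enhanced = np.clip(gw * gain, 0.0, 1.0)
```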

  8. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means of this objective function carry a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved jointly. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  9. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  10. The level of detail required in a deformable phantom to accurately perform quality assurance of deformable image registration

    Science.gov (United States)

    Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil

    2016-09-01

    The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformations) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with the Velocity and MIM deformation algorithms.

  11. Natural-pose hand detection in low-resolution images

    Directory of Open Access Journals (Sweden)

    Nyan Bo Bo

    2009-07-01

    Full Text Available Robust real-time hand detection and tracking in video sequences would enable many applications in areas as diverse as human-computer interaction, robotics, security and surveillance, and sign language-based systems. In this paper, we introduce a new approach for detecting human hands that works on single, cluttered, low-resolution images. Our prototype system, which is primarily intended for security applications in which the images are noisy and low-resolution, is able to detect hands as small as 24×24 pixels in cluttered scenes. The system uses grayscale appearance information to classify image sub-windows as either containing or not containing a human hand very rapidly at the cost of a high false positive rate. To improve on the false positive rate of the main classifier without affecting its detection rate, we introduce a post-processor system that utilizes the geometric properties of skin color blobs. When we test our detector on a test image set containing 106 hands, 92 of those hands are detected (86.8% detection rate), with an average false positive rate of 1.19 false positive detections per image. The rapid detection speed, the high detection rate of 86.8%, and the low false positive rate together ensure that our system is usable as the main detector in a diverse variety of applications requiring robust hand detection and tracking in low-resolution, cluttered scenes.

  12. Phase retrieval for X-ray in-line phase contrast imaging

    International Nuclear Information System (INIS)

    Scattarella, F.; Bellotti, R.; Tangaro, S.; Gargano, G.; Giannini, C.

    2011-01-01

    A review of the phase retrieval problem in X-ray phase contrast imaging is presented. A simple theoretical framework for Fresnel diffraction imaging with X-rays is introduced. A review of the most important methods for phase retrieval in free-propagation-based X-ray imaging is given, together with a new method developed by our collaboration. The proposed algorithm, the Combined Mixed Approach (CMA), is based on a mixed transfer function and transport of intensity approach, and it requires at most an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy with which this initial estimate is known determines the convergence speed of the algorithm. The new algorithm is based on the retrieval of both the object phase and its complex conjugate. The results obtained by the algorithm on simulated data show that the reconstructed phase maps are characterized by particularly low normalized mean square errors. The algorithm was also tested on noisy experimental phase contrast data, showing good efficiency in recovering phase information and enhancing the visibility of details inside soft tissues.

  13. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  14. Discrimination of nitrogen fertilizer levels of tea plant (Camellia sinensis) based on hyperspectral imaging.

    Science.gov (United States)

    Wang, Yujie; Hu, Xin; Hou, Zhiwei; Ning, Jingming; Zhang, Zhengzhu

    2018-04-01

    Nitrogen (N) fertilizer plays an important role in tea plantation management, with significant impacts on the photosynthetic capacity, productivity and nutrition status of tea plants. The present study aimed to establish a method for the discrimination of N fertilizer levels using hyperspectral imaging technique. Spectral data were extracted from the region of interest, followed by the first derivative to reduce background noise. Five optimal wavelengths were selected by principal component analysis. Texture features were extracted from the images at optimal wavelengths by gray-level gradient co-occurrence matrix. Support vector machine (SVM) and extreme learning machine were used to build classification models based on spectral data, optimal wavelengths, texture features and data fusion, respectively. The SVM model using fused data gave the best performance with highest correct classification rate of 100% for prediction set. The overall results indicated that visible and near-infrared hyperspectral imaging combined with SVM were effective in discriminating N fertilizer levels of tea plants. © 2018 Society of Chemical Industry. © 2018 Society of Chemical Industry.

  15. Acute cervical spine injuries: prospective MR imaging assessment at a level 1 trauma center.

    Science.gov (United States)

    Katzberg, R W; Benedetti, P F; Drake, C M; Ivanovic, M; Levine, R A; Beatty, C S; Nemzek, W R; McFall, R A; Ontell, F K; Bishop, D M; Poirier, V C; Chong, B W

    1999-10-01

    To determine the weighted average sensitivity of magnetic resonance (MR) imaging in the prospective detection of acute neck injury and to compare these findings with those of a comprehensive conventional radiographic assessment. Conventional radiography and MR imaging were performed in 199 patients presenting to a level 1 trauma center with suspected cervical spine injury. Weighted sensitivities and specificities were calculated, and a weighted average across eight vertebral levels from C1 to T1 was formed. Fourteen parameters indicative of acute injury were tabulated. Fifty-eight patients had 172 acute cervical injuries. MR imaging depicted 136 (79%) acute abnormalities and conventional radiography depicted 39 (23%). For assessment of acute fractures, MR images (weighted average sensitivity, 43%; CI: 21%, 66%) were comparable to conventional radiographs (weighted average sensitivity, 48%; CI: 30%, 65%). MR imaging was superior to conventional radiography in the evaluation of pre- or paravertebral hemorrhage or edema, anterior or posterior longitudinal ligament injury, traumatic disk herniation, cord edema, and cord compression. Cord injuries were associated with cervical spine spondylosis (P < .05), acute fracture (P < .001), and canal stenosis (P < .001). MR imaging is more accurate than radiography in the detection of a wide spectrum of neck injuries, and further study is warranted of its potential effect on medical decision making, clinical outcome, and cost-effectiveness.

  16. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    Science.gov (United States)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consists of only six 3.5 cm CsI(Tl) scintillator cubes, each of which is read out by a super-bialkali photomultiplier tube (PMT). Our camera enables the visualization of the positions of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was achieved using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can easily be used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.

  17. Mining Data of Noisy Signal Patterns in Recognition of Gasoline Bio-Based Additives using Electronic Nose

    Directory of Open Access Journals (Sweden)

    Osowski Stanisław

    2017-03-01

    Full Text Available The paper analyses the distorted data of an electronic nose in recognizing gasoline bio-based additives. Different data mining tools, such as data clustering, principal component analysis, wavelet transformation, the support vector machine, and random forests of decision trees, are applied. Special stress is placed on the robustness of signal processing systems to the noise distorting the registered sensor signals. A denoising procedure based on the discrete wavelet transformation has been proposed. This procedure significantly reduces the recognition error rate. The numerical results of experiments devoted to the recognition of different blends of gasoline have shown the superiority of the support vector machine in a noisy measurement environment.
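    A wavelet denoising step of the kind described is commonly implemented as soft-thresholding of the detail coefficients. A sketch with PyWavelets follows (the wavelet family, decomposition level, and universal threshold are conventional choices, not taken from the paper):

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a discrete wavelet
    decomposition; the noise scale is estimated robustly from the finest
    scale via the median absolute deviation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

# Usage on a synthetic stand-in for a noisy sensor response:
t = np.linspace(0.0, 1.0, 1024)
clean = np.exp(-5.0 * t) * np.sin(20.0 * t)
noisy = clean + 0.05 * np.random.randn(t.size)
smoothed = wavelet_denoise(noisy)
```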

  18. Automatic segmentation of fluorescence lifetime microscopy images of cells using multiresolution community detection--a first study.

    Science.gov (United States)

    Hu, D; Sarder, P; Ronhovde, P; Orthaus, S; Achilefu, S; Nussinov, Z

    2014-01-01

    Inspired by a multiresolution community detection based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Furthermore, using the proposed method, the mean-square error in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The multiresolution community detection method appeared to perform better than a popular spectral clustering-based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in mean-square error with increasing resolution. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  19. The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints.

    Science.gov (United States)

    Wei, Chia-Hung; Li, Yue; Gwo, Chih-Ying

    2013-05-01

    Shoeprints left at the crime scene provide valuable information in criminal investigation due to the distinctive patterns in the sole. Those shoeprints are often incomplete and noisy. In this study, the scale-invariant feature transform (SIFT) is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images. Those local extrema are considered useful key points in the image. Next, features are extracted to represent the local patterns around the key points. Then, the system computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. Furthermore, the system also demonstrates robustness against noise, because comparison results for original and noisy shoeprints differ only slightly.
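    The record states that retrieval is scored by cross-correlation between images; a more common descriptor-level variant, sketched here with OpenCV's SIFT and Lowe's ratio test, shows why the key-point approach tolerates partial and noisy prints (the file paths and the 0.75 ratio are assumptions):

```python
import cv2

query = cv2.imread("scene_print.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("db_print.png", cv2.IMREAD_GRAYSCALE)

# Detect scale-space extrema and compute descriptors around each key point.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(query, None)
kp2, des2 = sift.detectAndCompute(candidate, None)

# Lowe's ratio test keeps only distinctive matches, so missing or noisy
# regions simply contribute no matches instead of corrupting the score.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in (p for p in pairs if len(p) == 2)
        if m.distance < 0.75 * n.distance]
score = len(good) / max(len(kp1), 1)   # one possible retrieval ranking score
```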

  20. Mathematics behind a Class of Image Restoration Algorithms

    Directory of Open Access Journals (Sweden)

    Luminita STATE

    2012-01-01

    Full Text Available Restoration techniques are usually oriented toward modeling the type of degradation in order to infer the inverse process for recovering the given image. This approach usually involves the choice of a criterion to numerically evaluate the quality of the resulting image, and consequently the restoration process can be expressed in terms of an optimization problem. Most approaches are essentially based on additional hypotheses concerning the statistical properties of images. However, in real-life applications, there is not enough information to support a particular image model, and consequently model-free developments have to be used instead. In our approach, the problem of image denoising/restoration is viewed as an information transmission/processing system, where the signal representing a certain clean image is transmitted through a noisy channel and only a noise-corrupted version is available. The aim is to recover the available signal as much as possible by using different noise removal techniques, that is, to build an accurate approximation of the initial image. Unfortunately, a series of image qualities, such as clarity, brightness, and contrast, are affected by the noise removal techniques, and consequently there is a need to partially restore them on the basis of information extracted exclusively from the data. Following a brief description of the image restoration framework provided in the introductory part, a PCA-based methodology is presented in the second section of the paper. The basics of a new informational development for image restoration purposes and scatter matrix-based methods are given in the next two sections. The final section contains concluding remarks and suggestions for further work.

  1. NOISY DISPERSION CURVE PICKING (NDCP): a Matlab friendly suite package for fully control dispersion curve picking

    Science.gov (United States)

    Granados, I.; Calo, M.; Ramos, V.

    2017-12-01

    We developed a Matlab suite package (NDCP, Noisy Dispersion Curve Picking) that allows full control over the parameters needed to correctly identify group velocity dispersion curves in two types of datasets: correlograms between two stations or surface wave records from earthquakes. Using frequency-time analysis (FTAN), the procedure for obtaining dispersion curves from records with a high noise level becomes difficult, and the picked curve can be misinterpreted. For correlogram functions, obtained by cross-correlating noise records or earthquake codas, a non-homogeneous distribution of noise sources yields a non-symmetric Green's function (GF); to retrieve the complete information contained therein, NDCP allows picking the dispersion curve in the time domain in both the causal and non-causal parts of the GF. The picked dispersion curve is then displayed on the FTAN diagram to check whether it matches the maximum of the signal energy, avoiding confusion with overtones or noise spikes. To illustrate how NDCP performs, we show examples using: i) local correlogram functions obtained from sensors deployed in a volcanic caldera (Los Humeros, in Puebla, Mexico); ii) regional correlogram functions between two stations of the National Seismological Service (SSN, Servicio Sismológico Nacional in Spanish); and iii) a surface wave seismic record of an earthquake located off the Pacific coast of Mexico and recorded by the SSN. This work is supported by the GEMEX project (Geothermal Europe-Mexico consortium).
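    The FTAN analysis at the core of such picking can be sketched compactly: filter the trace through a bank of narrowband Gaussian filters and track the envelope maximum at each period. The Python outline below illustrates the idea (NDCP itself is a Matlab package; the bandwidth parameter and the simple envelope-peak pick are simplifying assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def ftan_group_velocities(trace, dt, dist_km, periods):
    """Crude FTAN: narrowband Gaussian filtering in the frequency domain;
    the envelope peak of each filtered trace gives the group arrival time
    and hence one group-velocity sample per period."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(trace)
    velocities = []
    for T in periods:
        f0, alpha = 1.0 / T, 20.0                  # alpha sets bandwidth
        gauss = np.exp(-alpha * ((freqs - f0) / f0) ** 2)
        narrow = np.fft.irfft(spec * gauss, n)
        envelope = np.abs(hilbert(narrow))
        t_peak = max(envelope.argmax() * dt, dt)   # avoid division by zero
        velocities.append(dist_km / t_peak)
    return np.array(velocities)
```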

  2. InGaAs focal plane arrays for low-light-level SWIR imaging

    Science.gov (United States)

    MacDougal, Michael; Hood, Andrew; Geske, Jon; Wang, Jim; Patel, Falgun; Follman, David; Manzo, Juan; Getty, Jonathan

    2011-06-01

    Aerius Photonics will present their latest developments in large InGaAs focal plane arrays (FPAs), which are used for low-light-level imaging in the short wavelength infrared (SWIR) regime, with imaging in both 1280x1024 and 640x512 formats. Aerius will present characterization of the FPA, including dark current measurements, as well as results from the development of SWIR FPAs for high temperatures, including imagery and dark current data. Finally, Aerius will show results of using the SWIR camera with Aerius' SWIR illuminators based on VCSEL technology.

  3. Kinetic analysis of [11C]befloxatone in the human brain, a selective radioligand to image monoamine oxidase A.

    Science.gov (United States)

    Zanotti-Fregonara, Paolo; Leroy, Claire; Roumenov, Dimitri; Trichard, Christian; Martinot, Jean-Luc; Bottlaender, Michel

    2013-11-25

    [11C]Befloxatone measures the density of the enzyme monoamine oxidase A (MAO-A) in the brain. MAO-A is responsible for the degradation of different neurotransmitters and is implicated in several neurologic and psychiatric illnesses. This study sought to estimate the distribution volume (VT) values of [11C]befloxatone in humans using an arterial input function. Seven healthy volunteers were imaged with positron emission tomography (PET) after [11C]befloxatone injection. Kinetic analysis was performed using an arterial input function in association with compartmental modeling and with the Logan plot, multilinear analysis (MA1), and standard spectral analysis (SA) at both the regional and voxel level. Arterialized venous samples were drawn as an alternative and less invasive input function. An unconstrained two-compartment model reliably quantified VT values in large brain regions. A constrained model did not significantly improve VT identifiability. Similar VT results were obtained using SA; however, the Logan plot and MA1 slightly underestimated VT values (about -10%). At the voxel level, SA showed a very small bias (+2%) compared to compartmental modeling, Logan severely underestimated VT values, and voxel-wise images obtained with MA1 were too noisy to be reliably quantified. Arterialized venous blood samples did not provide a satisfactory alternative input function as the Logan-VT regional values were not comparable to those obtained with arterial sampling in all subjects. Binding of [11C]befloxatone to MAO-A can be quantified using an arterial input function and a two-compartment model or, in parametric images, with SA.
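    The Logan plot used here has a compact numerical form: after an equilibration time t*, the normalized integral of the tissue curve is linear in the normalized integral of the input function, with slope VT. A NumPy sketch follows (the array names and trapezoidal integration are assumptions; t is the frame-time vector, ct the tissue curve, cp the metabolite-corrected arterial input):

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: for t >= t*, plotting
    int_0^t C_T / C_T(t) against int_0^t C_p / C_T(t) gives a line whose
    slope is the distribution volume V_T."""
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * (ct[1:] + ct[:-1]) / 2)))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
    keep = (t >= t_star) & (ct > 0)
    x = int_cp[keep] / ct[keep]
    y = int_ct[keep] / ct[keep]
    slope, _intercept = np.polyfit(x, y, 1)
    return slope
```

    The record's observation that Logan underestimates VT at the voxel level is consistent with the well-known noise-induced bias of this estimator: noise in C_T(t) enters both plot axes and biases the fitted slope downward.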

  4. Remote Sensing Image Fusion at the Segment Level Using a Spatially-Weighted Approach: Applications for Land Cover Spectral Analysis and Mapping

    Directory of Open Access Journals (Sweden)

    Brian Johnson

    2015-01-01

    Full Text Available Segment-level image fusion involves segmenting a higher spatial resolution (HSR image to derive boundaries of land cover objects, and then extracting additional descriptors of image segments (polygons from a lower spatial resolution (LSR image. In past research, an unweighted segment-level fusion (USF approach, which extracts information from a resampled LSR image, resulted in more accurate land cover classification than the use of HSR imagery alone. However, simply fusing the LSR image with segment polygons may lead to significant errors due to the high level of noise in pixels along the segment boundaries (i.e., pixels containing multiple land cover types. To mitigate this, a spatially-weighted segment-level fusion (SWSF method was proposed for extracting descriptors (mean spectral values of segments from LSR images. SWSF reduces the weights of LSR pixels located on or near segment boundaries to reduce errors in the fusion process. Compared to the USF approach, SWSF extracted more accurate spectral properties of land cover objects when the ratio of the LSR image resolution to the HSR image resolution was greater than 2:1, and SWSF was also shown to increase classification accuracy. SWSF can be used to fuse any type of imagery at the segment level since it is insensitive to spectral differences between the LSR and HSR images (e.g., different spectral ranges of the images or different image acquisition dates.
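    A minimal version of the spatially-weighted extraction can be written directly: weight each LSR pixel by its distance from the segment boundary so that likely-mixed boundary pixels contribute little. The distance-based weights below are an illustrative assumption, not necessarily the paper's exact scheme (the segment map is assumed to be co-registered with the LSR band):

```python
import numpy as np
from scipy import ndimage as ndi

def swsf_means(lsr_band, segments):
    """Spatially-weighted mean spectral value per segment: interior pixels
    get high weight, pixels at or near the segment boundary (likely mixed)
    get low weight."""
    means = {}
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        w = ndi.distance_transform_edt(mask)   # small at boundary, grows inward
        if w.sum() == 0:                       # degenerate/empty segment guard
            w = mask.astype(float)
        means[int(seg_id)] = float((lsr_band * w).sum() / w.sum())
    return means
```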

  5. Quantitative image analysis of intra-tumoral bFGF level as a molecular marker of paclitaxel resistance

    Directory of Open Access Journals (Sweden)

    Wientjes M Guillaume

    2008-01-01

    Full Text Available Abstract Background The role of basic fibroblast growth factor (bFGF) in chemoresistance is controversial; some studies showed a relationship between higher bFGF level and chemoresistance while other studies showed the opposite finding. The goal of the present study was to quantify bFGF levels in archived tumor tissues, and to determine its relationship with chemosensitivity. Methods We established an image analysis-based method to quantify and convert the immunostaining intensity of intra-tumor bFGF to concentrations; this was accomplished by generating standard curves using human xenograft tumors as the renewable tissue source for simultaneous image analysis and ELISA. The relationships between bFGF concentrations and the chemosensitivity of patient tumors (n = 87) to paclitaxel were evaluated using linear regression analysis. Results The image analysis results were compared to our previous results obtained using a conventional, semi-quantitative visual scoring method. While both analyses indicated an inverse relationship between bFGF level and tumor sensitivity to paclitaxel, the image analysis method, by providing bFGF levels in individual tumors and therefore more data points (87 numerical values as opposed to four groups of staining intensities), further enabled the quantitative analysis of the relationship in subgroups of tumors with different pathobiological properties. The results show significant correlation between bFGF level and tumor sensitivity to the antiproliferation effect, but not the apoptotic effect, of paclitaxel. We further found stronger correlations of bFGF level and paclitaxel sensitivity in four tumor subgroups (high stage, positive p53 staining, negative aFGF staining, containing higher-than-median bFGF level), compared to all other groups. These findings suggest that the relationship between intra-tumoral bFGF level and paclitaxel sensitivity was context-dependent, which may explain the previous contradictory findings.

  6. Raised Anxiety Levels Among Outpatients Preparing to Undergo a Medical Imaging Procedure: Prevalence and Correlates.

    Science.gov (United States)

    Forshaw, Kristy L; Boyes, Allison W; Carey, Mariko L; Hall, Alix E; Symonds, Michael; Brown, Sandy; Sanson-Fisher, Rob W

    2018-04-01

    To examine the percentage of patients with raised state anxiety levels before undergoing a medical imaging procedure; their attribution of procedural-related anxiety or worry; and sociodemographic, health, and procedural characteristics associated with raised state anxiety levels. This prospective cross-sectional study was undertaken in the outpatient medical imaging department at a major public hospital in Australia, with institutional board approval. Adult outpatients undergoing a medical imaging procedure (CT, x-ray, MRI, ultrasound, angiography, or fluoroscopy) completed a preprocedural survey. Anxiety was measured by the short-form state scale of the six-item State-Trait Anxiety Inventory (STAI: Y-6). The number and percentage of participants who reported raised anxiety levels (defined as a STAI: Y-6 score ≥ 33.16) and their attribution of procedural-related anxiety or worry were calculated. Characteristics associated with raised anxiety were examined using multiple logistic regression analysis. Of the 548 (86%) patients who consented to participate, 488 (77%) completed all STAI: Y-6 items. Half of the participants (n = 240; 49%) experienced raised anxiety, and of these, 48% (n = 114) reported feeling most anxious or worried about the possible results. Female gender, imaging modality, medical condition, first time having the procedure, and lower patient-perceived health status were statistically significantly associated with raised anxiety levels. Raised anxiety is common before medical imaging procedures and is mostly attributed to the possible results. Providing increased psychological preparation, particularly to patients with circulatory conditions or neoplasms or those who do not know their medical condition, may help reduce preprocedural anxiety among these subgroups.
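
    For readers wanting to reproduce the flagging and regression steps, a minimal sketch follows (statsmodels assumed; the cutoff is from the record, while the covariate coding is illustrative and would need to match the study's variables).

    ```python
    import numpy as np
    import statsmodels.api as sm

    def flag_raised_anxiety(stai_y6_scores, cutoff=33.16):
        """Binary indicator of raised state anxiety (STAI: Y-6 score >= cutoff)."""
        return (np.asarray(stai_y6_scores) >= cutoff).astype(int)

    def anxiety_correlates(X, raised):
        """Multiple logistic regression of raised anxiety on patient covariates.

        X: 2D array (patients x characteristics), e.g. gender and dummy-coded
        imaging modality; raised: binary outcome from flag_raised_anxiety.
        """
        model = sm.Logit(raised, sm.add_constant(X))
        return model.fit(disp=0)
    ```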

  7. Speckle Reduction on Ultrasound Liver Images Based on a Sparse Representation over a Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Mohamed Yaseen Jabarulla

    2018-05-01

    Full Text Available Ultrasound images are corrupted with multiplicative noise known as speckle, which reduces the effectiveness of image processing and hampers interpretation. This paper proposes a multiplicative speckle suppression technique for ultrasound liver images, based on a new signal reconstruction model known as sparse representation (SR) over dictionary learning. In the proposed technique, the non-uniform multiplicative signal is first converted into additive noise using an enhanced homomorphic filter. This is followed by pixel-based total variation (TV) regularization and patch-based SR over a dictionary trained using K-singular value decomposition (KSVD). Finally, the split Bregman algorithm is used to solve the optimization problem and estimate the de-speckled image. In simulations performed on both synthetic and clinical ultrasound images, the proposed technique achieved peak signal-to-noise ratios of 35.537 dB for the dictionary trained on noisy image patches and 35.033 dB for the dictionary trained on a set of reference ultrasound image patches. Further, the evaluation results show that the proposed method performs better than other state-of-the-art denoising algorithms in terms of both peak signal-to-noise ratio and subjective visual quality assessment.
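
    The first step, converting multiplicative speckle into additive noise, is the classical homomorphic trick and is easy to sketch. The snippet below assumes numpy and accepts any additive-noise denoiser as a callable; it is a minimal sketch of the log/exp route only, not the authors' enhanced homomorphic filter or their TV/KSVD/split-Bregman pipeline.

    ```python
    import numpy as np

    def homomorphic_despeckle(image, denoise):
        """Speckle suppression via the homomorphic route.

        Multiplicative speckle I = S * n becomes additive in the log domain:
        log I = log S + log n, so any additive-noise denoiser (passed in as
        `denoise`, e.g. a TV or dictionary-based routine) can be applied
        before mapping back with exp.
        """
        eps = 1e-6                    # avoid log(0) on dark pixels
        log_img = np.log(image + eps)
        log_clean = denoise(log_img)  # additive-noise denoiser of choice
        return np.exp(log_clean) - eps
    ```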

  8. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    Science.gov (United States)

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of a network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edge weights. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method at FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410
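
    A minimal stand-in for the pixel-graph construction and resolution-controlled community detection can be written with networkx (>= 2.8 for louvain_communities). The Gaussian similarity kernel, 4-neighbour connectivity, and Louvain algorithm below are substitutions for the paper's MCD method, chosen for availability, and are practical only on small images.

    ```python
    import numpy as np
    import networkx as nx

    def mcd_segment_flim(flt_image, resolution=1.0, sigma=0.1):
        """Segment an FLT image by community detection on a pixel graph.

        Nodes are pixels; 4-neighbour edges are weighted by similarity of
        fluorescence lifetimes (a Gaussian kernel here, as an assumption).
        Higher `resolution` yields smaller communities, i.e. smaller segments.
        """
        h, w = flt_image.shape
        G = nx.Graph()
        for i in range(h):
            for j in range(w):
                for di, dj in ((0, 1), (1, 0)):
                    ni, nj = i + di, j + dj
                    if ni < h and nj < w:
                        diff = flt_image[i, j] - flt_image[ni, nj]
                        weight = np.exp(-(diff ** 2) / (2 * sigma ** 2))
                        G.add_edge((i, j), (ni, nj), weight=weight)
        communities = nx.community.louvain_communities(
            G, weight="weight", resolution=resolution, seed=0)
        labels = np.zeros((h, w), dtype=int)
        for k, comm in enumerate(communities):
            for (i, j) in comm:
                labels[i, j] = k
        return labels
    ```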

  9. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG) model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT) domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, an optimal linear interpolation thresholding algorithm (OLI-Shrink) is employed to guarantee a gentler thresholding effect. Comparative experiments indicate that the denoising performance of the proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM) index values that are comparable to those of the block-matching 3D transformation (BM3D) method.
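
    The "gentler thresholding" idea can be illustrated with a linearly interpolated shrinkage rule applied to subband coefficients. In the sketch below the two thresholds are free parameters; in the paper they come from the NIG-based Bayesian MAP derivation, which is not reproduced here.

    ```python
    import numpy as np

    def soft_threshold(coeffs, t):
        """Classical soft-thresholding of transform coefficients, for contrast."""
        return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

    def oli_shrink(coeffs, t_low, t_high):
        """Sketch of an interpolated threshold in the spirit of OLI-Shrink.

        Coefficients below t_low are zeroed, those above t_high are kept,
        and the band in between is attenuated by a linearly interpolated
        gain, giving a gentler transition than a hard or soft threshold.
        t_low/t_high are assumptions standing in for the Bayesian estimates.
        """
        a = np.abs(coeffs)
        gain = np.clip((a - t_low) / (t_high - t_low), 0.0, 1.0)
        return coeffs * gain
    ```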

  10. Random Valued Impulse Noise Removal Using Region Based Detection Approach

    Directory of Open Access Journals (Sweden)

    S. Banerjee

    2017-12-01

    Full Text Available Removal of random-valued noisy pixels is extremely challenging when the noise density is above 50%. Existing filters are generally not capable of eliminating such noise when the density is above 70%. This paper proposes a region-wise, density-based detection algorithm for random-valued impulse noise. On the basis of their intensity values, the pixels of a particular window are sorted and then stored into four regions. The region with the highest density is used for stepwise detection of noisy pixels. This detection scheme can detect a maximum of 75% of noisy pixels, and the paper proposes a matching noise removal algorithm to exploit it. Experiments show that the proposed filter combination not only performs well in qualitative visual judgments of standard images but also outperforms existing algorithms in terms of MSE, PSNR, and SSIM at noise density levels up to 70%.
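
    The detection step might look roughly like the following. The equal-width binning into four regions and the fixed tolerance are assumptions, since the paper's exact stepwise rule is not spelled out in the abstract; the sketch only conveys the "trust the densest intensity region" idea.

    ```python
    import numpy as np

    def detect_impulse_pixels(image, win=3, tol=20):
        """Region-based detection of random-valued impulse noise (a sketch).

        Window intensities are binned into four equal-width regions; the most
        populated ("highest density") region is taken to represent the true
        local signal, and the centre pixel is flagged as noisy when it lies
        more than `tol` grey levels from that region's median.
        """
        pad = win // 2
        padded = np.pad(image.astype(float), pad, mode="reflect")
        noisy = np.zeros(image.shape, dtype=bool)
        h, w = image.shape
        for i in range(h):
            for j in range(w):
                window = padded[i:i + win, j:j + win].ravel()
                lo, hi = window.min(), window.max()
                if hi - lo < 1e-9:
                    continue  # flat window, nothing to flag
                edges = np.linspace(lo, hi, 5)  # four intensity regions
                counts, _ = np.histogram(window, bins=edges)
                dense = np.argmax(counts)
                in_region = (window >= edges[dense]) & (window <= edges[dense + 1])
                ref = np.median(window[in_region])
                noisy[i, j] = abs(image[i, j] - ref) > tol
        return noisy
    ```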

  11. Reconstruction of constitutive parameters in isotropic linear elasticity from noisy full-field measurements

    International Nuclear Information System (INIS)

    Bal, Guillaume; Bellis, Cédric; Imperiale, Sébastien; Monard, François

    2014-01-01

    Within the framework of linear elasticity we assume the availability of internal full-field measurements of the continuum deformations of a non-homogeneous isotropic solid. The aim is the quantitative reconstruction of the associated moduli. A simple gradient system for the sought constitutive parameters is derived algebraically from the momentum equation, whose coefficients are expressed in terms of the measured displacement fields and their spatial derivatives. Direct integration of this system is discussed to finally demonstrate the inexpediency of such an approach when dealing with noisy data. Upon using polluted measurements, an alternative variational formulation is deployed to invert for the physical parameters. Analysis of this latter inversion procedure provides existence and uniqueness results while the reconstruction stability with respect to the measurements is investigated. As the inversion procedure requires differentiating the measurements twice, a numerical differentiation scheme based on an ad hoc regularization then allows an optimally stable reconstruction of the sought moduli. Numerical results are included to illustrate and assess the performance of the overall approach. (paper)
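
    The need to differentiate noisy measurements twice is the practical crux here. A common regularized alternative to naive finite differences is derivative-of-Gaussian filtering, sketched below with scipy; it stands in for the paper's ad hoc regularization scheme, and the smoothing width is a user-chosen assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def regularized_second_derivative(u, dx, sigma=2.0):
        """Stable second derivative of noisy, uniformly sampled data.

        Differentiating noisy full-field measurements twice amplifies noise,
        so some regularization is required. Convolving with the second
        derivative of a Gaussian (order=2) smooths and differentiates in one
        step; `sigma` (in samples) trades bias against noise amplification.
        """
        return gaussian_filter1d(u, sigma=sigma, order=2) / dx**2
    ```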

  12. Diagnostic accuracy at several reduced radiation dose levels for CT imaging in the diagnosis of appendicitis

    Science.gov (United States)

    Zhang, Di; Khatonabadi, Maryam; Kim, Hyun; Jude, Matilda; Zaragoza, Edward; Lee, Margaret; Patel, Maitraya; Poon, Cheryce; Douek, Michael; Andrews-Tang, Denise; Doepke, Laura; McNitt-Gray, Shawn; Cagnon, Chris; DeMarco, John; McNitt-Gray, Michael

    2012-03-01

    Purpose: While several studies have investigated the tradeoffs between radiation dose and image quality (noise) in CT imaging, the purpose of this study was to take this analysis a step further by investigating the tradeoffs between patient radiation dose (including organ dose) and diagnostic accuracy in the diagnosis of appendicitis using CT. Methods: This study was IRB approved and utilized data from 20 patients who underwent clinical CT exams for indications of appendicitis. Medical record review established the true diagnosis of appendicitis, with 10 positives and 10 negatives. A validated software tool used raw projection data from each scan to create simulated images at lower dose levels (70%, 50%, 30%, 20% of original). An observer study was performed with 6 radiologists reviewing each case at each dose level in random order over several sessions. Readers assessed image quality and provided confidence in their diagnosis of appendicitis, each on a 5-point scale. Liver doses for each case and each dose level were estimated using Monte Carlo simulation based methods. Results: Overall diagnostic accuracy varied across dose levels: 92%, 93%, 91%, 90% and 90% at the 100%, 70%, 50%, 30% and 20% dose levels, respectively, and 93%, 95%, 88%, 90% and 90% across the 13.5-22 mGy, 9.6-13.5 mGy, 6.4-9.6 mGy, 4-6.4 mGy, and 2-4 mGy liver dose ranges, respectively. Only 4 out of 600 observations were rated "unacceptable" for image quality. Conclusion: The results from this pilot study indicate that diagnostic accuracy does not change dramatically even at significantly reduced radiation dose.

  13. Radiation therapists' perceptions of the minimum level of experience required to perform portal image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rybovic, Michala [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: mryb6983@mail.usyd.edu.au; Halkett, Georgia K. [Western Australia Centre for Cancer and Palliative Care, Curtin University of Technology, Health Research Campus, GPO Box U1987, Perth, WA 6845 (Australia)], E-mail: g.halkett@curtin.edu.au; Banati, Richard B. [Faculty of Health Sciences, Brain and Mind Research Institute - Ramaciotti Centre for Brain Imaging, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: r.banati@usyd.edu.au; Cox, Jennifer [Discipline of Medical Radiation Sciences, Faculty of Health Sciences, University of Sydney, PO Box 170, Lidcombe, NSW 1825 (Australia)], E-mail: jenny.cox@usyd.edu.au

    2008-11-15

    Background and purpose: Our aim was to explore radiation therapists' views on the level of experience necessary to undertake portal image analysis and clinical decision making. Materials and methods: A questionnaire was developed to determine the availability of portal imaging equipment in Australia and New Zealand. We analysed radiation therapists' responses to a specific question regarding their opinion on the minimum level of experience required for health professionals to analyse portal images. We used grounded theory and a constant comparative method of data analysis to derive the main themes. Results: Forty-six radiation oncology facilities were represented in our survey, with 40 questionnaires being returned (87%). Thirty-seven radiation therapists answered our free-text question. Radiation therapists indicated three main themes which they felt were important in determining the minimum level of experience: 'gaining on-the-job experience', 'receiving training' and 'working as a team'. Conclusions: Radiation therapists indicated that competence in portal image review occurs via various learning mechanisms. Further research is warranted to determine perspectives of other health professionals, such as radiation oncologists, on portal image review becoming part of radiation therapists' extended role. Suitable training programs and steps for implementation should be developed to facilitate this endeavour.

  14. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection but are very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient route to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images from a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted-window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency.
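
    The visibility step, projecting the reference point cloud onto each new image while keeping only the nearest point per pixel, is ordinary z-buffering. A bare-bones version (without the paper's weighted-window refinement, and with all names illustrative) might look like this:

    ```python
    import numpy as np

    def zbuffer_project(points_cam, K, image_shape):
        """Project 3D points (camera frame) to the image with a z-buffer.

        points_cam: (N, 3) array already transformed into the camera frame.
        K: 3x3 intrinsic matrix. Returns a depth map where each pixel keeps
        only the nearest point, i.e. the basic visibility test that precedes
        the weighted-window z-buffering described in the paper.
        """
        h, w = image_shape
        depth = np.full((h, w), np.inf)
        z = points_cam[:, 2]
        valid = z > 0                      # points in front of the camera
        uvw = (K @ points_cam[valid].T).T  # homogeneous image coordinates
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        zv = z[valid]
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
            if zi < depth[vi, ui]:
                depth[vi, ui] = zi  # keep the closest point per pixel
        return depth
    ```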

  15. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of recipients without power supply. To minimize the risk of such power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require an up-to-date base of parameters for the models of generating units, including the models of synchronous generators. The paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms, and gives calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
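
    The estimation loop reduces to minimizing a mean square error between measured and simulated waveforms. The sketch below uses scipy's differential evolution as a stand-in for the paper's hybrid algorithm; the `simulate` callable and the parameter `bounds` are user-supplied assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def estimate_generator_params(t, measured, simulate, bounds):
        """Fit synchronous-generator model parameters to noisy waveforms.

        simulate(params, t) -> model waveform for a candidate parameter set.
        The objective is the mean square error between the measured and
        simulated responses, as in the paper; the global optimizer here is
        a substitute for the paper's hybrid minimization algorithm.
        """
        def mse(params):
            return np.mean((measured - simulate(params, t)) ** 2)

        result = differential_evolution(mse, bounds, seed=0)
        return result.x, result.fun
    ```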

  16. Determination of the level of noise in nurseries and pre-schools and the teachers' level of annoyance

    Directory of Open Access Journals (Sweden)

    Ozan Gokdogan

    2016-01-01

    Full Text Available Objective: The aim of this article is to determine the level of noise in nurseries and pre-schools, to compare the measured levels with standard levels, and to evaluate the teachers' level of annoyance. Materials and Methods: The level of noise was measured in three different schools. A total of 162 students, aged between 3 and 6 years, and 12 teachers were included in the study. Each age group's noise level was measured during sleeping, playing, and eating activities. In addition, the teachers' annoyance was assessed for the different age groups. Results: The 4- to 6-year-old groups were found to have higher sound levels than the 3-year-old group. Eating periods produced the highest sound levels, whereas sleeping produced the lowest. Furthermore, teachers' annoyance increased as the children's age decreased. Conclusion: Nurseries and pre-schools are noisy environments for both students and teachers. High levels of noise, which adversely affect health, are a public health problem. Both the students' families and the teachers must be made aware of this annoying situation.

  17. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and finally sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.
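
    The iterative re-extract/re-match strategy can be summarized as a retry loop with adaptive parameters. The sketch below is purely schematic: all callables, the single threshold parameter, and the relaxation factor are assumptions for illustration, not the paper's actual adjustment rules.

    ```python
    def iterative_feature_registration(optical, sar, extract, match, solve,
                                       max_iter=5, min_matches=20):
        """Schematic retry loop for feature-based optical-to-SAR registration.

        extract(image, params) -> features; match(f1, f2) -> point pairs;
        solve(pairs) -> transform. When too few reliable pairs are found,
        the extraction parameters are relaxed and the loop repeats, mirroring
        the adaptive re-extraction/re-matching idea of the paper.
        """
        params = {"threshold": 0.8}
        for _ in range(max_iter):
            pairs = match(extract(optical, params), extract(sar, params))
            if len(pairs) >= min_matches:
                return solve(pairs)
            params["threshold"] *= 0.9  # adaptively relax and retry
        raise RuntimeError("registration failed: too few reliable matches")
    ```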

  18. Detection System of Sound Noise Level (SNL) Based on Condenser Microphone Sensor

    Science.gov (United States)

    Rajagukguk, Juniastel; Eka Sari, Nurdieni

    2018-03-01

    The research aims to measure noise levels using an Arduino Uno to process input data from sensors; the instrument is called the Sound Noise Level (SNL) detector. The instrument works as a noise detector, showing noise-level notifications on an LCD indicator and in audiovisual form. Noise is detected using a condenser microphone sensor and an LM 567 IC op-amp, assembled so that sounds captured by the sensor are converted from sinusoidal acoustic waves into an alternating sinusoidal electric current that can be read by the Arduino Uno. The tool is equipped with a set of LED and sound indicators, as well as text notifications on a 16*2 LCD. The indicators work as follows: if the measured noise is above 75 dB, the sound indicator beeps and the red LED lights up, indicating danger; if the measured value is higher than 56 dB, the sound indicator beeps and the yellow LED turns on, indicating a noisy environment; and if the measured value is below 55 dB, the sound indicator stays quiet, indicating a peaceful environment. The results show that the SNL is capable of detecting and displaying noise levels over a measuring range of 50-100 dB and of delivering audiovisual noise notifications.
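
    The indicator logic reduces to three thresholds. Below is a direct transcription in Python (not the device's Arduino firmware); the colour for the quiet state and the handling of the unspecified 55-56 dB band are assumptions.

    ```python
    def classify_noise_level(db):
        """Threshold logic of the SNL indicator as described in the record.

        Returns (status, led) for a measured sound level in dB. The 55-56 dB
        gap exists in the original description and is preserved here.
        """
        if db > 75:
            return "danger", "red"
        if db > 56:
            return "noisy", "yellow"
        if db < 55:
            return "peaceful", "green"  # quiet-state colour is an assumption
        return "borderline", None       # 55-56 dB not specified in the record
    ```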

  19. Performing particle image velocimetry using artificial neural networks: a proof-of-concept

    Science.gov (United States)

    Rabault, Jean; Kolaas, Jostein; Jensen, Atle

    2017-12-01

    Traditional programs based on feature engineering are underperforming on a steadily increasing number of tasks compared with artificial neural networks (ANNs), in particular for image analysis. Image analysis is widely used in fluid mechanics when performing particle image velocimetry (PIV) and particle tracking velocimetry (PTV), and therefore it is natural to test the ability of ANNs to perform such tasks. We report for the first time the use of convolutional neural networks (CNNs) and fully connected neural networks (FCNNs) for performing end-to-end PIV. Realistic synthetic images are used for training the networks and several synthetic test cases are used to assess the quality of each network's predictions and compare them with state-of-the-art PIV software. In addition, we present tests on real-world data that prove ANNs can be used not only with synthetic images but also with more noisy, imperfect images obtained in a real experimental setup. While the ANNs we present have slightly higher root mean square error than state-of-the-art cross-correlation methods, they perform better near edges and allow for higher spatial resolution than such methods. In addition, it is likely that one could, with further work, develop ANNs which perform better than the proof-of-concept we offer.
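
    To give a flavor of what an end-to-end PIV network can look like, here is a toy PyTorch model that stacks two frames as channels and regresses one displacement vector per interrogation window. It is far smaller than the CNNs/FCNNs in the paper, and all layer sizes are illustrative assumptions.

    ```python
    import torch
    import torch.nn as nn

    class MiniPIVNet(nn.Module):
        """Toy end-to-end PIV network: two frames in, one displacement out.

        The two particle images are stacked as channels; convolutions learn
        correlation-like features and a fully connected head regresses a
        single (dx, dy) per interrogation window.
        """
        def __init__(self, window=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * (window // 4) ** 2, 64), nn.ReLU(),
                nn.Linear(64, 2),  # predicted (dx, dy)
            )

        def forward(self, frame_pair):  # (batch, 2, window, window)
            return self.head(self.features(frame_pair))

    # Usage: disp = MiniPIVNet()(torch.randn(4, 2, 32, 32))  # -> (4, 2)
    ```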

  20. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
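
    A heavily simplified version of the per-node dictionaries with parent inheritance can be sketched with scikit-learn. The paper's joint training under an overall tree loss is replaced here by independent per-node fits; the tree encoding, ordering requirement, and all parameters are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def learn_node_dictionaries(tree, features, n_atoms=64):
        """Learn one dictionary per category-tree node, with inheritance.

        tree: dict mapping node -> parent (root maps to None), ordered
        parent-before-child (e.g. an insertion-ordered dict from a BFS).
        features: dict mapping node -> (n_samples, n_dims) training data
        drawn from the categories under that node. Each node's effective
        dictionary is its own learned atoms stacked with its parent's,
        giving lower layers a multi-scale description.
        """
        dicts = {}
        for node, parent in tree.items():
            model = MiniBatchDictionaryLearning(
                n_components=n_atoms, random_state=0)
            model.fit(features[node])
            own = model.components_
            inherited = dicts.get(parent)
            dicts[node] = own if inherited is None else np.vstack([own, inherited])
        return dicts
    ```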