WorldWideScience

Sample records for denoising shack hartmann

  1. Coded Shack-Hartmann Wavefront Sensor

    KAUST Repository

    Wang, Congli

    2016-12-01

Wavefront sensing is an old yet fundamental problem in adaptive optics. Traditional wavefront sensors suffer from time-consuming measurements, complicated and expensive setups, or low theoretically achievable resolution. In this thesis, we introduce an optically encoded and computationally decodable approach to the wavefront sensing problem: the Coded Shack-Hartmann. Our proposed Coded Shack-Hartmann wavefront sensor is inexpensive, easy to fabricate and calibrate, highly sensitive, accurate, and of high resolution. Most importantly, by combining simple optical flow tracking with a phase smoothness prior and modern optimization techniques, the computational part is split, efficient, and parallelized, so real-time performance is achieved on a Graphics Processing Unit (GPU) with high accuracy as well. This is validated by experimental results. We also show how the optical flow intensity consistency term can be derived from rigorous scalar diffraction theory with proper approximations; this is the physical law behind our model. Based on this insight, the Coded Shack-Hartmann can be interpreted as an illumination post-modulated wavefront sensor, which offers a new theoretical approach to wavefront sensor design.
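
The "optical flow intensity consistency" idea above can be illustrated compactly. The sketch below is a minimal single-step Lucas-Kanade shift estimate between a reference sub-image and a measured one, assuming numpy; it is not the authors' GPU-parallelized solver with the phase smoothness prior, just the kind of data term their model builds on.

```python
import numpy as np

def lucas_kanade_shift(ref, img):
    """One least-squares step for the (dx, dy) shift of img relative to ref,
    from the brightness-constancy (intensity consistency) equation."""
    gy, gx = np.gradient(ref.astype(float))      # spatial gradients of reference
    gt = img.astype(float) - ref.astype(float)   # intensity difference
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -gt.ravel()
    shift, *_ = np.linalg.lstsq(A, b, rcond=None)
    return shift                                 # (dx, dy) in pixels
```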

  2. Shack-Hartmann reflective micro profilometer

    Science.gov (United States)

    Gong, Hai; Soloviev, Oleg; Verhaegen, Michel; Vdovin, Gleb

    2018-01-01

We present a quantitative phase imaging microscope based on a Shack-Hartmann sensor that directly reconstructs the optical path difference (OPD) in reflective mode. Compared with holographic or interferometric methods, the SH technique needs no reference beam in the setup, which simplifies the system. With a preregistered reference, the OPD image can be reconstructed from a single shot. The method also has a rather relaxed requirement on illumination coherence, so a cheap light source such as an LED is feasible in the setup. In our previous research, we verified that a conventional transmissive microscope can be transformed into an optical path difference microscope by using a Shack-Hartmann wavefront sensor under incoherent illumination. The key condition is that the numerical aperture of the illumination should be smaller than the numerical aperture of the imaging lens. This approach is also applicable to the characterization of reflective and slightly scattering surfaces.

  3. Optimal Shack-Hartmann Wavefront Sensing For Low-Light-Levels

    National Research Council Canada - National Science Library

    Solomon, Christopher

    1997-01-01

... He will analyze the sensitivity gains achievable in Shack-Hartmann wavefront sensors using Bayesian estimators and compare the results with those achieved using a standard least-squares approach...

  4. CMOS optical centroid processor for an integrated Shack-Hartmann wavefront sensor

    OpenAIRE

    Pui, Boon Hean

    2004-01-01

A Shack-Hartmann wavefront sensor is used to detect the distortion of light in an optical wavefront. It does this by sampling the wavefront with an array of lenslets and measuring the displacement of the focused spots from their reference positions. These displacements are linearly related to the local wavefront tilts, from which the entire wavefront can be reconstructed. In most Shack-Hartmann wavefront sensors, a CCD is used to sample the entire wavefront, typically at a rate of 25 to 60 Hz, and a who...
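
The linear tilt-displacement relation mentioned above is simple enough to state in code. A hedged sketch (function and parameter names are illustrative): each spot displacement, converted to metres, divided by the lenslet focal length gives the mean wavefront slope over that sub-aperture.

```python
import numpy as np

def displacements_to_slopes(dx_px, dy_px, focal_length_m, pixel_pitch_m):
    """Spot displacements in pixels -> local wavefront slopes in radians."""
    sx = np.asarray(dx_px) * pixel_pitch_m / focal_length_m
    sy = np.asarray(dy_px) * pixel_pitch_m / focal_length_m
    return sx, sy
```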

  5. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    Science.gov (United States)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications such as an adaptive optics system (AOS) for performing real-time wavefront measurement. A Camera Link frame grabber embedded with the FPGA is adopted to maximize the sensor's reaction speed, taking advantage of its high data transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks so that image data transmission is synchronized with wavefront reconstruction. In addition, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic rate, limited by the camera frame rate, while leaving 40% of the logic slices free for additional flexible designs.

  6. Shack-Hartmann Electron Densitometer (SHED): An Optical System for Diagnosing Free Electron Density in Laser-Produced Plasmas

    Science.gov (United States)

    2016-11-01

Report by Anthony R Valenzuela; approved for public release. The Shack-Hartmann Electron Densitometer (SHED) is a novel method to diagnose ultrashort pulse laser-produced plasmas.

  7. Hartmann-Shack wave front measurements for real time determination of laser beam propagation parameters

    International Nuclear Information System (INIS)

    Schaefer, B.; Luebbecke, M.; Mann, K.

    2006-01-01

The suitability of the Hartmann-Shack technique for determining the propagation parameters of a laser beam is compared against the well-known caustic approach of the ISO 11146 standard. A He-Ne laser (543 nm) was chosen as the test beam, both in its fundamental mode and after intentional distortion introducing a moderate amount of spherical aberration. Results are given for the most important beam parameters M², divergence, and beam widths, indicating agreement to better than 10%, and to better than 5% for adapted beam diameters. Furthermore, the theoretical background, pros and cons, and some features of the software implementation of the Hartmann-Shack sensor are briefly reviewed.

  8. Sorting method to extend the dynamic range of the Shack-Hartmann wave-front sensor

    International Nuclear Information System (INIS)

    Lee, Junwon; Shack, Roland V.; Descour, Michael R.

    2005-01-01

We propose a simple and powerful algorithm to extend the dynamic range of a Shack-Hartmann wave-front sensor. In a conventional Shack-Hartmann wave-front sensor the dynamic range is limited by the f-number of a lenslet, because each focal spot is required to remain within the area confined by its single lenslet. The sorting method proposed here eliminates that limitation and extends the dynamic range by tagging each spot in a special sequence. Since the sorting method is a simple algorithm that does not change the measurement configuration, no extra hardware, multiple measurements, or complicated algorithms are required. We not only present the theory and a calculation example of the sorting method but also demonstrate measurement of a highly aberrated wave front from non-rotationally symmetric optics.

  9. Comparison of forward light scatter estimations using Shack-Hartmann spot patterns and a straylight meter.

    Science.gov (United States)

    Benito Lopez, Pablo; Radhakrishnan, Hema; Nourrit, Vincent

    2015-02-01

To determine whether an unmodified commercial wavefront aberrometer (irx3) can be used to estimate forward light scattering and how this assessment matches estimations obtained from the C-Quant straylight meter. University of Manchester, Manchester, United Kingdom. Prospective comparative study. Measurements obtained with a straylight meter and with Shack-Hartmann spot patterns using a previously reported metric were compared. The method was first validated in a model eye by spraying an aerosol over 4 contact lenses to generate various levels of scattering. Measurements with both methods were subsequently obtained in healthy eyes. The study comprised 33 healthy participants (mean age 38.9 years ± 13.1 [SD]). A good correlation was observed between the density of droplets over the contact lenses and the objective scatter value extracted from the hartmanngrams (r = 0.972); no significant correlation was found between the straylight meter and the metric derived from the Shack-Hartmann method (r = 0.133, P = .460). The hartmanngrams provided a valid objective measurement of the light scatter in a model eye; the measurements in human eyes were not significantly correlated with those of the light scatter meter. The straylight meter assesses large-angle scattering, while the Shack-Hartmann method collates information from a narrow angle around the center of the point-spread function; this could be the reason for the difference in measurements. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  10. Generalized method for sorting Shack-Hartmann spot patterns using local similarity

    International Nuclear Information System (INIS)

    Smith, Daniel G.; Greivenkamp, John E.

    2008-01-01

The sensitivity and dynamic range of a Shack-Hartmann wavefront sensor are enhanced when the spots produced by the lenslet array are allowed to shift more than one lenslet radius from their on-axis positions. However, this presents the problem of accurately and robustly associating the spots with their respective subapertures. This paper describes a method for sorting spots that takes advantage of the local spot position distortions to unwrap the spot pattern. The described algorithm is simple, robust, and applicable to any lenslet array geometry that can be described as a two-dimensional lattice, including hexagonal arrays, which are shown here to be more efficient than square arrays.

  11. Optimization of scanning strategy of digital Shack-Hartmann wavefront sensing.

    Science.gov (United States)

    Guo, Wenjiang; Zhao, Liping; Li, Xiang; Chen, I-Ming

    2012-01-01

In a traditional Shack-Hartmann wavefront sensing (SHWS) system, a denser lenslet-array configuration is desired to achieve a higher lateral resolution. However, practical implementation limits the configuration, and this parameter trades off against the measurement range. We have proposed a digital scanning technique that exploits the high flexibility of a spatial light modulator to sample the reflected wavefront [X. Li, L. P. Zhao, Z. P. Fang, and C. S. Tan, "Improve lateral resolution in wavefront sensing with digital scanning technique," in Asia-Pacific Conference of Transducers and Micro-Nano Technology (2006)]. The lenslet array pattern is programmed to laterally scan the whole aperture. In this paper, a methodology to optimize the scanning step for the purpose of form measurement is proposed. Its correctness and effectiveness are demonstrated in numerical simulation and experimental investigation. © 2012 Optical Society of America

  12. Least-squares wave-front reconstruction of Shack-Hartmann sensors and shearing interferometers using multigrid techniques

    International Nuclear Information System (INIS)

    Baker, K.L.

    2005-01-01

    This article details a multigrid algorithm that is suitable for least-squares wave-front reconstruction of Shack-Hartmann and shearing interferometer wave-front sensors. The algorithm detailed in this article is shown to scale with the number of subapertures in the same fashion as fast Fourier transform techniques, making it suitable for use in applications requiring a large number of subapertures and high Strehl ratio systems such as for high spatial frequency characterization of high-density plasmas, optics metrology, and multiconjugate and extreme adaptive optics systems
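
To make the problem being solved concrete, here is a hedged sketch of least-squares zonal reconstruction on an assumed square grid with forward differences, solved by plain conjugate gradients via scipy; the paper's point is that a multigrid solver attacks this same system with FFT-like scaling, which this simple solver does not achieve.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def gradient_operator(n):
    """Stacked forward-difference x/y gradient operators on an n x n grid."""
    I = sp.identity(n, format="csr")
    D = sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n), format="csr")
    Gx = sp.kron(I, D)                      # differences along rows
    Gy = sp.kron(D, I)                      # differences along columns
    return sp.vstack([Gx, Gy]).tocsr()

def reconstruct_phase(slopes, n):
    """Least-squares phase from slopes stacked as [sx; sy] (row-major order)."""
    G = gradient_operator(n)
    A = (G.T @ G).tocsr()                   # singular only up to piston
    b = G.T @ slopes
    phi, _ = cg(A, b, atol=1e-8)
    return phi.reshape(n, n) - phi.mean()   # remove the piston term
```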

  13. A Shack-Hartmann Sensor for Single-Shot Multi-Contrast Imaging with Hard X-rays

    Directory of Open Access Journals (Sweden)

    Tomy dos Santos Rolo

    2018-05-01

An array of compound refractive X-ray lenses (CRL) with 20 × 20 lenslets, a focal distance of 20 cm and a visibility of 0.93 is presented. It can be used as a Shack-Hartmann sensor for hard X-rays (SHARX) for wavefront sensing and permits true single-shot multi-contrast imaging of the dynamics of materials, with a spatial resolution in the micrometer range, sensitivity to nanosized structures, and temporal resolution on the microsecond scale. The object's absorption and its induced wavefront shift can be assessed simultaneously, together with information from diffraction channels. In contrast to established Hartmann sensors, the SHARX has an increased flux efficiency through focusing of the beam rather than blocking parts of it. We investigated the spatiotemporal behavior of a cavitation bubble induced by laser pulses. Furthermore, we validated the SHARX by measuring refraction angles of a single diamond CRL, where we obtained an angular resolution better than 4 μrad.

  14. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    Science.gov (United States)

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

Recently, many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. The algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces range and robustness comparable to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  15. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
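
For reference, the four similarity functions compared above have compact definitions. A hedged sketch follows, evaluating one candidate shift; the fast search patterns such as TSS or OS only change which shifts are visited, not these formulas.

```python
import numpy as np

def match_score(template, window, kind="CCF"):
    """Similarity between the reference template and a shifted window."""
    t = template.astype(float).ravel()
    w = window.astype(float).ravel()
    if kind == "ADF":                      # minimize
        return np.abs(t - w).sum()
    if kind == "ADF2":                     # minimize
        return np.abs(t - w).sum() ** 2
    if kind == "SDF":                      # minimize
        return ((t - w) ** 2).sum()
    if kind == "CCF":                      # negated so lower is always better
        return -(t * w).sum()
    raise ValueError(f"unknown similarity function: {kind}")
```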

  16. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    Science.gov (United States)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optics systems, where they provide a large multiplex of sensors positioned by Starbugs. A large wound fibre image bundle gives the flexibility to use more sub-apertures per wavefront sensor for ELTs, and such compact wavefront sensors take advantage of the large focal surfaces of telescopes such as the Giant Magellan Telescope. The focus of this paper is to study the effect of structural defects in the wound fibre image bundle on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle, and the impact of its structure on wavefront measurement accuracy statistics, are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is instead influenced by the read noise of the detector rather than the bundle's structural defects. We demonstrate this both in simulation and experimentally. We also provide a statistical model of the centroid and wavefront error of a wound fibre image bundle derived from experiment.
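
The first-moment centroid used in these simulations is worth stating explicitly; a minimal numpy sketch, without the bundle model itself:

```python
import numpy as np

def first_moment_centroid(spot):
    """Intensity-weighted centroid (x, y) of a 2-D spot image."""
    spot = spot.astype(float)
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (xs * spot).sum() / total, (ys * spot).sum() / total
```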

  17. Novel Detecting Methods of Shack-Hartmann Wavefront Sensor at Low Light Levels

    International Nuclear Information System (INIS)

    Zhang, A; Rao, C H; Zhang, Y D; Jiang, W H

    2006-01-01

A study of novel detection methods for a Shack-Hartmann wavefront sensor at low light levels has been made. Three methods of image processing before slope estimation are presented: Linear Enhancing (LE), Exponential Enhancing (EE) and Fourier Spectrum Filtering (FSF). The idea of the LE method is to multiply the image intensity by a chosen coefficient before slope estimation. In the EE method, image pixel values are raised to a selected exponent. The FSF method is based on the spectral difference between signal and noise: most of the noise spectrum is filtered out, suppressing the noise. The simulated and experimental results show that the LE method does not work effectively, while the other two methods can improve the slope estimation when the signal-to-noise ratio is higher than 3.0. When the signal-to-noise ratio is less than 3.0, and especially when it is less than 1.0, FSF is the only method that can overcome the readout noise of the CCD detector.
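
The three pre-processing methods reduce to short transformations. A hedged sketch, with illustrative coefficient, exponent, and cutoff values (the abstract does not fix them):

```python
import numpy as np

def linear_enhance(img, coeff=2.0):
    """LE: multiply intensities by a chosen coefficient."""
    return coeff * img.astype(float)

def exponential_enhance(img, power=1.5):
    """EE: raise pixel values to a selected exponent."""
    return img.astype(float) ** power

def fourier_spectrum_filter(img, keep_frac=0.1):
    """FSF: keep only the low-frequency part of the spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    mask = (y - h / 2) ** 2 + (x - w / 2) ** 2 <= (keep_frac * min(h, w)) ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```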

  18. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer

    2016-01-01

An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem: while measuring high-order wavefront distortion, the spots may leave the subapertures that are used to restrict their displacement. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because spots leave or cross the confined area of their subapertures. The proposed algorithm combines the Hough transform with an artificial neural network and requires no confined subapertures, thereby increasing the dynamic range of the SHWS. The algorithm is explored in comprehensive simulations and the results are compared with those of the existing algorithm.

  19. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
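
Of the five peak-finding methods listed, the 1D parabola is the most compact to write down, and its sub-pixel offset formula is exactly where peak-locking bias enters. A minimal sketch, assuming a 1-D slice through the correlation peak:

```python
import numpy as np

def parabolic_subpixel_peak(c):
    """Sub-pixel position of the peak of a 1-D correlation profile c."""
    i = int(np.argmax(c))
    if i == 0 or i == len(c) - 1:
        return float(i)                    # peak on the border: no refinement
    den = c[i - 1] - 2 * c[i] + c[i + 1]
    if den == 0:
        return float(i)                    # flat top: refinement undefined
    return i + 0.5 * (c[i - 1] - c[i + 1]) / den   # offset within (-0.5, 0.5)
```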

  20. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.

  1. Objective measurement of intraocular forward light scatter using Hartmann-Shack spot patterns from clinical aberrometers. Model-eye and human-eye study.

    Science.gov (United States)

    Cerviño, Alejandro; Bansal, Dheeraj; Hosking, Sarah L; Montés-Micó, Robert

    2008-07-01

To apply software-based image-analysis tools to objectively determine intraocular scatter from clinically derived Hartmann-Shack patterns. Aston Academy of Life Sciences, Aston University, Birmingham, United Kingdom, and Department of Optics, University of Valencia, Valencia, Spain. Prospective comparative study. Purpose-designed image-analysis software was used to quantify scatter from centroid patterns obtained using a clinical Hartmann-Shack analyzer (WASCA, Zeiss/Meditec). Three scatter values, defined as the maximum standard deviation within a lenslet across all lenslets in the pattern, were obtained in 6 model eyes and 10 human eyes. In the model-eye sample, patterns were obtained in 4 sessions: 2 without realigning between measurements, 1 with realignment, and 1 with an angular shift of 6 degrees from the instrument axis. Three measurements were made in the human eyes with the C-Quant straylight meter (Oculus) to obtain psychometric and objective measures of retinal straylight. Analysis of variance, intraclass correlation coefficients, coefficients of repeatability (CoR), and correlations were used to determine intrasession and intersession repeatability and the relationship between measures. No significant differences were found between the sessions in the model eye (P=.234). The mean CoR was less than 10% in all model- and human-eye sessions. After incomplete patterns were removed, good correlation was achieved between psychometric and objective scatter measurements despite the small sample size (n=6; r=-0.831; P=.040). The methodology was repeatable in model and human eyes, robust against realignment and misalignment, and sensitive. Clinical application would benefit from effective use of the sensor's dynamic range.

  2. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    Science.gov (United States)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

The accuracy of estimating optical aberrations by measuring the distorted wave front with a Hartmann-Shack wave front sensor (HSWS) depends mainly upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest-spot centroid, first-moment centroid, weighted center of gravity, and intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for estimating optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing and adaptive windowing), using the aberrations of a human eye model as an example. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hypermetropic defocus values up to 2 Diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications to estimate aberrations within the typical clinically acceptable limits of quarter-Diopter margins, when certain pre-processing steps are used to reduce the impact of external factors.
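
The pre-processing steps evaluated above are easy to sketch. A hedged example with illustrative parameter values, using scipy for the Gaussian smoothing (the threshold fraction and window size are assumptions, not the paper's settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_subaperture(subap, sigma=1.0, thresh_frac=0.2, half_win=5):
    """Smooth, threshold, and crop an adaptive window around the spot."""
    img = gaussian_filter(subap.astype(float), sigma)        # Gaussian smoothing
    img = np.where(img > thresh_frac * img.max(), img, 0.0)  # thresholding
    cy, cx = np.unravel_index(np.argmax(img), img.shape)     # adaptive window
    return img[max(cy - half_win, 0):cy + half_win + 1,
               max(cx - half_win, 0):cx + half_win + 1]
```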

  3. Multi-optical-axis measurement of freeform progressive addition lenses using a Hartmann-Shack wavefront sensor

    Science.gov (United States)

    Xiang, Huazhong; Guo, Hang; Fu, Dongxiang; Zheng, Gang; Zhuang, Songlin; Chen, JiaBi; Wang, Cheng; Wu, Jie

    2018-05-01

For precise whole-surface characterization of freeform progressive addition lenses (PALs), accounting for multi-optical-axis conditions is becoming particularly important. Spherical power and astigmatism (cylinder) measurements for freeform PALs using a Hartmann-Shack wavefront sensor (HSWFS) are proposed herein. Conversion formulas for the optical performance results were provided as HSWFS Zernike polynomial expansions. For each selected zone, the studied PALs were placed and tilted to simulate the multi-optical-axis conditions. The results for two tested PALs were analyzed using MATLAB programs and represented as contour plots of the spherical equivalent and cylinder over the whole surface. The proposed experimental setup provides high accuracy as well as the possibility of choosing 12 lines and 193 measurement zone positions on the entire surface. This approach to PAL analysis is potentially an efficient and useful method to objectively evaluate optical performance, in which the full lens surface is expressed as contour plots of power in the different regions of interest (i.e., the distance, progressive, and near regions) of the lens.
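
The conversion from second-order Zernike coefficients to spherical equivalent and cylinder is typically the standard power-vector form (Thibos et al.); a plausible rendering of the conversion formulas referred to, for OSA-normalized coefficients c_n^m and pupil radius r, is:

```latex
M = \frac{-4\sqrt{3}\, c_2^{0}}{r^2}, \qquad
J_0 = \frac{-2\sqrt{6}\, c_2^{2}}{r^2}, \qquad
J_{45} = \frac{-2\sqrt{6}\, c_2^{-2}}{r^2},
```

with cylinder C = -2*sqrt(J_0^2 + J_45^2) and sphere S = M - C/2.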

  4. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect on a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resulting extended-range spot map is normalized to obtain the spot centroids accurately. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because of its long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.
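
The extended-dynamic-range step can be illustrated by a simple exposure merge: for each pixel, keep the longest unsaturated exposure and normalize by integration time. A hedged numpy sketch (the saturation level and merge rule are assumptions, not the authors' exact procedure):

```python
import numpy as np

def hdr_spot_map(frames, times, sat_level=0.95 * 65535):
    """Merge frames taken at different integration times into one spot map."""
    merged = np.zeros_like(frames[0], dtype=float)
    filled = np.zeros(frames[0].shape, dtype=bool)
    # Visit exposures from longest to shortest; longer ones win where unsaturated.
    for img, t in sorted(zip(frames, times), key=lambda p: -p[1]):
        ok = (img < sat_level) & ~filled
        merged[ok] = img[ok] / t           # radiometric normalization
        filled |= ok
    return merged / merged.max()           # normalized extended-range map
```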

  5. Preliminary results of a high-resolution refractometer using the Hartmann-Shack wave-front sensor: part I

    Directory of Open Access Journals (Sweden)

    Luis Alberto Carvalho

    2003-06-01

In this project we are developing an instrument for measuring the wave-front aberrations of the human eye using the Hartmann-Shack sensor. A laser source is directed towards the eye and its diffuse reflection at the retina generates an approximately spherical wave-front inside the eye. This wave-front travels through the different components of the eye (vitreous humor, lens, aqueous humor, and cornea) and then leaves the eye carrying information about the aberrations caused by these components. Outside the eye, an optical system composed of an array of microlenses and a CCD camera receives the wave-front, which forms a pattern of spots at the CCD plane. Image processing algorithms detect the center of mass of each spot, and this information is used to calculate the exact wave-front surface using least-squares approximation by Zernike polynomials. We describe here the details of the first phase of this project, i.e., the construction of the first generation of prototype instruments and preliminary results for an artificial eye calibrated with different ametropias, i.e., myopia, hyperopia and astigmatism.
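
The final fitting step described above, least-squares approximation by Zernike polynomials, can be sketched in a few lines. This is a hedged illustration with a handful of low-order OSA-normalized Zernikes written out explicitly; the instrument presumably fits many more terms:

```python
import numpy as np

def zernike_basis(x, y):
    """A few low-order OSA-normalized Zernikes on unit-pupil coordinates."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),              # piston
        2 * x,                        # tilt x
        2 * y,                        # tilt y
        np.sqrt(3) * (2 * r2 - 1),    # defocus
        np.sqrt(6) * (x**2 - y**2),   # astigmatism 0/90
        2 * np.sqrt(6) * x * y,       # astigmatism 45
    ], axis=1)

def fit_zernikes(x, y, wavefront):
    """Least-squares Zernike coefficients for sampled wavefront values."""
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, wavefront, rcond=None)
    return coeffs
```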

  6. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping

    2009-01-01

A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into a centroid measurement; the accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, and deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
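
The adaptive-thresholding-plus-dynamic-windowing idea can be sketched as an iterated, re-centred centre of gravity. A hedged example (the threshold rule and window size are illustrative, not the authors' exact choices):

```python
import numpy as np

def iterative_windowed_centroid(img, half_win=6, iters=5):
    """Centroid from a window repeatedly re-centred on its own estimate."""
    img = img.astype(float)
    img = np.where(img > img.mean() + 2 * img.std(), img, 0.0)  # adaptive threshold
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    for _ in range(iters):
        y0 = max(int(cy) - half_win, 0)
        x0 = max(int(cx) - half_win, 0)
        win = img[y0:y0 + 2 * half_win + 1, x0:x0 + 2 * half_win + 1]
        s = win.sum()
        if s == 0:
            break
        ys, xs = np.indices(win.shape)
        cy = y0 + (ys * win).sum() / s    # dynamic window follows the spot
        cx = x0 + (xs * win).sum() / s
    return cx, cy
```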

  7. OPERA, an automatic PSF reconstruction software for Shack-Hartmann AO systems: application to Altair

    Science.gov (United States)

    Jolissaint, Laurent; Veran, Jean-Pierre; Marino, Jose

    2004-10-01

When doing high angular resolution imaging with adaptive optics (AO), it is of crucial importance to have an accurate knowledge of the point spread function associated with each observation. Applications are numerous: image contrast enhancement by deconvolution, improved photometry and astrometry, as well as real-time AO performance evaluation. In this paper, we present our work on automatic PSF reconstruction based on control loop data acquired simultaneously with the observation. This problem has already been solved for curvature AO systems. To adapt the method to another type of WFS, a specific analytical noise propagation model must be established. For the Shack-Hartmann WFS, we are able to derive a very accurate estimate of the noise on each slope measurement, based on the covariances of the WFS CCD pixel values in the corresponding sub-aperture. These covariances can either be derived off-line from telemetry data, or calculated by the AO computer during the acquisition. We present improved methods to determine (1) r0 from the DM drive commands, including an estimation of the outer scale L0, and (2) the contribution of the high-spatial-frequency component of the turbulent phase, which is not corrected by the AO system and is scaled by r0. This new method has been implemented in an IDL-based software package called OPERA (Performance of Adaptive Optics). We have tested OPERA on Altair, the recently commissioned Gemini-North AO system, and present our preliminary results. We also summarize the AO data required to run OPERA on any other AO system.

  8. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.

    Science.gov (United States)

    de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A

    2017-08-20

Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images of a model eye as well as of five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil can be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size; it also may relax constraints on centering the subject's pupil and on the shape of the pupil.
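
A hedged sketch of the pupil-detection idea: score every lenslet spot with a simple quality metric (total flux is used here as a stand-in for the paper's metric) and keep only sub-apertures above a threshold as the pupil mask.

```python
import numpy as np

def pupil_mask(sh_image, n_sub, thresh_frac=0.3):
    """Boolean n_sub x n_sub mask of illuminated sub-apertures."""
    h = sh_image.shape[0] // n_sub
    w = sh_image.shape[1] // n_sub
    flux = np.array([[sh_image[i*h:(i+1)*h, j*w:(j+1)*w].sum()
                      for j in range(n_sub)]
                     for i in range(n_sub)], dtype=float)
    return flux > thresh_frac * flux.max()   # True inside the detected pupil
```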

  9. Ocular aberrations with ray tracing and Shack-Hartmann wave-front sensors: Does polarization play a role?

    Science.gov (United States)

    Marcos, Susana; Diaz-Santana, Luis; Llorente, Lourdes; Dainty, Chris

    2002-06-01

    Ocular aberrations were measured in 71 eyes by using two reflectometric aberrometers, employing laser ray tracing (LRT) (60 eyes) and a Shack-Hartmann wave-front sensor (S-H) (11 eyes). In both techniques a point source is imaged on the retina (through different pupil positions in the LRT or a single position in the S-H). The aberrations are estimated by measuring the deviations of the retinal spot from the reference as the pupil is sampled (in LRT) or the deviations of a wave front as it emerges from the eye by means of a lenslet array (in the S-H). In this paper we studied the effect of different polarization configurations in the aberration measurements, including linearly polarized light and circularly polarized light in the illuminating channel and sampling light in the crossed or parallel orientations. In addition, completely depolarized light in the imaging channel was obtained from retinal lipofuscin autofluorescence. The intensity distribution of the retinal spots as a function of entry (for LRT) or exit pupil (for S-H) depends on the polarization configuration. These intensity patterns show bright corners and a dark area at the pupil center for crossed polarization, an approximately Gaussian distribution for parallel polarization and a homogeneous distribution for the autofluorescence case. However, the measured aberrations are independent of the polarization states. These results indicate that the differences in retardation across the pupil imposed by corneal birefringence do not produce significant phase delays compared with those produced by aberrations, at least within the accuracy of these techniques. In addition, differences in the recorded aerial images due to changes in polarization do not affect the aberration measurements in these reflectometric aberrometers.

  10. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    Science.gov (United States)

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays is proposed. Compared with slow-tool-servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in fabricating true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. With this method, each micro-lenslet is treated as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was used to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was shown to detect an infrared wavefront accurately in both experiments and numerical simulation. The combined results show that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  11. Channel Compensation for Speaker Recognition using MAP Adapted PLDA and Denoising DNNs

    Science.gov (United States)

    2016-06-21

Only fragments of this report are indexed: a list of Mixer 1 and 2 microphone types (Jabra cellphone earwrap mic, Motorola cellphone earbud, Olympus Pearlcorder, Radio Shack computer desktop mic; Table 1); EER and minimum DCF versus λ for two-covariance MAP-adapted PLDA, with the MAP-adapted PLDA model using a λ of 0.5; and an assessment of the performance impact of the denoising DNN on telephony data and on conversational telephone speech.

  12. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    Science.gov (United States)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype has been mounted using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, mirrored prism, movable mirror, wavefront sensor and CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel via the beam splitter to the optical system; some fall on the CCD camera, while others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations are constructed using the optical design software Zemax. The computer-aided HS images for each case are acquired and processed using customized techniques. The simulated and real images for low-order aberrations are compared using centroid coordinates to verify that the optical system is constructed precisely enough to match the simulated system. Afterwards, a simulated version of the retinal image is constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Certain personalized corrections are allowed by eye doctors based on different Zernike polynomial values, and the optical images are rendered according to the new parameters. Optical images of how an eye would see with or without correction of certain aberrations are generated, in order to show which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.

  13. The size effect of searching window for measuring wavefront of laser beam

    International Nuclear Information System (INIS)

    Park, Seung Kyu; Baik, Sung Hoon; Lim, Chang Hwan; Kim, Jung Cheol; Yi, Seung Jun; Ra, Sung Woong

    2003-01-01

We investigated the effect of the searching-window size when measuring the wavefront of a laser beam with a Shack-Hartmann sensor. The spot images in a wavefront image acquired with a Shack-Hartmann sensor usually have imbalanced shapes, and the intensity distribution of each spot varies between successively acquired wavefront images. We studied the optimum searching-window size for measuring the wavefront with high resolution, experimenting with various window sizes on acquired wavefront images. Based on the experimental results, we propose an optimum searching-window size for improved wavefront measurement.

  14. Quantitative comparison of different-shaped wavefront sensors and preliminary results for defocus aberrations on a mechanical eye

    Directory of Open Access Journals (Sweden)

    Luis Alberto Carvalho

    2006-04-01

PURPOSE: There is general acceptance among the scientific community of Cartesian-symmetry wavefront sensors (such as the Hartmann-Shack (HS) sensor) as a standard in the field of optics and vision science. In this study it is shown that sensors of different symmetries and/or configurations should also be tested and analyzed in order to quantify and compare their effectiveness when applied to visual optics. Three types of wave-aberration sensors were developed and tested here, each with a very different configuration and/or symmetry: dodecagonal (DOD), cylindrical (CYL) and conventional Hartmann-Shack (HS). METHODS: All sensors were designed and developed in the Physics Department of the Universidade de São Paulo - São Carlos. Each sensor was mounted on a laboratory optical bench used in a previous study. A commercial mechanical eye was used as a control. This mechanical eye has a rotating mechanism that allows the retinal plane to be positioned at different axial distances. Ten different defocus aberrations were generated: 5 cases of myopia from -1D to -5D and 5 cases of hyperopia from +1D to +5D, in steps of 1D, following the scale printed on the mechanical eye. For each wavefront sensor a specific image-processing and fitting algorithm was implemented. For all three cases, the wavefront information was fit using the first 36 VSIA-standard Zernike polynomials. Results for the mechanical eye were also compared to the absolute Zernike surface generated from coefficients associated with the theoretical sphere-cylinder aberration value. RESULTS: Precision was analyzed using two different methods: first, a theoretical approach generated synthetic Zernike coefficients from the known sphere-cylinder aberrations, simply by applying the sphere-cylinder equations in the backward direction; these coefficients were then compared with the ones obtained in practice. Results for the DOD, HS and CYL sensors were, respectively, as follows...

  15. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    Science.gov (United States)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed over the past few decades. Recently, image denoising using deep learning has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applications by comparison with existing image denoising algorithms. We train the proposed CDAE model on 3000 chest radiograms. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate its performance on chest radiograms acquired from real patients. The results demonstrate that the proposed CDAE-based denoising algorithm achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using a CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. We expect the proposed algorithm to be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
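
A minimal convolutional denoising autoencoder in the spirit of the one described, written in PyTorch for illustration (the paper does not specify a framework; the layer sizes, optimizer, and toy batch are assumptions):

```python
import torch
import torch.nn as nn

class CDAE(nn.Module):
    """Small encoder-decoder trained to map noisy images to clean ones."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One illustrative training step on a stand-in batch of 64 x 64 patches.
model = CDAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.rand(4, 1, 64, 64)   # placeholder noisy radiogram patches
clean = torch.rand(4, 1, 64, 64)   # placeholder clean targets
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
optimizer.step()
```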

  16. Sunmotor Solar Shack 120

    International Nuclear Information System (INIS)

    Jensen, E.

    2009-01-01

This article describes a solar pump developed by Alberta-based Sunmotor International Ltd. The prototype Solar Shack 120 was recently deployed in central Alberta for a remediation project for Devon Canada. The portable solar pump unit is well suited for environmental remediation in the oilpatch where conventional electricity is not available. The solar panels automatically run the pump whenever there is enough sunlight and there is liquid in the sump. Devon Canada wanted a system that continues to pump during cloudy weather to avoid the accumulation of effluent in the sump. The Solar Shack 120 delivers 120 volts of alternating current (VAC) power. Solar panels charge a bank of large sealed batteries that supply direct current (DC) to an inverter, which converts it into AC. A thermostat control was added to shut off the pumps in cold weather to avoid battery discharge. The Solar Shack unit has possibilities in countries with unreliable electricity supplies; it could provide a backup power supply that automatically kicks in whenever the power grid goes down. Sunmotor International Ltd. can supply complete remote power systems for both AC and DC electrical requirements. The systems are designed for each application to ensure customer satisfaction. The company is currently building a unit that integrates solar power with a generator backup, thereby eliminating the annoying noise of a continually running generator.

  17. Measuring and modeling intraocular light scatter with Shack-Hartmann wavefront sensing and the effects of nuclear cataract on the measurement of wavefront error

    Science.gov (United States)

    Donnelly, William J., III

Purpose. The purpose of this research is to determine whether Shack-Hartmann (S/H) wavefront sensing (SHWS) can be used to objectively quantify ocular forward scatter. Methods. Patient S/H images from a study of nuclear cataract were analyzed to extract scattering data by examining characteristics of the lenslet point spread functions. Physical and computer eye models with simulated cataract were developed to control variables and to test the underlying assumptions for using SHWS to measure aberrations and light scatter from nuclear cataract. Results. (1) For patients with nuclear opalescence (NO) >=2.5, forward scatter metrics in a multiple regression analysis account for 33% of the variance in mesopic low-contrast acuity. Prediction of visual acuity was improved by employing a multiple regression analysis that included both backscatter and forward scatter metrics (R2 = 51%) for mesopic high-contrast acuity. (2) The physical and computer models identified sources of instrument noise (e.g., stray light and unwanted reflections), improving the design of a second-generation SHWS for measuring both wavefront error and scatter. (3) Exposure time had the most influence, and pupil size a negligible influence, on forward scatter metrics. The scatter metric MAX_SD predicted changes in simulated cataract up to R2 = 92%. There were small but significant differences (alpha = 0.05) between 1.5-pass and 1-pass wavefront measurements inclusive of variable simulated nuclear cataract and exposure; however, these differences were not visually significant. Improvements to the SHWS imaging hardware, software, and test protocol were implemented in a second-generation SHWS to be used in a longitudinal cataract study. Conclusions. Forward light scatter in real eyes can be quantified using a SHWS. In the presence of clinically significant nuclear opalescence, forward scatter metrics predicted acuity better than the LOCS III NO backscatter metric. The superiority of forward scatter metrics over back...

  18. Reversal of Hartmann's procedure following acute diverticulitis: is timing everything?

    LENUS (Irish Health Repository)

    Fleming, Fergal J

    2012-02-01

BACKGROUND: Patients who undergo a Hartmann's procedure may not be offered a reversal due to concerns over the morbidity of the second procedure. The aims of this study were to examine the morbidity post reversal of Hartmann's procedure. METHODS: Patients who underwent a Hartmann's procedure for acute diverticulitis (Hinchey 3 or 4) between 1995 and 2006 were studied. Clinical factors including patient comorbidities were analysed to elucidate what preoperative factors were associated with complications following reversal of Hartmann's procedure. RESULTS: One hundred and ten patients were included. Median age was 70 years and 56% of the cohort were male (n = 61). The mortality and morbidity rates for the acute presentation were 7.3% (n = 8) and 34% (n = 37) respectively. Seventy-six patients (69%) underwent a reversal at a median of 7 months (range 3-22 months) post-Hartmann's procedure. The complication rate in the reversal group was 25% (n = 18). A history of current smoking (p = 0.004), increasing time to reversal (p = 0.04) and low preoperative albumin (p = 0.003) were all associated with complications following reversal. CONCLUSIONS: Reversal of Hartmann's procedure can be offered to appropriately selected patients, though with a significant (25%) morbidity rate. The identification of potentially modifiable factors such as current smoking, prolonged time to reversal and low preoperative albumin may allow optimisation of such patients preoperatively.

  19. Transition to turbulence in the Hartmann boundary layer

    Energy Technology Data Exchange (ETDEWEB)

    Thess, A.; Krasnov, D.; Boeck, T.; Zienicke, E. [Dept. of Mechanical Engineering, Ilmenau Univ. of Tech. (Germany); Zikanov, O. [Dept. of Mechanical Engineering, Univ. of Michigan, Dearborn, MI (United States); Moresco, P. [School of Physics and Astronomy, The Univ. of Manchester (United Kingdom); Alboussiere, T. [Lab. de Geophysique Interne et Tectonophysique, Observatoire des Science de l' Univers de Grenoble, Univ. Joseph Fourier, Grenoble (France)

    2007-07-01

    The Hartmann boundary layer is a paradigm of magnetohydrodynamic (MHD) flows. Hartmann boundary layers develop when a liquid metal flows under the influence of a steady magnetic field. The present paper is an overview of recent successful attempts to understand the mechanisms by which the Hartmann layer undergoes a transition from laminar to turbulent flow. (orig.)

  20. For an aesthetics of communication: Interview with Frank Hartmann

    Directory of Open Access Journals (Sweden)

    Lucia Leao

    2011-07-01

On August 18, 2010, Frank Hartmann visited the Post Graduate Program in Communication and Semiotics at PUC-SP, where he delivered the lecture "Towards an aesthetics of communication". Frank Hartmann is a full professor at the Bauhaus University, Weimar, Germany, and an external faculty member at the Department of Communication, University of Vienna, Austria. He is the author, among others, of Medien und Kommunikation (2008); Mediologie - Ansätze einer Medientheorie der Kulturwissenschaften (2003); and Medienphilosophie (2000). More information on his website: http://www.medienphilosophie.net/f_hartmann.html

  1. Morbidity and Mortality of Hartmann's Procedure for Sigmoid ...

    African Journals Online (AJOL)

    Background: The restoration of intestinal continuity following Hartmann's procedure is associated with high morbidity and mortality rates and low restoration rate. Objective: To determine the causes of complications and deaths associated with Hartmann's procedure and the secondary restoration of digestive continuity for ...

  2. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for images corrupted by additive white Gaussian noise (AWGN), using a quadratic gray-level function. A quantization method for the fractal gray-level coefficients of the quadratic function is also proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding; in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using a quadratic gray-level function generally performs better than standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  3. Image denoising by exploring external and internal correlations.

    Science.gov (United States)

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme, which explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and from web images, respectively. We then reduce noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch-matching accuracy in external denoising; the internal denoising is frequency truncation on internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we reduce noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch-matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves a >2 dB gain over BM3D at a wide range of noise levels.

  4. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising, built on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from the soft-thresholded image, and OAS object information extraction based on translation-invariant wavelet denoising is studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from an interferogram, and that information extraction with translation-invariant wavelet denoising outperforms that with plain soft-threshold wavelet denoising.
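
    A minimal sketch of translation-invariant (cycle-spinning) soft-threshold denoising in the spirit described above, using NumPy and PyWavelets; the wavelet, decomposition level, threshold and shift count are illustrative assumptions, not values from the paper.

```python
# Sketch: translation-invariant soft-threshold wavelet denoising via cycle
# spinning (average of denoisings over circular shifts), which suppresses
# the pseudo-Gibbs artifacts of plain soft thresholding.
import numpy as np
import pywt

def soft_threshold_denoise(img, wavelet="db4", level=3, t=0.1):
    """Denoise one image by soft-thresholding its wavelet detail coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]  # keep the approximation band untouched
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, t, mode="soft") for d in details))
    return pywt.waverec2(out, wavelet)

def cycle_spin_denoise(img, shifts=4, **kw):
    """Average denoisings over circular shifts (translation invariance)."""
    acc = np.zeros_like(img, dtype=float)
    for dx in range(shifts):
        for dy in range(shifts):
            shifted = np.roll(np.roll(img, dx, axis=0), dy, axis=1)
            den = soft_threshold_denoise(shifted, **kw)
            acc += np.roll(np.roll(den, -dx, axis=0), -dy, axis=1)
    return acc / shifts**2

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
clean = cycle_spin_denoise(noisy, shifts=4, t=0.2)
```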

  5. Energy poverty, shack fires and childhood burns

    African Journals Online (AJOL)

    sufficient choice in accessing adequate, affordable, reliable, high-quality, safe and ... The impact of informal settlement shack fires on individuals and communities has ... often the loss of lives. Fires kill thousands of people every year, with many more disabled ... self-extinguishing mechanism, which ensures that the flame is ...

  6. Denoising PCR-amplified metagenome data

    Directory of Open Access Journals (Sweden)

    Rosen Michael J

    2012-10-01

    Full Text Available Abstract Background PCR amplification and high-throughput sequencing theoretically enable the characterization of the finest-scale diversity in natural microbial and viral populations, but each of these methods introduces random errors that are difficult to distinguish from genuine biological diversity. Several approaches have been proposed to denoise these data but lack either speed or accuracy. Results We introduce a new denoising algorithm that we call DADA (Divisive Amplicon Denoising Algorithm). Without training data, DADA infers both the sample genotypes and error parameters that produced a metagenome data set. We demonstrate performance on control data sequenced on Roche’s 454 platform, and compare the results to the most accurate denoising software currently available, AmpliconNoise. Conclusions DADA is more accurate and over an order of magnitude faster than AmpliconNoise. It eliminates the need for training data to establish error parameters, fully utilizes sequence-abundance information, and enables inclusion of context-dependent PCR error rates. It should be readily extensible to other sequencing platforms such as Illumina.

  7. Brightness checkerboard lattice method for the calibration of the coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Li, Ning; Hu, Shinan; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2018-01-01

    The coaxial reverse Hartmann test (RHT) is widely used in the measurement of large aspheric surfaces as an auxiliary to interferometric measurement because of its large dynamic range, flexible testing of low-frequency surface errors, and low cost. The accuracy of the coaxial RHT depends on its calibration, yet the calibration process remains inefficient and its accuracy is limited by the signal-to-noise ratio. In this paper, brightness checkerboard lattices are used to replace the traditional dot matrix. The brightness checkerboard method reduces the number of dot-matrix projections in the calibration process, thus improving efficiency. An LCD screen displays a brightness checkerboard lattice in which brighter and darker blocks alternate. From the image on the detector, the relationship between rays at certain angles and the photosensitive positions in detector coordinates can be obtained, and a differential de-noising method effectively reduces the impact of noise on the measurement results. Simulation and experiment prove the feasibility of the method. Theoretical analysis and experimental results show that the efficiency of the brightness checkerboard lattices is about four times that of the traditional dot matrix, and that the signal-to-noise ratio of the calibration is significantly improved.
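
    The abstract does not spell out the differential de-noising step; one plausible reading, sketched below in NumPy with hypothetical names throughout, is to difference two exposures with the bright/dark checkerboard roles swapped, so static background light and detector offset cancel while the modulated calibration signal doubles.

```python
# Hedged sketch of a differential measurement: background and offset are
# common to both frames and cancel in the difference, improving SNR.
import numpy as np

def differential_frame(frame_a, frame_b):
    """Difference of complementary checkerboard frames; background cancels."""
    return frame_a.astype(float) - frame_b.astype(float)

rng = np.random.default_rng(0)
background = 50.0 + rng.normal(0, 2, (8, 8))        # shared stray light + noise
pattern = np.indices((8, 8)).sum(axis=0) % 2        # checkerboard mask
frame_a = background + 100 * pattern + rng.normal(0, 2, (8, 8))
frame_b = background + 100 * (1 - pattern) + rng.normal(0, 2, (8, 8))
signal = differential_frame(frame_a, frame_b)       # ~ +100 / -100 pattern
```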

  8. Non-local means denoising of dynamic PET images.

    Directory of Open Access Journals (Sweden)

    Joyita Dutta

    Full Text Available Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high

  9. Non-local means denoising of dynamic PET images.

    Science.gov (United States)

    Dutta, Joyita; Leahy, Richard M; Li, Quanzheng

    2013-01-01

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised [Formula: see text] PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while
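
    As a runnable baseline, here is plain spatial-patch NLM on a single noisy frame with scikit-image; the paper's three modifications (late-frame similarities, spatiotemporal patches, spatially varying smoothing) go beyond this stock routine, and the parameter values are illustrative.

```python
# Sketch: conventional non-local means on one 2D frame. NLM averages pixels
# whose surrounding patches look alike, weighting by patch similarity.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

frame = np.random.rand(128, 128) + 0.08 * np.random.randn(128, 128)
sigma = np.mean(estimate_sigma(frame))     # rough noise-level estimate
denoised = denoise_nl_means(
    frame,
    patch_size=5,       # local neighborhood compared between pixels
    patch_distance=6,   # search-window radius
    h=0.8 * sigma,      # smoothing strength tied to the noise estimate
    fast_mode=True,
)
```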

  10. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    Science.gov (United States)

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series: it can distinguish noise from deterministic components, and the uncertainty of the de-noising result can be quantitatively estimated with a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method, and the suitable decomposition level should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD, but such a series shows purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
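
    A rough sketch of the energy-comparison idea with NumPy and PyWavelets: estimate by Monte Carlo how much energy pure white noise places at each wavelet level, then keep only levels whose energy clearly exceeds that background. The noise-level estimator, percentile choice and zeroing rule are our assumptions, not the paper's exact recipe.

```python
# Sketch: Monte-Carlo background energy per wavelet level, used to screen
# out levels indistinguishable from white noise.
import numpy as np
import pywt

def level_energies(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return coeffs, np.array([np.sum(c**2) for c in coeffs[1:]])

def mc_denoise(series, wavelet="db4", level=4, n_mc=500, q=95):
    rng = np.random.default_rng(1)
    sigma = np.std(np.diff(series)) / np.sqrt(2)   # crude noise-level estimate
    mc = np.array([level_energies(rng.normal(0, sigma, series.size),
                                  wavelet, level)[1] for _ in range(n_mc)])
    background = np.percentile(mc, q, axis=0)      # noise energy per level
    coeffs, energies = level_energies(series, wavelet, level)
    for i, (e, b) in enumerate(zip(energies, background), start=1):
        if e <= b:                                 # indistinguishable from noise
            coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 512)
series = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(512)
denoised = mc_denoise(series)
```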

  11. Image processing and analysis using neural networks for optometry area

    Science.gov (United States)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for diagnosing eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is carried out by an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  12. The Noise Clinic: a Blind Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2015-01-01

    Full Text Available This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and scans of old photographs.

  13. Image denoising based on noise detection

    Science.gov (United States)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because noise points are scattered through an image, indiscriminate denoising also alters the original information of noise-free pixels. A noise detection algorithm based on fractional calculus is therefore proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks; next, the mean gray level is calculated to obtain gradient detection maps; a logical product is then taken to acquire the noise-position image. Comparing the visual effect and evaluation parameters after processing, experimental results show that the detection-based denoising algorithm outperforms traditional methods in both subjective and objective respects.
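
    An illustrative detect-then-denoise sketch in NumPy/SciPy: flag suspicious pixels with directional gradient masks combined by a logical AND, then median-filter only the flagged pixels. Plain first-order masks stand in for the paper's fractional-calculus masks, and the threshold rule is an assumption.

```python
# Sketch: a pixel is flagged as noise only if it deviates strongly in every
# gradient direction; only flagged pixels are replaced, preserving the rest.
import numpy as np
from scipy.ndimage import convolve, median_filter

def detect_noise(img, k=3.0):
    masks = [np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]]),   # vertical
             np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]]),   # horizontal
             np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]]),   # diagonal
             np.array([[0, 0, -1], [0, 1, 0], [0, 0, 0]])]   # anti-diagonal
    grads = [np.abs(convolve(img.astype(float), m)) for m in masks]
    return np.logical_and.reduce([g > k * g.mean() for g in grads])

def selective_denoise(img, k=3.0):
    flags = detect_noise(img, k)
    med = median_filter(img, size=3)
    out = img.copy()
    out[flags] = med[flags]       # noise-free pixels keep their original values
    return out

noisy = np.random.rand(64, 64)
noisy[np.random.rand(64, 64) < 0.05] = 1.0     # impulse noise
restored = selective_denoise(noisy)
```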

  14. Corneal topographer based on the Hartmann test.

    Science.gov (United States)

    Mejía, Yobani; Galeano, Janneth C

    2009-04-01

    The purpose of this article is to show the performance of a topographer based on the Hartmann test for convex surfaces of F/# approximately 1. This topographer, called the "Hartmann Test topographer" (HT topographer), is a prototype developed in the Physics Department of the Universidad Nacional de Colombia. From the Hartmann pattern generated by the surface under test, and by Fourier analysis and optical aberration theory, we obtain the sagitta (elevation map) of the surface. Taking the first and second derivatives of the sagitta in the radial direction then yields the meridional curvature map. The method is illustrated with an example. To check the performance of the HT topographer, a toric surface, a revolution aspherical surface, and two human corneas were measured. Our results are compared with those obtained with a Placido ring topographer (Tomey TMS-4 videokeratoscope), and our curvature maps are similar to those obtained with it. The HT topographer is able to reconstruct the corneal topography while potentially eradicating the skew-ray problem, so corneal defects can be visualized more clearly. The results are presented as elevation and meridional curvature maps.
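
    A worked numeric sketch of the final step: given a sampled sagitta z(r) along one meridian, the meridional curvature follows from the first and second radial derivatives as k = z''/(1 + z'^2)^(3/2). A sphere is used as the test surface so the expected answer (k = 1/R) is known; this is an illustration, not HT-topographer data.

```python
# Sketch: meridional curvature from a sampled sagitta via radial derivatives.
import numpy as np

R = 7.8                                  # radius of a sphere-like "cornea" (mm)
r = np.linspace(0, 4.0, 200)             # radial coordinate (mm)
z = R - np.sqrt(R**2 - r**2)             # sagitta of the sphere

dz = np.gradient(z, r)                   # first radial derivative
d2z = np.gradient(dz, r)                 # second radial derivative
k_meridional = d2z / (1 + dz**2) ** 1.5  # analytically equals 1/R for a sphere

print(1 / k_meridional[50], "should be close to", R)
```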

  15. Laparoscopic reversal of Hartmann's procedure

    DEFF Research Database (Denmark)

    Svenningsen, Peter Olsen; Bulut, Orhan; Jess, Per

    2010-01-01

    of Hartmann's procedure as safely as in open surgery and with a faster recovery, shorter hospital stay and less blood loss despite a longer knife time. It therefore seems reasonable to offer patients a laparoscopic procedure at departments which are skilled in laparoscopic surgery and use it for standard...

  16. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    Science.gov (United States)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising has always been an active research topic in computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the wide use of nonlocal similar patches. However, nonlocal self-similar (NSS) patch-based methods can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of NSS patches to obtain high denoising performance and high-quality images. To represent the sparse NSS patches at every image location well, and to solve the denoising model efficiently, we learn dictionaries as a global image prior with the K-SVD algorithm over the image being processed; the alternating direction method of multipliers (ADMM) is then used to solve the model. Extensive synthetic experiments demonstrate that, owing to the dictionaries learned by K-SVD, the sparsely augmented Lagrangian image denoising (SALID) model achieves state-of-the-art denoising performance and higher-quality images. We also present denoising results on clinical X-ray angiogram images.
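
    A compact sketch of dictionary-based patch denoising in this spirit, with scikit-learn's online dictionary learning standing in for K-SVD and simple patch averaging standing in for the ADMM solver; patch size, dictionary size and sparsity level are illustrative assumptions.

```python
# Sketch: learn a patch dictionary from the noisy image itself, sparse-code
# each patch with OMP, and rebuild the image by averaging overlapping patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)

patches = extract_patches_2d(noisy, (6, 6))
flat = patches.reshape(patches.shape[0], -1)
mean = flat.mean(axis=1, keepdims=True)
flat -= mean                                   # learn on zero-mean patches

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3)
codes = dico.fit(flat).transform(flat)         # sparse codes per patch
denoised_flat = codes @ dico.components_ + mean
denoised = reconstruct_from_patches_2d(
    denoised_flat.reshape(patches.shape), noisy.shape)
```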

  17. Determination of the paraxial focal length using Zernike polynomials over different apertures

    Science.gov (United States)

    Binkele, Tobias; Hilbig, David; Henning, Thomas; Fleischmann, Friedrich

    2017-02-01

    The paraxial focal length is still the most important parameter in the design of a lens. As presented at SPIE Optics + Photonics 2016, the measured focal length is a function of the aperture; the paraxial focal length is found as the aperture approaches zero. In this work, we investigate the dependency of the Zernike polynomials on the aperture size with respect to 3D space. By this, conventional wavefront measurement systems that apply Zernike polynomial fitting (e.g. a Shack-Hartmann sensor) can be used to determine the paraxial focal length, too. Since the Zernike polynomials are orthogonal over a unit circle, the aperture used in the measurement has to be normalized. By shrinking the aperture while maintaining the normalization, the Zernike coefficients change, and the relation between these changes and the paraxial focal length is investigated. The dependency of the focal length on the aperture size is derived analytically and evaluated by simulation and measurement of a strongly focusing lens. The measurements are performed using experimental ray tracing and a Shack-Hartmann sensor. With experimental ray tracing, the aperture can be chosen easily; with the Shack-Hartmann sensor, the aperture size is fixed, so the Zernike polynomials have to be adapted to different aperture sizes by the proposed method. In both cases, the paraxial focal length can then be determined from the measurements.
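
    A hedged numeric sketch of the aperture-scaling idea: with Noll-normalized defocus W = c4·√3·(2ρ² − 1) over an aperture of radius a, matching the wavefront to a sphere gives f ≈ a²/(4√3·c4); evaluating f for shrinking apertures and extrapolating to a → 0 yields a paraxial estimate. The c4(a) model below is synthetic and the formula assumes Noll normalization; a real measurement would refit the Zernike coefficients for each normalized aperture.

```python
# Sketch: aperture-dependent focal length from the defocus coefficient,
# extrapolated to zero aperture for the paraxial value.
import numpy as np

def focal_from_defocus(c4, a):
    """Focal length from Noll-normalized defocus over aperture radius a."""
    return a**2 / (4 * np.sqrt(3) * c4)

a = np.linspace(1e-3, 5e-3, 40)                    # aperture radii (m)
f_true, A = 0.1, 5e3                               # paraxial focal length, toy aberration
c4 = a**2 / (4 * np.sqrt(3) * f_true) * (1 + A * a**2)  # synthetic c4(a) model
f_meas = focal_from_defocus(c4, a)                 # aperture-dependent focal length

# extrapolate f(a) to zero aperture with a low-order polynomial fit in a^2
coef = np.polyfit(a**2, f_meas, 2)
print("paraxial estimate:", coef[-1], "vs true:", f_true)
```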

  18. Effect of denoising on supervised lung parenchymal clusters

    Science.gov (United States)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy to enhance clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  19. Effect of Hartmann layer resolution for MHD flow in a straight ...

    Indian Academy of Sciences (India)

    Effect of Hartmann layer resolution for MHD flow in a straight, conducting duct at high Hartmann numbers. Sharanya Subramanian, P K Swain, A V Deshpande and P Satyamurthy. Mechanical Engineering Department, Veermata Jijabai Technological Institute, ...

  20. Thermal lensing measurement from the coefficient of defocus aberration

    CSIR Research Space (South Africa)

    Bell, Teboho

    2016-03-01

    Full Text Available We measured the thermally induced lens from the coefficient of defocus aberration using a Shack-Hartmann wavefront sensor (SHWFS). As a calibration technique, we infer the focal length of standard lenses probed by a collimated Gaussian beam...

  1. Performance tuning for CUDA-accelerated neighborhood denoising filters

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Ziyi; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing, Computer Science; Xu, Wei

    2011-07-01

    Neighborhood denoising filters are powerful techniques in image processing and can effectively enhance the image quality of CT reconstructions. In this study, taking the bilateral filter and the non-local means filter as two examples, we discuss their implementations and perform fine-tuning on the targeted GPU architecture. Experimental results show that straightforward GPU-based neighborhood filters can be further accelerated by pre-fetching. The optimized GPU-accelerated denoising filters are ready to plug into reconstruction frameworks to enable fast denoising without compromising image quality. (orig.)

  2. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    Science.gov (United States)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
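
    A minimal sketch of the variance-stabilization step with the Anscombe transform: stabilize the signal-dependent quantum noise, denoise in the stabilized domain with any Gaussian-noise filter (a plain Gaussian blur stands in here for the state-of-the-art denoiser used in the paper), then invert. The algebraic inverse shown is slightly biased; an exact unbiased inverse exists in the literature.

```python
# Sketch: Anscombe VST pipeline for signal-dependent (Poisson-like) noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)      # Poisson -> approx. unit-variance Gaussian

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0        # simple algebraic inverse (biased)

counts = np.random.poisson(lam=20.0, size=(128, 128)).astype(float)
stabilized = anscombe(counts)
denoised_stab = gaussian_filter(stabilized, sigma=1.0)   # Gaussian-domain filter
denoised = inverse_anscombe(denoised_stab)
```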

  3. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    Science.gov (United States)

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.

  4. Speaker Recognition Using Real vs. Synthetic Parallel Data for DNN Channel Compensation

    Science.gov (United States)

    2016-09-08

    (Recording-channel list residue: Audio Technica hanging mic; 05 Jabra cellphone earwrap mic; 06 Motorola cellphone earbud; 07 Olympus Pearlcorder; 08 Radio Shack computer desktop mic.) ... on conversational telephone speech. To assess the performance impact of the denoising DNN on telephony data, we evaluated the DNNs on the SRE10 ... ASpIRE recipe. Importantly, all three denoising DNN systems did not adversely impact telephone SR performance as measured on the SRE10 telephone task.

  5. Denoising in Wavelet Packet Domain via Approximation Coefficients

    Directory of Open Access Journals (Sweden)

    Zahra Vahabi

    2012-01-01

    Full Text Available In this paper we propose a new approach to image denoising in the wavelet-packet domain. Recent research has used the wavelet transform, a time-frequency transform, to compute wavelet coefficients and eliminate noise. Some coefficients are less affected by noise than others, so they can be used together with the other subbands to reconstruct the image. We use the approximation subimage, which is naturally less noisy, to obtain a better denoised estimate. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of the method. Experimental results demonstrate that our approach compares favorably with more typical methods of denoising and compression in the wavelet domain. On 100 images of the LIVE dataset, comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.

  6. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.
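
    A rough NumPy sketch of the common filtering framework as described: for each pixel, exclude window neighbors that deviate strongly from the window median, then average the survivors. The robust deviation threshold is an assumption, and the brightness-controlled denoising strength for the chrominance channels is not modeled here.

```python
# Sketch: median-exclusion neighborhood filter. Outliers in each 3x3 window
# are dropped before averaging, so granular noise does not bias the result.
import numpy as np

def median_exclusion_filter(img, k=2.0):
    out = img.astype(float).copy()
    pad = np.pad(img.astype(float), 1, mode="reflect")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3].ravel()
            med = np.median(win)
            dev = np.median(np.abs(win - med)) + 1e-6   # robust scale (MAD)
            keep = win[np.abs(win - med) <= k * dev]    # exclude outliers
            out[i, j] = keep.mean()
    return out

noisy = np.random.rand(32, 32)
noisy[np.random.rand(32, 32) < 0.05] += 0.8            # granular outliers
smoothed = median_exclusion_filter(noisy)
```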

  7. Nonlinear Image Denoising Methodologies

    National Research Council Canada - National Science Library

    Yufang, Bao

    2002-01-01

    In this thesis, we propose a theoretical as well as practical framework that combines geometric prior information with a statistical/probabilistic methodology in the investigation of a denoising problem...

  8. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft-shrinkage real-time denoising method based on the lifting wavelet transform is proposed. A moving data window realizes real-time wavelet denoising, employing a lifting-scheme wavelet transform to reduce computational complexity; a hyperbolic threshold function and recursive threshold computation preserve the dynamic characteristics of the system while improving real-time computational efficiency. Simulation results show that the semisoft-shrinkage real-time denoising method performs well in comparison to the traditional soft-thresholding and hard-thresholding methods, so it can be applied to practical engineering problems.
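
    A small sketch of a semisoft ("firm") shrinkage rule, the family this method builds on: coefficients below t1 are zeroed, those above t2 are kept, and those in between are shrunk linearly, interpolating between hard and soft thresholding. The threshold values are illustrative; the paper's hyperbolic variant and recursive threshold update are not reproduced here.

```python
# Sketch: semisoft (firm) shrinkage applied elementwise to wavelet coefficients.
import numpy as np

def semisoft_shrink(w, t1, t2):
    """Firm shrinkage: zero below t1, identity above t2, linear ramp between."""
    a = np.abs(w)
    return np.where(a <= t1, 0.0,
           np.where(a >= t2, w,
                    np.sign(w) * t2 * (a - t1) / (t2 - t1)))

coeffs = np.linspace(-3, 3, 13)
print(semisoft_shrink(coeffs, t1=1.0, t2=2.0))
```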

  9. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient.

    Science.gov (United States)

    Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing

    2017-12-26

    The sound signal of a ship obtained by sensors, called ship-radiated noise (SN), contains many other significant characteristics of the ship, so research into denoising algorithms and their application is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with the correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with the CC. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the number of modes extracted by VMD is set equal to the number given by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed to realize the first denoising pass. Finally, secondary denoising of the simulation signal is accomplished by repeating the steps of decomposition, screening and reconstruction; the final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and VMD decomposition settings. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed algorithm is applied to feature extraction and classification of SN signals, and effectively improves the recognition rate of different kinds of ships.
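
    A runnable sketch of the correlation-coefficient screening step: given a signal and its band-limited modes (from VMD or any similar decomposition), keep only the modes whose correlation with the signal exceeds a threshold and reconstruct from those. A VMD routine is assumed to supply the modes in practice; synthetic stand-in components are used here so the sketch runs on its own, and the threshold is illustrative.

```python
# Sketch: CC-based mode screening. Noise-dominated modes correlate weakly
# with the composite signal and are dropped before reconstruction.
import numpy as np

def cc_screen(signal, imfs, threshold=0.5):
    """Drop modes weakly correlated with the signal; sum the rest."""
    keep = [m for m in imfs
            if abs(np.corrcoef(signal, m)[0, 1]) > threshold]
    return np.sum(keep, axis=0)

t = np.linspace(0, 1, 1000)
tone = np.sin(2 * np.pi * 8 * t)
noise_parts = [0.2 * np.random.randn(t.size) for _ in range(3)]
signal = tone + sum(noise_parts)
imfs = [tone] + noise_parts            # stand-in for a VMD decomposition
denoised = cc_screen(signal, imfs, threshold=0.5)
# a second pass over a re-decomposition of `denoised` gives the 2VMD variant
```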

  10. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient

    Directory of Open Access Journals (Sweden)

    Yuxing Li

    2017-12-01

    Full Text Available The sound signal of a ship obtained by sensors, called ship-radiated noise (SN), contains many other significant characteristics of the ship, so research into denoising algorithms and their application is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with the correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with the CC. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the number of modes extracted by VMD is set equal to the number given by empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed to realize the first denoising pass. Finally, secondary denoising of the simulation signal is accomplished by repeating the steps of decomposition, screening and reconstruction; the final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and VMD decomposition settings. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed algorithm is applied to feature extraction and classification of SN signals, and effectively improves the recognition rate of different kinds of ships.

  11. Random Correlation Matrix and De-Noising

    OpenAIRE

    Ken-ichi Mitsui; Yoshio Tabata

    2006-01-01

    In finance, the modeling of a correlation matrix is an important problem. In particular, a correlation matrix obtained from market data is noisy. Here we apply de-noising based on wavelet analysis to a noisy correlation matrix generated by a parametric function with random parameters. First of all, we show that two properties of the correlation matrix, i.e. symmetry and unit diagonal elements, are preserved by the de-noising processing, and the...

  12. A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising

    Directory of Open Access Journals (Sweden)

    Can He

    2015-01-01

    Full Text Available Due to simple calculation and good denoising effect, wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve the denoising effect of the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method can achieve a good denoising effect under various signal types, noise intensities, and thresholding functions.

  13. Echocardiogram enhancement using supervised manifold denoising.

    Science.gov (United States)

    Wu, Hui; Huynh, Toan T; Souvenir, Richard

    2015-08-01

    This paper presents data-driven methods for echocardiogram enhancement. Existing denoising algorithms typically rely on a single noise model, and do not generalize to the composite noise sources typically found in real-world echocardiograms. Our methods leverage the low-dimensional intrinsic structure of echocardiogram videos. We assume that echocardiogram images are noisy samples from an underlying manifold parametrized by cardiac motion and denoise images via back-projection onto a learned (non-linear) manifold. Our methods incorporate synchronized side information (e.g., electrocardiography), which is often collected alongside the visual data. We evaluate the proposed methods on a synthetic data set and real-world echocardiograms. Quantitative results show improved performance of our methods over recent image despeckling methods and video denoising methods, and a visual analysis of real-world data shows noticeable image enhancement, even in the challenging case of noise due to dropout artifacts. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Medical Image Denoising Using Mixed Transforms

    Directory of Open Access Journals (Sweden)

    Jaleel Sadoon Jameel

    2018-02-01

    Full Text Available In this paper, a mixed transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method cascades WT and MWT to enhance denoising performance. Practically, the first step is to add noise to Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) images for the sake of testing. The noisy image is processed by WT to obtain four sub-bands, and each sub-band is treated individually using MWT before the soft/hard thresholding stage. Simulation results show that the peak signal-to-noise ratio (PSNR) is improved significantly and the characteristic features are well preserved by employing the mixed transform of WT and MWT, owing to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) decreases accordingly compared to other available methods.

  15. Regularized image denoising based on spectral gradient optimization

    International Nuclear Information System (INIS)

    Lukić, Tibor; Lindblad, Joakim; Sladoje, Nataša

    2011-01-01

    Image restoration methods, such as denoising, deblurring, inpainting, etc., are often based on the minimization of an appropriately defined energy function. We consider energy functions for image denoising which combine a quadratic data-fidelity term and a regularization term, where the properties of the latter are determined by the chosen potential function. Many potential functions are suggested for different purposes in the literature. We compare the denoising performance achieved by ten different potential functions. Several methods for efficient minimization of regularized energy functions exist. Most are only applicable to particular choices of potential functions, however. To enable a comparison of all the observed potential functions, we propose to minimize the objective function using a spectral gradient approach; spectral gradient methods put very weak restrictions on the used potential function. We present and evaluate the performance of one spectral conjugate gradient and one cyclic spectral gradient algorithm, and conclude from experiments that both are well suited for the task. We compare the performance with three total variation-based state-of-the-art methods for image denoising. From the empirical evaluation, we conclude that denoising using the Huber potential (for images degraded by higher levels of noise; signal-to-noise ratio below 10 dB) and the Geman and McClure potential (for less noisy images), in combination with the spectral conjugate gradient minimization algorithm, shows the overall best performance.
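
    A minimal sketch of this setup: minimize a quadratic data-fidelity term plus a Huber-regularized gradient penalty by gradient descent with a spectral (Barzilai-Borwein) step size. The discretization, step safeguards and parameter values are illustrative assumptions, and the paper's conjugate and cyclic variants are not reproduced.

```python
# Sketch: spectral-gradient (BB-step) minimization of
# E(u) = 0.5*||u - f||^2 + lam * sum(huber(grad u)).
import numpy as np

def huber_grad(v, delta):
    """Derivative of the Huber potential applied elementwise."""
    return np.where(np.abs(v) <= delta, v, delta * np.sign(v))

def energy_grad(u, f, lam, delta):
    dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
    dy = np.diff(u, axis=0, append=u[-1:, :])
    px, py = huber_grad(dx, delta), huber_grad(dy, delta)
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return (u - f) - lam * div                  # fidelity + regularizer gradient

def spectral_gradient_denoise(f, lam=0.5, delta=0.05, iters=100):
    u = f.copy()
    g = energy_grad(u, f, lam, delta)
    alpha = 1e-2                                # initial step size
    for _ in range(iters):
        u_new = u - alpha * g
        g_new = energy_grad(u_new, f, lam, delta)
        s, y = (u_new - u).ravel(), (g_new - g).ravel()
        alpha = max(s @ s / (s @ y + 1e-12), 1e-4)   # BB1 spectral step
        u, g = u_new, g_new
    return u

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
denoised = spectral_gradient_denoise(noisy)
```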

  16. Fringe pattern denoising using coherence-enhancing diffusion.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian; Gao, Wenjing; Lin, Feng; Seah, Hock Soon

    2009-04-15

    Electronic speckle pattern interferometry is one of the methods for measuring displacement on object surfaces, in which fringe patterns need to be evaluated. Noise is one of the key problems affecting further processing and reducing measurement quality. We propose an application of coherence-enhancing diffusion to fringe-pattern denoising. It smooths a fringe pattern along directions both parallel and perpendicular to the fringe orientation, with suitable diffusion speeds, to reduce noise more effectively and improve fringe-pattern quality. It generalizes the model of Tang et al. [Opt. Lett. 33, 2179 (2008)], which only smooths a fringe pattern along the fringe orientation. Since our model diffuses a fringe pattern along an additional direction, it is able to denoise low-density fringes as well as improve denoising effectiveness for high-density fringes. Theoretical analysis as well as simulation and experimental verification are presented.

  17. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    Science.gov (United States)

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse-coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing and thus inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation, we propose a 2D image denoising model, namely the dictionary pair learning (DPL) model, and design a corresponding algorithm called DPL on the Grassmann manifold (DPLG). The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse-coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data set demonstrate that DPLG improves the structural similarity (SSIM) values of perceptual visual quality for denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  18. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.

  19. A procedure for denoising dual-axis swallowing accelerometry signals

    International Nuclear Information System (INIS)

    Sejdić, Ervin; Chau, Tom; Steele, Catriona M

    2010-01-01

    Dual-axis swallowing accelerometry is an emerging tool for the assessment of dysphagia (swallowing difficulties). These signals however can be very noisy as a result of physiological and motion artifacts. In this note, we propose a novel scheme for denoising those signals, i.e. a computationally efficient search for the optimal denoising threshold within a reduced wavelet subspace. To determine a viable subspace, the algorithm relies on the minimum value of the estimated upper bound for the reconstruction error. A numerical analysis of the proposed scheme using synthetic test signals demonstrated that the proposed scheme is computationally more efficient than minimum noiseless description length (MNDL)-based denoising. It also yields smaller reconstruction errors than MNDL, SURE and Donoho denoising methods. When applied to dual-axis swallowing accelerometry signals, the proposed scheme exhibits improved performance for dry, wet and wet chin tuck swallows. These results are important for the further development of medical devices based on dual-axis swallowing accelerometry signals. (note)

  20. Regularized Pre-image Estimation for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    The main challenge in de-noising by kernel Principal Component Analysis (PCA) is the mapping of de-noised feature space points back into input space, also referred to as “the pre-image problem”. Since the feature space mapping is typically not bijective, pre-image estimation is inherently ill-posed...
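
    As a minimal runnable sketch of kernel PCA de-noising, scikit-learn's KernelPCA can learn an internally regularized pre-image map when fit_inverse_transform=True, illustrating the pre-image problem described above; the kernel and its parameters are illustrative, and this is not the regularization scheme proposed in the paper.

```python
# Sketch: project noisy points onto leading kernel principal components,
# then map back to input space via the learned (regularized) pre-image map.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, (300, 1))
clean = np.hstack([np.cos(t), np.sin(t)])          # points on a circle
noisy = clean + 0.15 * rng.normal(size=clean.shape)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))
```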

  1. Image denoising via adaptive eigenvectors of graph Laplacian

    Science.gov (United States)

    Chen, Ying; Tang, Yibin; Xu, Ning; Zhou, Lin; Zhao, Li

    2016-07-01

    An image denoising method via adaptive eigenvectors of the graph Laplacian (EGL) is proposed. Unlike the trivial parameter setting of the eigenvectors used in the traditional EGL method, in our method the eigenvectors are adaptively selected throughout the denoising procedure. In detail, a rough image is first built with eigenvectors from the noisy image, where the eigenvectors are selected using a deviation estimate of the clean image. Subsequently, a guided image is restored as a weighted average of the noisy and rough images; the averaging coefficient is adaptively chosen so that the deviation of the guided image approximates that of the clean image. Finally, the denoised image is obtained by a group-sparse model with the pattern from the guided image, where the eigenvectors are chosen under an error control tied to the noise deviation. Moreover, a modified group orthogonal matching pursuit algorithm is developed to solve the group-sparse model efficiently. Experiments show that our method not only improves the practicality of EGL methods by reducing dependence on parameter settings, but also outperforms some well-developed denoising methods, especially for noise with large deviations.

  2. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    Science.gov (United States)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to misinterpretation of results; denoising the extracted vibration signals helps fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals, but a proper denoising scheme must be selected to prevent loss of signal information along with the noise. This work shows the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and Neighcoeff Coefficient (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. Vibration signals acquired from a customized gear test rig were denoised by the four schemes, and their fault-identification capability as well as SNR, kurtosis and RMSE were compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models, and the four denoising schemes were evaluated based on the performance of these models. Based on the classification-accuracy results, PCA is effective in all regards as the best denoising scheme.

  3. ECG denoising with adaptive bionic wavelet transform.

    Science.gov (United States)

    Sayadi, Omid; Shamsollahi, Mohammad Bagher

    2006-01-01

    In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named the bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. The BWT has some outstanding features, such as nonlinearity, high sensitivity and frequency selectivity, a concentrated energy distribution and the ability to reconstruct the signal via an inverse transform; its most distinguishing characteristic is that its resolution in the time-frequency domain can be adjusted adaptively, not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. By optimizing the BWT parameters in parallel with a modified threshold value, ECG denoising results are comparable to those of the wavelet transform (WT). Preliminary tests of the BWT applied to ECG denoising, conducted on signals from the MIT-BIH database, showed high noise-reduction performance.

  4. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.

  5. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    International Nuclear Information System (INIS)

    Karimi, Davood; Ward, Rabab K

    2016-01-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. (paper)

  6. Fringe pattern denoising via image decomposition.

    Science.gov (United States)

    Fu, Shujun; Zhang, Caiming

    2012-02-01

    Filtering off noise from a fringe pattern is one of the key tasks in optical interferometry. In this Letter, using some suitable function spaces to model different components of a fringe pattern, we propose a new fringe pattern denoising method based on image decomposition. In our method, a fringe image is divided into three parts: low-frequency fringe, high-frequency fringe, and noise, which are processed in different spaces. An adaptive threshold in wavelet shrinkage involved in this algorithm improves its denoising performance. Simulation and experimental results show that our algorithm obtains smooth and clean fringes with different frequencies while preserving fringe features effectively.

  7. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.

  8. Nonlinear differential equations for the wavefront surface at arbitrary Hartmann-plane distances.

    Science.gov (United States)

    Téllez-Quiñones, Alejandro; Malacara-Doblado, Daniel; Flores-Hernández, Ricardo; Gutiérrez-Hernández, David A; León-Rodríguez, Miguel

    2016-03-20

    In the Hartmann test, a wave aberration function W is estimated from the information of the spot diagram drawn in an observation plane. The distance from a reference plane to the observation plane, the Hartmann-plane distance, is typically chosen as z=f, where f is the radius of a reference sphere. The function W and the transversal aberrations {X,Y} calculated at the plane z=f are related by two well-known linear differential equations. Here, we propose two nonlinear differential equations to denote a more general relation between W and the transversal aberrations {U,V} calculated at any arbitrary Hartmann-plane distance z=r. We also show how to directly estimate the wavefront surface w from the information of {U,V}. The use of arbitrary r values could improve the reliability of the measurements of W, or w, when finding difficulties in adequate ray identification at z=f.
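
    For reference, the two well-known linear equations at z = f take the following common textbook form (sign conventions vary by author); the paper's contribution is the nonlinear generalization of these relations to an arbitrary Hartmann-plane distance z = r:

```latex
% Linear relations between W and the transversal aberrations {X, Y} at z = f
\frac{\partial W}{\partial x} = -\frac{X}{f},
\qquad
\frac{\partial W}{\partial y} = -\frac{Y}{f}
```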

  9. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    In order to improve denoising performance, this paper reviews the basic principles of wavelet threshold denoising and the traditional threshold functions, and proposes an improved wavelet threshold function together with an improved fixed threshold formula. First, the problems of the traditional wavelet threshold functions are analyzed, and adjustment factors are introduced to construct a new threshold function based on the soft threshold function. Then, the fixed threshold is studied, and a logarithmic function of the number of wavelet decomposition layers is introduced to design a new fixed threshold formula. Finally, the hard, soft, Garrote, and improved threshold functions are used to denoise different signals, and the signal-to-noise ratio (SNR) and mean square error (MSE) after denoising are calculated for each. Theoretical analysis and experimental results show that the proposed approach overcomes the constant deviation of the soft threshold function and the discontinuity of the hard threshold function, avoids applying the same threshold value at every decomposition scale, effectively filters the noise in the signals, and improves the SNR and reduces the MSE of the output signals.
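
    The classical threshold rules compared in the study can be sketched in a few lines of NumPy; the paper's improved function with adjustment factors is not reproduced here.

        # Hard, soft, and non-negative Garrote threshold rules (NumPy sketch).
        import numpy as np

        def hard_threshold(w, t):
            return np.where(np.abs(w) > t, w, 0.0)

        def soft_threshold(w, t):
            return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

        def garrote_threshold(w, t):
            # Shrinks large coefficients less than the soft rule does.
            safe_w = np.where(w == 0, 1.0, w)   # avoid division by zero
            return np.where(np.abs(w) > t, w - t**2 / safe_w, 0.0)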

  10. Is Diversion with Ileostomy Non-inferior to Hartmann Resection for Left-sided Colorectal Anastomotic Leak?

    Science.gov (United States)

    Stafford, Caitlin; Francone, Todd D; Marcello, Peter W; Roberts, Patricia L; Ricciardi, Rocco

    2018-03-01

    Treatment of left-sided colorectal anastomotic leaks often requires fecal stream diversion for prevention of further septic complications. To manage anastomotic leak, it is unclear if diverting ileostomy provides similar outcomes to Hartmann resection with colostomy. We identified all patients who developed anastomotic leak following left-sided colorectal resections from 1/2012 through 12/2014 using the American College of Surgeons National Surgical Quality Improvement Program. Then, we examined the risk of mortality and abdominal reoperation in patients treated with diverting ileostomy as compared to Hartmann resection. There were 1745 patients who experienced an anastomotic leak in a cohort of 63,748 patients (3.7%). Two hundred thirty-five patients had a reoperation for anastomotic leak involving the formation of a diverting ileostomy (n = 77) or Hartmann resection (n = 158). There was no difference in mortality or abdominal reoperation in patients treated with diverting ileostomy (3.9, 7.8%) versus Hartmann resection (3.8, 6.3%) (p = 0.8). There was no difference in the outcomes of mortality or need for second abdominal reoperation in patients treated with diverting ileostomy as compared to Hartmann resection for left-sided colorectal anastomotic leak. Thus, select patients with left-sided colorectal anastomotic leaks may be safely managed with diverting ileostomy.

  11. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    Science.gov (United States)

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  12. Comparative study on γ energy spectrum denoising by Fourier and wavelet transforms

    International Nuclear Information System (INIS)

    Shi Dongsheng; Di Yuming; Zhou Chunlin

    2007-01-01

    This paper introduces the basic principles of the wavelet and Fourier transforms, applies the wavelet transform method to denoising the γ energy spectrum of 60Co, and compares it with the Fourier transform method. Simulation results obtained with the MATLAB software tool showed that, compared with the traditional Fourier transform, the wavelet transform achieves higher accuracy and is more suitable for γ energy spectrum denoising. (authors)
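
    The comparison can be reproduced in outline with NumPy and PyWavelets; the cutoff fraction and threshold rule below are illustrative choices, not those of the paper.

        # Sketch: Fourier low-pass vs. wavelet shrinkage on a 1D spectrum.
        import numpy as np
        import pywt

        def fourier_denoise(spectrum, keep=0.1):
            F = np.fft.rfft(spectrum)
            F[int(len(F) * keep):] = 0.0                     # crude low-pass
            return np.fft.irfft(F, n=len(spectrum))

        def wavelet_denoise(spectrum, wavelet="db4", level=4):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
            t = sigma * np.sqrt(2 * np.log(len(spectrum)))   # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, t, "soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)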

  13. [Research on electrocardiogram de-noising algorithm based on wavelet neural networks].

    Science.gov (United States)

    Wan, Xiangkui; Zhang, Jun

    2010-12-01

    In this paper, a de-noising technique based on wavelet neural networks (WNN) is used to deal with the noise in electrocardiogram (ECG) signals. The structure of the WNN, which has an outstanding nonlinear mapping capability, is designed as a nonlinear filter used to cancel the baseline wander, electromyographical interference, and powerline interference in the ECG. The network training algorithm and de-noising experiment results are presented, and some key points of the WNN filter for ECG de-noising are discussed.

  14. Exact solutions of continuous states for Hartmann potential

    International Nuclear Information System (INIS)

    Chen Changyuan; Lu Falin; Sun Dongsheng

    2004-01-01

    In this Letter, we obtain the exact solutions of continuous states for the Hartmann potential. The normalized wave functions of continuous states on the 'k/2π scale' and the calculation formula of phase shifts are presented. Analytical properties of the scattering amplitude are discussed.

  15. Vibration sensor data denoising using a time-frequency manifold for machinery fault diagnosis.

    Science.gov (United States)

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2013-12-27

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods.

  16. A wavelet multiscale denoising algorithm for magnetic resonance (MR) images

    International Nuclear Information System (INIS)

    Yang, Xiaofeng; Fei, Baowei

    2011-01-01

    Based on the Radon transform, a wavelet multiscale denoising method is proposed for MR images. The approach explicitly accounts for the Rician nature of MR data. Based on noise statistics, we apply the Radon transform to the original MR images and use a Gaussian noise model to process the MR sinogram image. A translation-invariant wavelet transform is employed to decompose the MR sinogram into multiple scales in order to effectively denoise the images. Based on the nature of Rician noise, we estimate the noise variance at different scales. For the final denoised sinogram, we apply the inverse Radon transform in order to reconstruct the original MR images. Phantom images, simulated brain MR images, and human brain MR images were used to validate our method. The experimental results show the superiority of the proposed scheme over traditional methods. Our method can reduce Rician noise while preserving the key image details and features. The wavelet denoising method can have wide applications in MRI as well as in other imaging modalities.

  17. Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2012-05-01

    Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η, and η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi. The TV regularization technique has since been applied to a multitude of other imaging problems; see for example Chan and Shen's book. We focus here on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising.
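
    scikit-image ships an implementation of this split Bregman TV denoiser, so a usage sketch is short; the `weight` value below is arbitrary.

        # Usage sketch: TV denoising via split Bregman (scikit-image).
        import numpy as np
        from skimage.restoration import denoise_tv_bregman

        rng = np.random.default_rng(0)
        noisy = np.clip(rng.random((128, 128)) + 0.1 * rng.standard_normal((128, 128)), 0, 1)
        denoised = denoise_tv_bregman(noisy, weight=10.0)  # larger weight = weaker smoothing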

  18. Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2018-01-01

    The SGK (sequential generalization of K-means) dictionary learning denoising algorithm has the characteristics of fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm combining SGK dictionary learning and principal component analysis (PCA) based noise estimation. First, the noise standard deviation of the image is estimated by using the PCA noise estimation algorithm. It is then used by the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; and (3) compared with the original SGK algorithm, the proposed algorithm achieves higher PSNR and better denoising performance.
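
    A minimal sketch of patch-based PCA noise estimation in the spirit described above: the smallest eigenvalues of the patch covariance approximate the noise variance. The published estimator's patch selection and iteration details are not reproduced.

        # Sketch: noise std. dev. from the smallest eigenvalues of the patch
        # covariance (weak-texture patches are dominated by noise).
        import numpy as np

        def estimate_sigma_pca(img, patch=7):
            H, W = img.shape
            patches = np.array([img[i:i+patch, j:j+patch].ravel()
                                for i in range(0, H - patch, patch)
                                for j in range(0, W - patch, patch)])
            eigvals = np.sort(np.linalg.eigvalsh(np.cov(patches, rowvar=False)))
            # Average the smallest quarter of the eigenvalues as the noise variance.
            return float(np.sqrt(eigvals[: len(eigvals) // 4].mean()))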

  19. Wave front sensing for next generation earth observation telescope

    Science.gov (United States)

    Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.

    2017-09-01

    High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows the Nyquist frequency to be set closer to the cut-off frequency, or equivalently the pupil diameter to be minimized for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions, and refocusing is done with a thermal actuation of the M2 mirror. Next generation systems under study at CNES should include active optics in order to correct evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is to use a Shack-Hartmann sensor. The Shack-Hartmann wave-front sensor can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the control loop with a wave-front error measure. In the worst case scenario, this measure should be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high order aberrations, and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis over the whole parameter set with its internally developed algorithm.
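
    A single-window gradient-based (Lucas-Kanade style) shift estimator of the kind evaluated in the study can be sketched as follows; pyramids, robustness logic, and on-board constraints are omitted.

        # Sketch: one Lucas-Kanade step estimating the (dx, dy) shift between two
        # subaperture images of the same extended scene (small-shift regime).
        import numpy as np

        def estimate_shift(ref, img):
            # Model: img(x, y) ~= ref(x, y) + gx * dx + gy * dy, solved in least squares.
            gy, gx = np.gradient(ref.astype(float))
            it = img.astype(float) - ref.astype(float)
            A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                          [np.sum(gx * gy), np.sum(gy * gy)]])
            b = np.array([np.sum(gx * it), np.sum(gy * it)])
            return np.linalg.solve(A, b)    # (dx, dy) in pixels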

  20. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors from either external data or the noisy image itself. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian distribution, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.

  1. The denoising of Monte Carlo dose distributions using convolution superposition calculations

    International Nuclear Information System (INIS)

    El Naqa, I; Cui, J; Lindsay, P; Olivera, G; Deasy, J O

    2007-01-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction. (note)
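
    The frequency-splitting idea can be sketched with complementary Butterworth filters in the Fourier domain: low frequencies from the MC dose, high frequencies from the CS dose. The cutoff and order below are illustrative, and the true quadrature filter design of the note is not reproduced.

        # Sketch: split MC and CS dose arrays with complementary 3D Butterworth
        # filters and recombine (low frequencies from MC, high from CS).
        import numpy as np

        def butterworth_lowpass(shape, cutoff=0.15, order=4):
            grids = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
            r = np.sqrt(sum(g ** 2 for g in grids))
            return 1.0 / (1.0 + (r / cutoff) ** (2 * order))

        def frequency_split(mc_dose, cs_dose, cutoff=0.15):
            lp = butterworth_lowpass(mc_dose.shape, cutoff)
            F = lp * np.fft.fftn(mc_dose) + (1.0 - lp) * np.fft.fftn(cs_dose)
            return np.real(np.fft.ifftn(F))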

  2. Edge-preserving image denoising via group coordinate descent on the GPU.

    Science.gov (United States)

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  3. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Illumination variation makes automatic face recognition a challenging task, especially in low light environments. A very simple and efficient novel low-light image denoising method for low frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results: low and very low frequency noise are dominant in low light conditions. DeLFN is a three-level image denoising method. The first level removes mixed noise by histogram equalization (HE) to improve the overall contrast. The second level removes low frequency noise by logarithmic transformation (LOG) to enhance image detail. The third level removes residual very low frequency noise by high-pass filtering to recover more features of the true images. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient for real time applications.
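
    The three DeLFN levels, as described, can be sketched with standard primitives; the exact filters and parameters of the published method are assumptions here.

        # Sketch of the three DeLFN levels: histogram equalization, logarithmic
        # transformation, then high-pass removal of residual very-low-frequency noise.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage import exposure

        def delfn_like(img):
            x = exposure.equalize_hist(img)      # level 1: overall contrast
            x = np.log1p(x) / np.log(2.0)        # level 2: enhance dark detail
            low = gaussian_filter(x, sigma=8)    # level 3: estimate VLF component
            return np.clip(x - low + low.mean(), 0, 1)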

  4. Raman spectroscopy denoising based on smoothing filter combined with EEMD algorithm

    Science.gov (United States)

    Tian, Dayong; Lv, Xiaoyi; Mo, Jiaqing; Chen, Chen

    2018-02-01

    In the extraction of Raman spectra, the signal is affected by a variety of background noises, and the effective information of the Raman spectrum is weakened or even submerged in noise, so spectral denoising is very important. The traditional ensemble empirical mode decomposition (EEMD) method removes noise by discarding the IMF components that mainly contain noise, but this loses some details of the Raman signal. To address this problem, a denoising method combining a smoothing filter with EEMD is proposed in this paper. First, EEMD is used to decompose the noisy Raman signal into several IMF components. Then, the components mainly containing noise are selected using the autocorrelation function, and the smoothing filter is used to remove the noise from these components. Finally, the sum of the denoised components is added to the remaining components to obtain the final denoised signal. The experimental results show that, compared with the traditional denoising algorithm, the signal-to-noise ratio (SNR), the root mean square error (RMSE), and the correlation coefficient are significantly improved by using the proposed smoothing filter combined with EEMD.
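
    A sketch of the scheme using the PyEMD package (`EMD-signal` on PyPI) and a Savitzky-Golay smoother; the autocorrelation-based selection rule below is a simplification of the paper's criterion.

        # Sketch: EEMD decomposition, smooth the noise-dominated IMFs, recombine.
        import numpy as np
        from PyEMD import EEMD                  # assumption: pip install EMD-signal
        from scipy.signal import savgol_filter

        def eemd_smooth_denoise(signal):
            imfs = EEMD().eemd(np.asarray(signal, dtype=float))
            out = np.zeros(len(signal))
            for imf in imfs:
                ac = np.correlate(imf, imf, mode="full")[len(imf) - 1:]
                ac = ac / (ac[0] + 1e-12)
                if ac[1] < 0.5:                 # fast-decaying autocorrelation: noisy IMF
                    imf = savgol_filter(imf, window_length=11, polyorder=3)
                out = out + imf
            return out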

  5. Addition of Adapted Optics towards obtaining a quantitative detection of diabetic retinopathy

    Science.gov (United States)

    Yust, Brian; Obregon, Isidro; Tsin, Andrew; Sardar, Dhiraj

    2009-04-01

    An adaptive optics system was assembled for correcting the aberrated wavefront of light reflected from the retina. The adaptive optics setup includes a superluminescent diode light source, a Hartmann-Shack wavefront sensor, a deformable mirror, and an imaging CCD camera. Aberrations found in the reflected wavefront are caused by changes in the index of refraction along the light path as the beam travels through the cornea, lens, and vitreous humour. The Hartmann-Shack sensor allows for detection of aberrations in the wavefront, which may then be corrected with the deformable mirror. It has been shown that certain diseases, such as diabetic retinopathy, change the polarization of light reflected from neovascularizations in the retina. The adaptive optics system was assembled towards the goal of obtaining a quantitative measure of the onset and progression of this ailment, as one does not currently exist. The study was done to show that the addition of adaptive optics results in a more accurate detection of neovascularization in the retina by measuring the expected changes in polarization of the corrected wavefront of reflected light.

  6. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image, which often correspond to discontinuities (edges). The proposed algorithm has been evaluated using a variety of standard images, and its performance has been compared against several de-noising algorithms known from the prior art. Experimental results show that the proposed algorithm preserves the edges better and, in most cases, improves the measured visual quality of the denoised images in comparison to the existing methods known from the literature. The improvement is obtained without excessive computational cost, and the algorithm works well on a wide range of different types of noise.

  7. A Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    Science.gov (United States)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical atmospheric profiles. However, noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, the keystone of the algorithm, divides the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.

  8. Image fusion and denoising using fractional-order gradient information

    DEFF Research Database (Denmark)

    Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu

    Image fusion and denoising are significant in image processing because of the availability of multi-sensor data and the presence of noise. First-order and second-order gradient information have been effectively applied to the fusion of noiseless source images. In this paper, the advantages of fractional-order gradient information are exploited for simultaneously fusing and denoising noisy source images. Experimental results show that the proposed method outperforms the conventional total variation based methods for simultaneous fusion and denoising.

  9. A Segmental Approach with SWT Technique for Denoising the EOG Signal

    Directory of Open Access Journals (Sweden)

    Naga Rajesh

    2015-01-01

    The electrooculogram (EOG) signal is often contaminated with artifacts and power-line interference while recording. It is essential to denoise the EOG signal for quality diagnosis. The present study deals with denoising of noisy EOG signals using the Stationary Wavelet Transform (SWT) technique by two different approaches, namely, increasing segments of the EOG signal and different equal segments of the EOG signal. For performing the segmental denoising analysis, an EOG signal was simulated and corrupted with controlled noise powers of 5 dB, 10 dB, 15 dB, 20 dB, and 25 dB so as to obtain five different noisy EOG signals. The results obtained after denoising them are extremely encouraging. Root Mean Square Error (RMSE) values between the reference EOG signal and the EOG signals with noise powers of 5 dB, 10 dB, and 15 dB are very low when compared with the 20 dB and 25 dB noise powers. The findings suggest that the SWT technique can be used to denoise the noisy EOG signal with optimum noise powers ranging from 5 dB to 15 dB. This technique might be useful in the quality diagnosis of various neurological or eye disorders.
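
    A minimal PyWavelets sketch of SWT denoising of the kind applied to the EOG segments; the wavelet, level, and threshold rule are illustrative.

        # Sketch: stationary wavelet transform denoising of a 1D EOG segment.
        import numpy as np
        import pywt

        def swt_denoise(x, wavelet="db4", level=4):
            n = len(x) - len(x) % 2 ** level   # SWT needs a length divisible by 2^level
            coeffs = pywt.swt(np.asarray(x[:n], dtype=float), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
            t = sigma * np.sqrt(2 * np.log(n))
            coeffs = [(cA, pywt.threshold(cD, t, "soft")) for cA, cD in coeffs]
            return pywt.iswt(coeffs, wavelet)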

  10. Research and Implementation of Heart Sound Denoising

    Science.gov (United States)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals. However, the process of acquiring heart sound signals can be interfered with by many external factors. Heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, thus causing misdiagnosis. As a result, removing the noise mixed with heart sound is a key task. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB has been made. The approach first uses the powerful processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain through multi-level wavelet decomposition. Then, soft thresholding is applied to the detail coefficients using wavelet threshold denoising to eliminate noise, so that the denoising of the signal is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
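
    The power-line removal step can be sketched with SciPy's IIR notch filter; the sampling rate and Q factor are assumptions.

        # Sketch: notch out 50 Hz mains interference from a heart sound recording.
        from scipy.signal import filtfilt, iirnotch

        def remove_mains(x, fs=2000.0, f0=50.0, Q=30.0):
            b, a = iirnotch(f0, Q, fs)   # fs in Hz (assumed sampling rate)
            return filtfilt(b, a, x)     # zero-phase filtering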

  11. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  12. Image denoising using the squared eigenfunctions of the Schrodinger operator

    KAUST Repository

    Kaisserli, Zineb

    2015-02-02

    This study introduces a new image denoising method based on the spectral analysis of the semi-classical Schrodinger operator. The noisy image is considered as a potential of the Schrodinger operator, and the denoised image is reconstructed using the discrete spectrum of this operator. First results illustrating the performance of the proposed approach are presented and compared to the singular value decomposition method.

  13. OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform

    Science.gov (United States)

    Nan, F.; Xu, Y.

    2017-12-01

    OBS (Ocean Bottom Seismometer) data denoising is an important step in OBS data processing and inversion. It is necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include the band-pass filter, Wiener filter, and deconvolution (Liu, 2015). Most of these filtering methods are based on the Fourier transform (FT). Recently, multi-scale transform methods such as the wavelet transform (WT) and curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT, and CvT can represent a signal sparsely and separate noise in the transform domain, and they suit different cases. The FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multi-scale, but it has poor orientation selectivity and cannot handle discontinuities along curves well. The CvT is a multi-scale directional transform that can represent curves with only a small number of coefficients; it provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. Different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method can suppress the noise well and gives sparse results in the curvelet domain. A synthetic example comparing the curvelet denoising method with the wavelet method at the same number of iterations and threshold shows that the CvT eliminates the noise well and gives better results than the WT. The CvT denoising method was further applied to OBS data processing.

  14. Poisson denoising on the sphere

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...), and then applying a VST on the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
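
    The VST idea is easy to illustrate with the classical Anscombe transform; the MS-VSTS of the paper couples the VST with spherical multi-scale transforms, which is not reproduced here.

        # Sketch: Anscombe variance-stabilizing transform for Poisson data --
        # stabilize, apply any Gaussian denoiser, then invert.
        import numpy as np

        def anscombe(x):
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            return (y / 2.0) ** 2 - 3.0 / 8.0   # algebraic inverse (biased at low counts)

        def poisson_denoise(counts, gaussian_denoiser):
            return inverse_anscombe(gaussian_denoiser(anscombe(counts)))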

  15. NOTE: The denoising of Monte Carlo dose distributions using convolution superposition calculations

    Science.gov (United States)

    El Naqa, I.; Cui, J.; Lindsay, P.; Olivera, G.; Deasy, J. O.

    2007-09-01

    Monte Carlo (MC) dose calculations can be accurate but are also computationally intensive. In contrast, convolution superposition (CS) offers faster and smoother results but by making approximations. We investigated MC denoising techniques, which use available convolution superposition results and new noise filtering methods to guide and accelerate MC calculations. Two main approaches were developed to combine CS information with MC denoising. In the first approach, the denoising result is iteratively updated by adding the denoised residual difference between the result and the MC image. Multi-scale methods were used (wavelets or contourlets) for denoising the residual. The iterations are initialized by the CS data. In the second approach, we used a frequency splitting technique by quadrature filtering to combine low frequency components derived from MC simulations with high frequency components derived from CS components. The rationale is to take the scattering tails as well as dose levels in the high-dose region from the MC calculations, which presumably more accurately incorporates scatter; high-frequency details are taken from CS calculations. 3D Butterworth filters were used to design the quadrature filters. The methods were demonstrated using anonymized clinical lung and head and neck cases. The MC dose distributions were calculated by the open-source dose planning method MC code with varying noise levels. Our results indicate that the frequency-splitting technique for incorporating CS-guided MC denoising is promising in terms of computational efficiency and noise reduction.

  16. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA) based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including those sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.

  17. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    Science.gov (United States)

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising with its applications into the noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW.

  18. Sparse Representation Denoising for Radar High Resolution Range Profiling

    Directory of Open Access Journals (Sweden)

    Min Li

    2014-01-01

    Radar high resolution range profiles have attracted considerable attention in radar automatic target recognition. In practice, the radar return is usually contaminated by noise, which results in profile distortion and recognition performance degradation. To deal with this problem, in this paper a novel denoising method based on sparse representation is proposed to remove additive white Gaussian noise. The return is sparsely described in a redundant Fourier dictionary, and the denoising problem is cast as a sparse representation model. The noise level of the return, which is crucial to the denoising performance but often unknown, is estimated by performing a subspace method on the sliding subsequence correlation matrix. The sliding window process enables noise level estimation using only one observation sequence, not only guaranteeing estimation efficiency but also avoiding the influence of the profile's time-shift sensitivity. Experimental results show that the proposed method can effectively improve the signal-to-noise ratio of the return, leading to a high-quality profile.

  19. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of a subnanometer is achieved.

  20. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    Science.gov (United States)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises, such as baseline wander, power line interference, and electromagnetic interference, in the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals when the new proposed method is employed.
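
    One plausible form of a sigmoid-based compromise between the hard and soft rules is sketched below; the exact function proposed in the paper may differ.

        # Sketch: sigmoid-weighted thresholding -- continuous at |w| = T like the
        # soft rule, approaching the hard rule for large |w|.
        import numpy as np

        def sigmoid_threshold(w, T, k=10.0):
            a = np.abs(w)
            shrink = 2.0 * T / (1.0 + np.exp(k * (a - T)))  # T at |w| = T, -> 0 for large |w|
            return np.where(a > T, np.sign(w) * (a - shrink), 0.0)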

  1. Night Vision Image De-Noising of Apple Harvesting Robots Based on the Wavelet Fuzzy Threshold

    Directory of Open Access Journals (Sweden)

    Chengzhi Ruan

    2015-12-01

    In this paper, the de-noising problem of night vision images is studied for apple harvesting robots working at night. The wavelet threshold method is applied to the de-noising of night vision images. Since the choice of the wavelet threshold function restricts the effect of the wavelet threshold method, fuzzy theory is introduced to construct a fuzzy threshold function, and we then propose a de-noising algorithm based on the wavelet fuzzy threshold. This new method can reduce image noise interference, which is conducive to further image segmentation and recognition. To demonstrate the performance of the proposed method, we conducted simulation experiments and compared it with the median filtering and the wavelet soft threshold de-noising methods. It is shown that this new method achieves the highest relative PSNR: compared with the original images, the median filtering de-noising method, and the classical wavelet threshold de-noising method, the relative PSNR increases by 24.86%, 13.95%, and 11.38%, respectively. We carried out comparisons from various aspects, such as intuitive visual evaluation, objective data evaluation, edge evaluation, and artificial light evaluation. The experimental results show that the proposed method has unique advantages for the de-noising of night vision images, which lays the foundation for apple harvesting robots working at night.

  2. Sparse non-linear denoising: Generalization performance and pattern reproducibility in functional MRI

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    We investigate sparse non-linear denoising of functional brain images by kernel Principal Component Analysis (kernel PCA). The main challenge is the mapping of denoised feature space points back into input space, also referred to as "the pre-image problem". Since the feature space mapping is typically not invertible, the pre-image can only be approximated.

  3. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    Directory of Open Access Journals (Sweden)

    Szi-Wen Chen

    2015-10-01

    In this paper, a discrete wavelet transform (DWT) based de-noising with its applications into the noise reduction for medical signal preprocessing is introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also proposed a novel adaptive thresholding scheme and incorporated it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction may be further validated in actual practice. Simulation experiment results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory performance for noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW.

  4. 3D seismic denoising based on a low-redundancy curvelet transform

    International Nuclear Information System (INIS)

    Cao, Jingjie; Zhao, Jingtao; Hu, Zhiying

    2015-01-01

    Contamination of the seismic signal with noise is one of the main challenges during seismic data processing. Several methods exist for eliminating different types of noise, but optimal random noise attenuation remains difficult. Based on the multi-scale, multi-directional locality of the curvelet transform, curvelet thresholding is a relatively new method for random noise elimination. However, the high redundancy of the 3D curvelet transform makes its computational time and memory cost high for massive data processing. To improve the efficiency of curvelet thresholding denoising, a low-redundancy curvelet transform was introduced. The redundancy of the low-redundancy curvelet transform is approximately one-quarter of that of the original transform, and the tightness of the original transform is also kept; thus the low-redundancy curvelet transform requires less memory and computational resources than the original one. Numerical results on 3D synthetic and field data demonstrate that low-redundancy curvelet denoising consumes one-quarter of the CPU time of the original curvelet transform using iterative thresholding denoising, while comparable results are obtained. Thus, the low-redundancy curvelet transform is a good candidate for massive seismic denoising. (paper)

  5. Regularized Fractional Power Parameters for Image Denoising Based on Convex Solution of Fractional Heat Equation

    Directory of Open Access Journals (Sweden)

    Hamid A. Jalab

    2014-01-01

    Interest in using fractional mask operators based on fractional calculus has grown for image denoising. Denoising is one of the most fundamental image restoration problems in computer vision and image processing. This paper proposes an image denoising algorithm based on a convex solution of a fractional heat equation with regularized fractional power parameters. The performance of the proposed algorithm was evaluated by computing the PSNR using different types of images. Experiments according to visual perception and peak signal to noise ratio values show that the results of the denoising process are competitive with the standard Gaussian filter and Wiener filter.

  6. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    Directory of Open Access Journals (Sweden)

    Vijay G. S.

    2012-01-01

    Wavelet based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet based denoising schemes have been evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using the Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.

  7. Image denoising using non linear diffusion tensors

    International Nuclear Information System (INIS)

    Benzarti, F.; Amiri, H.

    2011-01-01

    Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering useful structures in the image, such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two nonlinear diffusion tensors. One allows diffusion along the orientation of greatest coherence, while the other allows diffusion along orthogonal directions. The idea is to track the local geometry of the degraded image accurately and to apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effective performance of our model, we present some experimental results on test and real photographic color images.

  8. Beam reducers - optics for the diagnostics of wide laser beams

    Czech Academy of Sciences Publication Activity Database

    Stanke, Ladislav; Palatka, Miroslav

    2017-01-01

    Vol. 62, Jan (2017), pp. 141-147 ISSN 0447-6441 R&D Projects: GA MŠk EF15_008/0000162 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : telescope design * laser beam diagnostics * Shack-Hartmann sensor Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics)

  9. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    Science.gov (United States)

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel and inter-channel correlations to overcome the spatial resolution degradation caused by the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. A motion adaptive detection value then controls the ratio of the spatial filter to the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. The experimental results confirmed that the proposed framework outperformed the other techniques in terms of objective criteria and subjective visual perception in CFA sequences.

  10. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation

    International Nuclear Information System (INIS)

    Bao, L J; Zhu, Y M; Liu, W Y; Pu, Z B; Magnin, I E; Croisille, P; Robini, M

    2009-01-01

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced to make the selection of atoms in the dictionary adapted to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated images and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures.

  11. Image de-noising based on mathematical morphology and multi-objective particle swarm optimization

    Science.gov (United States)

    Dou, Liyun; Xu, Dan; Chen, Hao; Liu, Yicheng

    2017-07-01

    To address the problem of image de-noising, an efficient approach based on mathematical morphology and multi-objective particle swarm optimization (MOPSO) is proposed in this paper. First, a series-parallel compound morphology filter based on the open-close (OC) operation is constructed, and structural elements of different sizes are selected so as to eliminate as much noise as possible in the series link. Then, MOPSO is combined with the filter to solve the parameter settings of the multiple structural elements. Simulation results show that our algorithm achieves superior performance compared with some traditional de-noising algorithms.

  12. Numerical Evaluation of Parameter Correlation in the Hartmann-Tran Line Profile

    Science.gov (United States)

    Adkins, Erin M.; Reed, Zachary; Hodges, Joseph T.

    2017-06-01

    The partially correlated quadratic, speed-dependent hard-collision profile (pCqSDHCP), for simplicity referred to as the Hartmann-Tran profile (HTP), has been recommended as a generalized lineshape for high resolution spectroscopy. The HTP parameterizes complex collisional effects such as Dicke narrowing, speed dependent narrowing, and correlations between velocity-changing and dephasing collisions, while also simplifying to simpler profiles that are widely used, such as the Voigt profile. As advanced lineshape profiles are adopted by more researchers, it is important to understand the limitations that data quality has on the ability to retrieve physically meaningful parameters using sophisticated lineshapes that are fit to spectra of finite signal-to-noise ratio. In this work, spectra were simulated using the HITRAN Application Programming Interface (HAPI) across a full range of line parameters. Simulated spectra were evaluated to quantify the precision with which fitted lineshape parameters can be determined at a given signal-to-noise ratio, focusing on the numerical correlation between the retrieved Dicke narrowing frequency and the velocity-changing and dephasing collisions correlation parameter. Tran, H., N. Ngo, and J.-M. Hartmann, Journal of Quantitative Spectroscopy and Radiative Transfer 2013. 129: p. 89-100. Tennyson, et al., Pure Appl. Chem. 2014, 86: p. 1931-1943. Kochanov, R.V., et al., Journal of Quantitative Spectroscopy and Radiative Transfer 2016. 177: p. 15-30. Tran, H., N. Ngo, and J.-M. Hartmann, Journal of Quantitative Spectroscopy and Radiative Transfer 2013. 129: p. 199-203.

  13. Image Denoising Using Singular Value Difference in the Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Min Wang

    2018-01-01

    Full Text Available Singular value (SV) difference is the difference in the singular values between a noisy image and the original image; it varies regularly with noise intensity. This paper proposes an image denoising method using the singular value difference in the wavelet domain. First, the SV difference model is generated for different noise variances in the three detail directions of the wavelet transform, and the noise variance of a new image, estimated from its diagonal subband, is used to evaluate the model. Next, the single-level discrete 2-D wavelet transform is used to decompose each noisy image into its low-frequency and high-frequency parts. Then, singular value decomposition (SVD) is used to obtain the SVs of the three high-frequency parts. Finally, the three denoised high-frequency parts are reconstructed by SVD from the SV difference, and the final denoised image is obtained using the inverse wavelet transform. Experiments show the effectiveness of this method compared with relevant existing methods.
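
    The decomposition/SVD plumbing of the method reads as below; the learned SV-difference model itself is replaced by a placeholder shrinkage function, so this is a structural sketch rather than the authors' trained model.

    ```python
    import numpy as np
    import pywt

    def svd_wavelet_denoise(noisy, sv_shrink, wavelet='db4'):
        """Single-level 2-D DWT, denoise the three detail subbands by
        modifying their singular values, then invert the transform.
        `sv_shrink` stands in for the paper's learned SV-difference model:
        it maps noisy singular values to denoised ones."""
        cA, (cH, cV, cD) = pywt.dwt2(noisy, wavelet)
        details = []
        for band in (cH, cV, cD):
            U, S, Vt = np.linalg.svd(band, full_matrices=False)
            details.append(U @ np.diag(sv_shrink(S)) @ Vt)
        return pywt.idwt2((cA, tuple(details)), wavelet)

    # Placeholder shrinkage: subtract the smallest singular value as a crude
    # stand-in for the modeled SV difference.
    denoise = lambda img: svd_wavelet_denoise(img, lambda S: np.maximum(S - S[-1], 0.0))
    ```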

  14. Partial discharge signal denoising with spatially adaptive wavelet thresholding and support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Mota, Hilton de Oliveira; Rocha, Leonardo Chaves Dutra da [Department of Computer Science, Federal University of Sao Joao del-Rei, Visconde do Rio Branco Ave., Colonia do Bengo, Sao Joao del-Rei, MG, 36301-360 (Brazil); Salles, Thiago Cunha de Moura [Department of Computer Science, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil); Vasconcelos, Flavio Henrique [Department of Electrical Engineering, Federal University of Minas Gerais, 6627 Antonio Carlos Ave., Pampulha, Belo Horizonte, MG, 31270-901 (Brazil)

    2011-02-15

    In this paper an improved method to denoise partial discharge (PD) signals is presented. The method is based on the wavelet transform (WT) and support vector machines (SVM) and is distinct from other WT-based denoising strategies in the sense that it exploits the high spatial correlations presented by PD wavelet decompositions as a way to identify and select the relevant coefficients. PD spatial correlations are characterized by WT modulus maxima propagation along decomposition levels (scales), which is a strong indication of their time of occurrence. Denoising is performed by identification and separation of PD-related maxima lines by an SVM pattern classifier. The results obtained confirm that this method has superior denoising capabilities when compared to other WT-based methods found in the literature for the processing of Gaussian and discrete spectral interferences. Moreover, its greatest advantages become clear when the interference has a pulsating or localized shape, a situation in which traditional methods usually fail. (author)

  15. Wavelet denoising method; application to the flow rate estimation for water level control

    International Nuclear Information System (INIS)

    Park, Gee Young; Park, Jin Ho; Lee, Jung Han; Kim, Bong Soo; Seong, Poong Hyun

    2003-01-01

    The wavelet transform decomposes a signal into time- and frequency-domain signals, and it is well known that a noise-corrupted signal can be reconstructed or estimated when a proper denoising method is incorporated into the wavelet transform. Among the wavelet denoising methods proposed to date, the wavelet of Mallat and Zhong best reconstructs a pure transient signal from a highly corrupted one. However, there has been no systematic way of discriminating the original signal from the noise in a dyadic wavelet transform. In this paper, a systematic method for noise discrimination is proposed that can easily be implemented in a digital system. To demonstrate the potential role of wavelet denoising in the nuclear field, the method is applied to steam and feedwater flow rate estimation in the secondary loop, and a configuration of the S/G water level control system is proposed that incorporates the wavelet denoising method for estimating the flow rate at low operating powers

  16. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  17. Example-based human motion denoising.

    Science.gov (United States)

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.

  18. Life Cycle Management at Brødrene Hartmann A/S - strategy,- organisation and implementation

    DEFF Research Database (Denmark)

    Pedersen, Claus Stig; Alting, Leo; Mortensen, Anna Lise

    1997-01-01

    decision making is under development. The implementation of life cycle management in Hartmann is organised with respect to the divisional areas: strategic management, product development, purchase, production, sale and distribution. The implementation of life cycle management is assisted by tools to support ... decision making. The tools are developed in cooperation with the Department of Manufacturing Engineering at the Technical University of Denmark. This paper presents: the Hartmann environmental strategy, based on the life cycle concept; experiences and results from developing a life cycle oriented ... organisation; and experiences and results from developing and implementing tools for life cycle management ...

  19. Image denoising by sparse 3-D transform-domain collaborative filtering.

    Science.gov (United States)

    Dabov, Kostadin; Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-08-01

    We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
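
    The grouping-shrinkage-aggregation idea can be condensed into a toy single-step sketch (no Wiener stage, uniform aggregation weights, hard thresholding of a 3-D DCT in place of the authors' transform); all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def collaborative_filter(img, patch=8, stride=4, search=16, n_sim=8, thr=50.0):
        """Toy block grouping + 3-D transform shrinkage (a simplified sketch
        of the collaborative-filtering idea; assumes (H - patch) and
        (W - patch) are multiples of stride for full coverage)."""
        H, W = img.shape
        out = np.zeros_like(img, dtype=float)
        wts = np.zeros_like(img, dtype=float)
        for i in range(0, H - patch + 1, stride):
            for j in range(0, W - patch + 1, stride):
                ref = img[i:i+patch, j:j+patch]
                # Collect candidate patches in a local search window.
                cands = []
                for y in range(max(0, i - search), min(H - patch, i + search) + 1, stride):
                    for x in range(max(0, j - search), min(W - patch, j + search) + 1, stride):
                        blk = img[y:y+patch, x:x+patch]
                        cands.append((np.sum((blk - ref) ** 2), y, x))
                cands.sort(key=lambda t: t[0])            # most similar first
                chosen = cands[:n_sim]
                group = np.stack([img[y:y+patch, x:x+patch] for _, y, x in chosen])
                # Collaborative filtering: 3-D transform, shrink, invert.
                spec = dctn(group, norm='ortho')
                spec[np.abs(spec) < thr] = 0.0
                est = idctn(spec, norm='ortho')
                # Return filtered blocks to their positions and aggregate.
                for k, (_, y, x) in enumerate(chosen):
                    out[y:y+patch, x:x+patch] += est[k]
                    wts[y:y+patch, x:x+patch] += 1.0
        return out / np.maximum(wts, 1e-12)
    ```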

  20. Application of Improved Wavelet Thresholding Function in Image Denoising Processing

    Directory of Open Access Journals (Sweden)

    Hong Qi Zhang

    2014-07-01

    Full Text Available Wavelet analysis is a time-frequency analysis method that solves time-frequency localization problems well. This paper analyzes the basic principles of the wavelet transform and the relationship between the Lipschitz exponent of signal singularities and the local maxima of the wavelet transform coefficient modulus, and reviews the principles of wavelet-transform image denoising. The disadvantages of traditional wavelet thresholding functions are studied: the discontinuity of the hard threshold and the constant deviation of the soft threshold are remedied by an improved threshold function, and the image is denoised using this improved function.
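
    The discontinuity/bias trade-off mentioned above is easy to see in code. The sketch below contrasts hard and soft thresholding with one representative "improved" function, the non-negative garrote, which is continuous at the threshold yet asymptotically unbiased; the paper's exact formula may differ.

    ```python
    import numpy as np
    import pywt

    def hard(w, t):
        return np.where(np.abs(w) > t, w, 0.0)               # jumps at |w| = t

    def soft(w, t):
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)   # biased by t

    def improved(w, t):
        """Non-negative garrote: continuous at |w| = t (value 0 there), and
        the bias t**2 / |w| vanishes for large coefficients."""
        safe = np.where(w == 0.0, 1.0, w)                    # avoid division by zero
        return np.where(np.abs(w) > t, w - t**2 / safe, 0.0)

    def denoise(img, wavelet='db4', level=2):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # robust noise estimate
        t = sigma * np.sqrt(2.0 * np.log(img.size))          # universal threshold
        new = [coeffs[0]] + [tuple(improved(c, t) for c in lvl) for lvl in coeffs[1:]]
        return pywt.waverec2(new, wavelet)
    ```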

  1. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two important contributions. First, we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which better preserves local geometric structure. Second, we code the patches in each subset by the fused lasso with the cluster-learned dictionary and propose an iterative Split Bregman method to solve it rapidly. We demonstrate its capabilities with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.

  2. Ekman-Hartmann layer in a magnetohydrodynamic Taylor-Couette flow.

    Science.gov (United States)

    Szklarski, Jacek; Rüdiger, Günther

    2007-12-01

    We study magnetic effects induced by rigidly rotating plates enclosing a cylindrical magnetohydrodynamic (MHD) Taylor-Couette flow at the finite aspect ratio H/D = 10. The fluid confined between the cylinders is assumed to be a liquid metal characterized by a small magnetic Prandtl number, the cylinders are perfectly conducting, an axial magnetic field is imposed with Hartmann number Ha ≈ 10, and the rotation rates correspond to Reynolds numbers of order 10^2-10^3. We show that the end plates introduce, besides the well-known Ekman circulation, similar magnetic effects which arise for infinite, rotating plates, horizontally unbounded by any walls. In particular, there exists the Hartmann current, which penetrates the fluid, turns in the radial direction, and together with the applied magnetic field gives rise to a force. Consequently, the flow can be compared with a Taylor-Dean flow driven by an azimuthal pressure gradient. We analyze the stability of such flows and show that the currents induced by the plates can give rise to instability for the considered parameters. When designing a MHD Taylor-Couette experiment, special care must be taken concerning the vertical magnetic boundaries so that they do not significantly alter the rotational profile.

  3. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and an indispensable step in many pipelines. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  4. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    Science.gov (United States)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.

  5. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imaging ...

  6. Fractional Diffusion, Low Exponent Lévy Stable Laws, and 'Slow Motion' Denoising of Helium Ion Microscope Nanoscale Imagery.

    Science.gov (United States)

    Carasso, Alfred S; Vladár, András E

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
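
    The core of the method is a one-line filter in Fourier space: the fractional heat equation u_t = -(-Δ)^β u has the exact solution û(k, t) = exp(-t |k|^(2β)) û(k, 0), so each "slow motion" frame costs a single FFT multiply. The exponent and time values below are illustrative, not the paper's calibrated settings.

    ```python
    import numpy as np

    def fractional_diffusion(img, beta=0.4, t=0.5):
        """Evolve u_t = -(-Laplacian)**beta u forward in time via the FFT.
        beta < 1 corresponds to a heavy-tailed (Levy-stable) smoothing
        kernel; beta and t here are illustrative choices."""
        ky = np.fft.fftfreq(img.shape[0])[:, None]
        kx = np.fft.fftfreq(img.shape[1])[None, :]
        k2 = (2.0 * np.pi) ** 2 * (kx**2 + ky**2)      # |k|^2 on the grid
        decay = np.exp(-t * k2 ** beta)                # exact solution operator
        return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))

    # "Slow motion": inspect a sequence of lightly smoothed frames and stop
    # as soon as the noise is gone but fine surface morphology survives, e.g.
    # frames = [fractional_diffusion(img, t=t) for t in (0.1, 0.2, 0.4, 0.8)]
    ```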

  7. Preparing the generalized Harvey–Shack rough surface scattering method for use with the discrete ordinates method

    DEFF Research Database (Denmark)

    Johansen, Villads Egede

    2015-01-01

    The paper shows how to implement the generalized Harvey–Shack (GHS) method for isotropic rough surfaces discretized in a polar coordinate system and approximated using Fourier series. This is particularly relevant for the use of the GHS method as a boundary condition for radiative transfer problems ...

  8. Machinery vibration signal denoising based on learned dictionary and sparse representation

    International Nuclear Information System (INIS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-01-01

    Mechanical vibration signal denoising has been an important problem for machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods, but they rely on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal, and, to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the most relevant atoms from the dictionary. Finally, the denoised signal is calculated from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several denoising algorithms, and its computational efficiency is demonstrated with an illustrative runtime example. The results show that the proposed method outperforms current algorithms in both denoising performance and computational efficiency. (paper)
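
    scikit-learn's mini-batch dictionary learner together with OMP coding gives a compact way to prototype the pipeline described above on overlapping frames of a 1-D vibration signal. The frame length, dictionary size and sparsity level below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def denoise_vibration(signal, frame=64, n_atoms=128, k=5):
        """Learn atoms online from overlapping frames of the raw signal,
        reconstruct each frame from its k-sparse OMP code, then
        overlap-average the frames back into a 1-D signal."""
        X = np.stack([signal[i:i + frame] for i in range(len(signal) - frame + 1)])
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=32,
                                         transform_algorithm='omp',
                                         transform_n_nonzero_coefs=k)
        codes = dl.fit(X).transform(X)        # sparse code for every frame
        recon = codes @ dl.components_        # k-sparse frame approximations
        out = np.zeros(len(signal))
        cnt = np.zeros(len(signal))
        for i, fr in enumerate(recon):        # overlap-average the frames
            out[i:i + frame] += fr
            cnt[i:i + frame] += 1.0
        return out / cnt
    ```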

  9. A survey on OFDM channel estimation techniques based on denoising strategies

    Directory of Open Access Journals (Sweden)

    Pallaviram Sure

    2017-04-01

    Full Text Available Channel estimation forms the heart of any orthogonal frequency division multiplexing (OFDM) based wireless communication receiver. Frequency-domain pilot-aided channel estimation techniques are either least squares (LS) based or minimum mean square error (MMSE) based. LS-based techniques are computationally less complex and, unlike MMSE ones, do not require a priori knowledge of channel statistics (KCS). However, the mean square error (MSE) performance of channel estimators incorporating MMSE-based techniques is better than that obtained with LS-based techniques. To enhance the MSE performance of LS-based techniques, a variety of denoising strategies have been developed in the literature, which are applied to the LS-estimated channel impulse response (CIR). The advantage of denoising-threshold-based LS techniques is that they do not require KCS but still render near-optimal MMSE performance similar to MMSE-based techniques. In this paper, a detailed survey of various existing denoising strategies is presented, with a comparative discussion of these strategies.
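
    The basic threshold-denoised LS estimator that most of the surveyed strategies refine can be written in a few lines: estimate per-subcarrier gains by LS, move to the time domain, zero the insignificant CIR taps, and transform back. The specific threshold rule below is an illustrative choice, not one from the survey.

    ```python
    import numpy as np

    def ls_denoised_estimate(Y, X, noise_var):
        """Threshold-based denoising of an LS OFDM channel estimate.
        Y, X: received and transmitted pilot symbols on N subcarriers.
        No channel statistics are required, only a noise-level estimate."""
        H_ls = Y / X                        # per-subcarrier least-squares estimate
        h = np.fft.ifft(H_ls)               # channel impulse response (CIR)
        thr = 3.0 * noise_var / len(Y)      # illustrative tap-power threshold
        h[np.abs(h) ** 2 < thr] = 0.0       # keep only the significant taps
        return np.fft.fft(h)                # denoised frequency response
    ```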

  10. Combined shearing interferometer and hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Hutchin, R. A.

    1985-01-01

    A sensitive wavefront sensor combining attributes of both a Hartmann type of wavefront sensor and an AC shearing interferometer type of wavefront sensor. An incident wavefront, the slope of which is to be detected, is focussed to first and second focal points at which first and second diffraction gratings are positioned to shear and modulate the wavefront, which then diverges therefrom. The diffraction patterns of the first and second gratings are positioned substantially orthogonal to each other to shear the wavefront in two directions to produce two dimensional wavefront slope data for the AC shearing interferometer portion of the wavefront sensor. First and second dividing optical systems are positioned in the two diverging wavefronts to divide the sheared wavefront into an array of subapertures and also to focus the wavefront in each subaperture to a focal point. A quadrant detector is provided for each subaperture to detect the position of the focal point therein, which provides a first indication, in the manner of a Hartmann wavefront sensor, of the local wavefront slope in each subaperture. The total radiation in each subaperture, as modulated by the diffraction grating, is also detected by the quadrant detector which produces a modulated output signal representative thereof, the phase of which relative to modulation by the diffraction grating provides a second indication of the local wavefront slope in each subaperture, in the manner of an AC shearing interferometer wavefront sensor. The data from both types of sensors is then combined by long term averaging thereof to provide an extremely sensitive wavefront sensor
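
    The Hartmann-style slope measurement at the heart of this sensor reduces, per subaperture, to estimating the spot displacement from the four quadrant intensities; the slope then follows from the sensor geometry. The sketch below is a generic quadrant-detector centroid estimate with an assumed calibration gain, not the patented combined readout.

    ```python
    import numpy as np

    def quadrant_slopes(A, B, C, D, gain=1.0):
        """Spot displacement (hence local wavefront slope) from the four
        intensities of a quadrant detector, arranged A|B on top and C|D on
        the bottom. `gain` converts the normalized imbalance into a slope
        and depends on spot size and lenslet focal length (assumed here)."""
        total = A + B + C + D
        dx = ((B + D) - (A + C)) / total    # left-right imbalance
        dy = ((A + B) - (C + D)) / total    # top-bottom imbalance
        return gain * dx, gain * dy
    ```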

  11. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  12. Denoising GPS-Based Structure Monitoring Data Using Hybrid EMD and Wavelet Packet

    Directory of Open Access Journals (Sweden)

    Lu Ke

    2017-01-01

    Full Text Available High-frequency components are often discarded for data denoising when applying pure wavelet multiscale or empirical mode decomposition (EMD) based approaches, but discarding them can cause energy leakage in vibration signals. Hybrid EMD and wavelet packet (EMD-WP) denoising is therefore proposed for Global Positioning System (GPS) based structure monitoring data. First, field observables are decomposed into a collection of intrinsic mode functions (IMFs) with different characteristics. Second, the high-frequency IMFs are denoised using the wavelet packet; the monitoring data are then reconstructed from the denoised IMFs together with the remaining low-frequency IMFs. Our algorithm is demonstrated on the synthetic displacement response of a 3-story frame excited by the El Centro earthquake, with Gaussian white noise added at several levels. We find that the hybrid method can effectively weaken the low-frequency multipath effect and can potentially extract vibration features. However, false modes may still arise from the residual noise in the high-frequency IMFs and when the noise frequency lies in the same band as that of the effective vibration. Finally, real GPS observables are used to evaluate the efficiency of the EMD-WP method in mitigating low-frequency multipath.
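
    A minimal version of the hybrid scheme, assuming the PyEMD package (installed as EMD-signal) for the decomposition and PyWavelets for the packet thresholding; the number of high-frequency IMFs treated and the threshold rule are illustrative choices.

    ```python
    import numpy as np
    import pywt
    from PyEMD import EMD   # pip install EMD-signal

    def emd_wp_denoise(x, n_hf=3, wavelet='db4', level=3):
        """Hybrid EMD + wavelet packet: instead of discarding the first
        n_hf (highest-frequency) IMFs, threshold their wavelet-packet
        coefficients, then sum all IMFs back together."""
        imfs = EMD().emd(x)
        out = np.zeros_like(x, dtype=float)
        for k, imf in enumerate(imfs):
            if k < n_hf:
                wp = pywt.WaveletPacket(imf, wavelet, maxlevel=level)
                sigma = np.median(np.abs(wp['d'].data)) / 0.6745
                thr = sigma * np.sqrt(2.0 * np.log(len(imf)))
                for node in wp.get_level(level):
                    node.data = pywt.threshold(node.data, thr, 'soft')
                imf = wp.reconstruct(update=False)[:len(x)]
            out += imf
        return out
    ```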

  13. Two-Dimensional Electron Density Measurement of Positive Streamer Discharge in Atmospheric-Pressure Air

    Science.gov (United States)

    Inada, Yuki; Ono, Ryo; Kumada, Akiko; Hidaka, Kunihiko; Maeyama, Mitsuaki

    2016-09-01

    The electron density of streamer discharges propagating in atmospheric-pressure air is crucially important for systematic understanding of the production mechanisms of reactive species utilized in wide-ranging applications such as medical treatment, plasma-assisted ignition and combustion, ozone production and environmental pollutant processing. However, electron density measurement during the propagation of atmospheric-pressure streamers is extremely difficult using conventional localized measurement systems due to streamer initiation jitter and irreproducibility in the discharge paths. To overcome these difficulties, single-shot two-dimensional electron density measurement was conducted using a Shack-Hartmann type laser wavefront sensor. The Shack-Hartmann sensor, with a temporal resolution of 2 ns, was applied to pulsed positive streamer discharges generated in an air gap between pin-to-plate electrodes. The electron density a few ns after streamer initiation was 7×10^21 m^-3 and uniformly distributed along the streamer channel. The electron density and its distribution profile were compared with a previous study simulating similar streamers, demonstrating good agreement. This work was supported in part by JKA and its promotion funds from KEIRIN RACE. The authors would like to thank Mr. Kazuaki Ogura and Mr. Kaiho Aono of The University of Tokyo for their support during this work.

  14. LSTM-Based Hierarchical Denoising Network for Android Malware Detection

    Directory of Open Access Journals (Sweden)

    Jinpei Yan

    2018-01-01

    Full Text Available Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning heavily rely on expert knowledge for manual feature engineering, which still struggles to fully describe malware. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method that uses LSTM to learn directly from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for an LSTM to train on due to the vanishing gradient problem. Hence, HDN uses a hierarchical structure: the first-level LSTM computes in parallel on opcode subsequences (called method blocks) to learn dense representations; the second-level LSTM then learns and detects malware from the method block sequences. Considering that malicious behavior appears only in some sequence segments, HDN uses a method block denoise module (MBDM) for data denoising with an adaptive gradient scaling strategy based on a loss cache. We evaluate and compare HDN with recent mainstream methods on three datasets. The results show that HDN outperforms these Android malware detection methods, capturing longer sequence features and achieving better detection efficiency than comparable N-gram-based malware detection.

  15. The impact of surgeon volume on colostomy reversal outcomes after Hartmann's procedure for diverticulitis.

    Science.gov (United States)

    Aquina, Christopher T; Probst, Christian P; Becerra, Adan Z; Hensley, Bradley J; Iannuzzi, James C; Noyes, Katia; Monson, John R T; Fleming, Fergal J

    2016-11-01

    Colostomy reversal after Hartmann's procedure for diverticulitis is a morbid procedure, and studies investigating factors associated with outcomes are lacking. This study identifies patient, surgeon, and hospital-level factors associated with perioperative outcomes after stoma reversal. The Statewide Planning and Research Cooperative System was queried for urgent/emergency Hartmann's procedures for diverticulitis between 2000-2012 in New York State and subsequent colostomy reversal within 1 year of the procedure. Surgeon and hospital volume were categorized into tertiles based on the annual number of colorectal resections performed each year. Bivariate and mixed-effects analyses were used to assess the association between patient, surgeon, and hospital-level factors and perioperative outcomes after colostomy reversal, including a laparoscopic approach; duration of stay; intensive care unit admission; complications; mortality; and 30-day, unscheduled readmission. Among 10,487 patients who underwent Hartmann's procedure and survived to discharge, 63% had the colostomy reversed within 1 year. After controlling for patient, surgeon, and hospital-level factors, high-volume surgeons (≥40 colorectal resections/yr) were independently associated with higher odds of a laparoscopic approach (unadjusted rates: 14% vs 7.6%; adjusted odds ratio = 1.84, 95% confidence interval = 1.12, 3.00), shorter duration of stay (median: 6 versus 7 days; adjusted incidence rate ratio = 0.87, 95% confidence interval = 0.81, 0.95), and lower odds of 90-day mortality (unadjusted rates: 0.4% vs 1.0%; adjusted odds ratio = 0.30, 95% confidence interval = 0.10, 0.88) compared with low-volume surgeons (1-15 colorectal resections/yr). High-volume surgeons are associated with better perioperative outcomes and lower health care utilization after Hartmann's reversal for diverticulitis. These findings support referral to high-volume surgeons for colostomy reversal.

  16. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    Science.gov (United States)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection owing to the equipment and other external factors, and this affects subsequent lithological identification and other logging interpretation. Therefore, more accurate lithological identification requires de-noising. In this study, a new de-noising method, improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) combined with the wavelet transform, is proposed, integrating the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) of N different scales and one residual, and a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing between adjacent IMFs, a wavelet-transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF together with the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to the simulated and actual data. The results show the diverse performance of these de-noising methods with regard to accuracy of lithological identification; compared with the other methods, the proposed method has the best self-adaptability and accuracy.

  17. Denoising time-resolved microscopy image sequences with singular value thresholding

    Energy Technology Data Exchange (ETDEWEB)

    Furnival, Tom, E-mail: tjof2@cam.ac.uk; Leary, Rowan K., E-mail: rkl26@cam.ac.uk; Midgley, Paul A., E-mail: pam33@cam.ac.uk

    2017-07-15

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. - Highlights: • Correlations in space and time are harnessed to denoise microscopy image sequences. • A robust estimator provides automated selection of the denoising parameter. • Motion tracking and automated noise estimation provides a versatile algorithm. • Application to time-resolved STEM enables study of atomic and nanoparticle dynamics.
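
    The low-rank step at the core of the method amounts to soft-thresholding the singular values of the Casorati matrix (frames as columns). In the paper the threshold is chosen automatically with an unbiased risk estimator; the sketch below takes it as a user-supplied constant.

    ```python
    import numpy as np

    def svt_denoise(frames, tau):
        """Singular value thresholding of an image sequence.
        frames: array of shape (T, H, W); tau: threshold (chosen via an
        unbiased risk estimator in the paper, supplied manually here)."""
        T, H, W = frames.shape
        M = frames.reshape(T, H * W).T                  # Casorati: pixels x time
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        S = np.maximum(S - tau, 0.0)                    # soft-threshold the SVs
        return (U @ (S[:, None] * Vt)).T.reshape(T, H, W)
    ```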

  18. Self-adapting denoising, alignment and reconstruction in electron tomography in materials science

    Energy Technology Data Exchange (ETDEWEB)

    Printemps, Tony, E-mail: tony.printemps@cea.fr [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France); Mula, Guido [Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, S.P. 8km 0.700, 09042 Monserrato (Italy); Sette, Daniele; Bleuet, Pierre; Delaye, Vincent; Bernier, Nicolas; Grenier, Adeline; Audoit, Guillaume; Gambacorti, Narciso; Hervé, Lionel [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2016-01-15

    An automatic procedure for electron tomography is presented. This procedure is adapted for specimens that can be fashioned into a needle-shaped sample and has been evaluated on inorganic samples. It consists of self-adapting denoising, automatic and accurate alignment including detection and correction of tilt axis, and 3D reconstruction. We propose the exploitation of a large amount of information of an electron tomography acquisition to achieve robust and automatic mixed Poisson–Gaussian noise parameter estimation and denoising using undecimated wavelet transforms. The alignment is made by mixing three techniques, namely (i) cross-correlations between neighboring projections, (ii) common line algorithm to get a precise shift correction in the direction of the tilt axis and (iii) intermediate reconstructions to precisely determine the tilt axis and shift correction in the direction perpendicular to that axis. Mixing alignment techniques turns out to be very efficient and fast. Significant improvements are highlighted in both simulations and real data reconstructions of porous silicon in high angle annular dark field mode and agglomerated silver nanoparticles in incoherent bright field mode. 3D reconstructions obtained with minimal user-intervention present fewer artefacts and less noise, which permits easier and more reliable segmentation and quantitative analysis. After careful sample preparation and data acquisition, the denoising procedure, alignment and reconstruction can be achieved within an hour for a 3D volume of about a hundred million voxels, which is a step toward a more routine use of electron tomography. - Highlights: • Goal: perform a reliable and user-independent 3D electron tomography reconstruction. • Proposed method: self-adapting denoising and alignment prior to 3D reconstruction. • Noise estimation and denoising are performed using wavelet transform. • Tilt axis determination is done automatically as well as projection alignment.

  19. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    International Nuclear Information System (INIS)

    Han, G.; Lin, B.; Xu, Z.

    2017-01-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary and weak signal that reflects whether the heart is functioning normally or abnormally. It is susceptible to various kinds of noise, such as high/low-frequency noise, powerline interference and baseline wander, so the removal of noise becomes a vital step in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the empirical mode decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not yet perfected, method for processing nonlinear and non-stationary signals such as the ECG, and combining EMD with other algorithms is a good way to improve noise-cancellation performance. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail, and future work and challenges are outlined.

  1. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    KAUST Repository

    Cannistraci, Carlo Vittorio

    2015-01-26

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis.
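
    A compact reading of the MMWF idea is the classical adaptive Wiener filter with its local mean replaced by the local median, which is what the sketch below implements; the authors' exact formulation (and the MMWF* variant) may differ in details such as the window handling. Because scipy.ndimage filters are n-dimensional, the same code runs unchanged on 2D or 3D spectra.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def mmwf(img, size=3, noise_var=None):
        """Median-modified Wiener filter sketch:
        out = med + max(var - noise_var, 0) / var * (img - med),
        i.e. the adaptive Wiener update taken around the local median."""
        med = median_filter(img, size=size)
        mean = uniform_filter(img, size=size)
        var = uniform_filter(img ** 2, size=size) - mean ** 2
        if noise_var is None:
            noise_var = np.mean(var)                    # crude noise estimate
        gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
        return med + gain * (img - med)
    ```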

  2. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    KAUST Repository

    Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin

    2015-01-01

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis.

  3. Generalized peritonitis due to perforated diverticulitis: Hartmann's procedure or primary anastomosis?

    Science.gov (United States)

    Trenti, Loris; Biondo, Sebastiano; Golda, Thomas; Monica, Millan; Kreisler, Esther; Fraccalvieri, Domenico; Frago, Ricardo; Jaurrieta, Eduardo

    2011-03-01

    Hartmann's procedure (HP) still remains the most frequently performed operation for diffuse peritonitis due to perforated diverticulitis. The aims of this study were to assess the feasibility and safety of resection with primary anastomosis (RPA) in patients with purulent or fecal diverticular peritonitis, and to review morbidity and mortality after the single-stage procedure and after Hartmann's procedure in our experience. From January 1995 through December 2008, patients operated on for generalized diverticular peritonitis were studied. Patients were classified into two main groups: RPA and HP. A total of 87 patients underwent emergency surgery for diverticulitis complicated by purulent or diffuse fecal peritonitis. Sixty (69%) underwent HP, while RPA was performed in 27 patients (31%). On multivariate analysis, RPA was associated with fewer post-operative complications (P ...) clinical anastomotic leakage and needed re-operation. RPA can be safely performed without added morbidity and mortality in cases of diffuse diverticular peritonitis. HP should be reserved only for hemodynamically unstable or high-risk patients. Specialization in colorectal surgery improves mortality and raises the percentage of one-stage procedures.

  4. Hartmann test of the COMPASS RICH-1 optical telescopes

    CERN Document Server

    Polak, J; Alekseev, M; Angerer, H; Apollonio, M; Birsa, R; Bordalo, P; Bradamante, F; Bressan, A; Busso, L; Chiosso, V M; Ciliberti, P; Colantoni, M L; Costa, S; Dibiase, N; Dafni, T; Dalla Torre, S; Diaz, V; Duic, V; Delagnes, E; Deschamps, H; Eyrich, W; Faso, D; Ferrero, A; Finger, M; Finger, M Jr; Fischer, H; Gerassimov, S; Giorgi, M; Gobbo, B; Hagemann, R; von Harrach, D; Heinsius, F H; Joosten, R; Ketzer, B; Königsmann, K; Kolosov, V N; Konorov, I; Kramer, D; Kunne, F; Levorato, S; Maggiora, A; Magnon, A; Mann, A; Martin, A; Rebourgeard, P; Mutter, A; Nähle, O; Neyret, D; Nerling, F; Pagano, P; Paul, S; Panebianco, S; Panzieri, D; Pesaro, G; Pizzolotto, C; Menon, G; Rocco, E; Robinet, F; Schiavon, P; Schill, C; Schoenmeier, P; Silva, L; Slunecka, M; Steiger, L; Sozzi, F; Sulc, M; Svec, M; Tessarotto, F; Teufel, A; Wollny, H

    2008-01-01

    The central region of COMPASS RICH-1 has been equipped with a new photon detection system based on MultiAnode PhotoMultiplier Tubes (MAPMT). The Cherenkov photons are focused by an array of 576 fused silica telescopes onto 576 MAPMTs. The quality and positioning of all optical components have been tested by the Hartmann method. The validation procedures are described. The quality of the optical concentrators was checked and alignment corrections were made. The upgraded detector showed excellent performance during the 2006 data taking.

  5. Higher order monochromatic aberrations of the human infant eye

    OpenAIRE

    Wang, Jingyun; Candy, T. Rowan

    2005-01-01

    The monochromatic optical aberrations of the eye degrade retinal image quality. Any significant aberrations during postnatal development could contribute to infants’ immature visual performance and provide signals for the control of eye growth. Aberrations of human infant eyes from 5 to 7 weeks old were compared with those of adult subjects using a model of an adultlike infant eye that accounted for differences in both eye and pupil size. Data were collected using the COAS Shack-Hartmann wavefront sensor ...

  6. Patch Similarity Modulus and Difference Curvature Based Fourth-Order Partial Differential Equation for Image Denoising

    Directory of Open Access Journals (Sweden)

    Yunjiao Bai

    2015-01-01

    Full Text Available The traditional fourth-order nonlinear diffusion denoising model suffers from isolated speckles and the loss of fine details in the processed image. For this reason, a new fourth-order partial differential equation based on the patch similarity modulus and the difference curvature is proposed for image denoising. First, based on the intensity similarity of neighboring pixels, this paper presents a new edge indicator called the patch similarity modulus, which is strongly robust to noise. Furthermore, the difference curvature, which can effectively distinguish between edges and noise, is incorporated into the denoising algorithm to steer the diffusion process by adaptively adjusting the magnitude of the diffusion coefficient. The experimental results show that the proposed algorithm can not only preserve edges and texture details but also avoid isolated speckles and the staircase effect while filtering out noise, and it performs better on images with abundant details. Additionally, the subjective visual quality and objective evaluation index of the denoised image obtained by the proposed algorithm are higher than those of related methods.
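
    A skeleton of the fourth-order evolution u_t = -Δ(c·Δu) with explicit time stepping is given below; the paper's patch-similarity modulus and difference curvature are replaced by the classical |Δu|-based edge-stopping function, so this shows the PDE machinery rather than the proposed indicator. Step size and iteration count are illustrative.

    ```python
    import numpy as np

    def laplacian(u):
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

    def fourth_order_denoise(img, n_iter=200, dt=0.01, k=10.0):
        """Explicit scheme for u_t = -Laplacian(c * Laplacian(u)).
        c should be small on edges; the stand-in c = 1 / (1 + (|Lu|/k)**2)
        is used here in place of the paper's patch-similarity /
        difference-curvature indicator."""
        u = img.astype(float)
        for _ in range(n_iter):
            Lu = laplacian(u)
            c = 1.0 / (1.0 + (np.abs(Lu) / k) ** 2)
            u = u - dt * laplacian(c * Lu)              # gradient-descent step
        return u
    ```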

  7. [Stefan Hartmann. Revaler Handwerker im Spiegel der Ratsprotokolle von 1722 bis 1755] / Paul Kaegbein

    Index Scriptorium Estoniae

    Kaegbein, Paul

    2007-01-01

    Review of: Stefan Hartmann. Revaler Handwerker im Spiegel der Ratsprotokolle von 1722 bis 1755. In: Ostseeprovinzen, baltische Staaten und das Nationale. Münster: LIT, 2005, pp. 89-112. On the organisation and structure of the trades united in St. Canute's Guild (Kanuti gild).

  8. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. First, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of the characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoised DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  9. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by a first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as the state transition matrix, the variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
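
    Per coefficient chain (parent to child across scales), the second stage is a scalar Kalman recursion; the model parameters a, q, r stand for those estimated from the first-stage pilot denoising. The sketch below shows only this recursion, not the full two-stage system.

    ```python
    import numpy as np

    def kalman_chain(y, a, q, r, w0=0.0, P0=1.0):
        """Scalar Kalman filter along a coarse-to-fine chain of wavelet
        coefficients: state w_s = a * w_{s-1} + process noise (variance q),
        observation y_s = w_s + noise (variance r)."""
        w, P = w0, P0
        est = []
        for yk in y:
            w, P = a * w, a * a * P + q      # predict child from parent
            K = P / (P + r)                  # Kalman gain
            w = w + K * (yk - w)             # correct with the noisy child
            P = (1.0 - K) * P
            est.append(w)
        return np.array(est)
    ```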

  10. Normal Inverse Gaussian Model-Based Image Denoising in the NSCT Domain

    Directory of Open Access Journals (Sweden)

    Jian Jia

    2015-01-01

    Full Text Available The objective of image denoising is to retain useful details while removing as much noise as possible to recover an original image from its noisy version. This paper proposes a novel normal inverse Gaussian (NIG model-based method that uses a Bayesian estimator to carry out image denoising in the nonsubsampled contourlet transform (NSCT domain. In the proposed method, the NIG model is first used to describe the distributions of the image transform coefficients of each subband in the NSCT domain. Then, the corresponding threshold function is derived from the model using Bayesian maximum a posteriori probability estimation theory. Finally, optimal linear interpolation thresholding algorithm (OLI-Shrink is employed to guarantee a gentler thresholding effect. The results of comparative experiments conducted indicate that the denoising performance of our proposed method in terms of peak signal-to-noise ratio is superior to that of several state-of-the-art methods, including BLS-GSM, K-SVD, BivShrink, and BM3D. Further, the proposed method achieves structural similarity (SSIM index values that are comparable to those of the block-matching 3D transformation (BM3D method.

  11. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Science.gov (United States)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  12. Electrocardiogram de-noising based on forward wavelet transform ...

    Indian Academy of Sciences (India)

    Ratio (SNR) and Mean Square Error (MSE) computations showed that our proposed ... This technique permits cancelling noise while retaining the information of the ... Wavelet analysis is used for transforming the signal under investigation into joined temporal and ... introduced the BWT in our proposed ECG de-noising system.

  13. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Chien-ming Chou

    2014-01-01

    Full Text Available Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.

  14. A Novel Partial Discharge Ultra-High Frequency Signal De-Noising Method Based on a Single-Channel Blind Source Separation Algorithm

    Directory of Open Access Journals (Sweden)

    Liangliang Wei

    2018-02-01

    Full Text Available To effectively remove the Gaussian white noise and periodic narrow-band interference in the background noise of partial discharge ultra-high frequency (PD UHF) signals in field tests, a novel de-noising method based on a single-channel blind source separation algorithm is proposed. Compared with traditional methods, the proposed method suppresses the noise interference more effectively, and the distortion of the de-noised PD signal is smaller. First, the PD UHF signal is time-frequency analyzed by the S-transform to obtain the number of source signals. Then, the single-channel detected PD signal is converted into multi-channel signals by singular value decomposition (SVD), and background noise is separated from the multi-channel PD UHF signals by the joint approximate diagonalization of eigen-matrices method. Finally, the source PD signal is estimated and recovered by the l1-norm minimization method. The proposed de-noising method was applied to simulated and field-test signals, and the de-noising performance of the different methods was compared. The simulation and field test results demonstrate the effectiveness and correctness of the proposed method.

  15. Denoising solar radiation data using coiflet wavelets

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia); Hasan, Mohammad Khatim, E-mail: khatim@ftsm.ukm.my [Jabatan Komputeran Industri, Universiti Kebangsaan Malaysia, 43600 UKM Bangi, Selangor (Malaysia); Sulaiman, Jumat, E-mail: jumat@ums.edu.my [Program Matematik dengan Ekonomi, Universiti Malaysia Sabah, Beg Berkunci 2073, 88999 Kota Kinabalu, Sabah (Malaysia); Ismail, Mohd Tahir [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM Minden, Penang (Malaysia)

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or data collection. Collected data are usually a mixture of the true signal and noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out; one efficient method for this is the wavelet transform. Because measured solar radiation data fluctuate over time, they contain unwanted oscillations, i.e. noise, which must be filtered out before the data are used to develop a mathematical model. To apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.
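
    A minimal pywt sketch of the shrinkage pipeline with the coif2 wavelet; the paper's new threshold rule is not specified here, so a standard level-dependent universal threshold stands in for it:

```python
import numpy as np
import pywt

def coiflet_denoise(signal, wavelet="coif2", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                              # keep the approximation band
    for c in coeffs[1:]:                           # threshold each detail level
        thr = np.median(np.abs(c)) / 0.6745 * np.sqrt(2 * np.log(signal.size))
        out.append(pywt.threshold(c, thr, "soft"))
    return pywt.waverec(out, wavelet)[: signal.size]

x = np.cos(0.2 * np.arange(1024)) + 0.3 * np.random.default_rng(11).standard_normal(1024)
smoothed = coiflet_denoise(x)
```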

  16. An NMR log echo data de-noising method based on the wavelet packet threshold algorithm

    International Nuclear Information System (INIS)

    Meng, Xiangning; Xie, Ranhong; Li, Changxi; Hu, Falong; Li, Chaoliu; Zhou, Cancan

    2015-01-01

    To improve the de-noising of low signal-to-noise ratio (SNR) nuclear magnetic resonance (NMR) log echo data, this paper applies the wavelet packet threshold algorithm to the data. The principle of the algorithm is elaborated in detail. By comparing the properties of a series of wavelet packet bases and their relevance to the NMR log echo train signal, ‘sym7’ is found to be the optimal wavelet packet basis of the wavelet packet threshold algorithm for de-noising the NMR log echo train signal. A new method is presented to determine the optimal wavelet packet decomposition scale: within the range up to its maximum, the modulus maxima and Shannon entropy minimum criteria are used to determine the global and local optimal wavelet packet decomposition scales, respectively. The results of applying the method to simulated and actual NMR log echo data indicate that, compared with the wavelet threshold algorithm, the wavelet packet threshold algorithm shows higher decomposition accuracy and a better de-noising effect, and is much more suitable for de-noising low-SNR NMR log echo data. (paper)
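
    A hedged sketch of wavelet-packet threshold de-noising with the 'sym7' basis the authors found optimal; their modulus-maxima/Shannon-entropy scale selection is replaced by a fixed decomposition level:

```python
import numpy as np
import pywt

def wp_denoise(echo, wavelet="sym7", level=4):
    wp = pywt.WaveletPacket(data=echo, wavelet=wavelet, maxlevel=level)
    sigma = np.median(np.abs(wp["d"].data)) / 0.6745    # noise scale from level-1 detail
    thr = sigma * np.sqrt(2 * np.log(echo.size))
    for node in wp.get_level(level, "natural"):
        if node.path != "a" * level:                    # keep the coarsest approximation
            node.data = pywt.threshold(node.data, thr, "hard")
    return wp.reconstruct(update=False)[: echo.size]

echo = np.exp(-np.arange(512) / 150.0) + 0.2 * np.random.default_rng(10).standard_normal(512)
denoised = wp_denoise(echo)
```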

  17. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
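
    A hedged SciPy sketch of the main ingredients (thin-plate spline with a smoothing parameter, a k-nearest-neighbour restriction, and an out-of-bag bootstrap error to pick the smoothing); the candidate grid and the data are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(9)
pts = rng.uniform(-1, 1, (400, 2))
z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1]) + 0.05 * rng.standard_normal(400)

def oob_error(s):
    idx = np.unique(rng.integers(0, len(pts), len(pts)))   # bootstrap sample
    oob = np.setdiff1d(np.arange(len(pts)), idx)           # out-of-bag test points
    f = RBFInterpolator(pts[idx], z[idx], kernel="thin_plate_spline",
                        smoothing=s, neighbors=30)
    return np.mean((f(pts[oob]) - z[oob]) ** 2)            # bootstrap test error

s_best = min([1e-3, 1e-2, 1e-1, 1.0], key=oob_error)
surface = RBFInterpolator(pts, z, kernel="thin_plate_spline",
                          smoothing=s_best, neighbors=30)
z_smooth = surface(pts)        # project the noisy points onto the fitted surface
```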

  18. Optical components of adaptive systems for improving laser beam quality

    Science.gov (United States)

    Malakhov, Yuri I.; Atuchin, Victor V.; Kudryashov, Aleksis V.; Starikov, Fedor A.

    2008-10-01

    A short overview is given of the optical equipment developed within the ISTC activity for adaptive systems of a new generation, allowing for the correction of high-power laser beams carrying optical vortices on the phase surface. These include kinoform multi-level optical elements of a new generation, namely special spiral phase plates and ordered rasters of microlenses (lenslet arrays), as well as wide-aperture Hartmann-Shack sensors and bimorph deformable piezoceramics-based mirrors with various grids of control elements.

  19. Scanning laser ophthalmoscope design with adaptive optics

    OpenAIRE

    Laut, SP; Jones, SM; Olivier, SS; Werner, JS

    2005-01-01

    A design for a high-resolution scanning instrument is presented for in vivo imaging of the human eye at the cellular scale. This system combines adaptive optics technology with a scanning laser ophthalmoscope (SLO) to image structures with high lateral (∼2 μm) resolution. In this system, the ocular wavefront aberrations that reduce the resolution of conventional SLOs are detected by a Hartmann-Shack wavefront sensor, and compensated with two deformable mirrors in a closed-loop for dynamic cor...

  20. Simultaneous multi-component seismic denoising and reconstruction via K-SVD

    Science.gov (United States)

    Hou, Sian; Zhang, Feng; Li, Xiangyang; Zhao, Qiang; Dai, Hengchang

    2018-06-01

    Data denoising and reconstruction play an increasingly significant role in seismic prospecting for their value in enhancing effective signals, dealing with surface obstacles and reducing acquisition costs. In this paper, we propose a novel method to denoise and reconstruct multicomponent seismic data simultaneously. The method lies within the framework of machine learning, and its key points are the definition of a suitable weight function and a modified inner product operator. The purpose of these two elements is to learn from data with missing samples when the random noise deviation is unknown, and to build a mathematical relationship between the components so as to incorporate all the information of the multi-component data. Two examples, using synthetic and real multicomponent data, demonstrate that the new method is a feasible alternative for multi-component seismic data processing.

  1. Hand Depth Image Denoising and Superresolution via Noise-Aware Dictionaries

    Directory of Open Access Journals (Sweden)

    Huayang Li

    2016-01-01

    This paper proposes a two-stage method for hand depth image denoising and superresolution, using bilateral filters and dictionaries learned via noise-aware orthogonal matching pursuit (NAOMP)-based K-SVD. The bilateral filtering phase recovers singular points and removes artifacts on silhouettes by averaging depth data over neighborhood pixels on which both depth-difference and RGB-similarity restrictions are imposed. The dictionary learning phase uses NAOMP to train dictionaries that separate faithful depth from noisy data. Compared with traditional OMP, NAOMP adds a residual reduction step that effectively weakens the noise term within the residual during its decomposition in terms of atoms. Experimental results demonstrate that the bilateral phase and the NAOMP-based dictionary learning phase cooperatively denoise both virtual and real depth images effectively.

  2. Image Classification Based on Convolutional Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Shuangshuang Chen

    2017-01-01

    Image classification aims to group images into corresponding semantic categories. Due to the difficulties of interclass similarity and intraclass variability, it is a challenging issue in computer vision. In this paper, an unsupervised feature learning approach called the convolutional denoising sparse autoencoder (CDSAE) is proposed, based on the theory of the visual attention mechanism and deep learning methods. First, a saliency detection method is used to obtain training samples for unsupervised feature learning. Next, these samples are sent to the denoising sparse autoencoder (DSAE), followed by a convolutional layer and a local contrast normalization layer. Since prior knowledge about a specific task is generally helpful for solving it, a new pooling strategy, spatial pyramid pooling (SPP) fused with a center-bias prior, is introduced into our approach. Experimental results on two common image datasets (STL-10 and CIFAR-10) demonstrate that our approach is effective in image classification. They also demonstrate that none of the three components (local contrast normalization, SPP fused with the center-bias prior, and l2 vector normalization) can be excluded from the proposed approach, as they jointly improve image representation and classification performance.

  3. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes the comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time-frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time-frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation, and experiments including a rolling

  4. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  5. Remote Sensing Image Classification Based on Stacked Denoising Autoencoder

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2017-12-01

    Focused on the issue that conventional remote sensing image classification methods have run into a bottleneck in accuracy, a new remote sensing image classification method inspired by deep learning is proposed, based on the Stacked Denoising Autoencoder. First, the deep network model is built by stacking layers of Denoising Autoencoders. Then, with noised input, the unsupervised greedy layer-wise training algorithm is used to train each layer in turn to obtain more robust representations; features are then refined in supervised learning with a Back Propagation (BP) neural network, and the whole network is optimized by error back propagation. Finally, Gaofen-1 satellite (GF-1) remote sensing data are used for evaluation; the total accuracy and kappa accuracy reach 95.7% and 0.955, respectively, higher than those of the Support Vector Machine and the Back Propagation neural network. The experimental results show that the proposed method can effectively improve the accuracy of remote sensing image classification.
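
    A minimal PyTorch sketch of one denoising-autoencoder layer of the kind stacked in such models; the layer sizes, noise level, and training loop are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DAELayer(nn.Module):
    """One denoising-autoencoder layer; several of these are stacked."""
    def __init__(self, n_in, n_hidden, noise=0.2):
        super().__init__()
        self.noise = noise
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        corrupted = x + self.noise * torch.randn_like(x)   # noised input
        return self.dec(self.enc(corrupted))

layer = DAELayer(256, 64)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
batch = torch.rand(32, 256)                                # a batch of image patches
for _ in range(100):                                       # greedy layer-wise pre-training
    opt.zero_grad()
    loss = nn.functional.mse_loss(layer(batch), batch)     # reconstruct the clean input
    loss.backward()
    opt.step()
# the next layer would be trained on layer.enc(batch).detach(), and the whole
# stack fine-tuned with labels via error back propagation
```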

  6. A Hybrid Technique for De-Noising Multi-Modality Medical Images by Employing Cuckoo’s Search with Curvelet Transform

    Directory of Open Access Journals (Sweden)

    Qaisar Javaid

    2018-01-01

    De-noising of medical images is a difficult task. To improve the overall visual representation we need to apply contrast enhancement techniques; this representation provides physicians and clinicians with better, recovered diagnostic results. Various de-noising and contrast enhancement methods have been developed; however, some of them do not provide good results in terms of accuracy and efficiency. In this paper we de-noise and enhance medical images without any loss of information. We use the curvelet transform in combination with the ridgelet transform, along with the Cuckoo Search (CS) algorithm. The curvelet transform adaptively represents sparse pixel information along all edges. Edges play a very important role in the understanding of images, and the curvelet transform computes edges very efficiently where wavelets fail. We use CS to optimize the de-noising coefficients without loss of structural and morphological information. The proposed method is accurate and efficient in de-noising medical images, and it attempts to remove both multiplicative and additive noise. Results indicate that the proposed approach is better than other approaches at removing impulse, Gaussian, and speckle noise.

  7. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    Science.gov (United States)

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at each time step to ensure the lowest noise level of the output, although the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
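
    A hedged sketch of the innovation-adaptive idea on a scalar random-walk model; the paper's RWE and AMA components are replaced here by a simple moving-window variance estimate of the innovation sequence:

```python
import numpy as np

def adaptive_kf(z, q=1e-6, r0=1e-2, win=50):
    """Scalar KF with measurement noise R re-estimated from recent innovations."""
    x, p, r = 0.0, 1.0, r0
    innovations, out = [], []
    for zk in z:
        p += q                                    # time update (random-walk state)
        v = zk - x                                # innovation
        innovations.append(v)
        if len(innovations) >= win:               # adapt R when enough history exists
            r = max(np.var(innovations[-win:]) - p, 1e-12)
        k = p / (p + r)                           # Kalman gain
        x += k * v
        p *= 1 - k
        out.append(x)
    return np.array(out)

fog = 0.01 + 0.02 * np.random.default_rng(5).standard_normal(2000)  # drift + noise
smoothed = adaptive_kf(fog)
```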

  8. Alignment and qualification of the Gaia telescope using a Shack-Hartmann sensor

    Science.gov (United States)

    Dovillaire, G.; Pierot, D.

    2017-09-01

    For almost 20 years, Imagine Optic has developed, manufactured and offered reliable and accurate wavefront sensors and adaptive optics solutions to its worldwide customers. A long-term collaboration between Imagine Optic and Airbus Defence and Space was initiated on the Herschel program. More recently, a similar technology has been used to align and qualify the GAIA telescope.

  9. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Science.gov (United States)

    Jiang, M.; Cui, B.-Y.; Schmid, N. A.; McLaughlin, M. A.; Cao, Z.-C.

    2017-09-01

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratios (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.

  10. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, M.; Schmid, N. A.; Cao, Z.-C. [Lane Department of Computer Science and Electrical Engineering West Virginia University Morgantown, WV 26506 (United States); Cui, B.-Y.; McLaughlin, M. A. [Department of Physics and Astronomy West Virginia University Morgantown, WV 26506 (United States)

    2017-09-20

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratios (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.

  11. Aberration compensation using a spatial light modulator LCD

    International Nuclear Information System (INIS)

    Amezquita, R; Rincon, O; Torres, Y M

    2011-01-01

    The dynamic correction of aberrations introduced in optical systems has been a widely discussed topic over the past 10 years. Adaptive optics is the most important field developed, in which Shack-Hartmann sensors and deformable mirrors are used for the measurement and correction of wavefronts. In this paper, an interferometric set-up that uses a Spatial Light Modulator (SLM) as an active element is proposed. Using this SLM, a procedure for the compensation of all phase aberrations present in the experimental setup is shown.

  12. Random wave fields and scintillated beams

    CSIR Research Space (South Africa)

    Roux, FS

    2009-01-01

    Presentation slides. Contents: scintillated beams and adaptive optics; detecting a vortex with a Shack-Hartmann sensor; removing optical vortices; random vortex ... beam. On weak scintillation: if the scintillation is weak, the resulting phase function of the optical beam is still continuous, and such a weakly scintillated beam can be corrected by an adaptive optical system.

  13. Study of Denoising in TEOAE Signals Using an Appropriate Mother Wavelet Function

    Directory of Open Access Journals (Sweden)

    Habib Alizadeh Dizaji

    2007-06-01

    Background and Aim: Matching a mother wavelet to a class of signals can be of interest in signal analysis and denoising based on wavelet multiresolution analysis and decomposition. As transient evoked otoacoustic emissions (TEOAEs) are contaminated with noise, the aim of this work was to provide a quantitative approach to the problem of matching a mother wavelet to TEOAE signals by using tuning curves, and to use it for analysis and denoising of TEOAE signals. An approximated mother wavelet for TEOAE signals was calculated using an algorithm for designing a wavelet matched to a specified signal. Materials and Methods: In this paper a tuning curve is used as a template for designing a mother wavelet that has maximum matching to the tuning curve. The mother wavelet matching was performed on the tuning curve's spectrum magnitude and phase independently of one another. The scaling function was calculated from the matched mother wavelet, and from these functions lowpass and highpass filters were designed for a filter bank and for otoacoustic emission signal analysis and synthesis. After signal analysis, denoising was performed by time-windowing the signal's time-frequency components. Results: The analysis indicated greater signal reconstruction improvement in comparison with the coiflet mother wavelet, and by using the proposed denoising algorithm it is possible to enhance the signal-to-noise ratio by up to dB. Conclusion: The wavelet generated from this algorithm was remarkably similar to the biorthogonal wavelets. Therefore, by matching a biorthogonal wavelet to the tuning curve and using wavelet packet analysis, a high-resolution time-frequency analysis of otoacoustic emission signals is possible.

  14. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    International Nuclear Information System (INIS)

    Wang Wen-Bo; Zhang Xiao-Dong; Chang Yuchan; Wang Xiang-Li; Wang Zhao; Chen Xi; Zheng Lei

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signals and construct multidimensional input vectors, based on EMD and its translation invariance. Second, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with different Gaussian noises and on the monthly observed chaotic sunspot sequence. The results prove that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it corrects the center point in phase space effectively, bringing it closer to the real track of the chaotic attractor. (paper)

  15. A data-driven approach for denoising GNSS position time series

    Science.gov (United States)

    Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin

    2017-12-01

    Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to −3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.

  16. Hartmann-type colostomy in rats: morphological changes and hydroxyproline assay

    Directory of Open Access Journals (Sweden)

    João Carlos Simões

    Colostomy has been a surgical procedure frequently employed in colonic diseases, traumatic lesions and neoplasms. This experimental study in rats aimed to investigate the progressive morphological changes in the proximal and distal colon after laparotomy and a Hartmann-type end colostomy, which were assessed histologically and through tissue hydroxyproline assay. Forty male Wistar rats with a mean weight of 200 g were allocated to two groups (group I, experimental, and group II, control), subdivided into four subgroups (A, B, C and D) of 10 animals each. The animals of group I (subgroups A and B) underwent a Hartmann-type colostomy in the distal colon, 7.5 cm from the anal canal; in the rats of group II only a midline laparotomy was performed. The animals of subgroups A and C were sacrificed on postoperative day 30, while those of subgroups B and D were sacrificed on postoperative day 60. Histological analysis of the colonic segments showed acute and chronic inflammatory infiltrate in the lamina propria, pronounced flattening of the crypts, a reduction in the number of crypts and in epithelial cellularity, reduction of goblet cells and mucus secretion, and thinning of the muscularis mucosae, all more intense in the distal colonic stump of the animals submitted to Hartmann-type end colostomy (subgroups A and B). The proximal segments presented these alterations as well, but more discreetly. Hydroxyproline assay of the colonic tissues revealed no statistically significant changes in collagen content or dehydrated weight. These findings demonstrate more pronounced inflammatory and hypotrophic morphological changes in the distal colon of rats submitted to Hartmann-type colostomy.

  17. X-ray active mirror coupled with a Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Idir, Mourad; Mercere, Pascal; Modi, Mohammed H.; Dovillaire, Guillaume; Levecq, Xavier; Bucourt, Samuel; Escolano, Lionel; Sauvageot, Paul

    2010-01-01

    This paper reports on the design and performance of a prototype active X-ray mirror (AXM), which was designed and manufactured in collaboration with the French SME mechanical company ISP System for the French national storage ring SOLEIL. Coupled with this active X-ray mirror, and also in collaboration with another French SME (Imagine Optic), considerable effort has been devoted to the design and fabrication of an X-ray wavefront analyzer based on the Hartmann principle (Hartmann wavefront sensor, HWS).

  18. Deep Fault Recognizer: An Integrated Model to Denoise and Extract Features for Fault Diagnosis in Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaojie Guo

    2016-12-01

    Fault diagnosis in rotating machinery is significant for avoiding serious accidents; thus, an accurate and timely diagnosis method is necessary. With breakthroughs in deep learning algorithms, some intelligent methods, such as the deep belief network (DBN) and the deep convolutional neural network (DCNN), have been developed with satisfactory performance for machinery fault diagnosis. However, only a few of these methods properly deal with the noise that exists in practical situations, and the denoising methods in use require extensive professional experience. Accordingly, rethinking the fault diagnosis method based on deep architectures is essential. Hence, this study proposes an automatic denoising and feature extraction method that inherently considers spatial and temporal correlations. An integrated deep fault recognizer model based on the stacked denoising autoencoder (SDAE), trained in a greedy layer-wise fashion, is applied both to denoise random noise in the raw signals and to represent fault features for fault pattern diagnosis of both rolling bearing and gearbox faults. Finally, experimental validation demonstrates that the proposed method has better diagnosis accuracy than the DBN, with a margin of approximately 7% in fault diagnosis accuracy in the presence of noise.

  19. Image denoising by a direct variational minimization

    Directory of Open Access Journals (Sweden)

    Pilipović Stevan

    2011-01-01

    In this article we introduce a novel method for image de-noising that combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses the fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator while keeping degradation minimal, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. The proposed method thus becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we obtain significantly better denoising results, especially in oscillatory regions.

  20. SU-G-IeP4-09: Method of Human Eye Aberration Measurement Using Plenoptic Camera Over Large Field of View

    International Nuclear Information System (INIS)

    Lv, Yang; Wang, Ruixing; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun

    2016-01-01

    Purpose: Measurement based on the Shack-Hartmann wave-front sensor (WFS), which obtains both the high- and low-order wave-front aberrations simultaneously and accurately, has been applied to the detection of human eye aberrations in recent years. However, its application is limited by the small field of view (FOV): slight eye movement causes the optical beacon image to exceed the lenslet array, which results in uncertain detection error. To overcome the difficulty of precise eye location, the capacity to detect the eye's wave-front aberration, accurately and simultaneously, over a FOV much larger than that of a single-conjugate Hartmann WFS is demanded. Methods: The plenoptic camera's lenslet array subdivides the aperture light field in the spatial frequency domain and captures the 4-D light-field information. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the eye's aberration. The corresponding theoretical model and simulation system are built up in this article to discuss the wave-front measurement performance when utilizing a plenoptic camera as a wave-front sensor. Results: The simulation results indicate that the plenoptic wave-front method can obtain both the high- and low-order eye wave-front aberrations with the same accuracy as a conventional system in single-visual-angle detection, and over a FOV much larger than that of a single-conjugate Hartmann system. Meanwhile, the simulation results show that detection of the eye's aberration wave-front at different visual angles can be achieved effectively and simultaneously by the plenoptic method, with both point and extended optical beacons from the eye. Conclusion: The plenoptic wave-front method is feasible for eye aberration wave-front detection. With a larger FOV, the method can effectively reduce the detection error caused by imprecise eye location and simplify the eye aberration wave-front detection system compared with one based on a Shack-Hartmann WFS. A unique advantage of the plenoptic method lies in obtaining

  1. SU-G-IeP4-09: Method of Human Eye Aberration Measurement Using Plenoptic Camera Over Large Field of View

    Energy Technology Data Exchange (ETDEWEB)

    Lv, Yang; Wang, Ruixing; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha (China)

    2016-06-15

    Purpose: Measurement based on the Shack-Hartmann wave-front sensor (WFS), which obtains both the high- and low-order wave-front aberrations simultaneously and accurately, has been applied to the detection of human eye aberrations in recent years. However, its application is limited by the small field of view (FOV): slight eye movement causes the optical beacon image to exceed the lenslet array, which results in uncertain detection error. To overcome the difficulty of precise eye location, the capacity to detect the eye's wave-front aberration, accurately and simultaneously, over a FOV much larger than that of a single-conjugate Hartmann WFS is demanded. Methods: The plenoptic camera's lenslet array subdivides the aperture light field in the spatial frequency domain and captures the 4-D light-field information. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the eye's aberration. The corresponding theoretical model and simulation system are built up in this article to discuss the wave-front measurement performance when utilizing a plenoptic camera as a wave-front sensor. Results: The simulation results indicate that the plenoptic wave-front method can obtain both the high- and low-order eye wave-front aberrations with the same accuracy as a conventional system in single-visual-angle detection, and over a FOV much larger than that of a single-conjugate Hartmann system. Meanwhile, the simulation results show that detection of the eye's aberration wave-front at different visual angles can be achieved effectively and simultaneously by the plenoptic method, with both point and extended optical beacons from the eye. Conclusion: The plenoptic wave-front method is feasible for eye aberration wave-front detection. With a larger FOV, the method can effectively reduce the detection error caused by imprecise eye location and simplify the eye aberration wave-front detection system compared with one based on a Shack-Hartmann WFS. A unique advantage of the plenoptic method lies in obtaining

  2. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    Science.gov (United States)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It implies OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising as a promising tool for improved characterization of biological tissue using OCT.

  3. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.

    Science.gov (United States)

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality by filtering, without blurring vascular structures, is an important step in automatic image analysis. The application of interest here is to automatically and accurately extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods, yielding better microvasculature segmentation.

  4. Health economic analysis of laparoscopic lavage versus Hartmann's procedure for diverticulitis in the randomized DILALA trial

    DEFF Research Database (Denmark)

    Gehrman, J.; Angenete, E.; Björholt, I.

    2016-01-01

    Background: Open surgery with resection and colostomy (Hartmann's procedure) has been the standard treatment for perforated diverticulitis with purulent peritonitis. In recent years laparoscopic lavage has emerged as an alternative, with potential benefits for patients with purulent peritonitis...

  5. A video Hartmann wavefront diagnostic that incorporates a monolithic microlens array

    International Nuclear Information System (INIS)

    Toeppen, J.S.; Bliss, E.S.; Long, T.W.; Salmon, J.T.

    1991-07-01

    We have developed a video Hartmann wavefront sensor that incorporates a monolithic array of photofabricated microlenses as the focusing elements. Combined with a video processor, the system reveals local gradients of the wavefront at a video frame rate of 30 Hz. Higher bandwidth is easily attainable with a camera and video processor that have faster frame rates. When used with a temporal filter, the reconstructed wavefront error is less than 1/10th of a wave.
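
    A minimal sketch of the centroiding behind such a sensor: each lenslet subimage yields a spot centroid whose shift from a flat-wavefront reference is proportional to the local wavefront slope. The geometry values below are made up for illustration.

```python
import numpy as np

def spot_gradients(frame, n_sub, pitch, focal, pix, ref):
    """Local wavefront slopes from lenslet spot centroids.

    frame: 2-D intensity image; n_sub: subapertures per side; pitch: pixels per
    subaperture; focal: lenslet focal length (m); pix: pixel size (m); ref:
    (n_sub, n_sub, 2) reference centroids recorded with a flat wavefront.
    """
    slopes = np.zeros((n_sub, n_sub, 2))
    yy, xx = np.mgrid[0:pitch, 0:pitch]
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i * pitch:(i + 1) * pitch, j * pitch:(j + 1) * pitch]
            m = sub.sum() + 1e-12
            c = np.array([(yy * sub).sum(), (xx * sub).sum()]) / m   # spot centroid
            slopes[i, j] = (c - ref[i, j]) * pix / focal             # shift -> slope (rad)
    return slopes

n_sub, pitch = 8, 16
ref = np.full((n_sub, n_sub, 2), (pitch - 1) / 2.0)       # centered reference spots
frame = np.random.default_rng(4).random((n_sub * pitch, n_sub * pitch))
slopes = spot_gradients(frame, n_sub, pitch, focal=5e-3, pix=10e-6, ref=ref)
```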

  6. Adaptive optics system for the IRSOL solar observatory

    Science.gov (United States)

    Ramelli, Renzo; Bucher, Roberto; Rossini, Leopoldo; Bianda, Michele; Balemi, Silvano

    2010-07-01

    We present a low-cost adaptive optics system developed for the solar observatory at Istituto Ricerche Solari Locarno (IRSOL), Switzerland. The Shack-Hartmann wavefront sensor is based on a Dalsa CCD camera with 256 pixels × 256 pixels working at 1 kHz. Wavefront compensation is obtained with a deformable mirror with 37 actuators and a tip-tilt mirror. Real-time control software has been developed on an RTAI-Linux PC, and Scicos/Scilab-based software has been written for online analysis of the system behavior. The software is completely open source.

  7. Commissioning Instrument for the GTC

    Science.gov (United States)

    Cuevas, S.; Sánchez, B.; Bringas, V.; Espejo, C.; Flores, R.; Chapa, O.; Lara, G.; Chavolla, A.; Anguiano, G.; Arciniega, S.; Dorantes, A.; González, J. L.; Montoya, J. M.; Toral, R.; Hernández, H.; Nava, R.; Devaney, N.; Castro, J.; Cavaller-Marqués, L.

    2005-12-01

    During the GTC integration phase, the Commissioning Instrument (CI) will be a diagnostic tool for performance verification. The CI features four operation modes: imaging, pupil imaging, curvature WFS, and high-resolution Shack-Hartmann WFS. The instrument was built by the Instituto de Astronomía UNAM and the Centro de Ingeniería y Desarrollo Industrial (CIDESI) under GRANTECAN contract, after a public bid. In this paper we give a general overview of the instrument and show some of the final performance results obtained during the Factory Acceptance tests prior to its transport to La Palma.

  8. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Kaisserli, Zineb

    2015-01-01

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter

  9. Nonlinear Denoising and Analysis of Neuroimages With Kernel Principal Component Analysis and Pre-Image Estimation

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Abrahamsen, Trine Julie; Madsen, Kristoffer Hougaard

    2012-01-01

    We investigate the use of kernel principal component analysis (PCA) and the inverse problem known as pre-image estimation in neuroimaging: i) We explore kernel PCA and pre-image estimation as a means for image denoising as part of the image preprocessing pipeline. Evaluation of the denoising procedure is performed within a data-driven split-half evaluation framework. ii) We introduce manifold navigation for exploration of a nonlinear data manifold, and illustrate how pre-image estimation can be used to generate brain maps in the continuum between experimentally defined brain states/classes. We
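
    A short scikit-learn sketch of kernel PCA denoising with pre-image estimation; sklearn's built-in ridge-regression pre-image learner is used here rather than the fixed-point schemes discussed in such work, and the data are synthetic:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(6)
base = np.sin(np.linspace(0, np.pi, 20))
X = base + 0.1 * rng.standard_normal((200, 20))            # 200 noisy signals

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True, alpha=0.1)    # alpha: pre-image ridge term
X_denoised = kpca.inverse_transform(kpca.fit_transform(X)) # pre-images in input space
```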

  10. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation of the frequency signals received from a magnetic resonance scanner system. Previous research has demonstrated that the random noise involved in observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of this complicated noise structure, images denoised by conventional methods are usually biased, and the bias can reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically, and we propose a new and more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
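
    For reference, the classical second-moment correction (not the paper's regression-based formula) follows from E[M^2] = A^2 + 2*sigma^2 under the Rician model; a minimal numpy sketch:

```python
import numpy as np

def rician_correct(m, sigma):
    """m: observed magnitude image; sigma: per-channel Gaussian noise std."""
    return np.sqrt(np.maximum(m ** 2 - 2.0 * sigma ** 2, 0.0))

rng = np.random.default_rng(7)
a = np.full((64, 64), 5.0)                                # true intensity
m = np.abs(a + rng.standard_normal((64, 64))              # real-channel noise
             + 1j * rng.standard_normal((64, 64)))        # imaginary-channel noise
a_hat = rician_correct(m, 1.0)                            # bias-reduced intensity
```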

  11. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification supports a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach to increase the classification accuracy when a classification method is applied; it is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.

  12. Input Space Regularization Stabilizes Pre-images for Kernel PCA De-noising

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie; Hansen, Lars Kai

    2009-01-01

    Solution of the pre-image problem is key to efficient nonlinear de-noising using kernel Principal Component Analysis. Pre-image estimation is inherently ill-posed for typical kernels used in applications and consequently the most widely used estimation schemes lack stability. For de...

  13. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    Science.gov (United States)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.
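
    The reversibility trick can be illustrated with a toy integer lifting step: the samples used to predict are smoothed, but the samples being stored are left unchanged, so the step inverts exactly despite the irreversible filter. This is a hedged sketch of the idea, not the paper's RDLS color space transforms.

```python
import numpy as np

def smooth(e):                                   # any denoising filter works here
    return np.convolve(e, [0.25, 0.5, 0.25], mode="same")

def rdls_predict(x):
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    pred = np.rint(smooth(even)).astype(int)     # prediction from *denoised* even samples
    return even, odd - pred                      # even band stored unchanged + detail band

def rdls_inverse(even, detail):
    pred = np.rint(smooth(even)).astype(int)     # identical prediction -> exact inverse
    odd = detail + pred
    out = np.empty(even.size + odd.size, dtype=int)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(8).integers(0, 256, 64)
e, d = rdls_predict(x)
assert np.array_equal(rdls_inverse(e, d), x)     # perfectly reversible
```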

  14. Ontologie della resistenza: note sul concetto di materia in N. Hartmann e G. Lukács

    Directory of Open Access Journals (Sweden)

    D’Anna, Giuseppe

    2010-01-01

    The essay discusses the topic of matter in N. Hartmann and G. Lukács. It shows how both of them use their own concepts of matter in order to counter correlativistic philosophies (such as Phenomenology, Neo-Kantianism, Existentialism, Pragmatism, Positivism, Empirio-Criticism or Conventionalism) which reduce the real world to the laws of thought, to those of absolute consciousness or to scientific laws, thereby eliminating the resistance of reality and the meaning of praxis and dialectic in the face of the givenness of the world. While Hartmann uses the concept of matter to revive the idea of a reality going beyond the laws of thought, Lukács uses Hartmann's ontology to counter the philosophies that, by excluding the exceedance of reality and matter, provide theories which appear to him to be functional to capitalist society.

  15. Discrete wavelet transform-based denoising technique for advanced state-of-charge estimator of a lithium-ion battery in electric vehicles

    International Nuclear Information System (INIS)

    Lee, Seongjun; Kim, Jonghoon

    2015-01-01

    Sophisticated data on the experimental DCV (discharging/charging voltage) of a lithium-ion battery are required for high-accuracy SOC (state-of-charge) estimation algorithms based on the state-space ECM (electrical circuit model) in BMSs (battery management systems). However, when noisy DCV signals are sensed, erroneous SOC estimation (which results in low BMS performance) is inevitable. Therefore, this manuscript describes the design and implementation of a DWT (discrete wavelet transform)-based denoising technique for DCV signals. The steps for denoising a noisy DCV measurement in the proposed approach are as follows. First, using MRA (multi-resolution analysis), the noise-ridden DCV signal is decomposed into different frequency sub-bands (low- and high-frequency components, A_n and D_n). Specifically, signal processing of the high-frequency component D_n, which focuses on a short time interval, is necessary to reduce noise in the DCV measurement. Second, a hard-thresholding-based denoising rule is applied to adjust the wavelet coefficients of the DWT to achieve a clear separation between the signal and the noise. Third, the desired de-noised DCV signal is reconstructed by taking the IDWT (inverse discrete wavelet transform) of the filtered detail coefficients. Finally, this signal is sent to the ECM-based SOC estimation algorithm using an EKF (extended Kalman filter). Experimental results indicate the robustness of the proposed approach for reliable SOC estimation. - Highlights: • Sophisticated data on the experimental DCV are required for high-accuracy SOC. • A DWT (discrete wavelet transform)-based denoising technique is newly investigated. • Three steps for denoising a noisy DCV measurement are implemented in this work. • Experimental results indicate the robustness of the proposed work for reliable SOC

  16. Impact of image denoising on image quality, quantitative parameters and sensitivity of ultra-low-dose volume perfusion CT imaging

    International Nuclear Information System (INIS)

    Othman, Ahmed E.; Brockmann, Carolin; Afat, Saif; Pjontek, Rastislav; Nikoubashman, Omid; Brockmann, Marc A.; Wiesmann, Martin; Yang, Zepa; Kim, Changwon; Nikolaou, Konstantin; Kim, Jong Hyo

    2016-01-01

    To examine the impact of denoising on ultra-low-dose volume perfusion CT (ULD-VPCT) imaging in acute stroke. Simulated ULD-VPCT data sets at 20 % dose rate were generated from perfusion data sets of 20 patients with suspected ischemic stroke acquired at 80 kVp/180 mAs. Four data sets were generated from each ULD-VPCT data set: not-denoised (ND); denoised using spatiotemporal filter (D1); denoised using quanta-stream diffusion technique (D2); combination of both methods (D1 + D2). Signal-to-noise ratio (SNR) was measured in the resulting 100 data sets. Image quality, presence/absence of ischemic lesions, CBV and CBF scores according to a modified ASPECTS score were assessed by two blinded readers. SNR and qualitative scores were highest for D1 + D2 and lowest for ND (all p ≤ 0.001). In 25 % of the patients, ND maps were not assessable and therefore excluded from further analyses. Compared to original data sets, in D2 and D1 + D2, readers correctly identified all patients with ischemic lesions (sensitivity 1.0, kappa 1.0). Lesion size was most accurately estimated for D1 + D2 with a sensitivity of 1.0 (CBV) and 0.94 (CBF) and an inter-rater agreement of 1.0 and 0.92, respectively. An appropriate combination of denoising techniques applied in ULD-VPCT produces diagnostically sufficient perfusion maps at substantially reduced dose rates as low as 20 % of the normal scan. (orig.)

  17. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method.

    Science.gov (United States)

    Meiniel, William; Olivo-Marin, Jean-Christophe; Angelini, Elsa D

    2018-08-01

    This paper reviews the state of the art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators, and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed) without any a priori model and with a single set of parameter values. An extended comparison is also presented that evaluates the denoising performance of thirteen (including ours) state-of-the-art denoising methods specifically designed to handle the different types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the others in the majority of the tested scenarios.
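
    A small sketch of the total-variation component of such sparsity-based denoisers, using scikit-image's Chambolle solver; the paper's estimator aggregation and low-frequency enhancement are not reproduced:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(12)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                                   # synthetic bright structure
noisy = rng.poisson(20 * img + 2) / 20.0                  # Poisson-dominated noise
denoised = denoise_tv_chambolle(noisy, weight=0.1)        # TV spatial regularization
```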

  18. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    Science.gov (United States)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has been attracting considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, it is shown that the singular values mainly reflect the energy of decomposed SCs, therefore traditional SVD denoising approaches are essentially energy-based, which tend to highlight the high-energy regular components in the measured signal, while ignoring the weak feature caused by early fault. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels, rather than energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC in the reconstruction of the denoised signal. In this way, some weak but informative SCs could be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated by both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interferences.
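
    A hedged sketch of SVD denoising with non-energy weights: singular components of a Hankel embedding are scored by a crude spectral-peakiness measure (standing in for the paper's periodic modulation intensity) and recombined through truncated linear weights:

```python
import numpy as np

def rsvd_denoise(x, n_rows=64, keep=8):
    H = np.lib.stride_tricks.sliding_window_view(x, x.size - n_rows + 1)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))]
    spectra = [np.abs(np.fft.rfft(Vt[i])) for i in range(len(s))]
    scores = [sp.max() / (sp.sum() + 1e-12) for sp in spectra]   # peakiness, not energy
    order = np.argsort(scores)[::-1][:keep]
    weights = np.linspace(1.0, 0.1, keep)                        # truncated linear weighting
    Hd = sum(w * comps[i] for w, i in zip(weights, order))
    # average Hankel anti-diagonals back into a 1-D signal
    return np.array([Hd[::-1].diagonal(k).mean()
                     for k in range(1 - Hd.shape[0], Hd.shape[1])])

noisy = np.sin(0.3 * np.arange(256)) + 0.5 * np.random.default_rng(3).standard_normal(256)
enhanced = rsvd_denoise(noisy)
```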

  19. Comparative analysis of chosen transforms in the context of de-noising harmonic signals

    Directory of Open Access Journals (Sweden)

    Artur Zacniewski

    2015-09-01

    Full Text Available In the article, a comparison of popular transforms used, among others, in denoising harmonic signals is presented. The classification of signals submitted to mathematical analysis is outlined, and selected transforms such as the Short Time Fourier Transform, the Wigner-Ville Distribution, the Wavelet Transform and the Discrete Cosine Transform are presented. A harmonic signal with added white noise was used in the study. During the study, the noise parameters were varied to analyze the effect of applying a particular transform to the noisy signal. The importance of the right choice of transform and its parameters (different for each kind of transform) is shown. Small changes in parameters or different functions used in a transform can lead to considerably different results. Keywords: denoising of harmonic signals, wavelet transform, discrete cosine transform, DCT
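    As a hedged illustration of one of the compared transforms (not the article's exact procedure), the sketch below denoises a harmonic signal with added white noise by hard-thresholding its DCT coefficients; the signal frequencies and the median-based threshold rule are assumptions.

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 1024, endpoint=False)
        clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
        noisy = clean + rng.normal(0.0, 0.4, t.size)

        # A harmonic signal is sparse in the DCT domain: keep only large coefficients.
        coeffs = dct(noisy, norm='ortho')
        thr = 3.0 * np.median(np.abs(coeffs))        # assumed hard-threshold rule
        coeffs[np.abs(coeffs) < thr] = 0.0
        denoised = idct(coeffs, norm='ortho')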

  20. Spectral data de-noising using semi-classical signal analysis: application to localized MRS

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2016-09-05

    In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to Fourier transformation, SCSA decomposes the input real positive MR spectrum into a set of linear combinations of squared eigenfunctions equivalently represented by localized functions with shape derived from the potential function of the Schrodinger operator. In this manner, the MRS spectral peaks represented as a sum of these 'shaped like' functions are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for an accurate data quantification.
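    A minimal numerical sketch of the reconstruction described above, under discretization assumptions (dense finite-difference operator, Dirichlet boundaries): the positive spectrum y is taken as the potential of a 1-D Schrödinger operator, and the estimate is rebuilt from the squared eigenfunctions of its negative eigenvalues as y_h = 4h Σ κ_n ψ_n². Here h is the design parameter that controls the trade-off between fidelity and smoothing.

        import numpy as np

        def scsa_denoise(y, dx, h):
            """Semi-classical signal analysis of a positive signal y sampled at step dx.

            Builds H = -h^2 d^2/dx^2 - y, keeps the negative eigenvalues -kappa_n^2,
            and reconstructs y_h = 4*h*sum_n kappa_n * psi_n^2."""
            n = len(y)
            # Second-derivative operator by central differences.
            D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                  + np.diag(np.ones(n - 1), -1)) / dx**2
            H = -h**2 * D2 - np.diag(y)
            lam, V = np.linalg.eigh(H)
            neg = lam < 0
            kappa = np.sqrt(-lam[neg])
            psi = V[:, neg] / np.sqrt(dx)      # L2-normalized eigenfunctions
            return 4.0 * h * (psi**2 @ kappa)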

  1. Spectral data de-noising using semi-classical signal analysis: application to localized MRS

    KAUST Repository

    Laleg-Kirati, Taous-Meriem; Zhang, Jiayu; Achten, Eric; Serrai, Hacene

    2016-01-01

    In this paper, we propose a new post-processing technique called semi-classical signal analysis (SCSA) for MRS data de-noising. Similar to Fourier transformation, SCSA decomposes the input real positive MR spectrum into a set of linear combinations of squared eigenfunctions equivalently represented by localized functions with shape derived from the potential function of the Schrodinger operator. In this manner, the MRS spectral peaks represented as a sum of these 'shaped like' functions are efficiently separated from noise and accurately analyzed. The performance of the method is tested by analyzing simulated and real MRS data. The results obtained demonstrate that the SCSA method is highly efficient in localized MRS data de-noising and allows for an accurate data quantification.

  2. Autonomia apesar da dependência: a construção de uma Antropologia Dimensional no diálogo entre Frankl e Hartmann

    Directory of Open Access Journals (Sweden)

    Daniel Rubens Santiago da Silva

    2016-12-01

    Full Text Available This article is a theoretical study presenting some points of Nicolai Hartmann's Dimensional Ontology and their relation to Viktor Frankl's Ontological-Dimensional Anthropology. Beyond the points at which a clear influence of Hartmann on Frankl's thought can be perceived, the study seeks out elements of Hartmann's thinking that go beyond what serves as a foundation for Logotherapy. As for Hartmann's influence, the study concludes that his Dimensional Ontology is central to Frankl's framework, attested by the concessions to 'psychophysical' conditioning and by the critique of pan-determinism and reductionism. The result is the formula: 'autonomy despite dependence'. As for the elements beyond what grounds Logotherapy, fruitful possibilities for dialogue were identified, as in the discussion of 'a-temporal' spiritual life and of the 'objective spirit'.

  3. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    Science.gov (United States)

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects their quality and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter is proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel; fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and the Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques.
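    A hedged sketch of the stage-1 filter: a 3 × 3 window slides over the image and each neighbor receives a fuzzy weight before averaging. The Gaussian membership function of the deviation from the window median, and the sigma value, are assumptions, not the authors' exact rule.

        import numpy as np

        def fuzzy_weighted_mean(img, sigma=20.0):
            # Replace each pixel by a fuzzy-weighted mean of its 3x3 neighborhood;
            # weights decay with distance from the window median (assumed membership).
            padded = np.pad(img.astype(float), 1, mode='reflect')
            out = np.empty(img.shape)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    win = padded[i:i + 3, j:j + 3]
                    w = np.exp(-((win - np.median(win)) ** 2) / (2.0 * sigma ** 2))
                    out[i, j] = np.sum(w * win) / np.sum(w)
            return out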

  4. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain increases image quality and improves the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserving relevant information and saliency, while remaining fast given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited to mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
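    A sketch of the core estimation step of a Non-Local Bayes-style denoiser, under simplifying assumptions: the group of similar patches is assumed to be already formed, and the patch search, aggregation, and second iteration of the published algorithm are omitted.

        import numpy as np

        def nl_bayes_group(group, sigma):
            """Wiener-type Gaussian MAP estimate for a group of similar patches.

            group: (n_patches, patch_dim) array of flattened similar patches.
            Applies P_hat = mu + (C - sigma^2 I) C^{-1} (P - mu)."""
            mu = group.mean(axis=0)
            X = group - mu
            C = X.T @ X / max(len(group) - 1, 1)
            d = group.shape[1]
            A = (C - sigma**2 * np.eye(d)) @ np.linalg.inv(C + 1e-6 * np.eye(d))
            return mu + X @ A.T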

  5. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise display a uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images acquired with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected, and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied to phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with that of other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied to final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, making it suitable for implementation in a clinical MRI setting

  6. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    Science.gov (United States)

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. In the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies greatly reduce the computational complexity of the MCP and help it obtain better sparse solutions. In the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed to exploit the learning capability of the presented algorithm to simultaneously capture the correlations within each slice and across nearby slices, thereby obtaining better denoising results. Experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE

  7. Wavefront Derived Refraction and Full Eye Biometry in Pseudophakic Eyes.

    Directory of Open Access Journals (Sweden)

    Xinjie Mao

    Full Text Available To assess wavefront-derived refraction and full eye biometry, including ciliary muscle dimension and full eye axial geometry, in pseudophakic eyes using spectral domain OCT equipped with a Shack-Hartmann wavefront sensor. Twenty-eight adult subjects (32 pseudophakic eyes) having recently undergone cataract surgery were enrolled in this study. A custom system combining two optical coherence tomography systems with a Shack-Hartmann wavefront sensor was constructed to image and monitor changes in whole eye biometry, the ciliary muscle and ocular aberration in the pseudophakic eye. A Badal optical channel and a visual target aligned with the wavefront sensor were incorporated into the system for measuring the wavefront-derived refraction. The image acquisition was performed twice. The coefficients of repeatability (CoR) and intraclass correlation coefficients (ICC) were calculated. Images were acquired and processed successfully in all patients. No significant difference was detected between repeated measurements of ciliary muscle dimension, full-eye biometry or defocus aberration. The CoR of full-eye biometry ranged from 0.36% to 3.04% and the ICC ranged from 0.981 to 0.999. The CoR for ciliary muscle dimensions ranged from 12.2% to 41.6% and the ICC ranged from 0.767 to 0.919. The defocus aberrations of the two measurements were 0.443 ± 0.534 D and 0.447 ± 0.586 D and the ICC was 0.951. The combined system is capable of measuring full eye biometry and refraction with good repeatability. The system is suitable for future investigation of pseudoaccommodation in the pseudophakic eye.

  8. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    International Nuclear Information System (INIS)

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-01-01

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images by comparing them with those obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T
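    For illustration, a hedged sketch of the pipeline described in the Methods: denoise the complex B1+ map (a median filter stands in for the paper's adaptive nonlinear filter), take the Laplacian by central differences, and form a conductivity estimate from the standard simplified Helmholtz relation σ ≈ Im(∇²B1+/B1+)/(μ0 ω); the 3 T Larmor frequency is an assumed constant.

        import numpy as np
        from scipy.ndimage import median_filter

        MU0 = 4e-7 * np.pi            # vacuum permeability
        OMEGA = 2 * np.pi * 128e6     # Larmor angular frequency at 3 T (assumed)

        def ept_conductivity(b1, dx):
            # Stand-in denoising of the complex B1+ map, channel by channel.
            b1 = median_filter(b1.real, size=3) + 1j * median_filter(b1.imag, size=3)
            # Laplacian by central differences along both in-plane axes.
            lap = (np.roll(b1, 1, 0) + np.roll(b1, -1, 0)
                   + np.roll(b1, 1, 1) + np.roll(b1, -1, 1) - 4.0 * b1) / dx**2
            return np.imag(lap / b1) / (MU0 * OMEGA)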

  9. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    Science.gov (United States)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme used in wavelet analysis, applies an adaptive interval thresholding that sets to zero all components of the intrinsic mode functions lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability for denoising and detail preservation.
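    A hedged sketch of the scheme's skeleton using the PyEMD package (an assumed dependency, installed as EMD-signal): decompose into IMFs, estimate a per-IMF noise level, and zero sub-threshold samples. The MAD-based threshold and the sample-wise (rather than interval-wise) zeroing are simplifications of the paper's PRBS-specific rule.

        import numpy as np
        from PyEMD import EMD      # pip install EMD-signal (assumed dependency)

        def hht_threshold_denoise(signal, k=2.0):
            imfs = EMD().emd(signal)
            out = np.zeros(len(signal))
            for imf in imfs:
                # Robust noise-scale estimate for this IMF (MAD-based).
                thr = k * np.median(np.abs(imf)) / 0.6745
                out += np.where(np.abs(imf) >= thr, imf, 0.0)
            return out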

  10. Image denoising via collaborative support-agnostic recovery

    KAUST Repository

    Behzad, Muzammil; Masood, Mudassir; Ballal, Tarig; Shadaydeh, Maha; Al-Naffouri, Tareq Y.

    2017-01-01

    In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.

  11. Image denoising via collaborative support-agnostic recovery

    KAUST Repository

    Behzad, Muzammil

    2017-06-20

    In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.

  12. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR).
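    For context, a short sketch of Gaussian kernel PCA denoising with scikit-learn once a model order and kernel scale have been chosen; the kPA selection procedure itself is not reproduced, and n_components and gamma below are assumed values.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.decomposition import KernelPCA

        X = load_digits().data / 16.0
        rng = np.random.default_rng(0)
        X_noisy = X + rng.normal(0.0, 0.25, X.shape)

        # Model order and kernel scale would be chosen by kPA; values are assumed.
        kpca = KernelPCA(n_components=32, kernel='rbf', gamma=0.03,
                         fit_inverse_transform=True, alpha=1e-3)
        X_denoised = kpca.inverse_transform(kpca.fit_transform(X_noisy))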

  13. An energy kurtosis demodulation technique for signal denoising and bearing fault detection

    International Nuclear Information System (INIS)

    Wang, Wilson; Lee, Hewen

    2013-01-01

    Rolling element bearings are commonly used in rotary machinery. Reliable bearing fault detection techniques are very useful in industries for predictive maintenance operations. Bearing fault detection still remains a very challenging task especially when defects occur on rotating bearing components because the fault-related features are non-stationary in nature. In this work, an energy kurtosis demodulation (EKD) technique is proposed for bearing fault detection especially for non-stationary signature analysis. The proposed EKD technique firstly denoises the signal by using a maximum kurtosis deconvolution filter to counteract the effect of signal transmission path so as to highlight defect-associated impulses. Next, the denoised signal is modulated over several frequency bands; a novel signature integration strategy is proposed to enhance feature characteristics. The effectiveness of the proposed EKD fault detection technique is verified by a series of experimental tests corresponding to different bearing conditions. (paper)

  14. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    Science.gov (United States)

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is significantly important to estimate the lithium-ion battery's remaining useful life (RUL), yet very difficult. One important reason is that the measured battery capacity data are often subject to the different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. Relevance vector machine (RVM) improved by differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment including battery 5 capacity prognostics case and battery 18 capacity prognostics case is conducted and validated that the proposed approach can predict the trend of battery capacity trajectory closely and estimate the battery RUL accurately. PMID:26413090

  15. LSTM-Based Hierarchical Denoising Network for Android Malware Detection

    OpenAIRE

    Yan, Jinpei; Qi, Yong; Rao, Qifan

    2018-01-01

    Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which still makes it difficult to fully describe malware. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to learn directly from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequence...

  16. Denoising traffic collision data using ensemble empirical mode decomposition (EEMD) and its application for constructing continuous risk profile (CRP).

    Science.gov (United States)

    Kim, Nam-Seog; Chung, Koohong; Ahn, Seongchae; Yu, Jeong Whon; Choi, Keechoo

    2014-10-01

    Filtering out the noise in traffic collision data is essential in reducing false positive rates (i.e., requiring safety investigation of sites where it is not needed) and can assist government agencies in better allocating limited resources. Previous studies have demonstrated that denoising traffic collision data is possible when there exists a true known high collision concentration location (HCCL) list to calibrate the parameters of a denoising method. However, such a list is often not readily available in practice. To this end, the present study introduces an innovative approach for denoising traffic collision data using the Ensemble Empirical Mode Decomposition (EEMD) method which is widely used for analyzing nonlinear and nonstationary data. The present study describes how to transform the traffic collision data before the data can be decomposed using the EEMD method to obtain set of Intrinsic Mode Functions (IMFs) and residue. The attributes of the IMFs were then carefully examined to denoise the data and to construct Continuous Risk Profiles (CRPs). The findings from comparing the resulting CRP profiles with CRPs in which the noise was filtered out with two different empirically calibrated weighted moving window lengths are also documented, and the results and recommendations for future research are discussed. Published by Elsevier Ltd.

  17. TERRESTRIAL LASER SCANNER DATA DENOISING BY DICTIONARY LEARNING OF SPARSE CODING

    Directory of Open Access Journals (Sweden)

    E. Smigiel

    2013-07-01

    Full Text Available Point cloud processing is basically a signal processing issue. The huge amount of data collected with Terrestrial Laser Scanners or photogrammetry techniques faces the classical questions of signal and image processing. Among others, denoising and compression are questions which have to be addressed in this context. That is why one should turn to signal theory, which can guide good practice and inspire new ideas from the latest developments in the field. The literature has shown for decades how strong and dynamic the theoretical field is and how efficient the derived algorithms have become. For about ten years, a new technique has appeared: known as compressive sensing or compressive sampling, it is based first on sparsity, which is an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have shown their efficiency. Sparsity can also be seen as redundancy removal of natural signals. Combined with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very early stage of sampling. Hence, instead of sampling the signal at a high sampling rate and removing redundancy as a second stage, the acquisition stage itself may be run with redundancy removal. This paper gives some theoretical aspects of these ideas with simple mathematics first. Then, the idea of compressive sensing for a Terrestrial Laser Scanner is examined as a potential research question and finally, a denoising scheme based on dictionary learning of sparse coding is tested experimentally. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to benefit from its latest developments.
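    The paper applies this to TLS point clouds; as a hedged generic illustration, the sketch below runs the patch-based sparse-coding denoiser on a 2-D image with scikit-learn, with assumed hyperparameters (patch size, number of atoms, sparsity level).

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        def dictionary_denoise(noisy, patch=6, atoms=100, nnz=2):
            # Learn a dictionary on the noisy patches themselves, then code each
            # patch sparsely (OMP) and rebuild the image from the approximations.
            patches = extract_patches_2d(noisy, (patch, patch))
            data = patches.reshape(len(patches), -1)
            mean = data.mean(axis=1, keepdims=True)
            dico = MiniBatchDictionaryLearning(n_components=atoms, alpha=1.0,
                                               transform_algorithm='omp',
                                               transform_n_nonzero_coefs=nnz)
            code = dico.fit_transform(data - mean)
            rec = (code @ dico.components_ + mean).reshape(patches.shape)
            return reconstruct_from_patches_2d(rec, noisy.shape)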

  18. HARDI denoising using nonlocal means on S2

    Science.gov (United States)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising of HARDI data be of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant to both spatial rotations as well as to a particular sampling scheme in use. As well, we provide a detailed description of the proposed filtering procedure, its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  19. Adaptive image denoising based on support vector machine and wavelet description

    Science.gov (United States)

    An, Feng-Ping; Zhou, Xian-Wei

    2017-12-01

    The adaptive image denoising method decomposes the original image into a series of basic pattern feature images on the basis of a wavelet description and constructs a support vector machine (SVM) regression function to realize the wavelet description of the original image. The support vector machine method allows the linear expansion of the signal to be expressed as a nonlinear function of the parameters associated with the SVM. Using the radial basis kernel function of the SVM, the original image can be expanded into a Mexican hat function and a residual trend. This Mexican hat function represents a basic image feature pattern. If the residual does not fluctuate, it can also be represented as a characteristic pattern. If the residuals fluctuate significantly, they are treated as a new image and the same decomposition process is repeated until the residuals obtained by the decomposition no longer fluctuate significantly. Experimental results show that the proposed method performs well; in particular, it satisfactorily solves the problem of image noise removal. It may provide a new tool and method for image denoising.

  20. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach exhibits satisfying results in terms of speckle noise reduction as well as edge preservation, at reduced computational cost.

  1. Oral histories in meteoritics and planetary science—XXIV: William K. Hartmann

    Science.gov (United States)

    Sears, Derek W. G.

    2014-06-01

    In this interview, William Hartmann (Bill, Fig. 1) describes how he was inspired as a teenager by a map of the Moon in an encyclopedia and by the paintings by Chesley Bonestell. Through the amateur journal "Strolling Astronomer," he shared his interests with other teenagers who became lifelong colleagues. At college, he participated in Project Moonwatch, observing early artificial satellites. In graduate school, under Gerard Kuiper, Bill discovered Mare Orientale and other large concentric lunar basin structures. In the 1960s and 1970s, he used crater densities to study surface ages and erosive/depositional effects, predicted the approximately 3.6 Gyr ages of the lunar maria before the Apollo samples, discovered the intense pre-mare lunar bombardment, deduced the youthful Martian volcanism as part of the Mariner 9 team, and proposed (with Don Davis) the giant impact model for lunar origin. In 1972, he helped found (what is now) the Planetary Science Institute. From the late 1970s to early 1990s, Bill worked mostly with Dale Cruikshank and Dave Tholen at Mauna Kea Observatory, helping to break down the Victorian paradigm that separated comets and asteroids, and determining the approximately 4% albedo of comet nuclei. Most recently, Bill has worked with the imaging teams for several additional Mars missions. He has written three college textbooks and, since the 1970s, after painting illustrations for his textbooks, has devoted part of his time to painting, having had several exhibitions. He has also published two novels. Bill Hartmann won the 2010 Barringer Award for impact studies and the first Carl Sagan Award for outreach in 1997.

  2. Quantitative accuracy of denoising techniques applied to dynamic 82Rb myocardial blood flow PET/CT scans

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Bouchelouche, Kirsten

    with suspected ischemic heart disease underwent a dynamic 7 minute 82Rb scan under resting and adenosine induced hyperaemic conditions after injection of 1100 MBq of 82Rb on a GE Discovery 690 PET/CT. Dynamic images were filtered using HighlY constrained backPRojection (HYPR) and a Hotelling filter, of which the latter was evaluated using a range of 4 to 7 included factors and for both 2D and 3D filtering. Data were analyzed using Cardiac VUer and the obtained MBF values were compared with those obtained when no denoising of the dynamic data was performed. Results: Both HYPR and Hotelling denoising could...

  3. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    Science.gov (United States)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain one of the most crucial components in industrial machines and due to their critical role, it is of great importance to monitor their conditions under operation. However, due to the background noise in acquired signals, it is not always possible to identify probable faults. This incapability in identifying the faults makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to the ball bearing time domain vibration signals as well as to their spectrums for the elimination of the background noise and the improvement the reliability of the fault detection process. The test cases conducted using experimental as well as the simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for the ball bearing fault detection.

  4. Three-dimension reconstruction based on spatial light modulator

    International Nuclear Information System (INIS)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu

    2011-01-01

    Three-dimension reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacture, construction, aerospace and biology. Via such technology we can obtain a three-dimension digital point cloud from a two-dimension image and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimension digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting, there are also many available methods such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimension reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through coordinate conversion, so that the expected 3D point cloud is obtained. Secondly, after de-noising and repair procedures, feature points can be selected and fitted with Zernike polynomials to obtain a fitting function of the surface topography, so as to reconstruct the measured object's three-dimensional topography. In this paper, a new kind of three-dimension reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
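    A minimal sketch of the Zernike fitting step named above, assuming sampled surface heights and a handful of low-order modes written out by hand; a production fit would use many more modes and a proper normalization convention.

        import numpy as np

        def zernike_basis(rho, theta):
            # A few low-order Zernike polynomials: piston, tilts, defocus, astigmatisms.
            return np.column_stack([
                np.ones_like(rho),
                rho * np.cos(theta),
                rho * np.sin(theta),
                2.0 * rho**2 - 1.0,
                rho**2 * np.cos(2.0 * theta),
                rho**2 * np.sin(2.0 * theta),
            ])

        def fit_surface(x, y, z):
            # Least-squares Zernike fit of heights z at points (x, y),
            # with coordinates normalized to the unit disk.
            r = np.hypot(x, y)
            rho, theta = r / r.max(), np.arctan2(y, x)
            A = zernike_basis(rho, theta)
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            return coeffs, A @ coeffs          # coefficients and fitted topography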

  5. Three-dimension reconstruction based on spatial light modulator

    Science.gov (United States)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimension reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacture, construction, aerospace and biology. Via such technology we can obtain a three-dimension digital point cloud from a two-dimension image and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimension digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting, there are also many available methods such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimension reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through coordinate conversion, so that the expected 3D point cloud is obtained. Secondly, after de-noising and repair procedures, feature points can be selected and fitted with Zernike polynomials to obtain a fitting function of the surface topography, so as to reconstruct the measured object's three-dimensional topography. In this paper, a new kind of three-dimension reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.

  6. The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm

    Science.gov (United States)

    Tan, Linglong; Li, Changkai; Wang, Yueqin

    2018-04-01

    SAR images often suffer noise interference during acquisition and transmission, which can greatly reduce image quality and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast, but its denoising effect is poor. To address this poor denoising performance, the K-SVD (K-means and singular value decomposition) algorithm is applied to image noise suppression in this paper. Firstly, the sparse dictionary structure is introduced in detail. The dictionary has a compact representation and can effectively train the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation of the dictionary. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm can remove speckle noise more effectively than the complete DCT dictionary while better retaining edge details.

  7. An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients

    Science.gov (United States)

    Moser, Steven; Lee, Peter; Podoleanu, Adrian

    2015-04-01

    Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them, and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor, and its real-time implementation on an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processing element internal word length requirements shows that 24-bit precision in pre-calculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient; RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
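    A sketch of the linear fit the architecture accelerates, under assumptions: three low-order Noll modes with hand-written analytic gradients, a synthetic 8×8 lenslet grid, and the pseudoinverse precomputed once (the quantity the FPGA stores in subsections) and then applied per frame.

        import numpy as np

        def grad_basis(x, y):
            # Analytic x/y gradients of Noll modes Z2 (tip), Z3 (tilt), Z4 (defocus).
            gx = np.column_stack([2.0 * np.ones_like(x), np.zeros_like(x),
                                  4.0 * np.sqrt(3.0) * x])
            gy = np.column_stack([np.zeros_like(y), 2.0 * np.ones_like(y),
                                  4.0 * np.sqrt(3.0) * y])
            return np.vstack([gx, gy])         # x-slopes stacked over y-slopes

        # Lenslet centres of a small Shack-Hartmann grid on the unit pupil.
        g = np.linspace(-0.75, 0.75, 8)
        X, Y = np.meshgrid(g, g)
        A = grad_basis(X.ravel(), Y.ravel())

        A_pinv = np.linalg.pinv(A)                 # precomputed once, stored
        slopes = A @ np.array([0.1, -0.05, 0.2])   # synthetic slope measurements
        coeffs = A_pinv @ slopes                   # per-frame: recovered coefficients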

  8. Twofold processing for denoising ultrasound medical images.

    Science.gov (United States)

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle well but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and in the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
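    For illustration, a simplified single-threshold version of the first fold using PyWavelets: the paper thresholds per non-overlapping block, whereas this sketch applies one universal-style soft threshold to all detail subbands; the wavelet, level, and threshold factor are assumed values.

        import numpy as np
        import pywt

        def wavelet_soft_threshold(img, wavelet='db4', level=3, k=3.0):
            coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
            # Noise scale from the finest diagonal subband (standard MAD estimate).
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thr = k * sigma
            out = [coeffs[0]]                  # keep the approximation subband
            for details in coeffs[1:]:
                out.append(tuple(pywt.threshold(c, thr, mode='soft') for c in details))
            return pywt.waverec2(out, wavelet)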

  9. Mesh Denoising based on Normal Voting Tensor and Binary Optimization

    OpenAIRE

    Yadav, S. K.; Reitebuch, U.; Polthier, K.

    2016-01-01

    This paper presents a tensor multiplication based smoothing algorithm that follows a two step denoising method. Unlike other traditional averaging approaches, our approach uses an element based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stoc...

  10. Adaptive nonlocal means filtering based on local noise level for CT denoising

    International Nuclear Information System (INIS)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-01

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to the local noise level of CT images, and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphics processing unit (GPU) implementations of the noise map calculation and the adaptive NLM filtering were developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, as demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the
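    A much-simplified single-image analogue using scikit-image: the paper adapts the filtering strength voxel by voxel from an analytically propagated CT noise map, whereas denoise_nl_means accepts only one strength h, so a global noise estimate stands in here; the 0.8 factor is an assumed value.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def global_strength_nlm(img):
            sigma = float(estimate_sigma(img))     # wavelet-based noise estimate
            return denoise_nl_means(img, h=0.8 * sigma, sigma=sigma,
                                    patch_size=5, patch_distance=6, fast_mode=True)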

  11. Adaptive Wavelet Threshold Denoising Method for Machinery Sound Based on Improved Fruit Fly Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2016-07-01

    Full Text Available As the sound signal of a machine contains abundant information and is easy to measure, acoustic-based monitoring or diagnosis systems exhibit obvious superiority, especially in some extreme conditions. However, the sound collected directly from an industrial field is always polluted by noise. In order to eliminate noise components from machinery sound, a wavelet threshold denoising method optimized by an improved fruit fly optimization algorithm (WTD-IFOA) is proposed in this paper. The sound is first decomposed by the wavelet transform (WT) to obtain the coefficients of each level. As the wavelet threshold functions proposed by Donoho were discontinuous, many modified functions with continuous first and second order derivatives were presented to realize adaptive denoising. However, the function-based denoising process is time-consuming and it is difficult to find optimal thresholds. To overcome these problems, the fruit fly optimization algorithm (FOA) was introduced to the process. Moreover, to avoid falling into local extremes, an improved fly distance range obeying a normal distribution was proposed on the basis of the original FOA. Then, the sound signal of a motor was recorded in a soundproof laboratory and Gaussian white noise was added to the signal. The simulation results illustrated the effectiveness and superiority of the proposed approach through a comprehensive comparison among five typical methods. Finally, an industrial application on a shearer in a coal mining working face was performed to demonstrate the practical effect.

  12. Unmixing-Based Denoising as a Pre-Processing Step for Coral Reef Analysis

    Science.gov (United States)

    Cerra, D.; Traganos, D.; Gege, P.; Reinartz, P.

    2017-05-01

    Coral reefs, among the world's most biodiverse and productive submerged habitats, have faced several mass bleaching events due to climate change during the past 35 years. In the course of this century, global warming and ocean acidification are expected to cause corals to become increasingly rare on reef systems. This will result in a sharp decrease in the biodiversity of reef communities and carbonate reef structures. Coral reefs may be mapped, characterized and monitored through remote sensing. Hyperspectral images in particular excel in coral monitoring: their very rich spectral information yields strong discrimination power to characterize a target of interest and to separate healthy corals from bleached ones. Being submerged habitats, coral reef systems are difficult to analyse in airborne or satellite images, as relevant information is conveyed in bands in the blue range, which exhibit a lower signal-to-noise ratio (SNR) with respect to other spectral ranges; furthermore, water absorbs most of the incident solar radiation, further decreasing the SNR. Derivative features, which are important in coral analysis, are greatly affected by the noise present in the relevant spectral bands, justifying the need for new denoising techniques able to preserve local spatial and spectral features. In this paper, Unmixing-based Denoising (UBD) is used to enable analysis of a hyperspectral image acquired over a coral reef system in the Red Sea based on derivative features. UBD reconstructs the dataset pixelwise with reduced noise effects, by forcing each spectrum to be a linear combination of other reference spectra, exploiting the high dimensionality of hyperspectral datasets. Results show clear enhancements with respect to traditional denoising methods based on spatial and spectral smoothing, facilitating the coral detection task.

  13. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is presented, based on the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k is implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimate of a noisy pixel is obtained by local averaging. The essential...
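    A short sketch of the described filter, with one assumption made explicit: pixels are flagged with the usual 1.5 × IQR fences and replaced by the mean of the in-range neighbors in the same window.

        import numpy as np

        def iqr_filter(img, k=3):
            r = k // 2
            padded = np.pad(img.astype(float), r, mode='reflect')
            out = img.astype(float).copy()
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    win = padded[i:i + k, j:j + k]
                    q1, q3 = np.percentile(win, [25, 75])
                    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
                    if not lo <= img[i, j] <= hi:          # outlier -> noisy pixel
                        inliers = win[(win >= lo) & (win <= hi)]
                        out[i, j] = inliers.mean() if inliers.size else win.mean()
            return out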

  14. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US image quality. While lots of algorithms have been proposed for 2D US image denoising with remarkable filtering quality, there is relatively less work done on 3D ultrasound speckle suppression, where the whole volume data rather than just one frame needs to be considered. Then, the most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM provides an effective method for speckle suppression in US images. In this paper, a programmable graphic-processor-unit- (GPU- based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for Log-compressed ultrasound images, was used for the 3D block-wise NLM filter on basis of Bayesian framework. The most significant aspect of our method was the adopting of powerful data-parallel computing capability of GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.

  15. Experimental study on a de-noising system for gas and oil pipelines based on an acoustic leak detection and location method

    International Nuclear Information System (INIS)

    Liu, Cuiwei; Li, Yuxing; Fang, Liping; Xu, Minghai

    2017-01-01

    To protect the pipelines from significant danger, the acoustic leak detection and location method for oil and gas pipelines is studied, and a de-noising system is established to extract leakage characteristics from signals. A test loop for gas and oil is established to carry out experiments. First, according to the measured signals, fitting leakage signals are obtained, and then, the objective signals are constructed by adding noises to the fitting signals. Based on the proposed evaluation indexes, the filtering methods are then applied to process the constructed signals and the de-noising system is established. The established leakage extraction system is validated and then applied to process signals measured in gas pipelines that include a straight pipe, elbow pipe and reducing pipe. The leak detection and location is carried out effectively. Finally, the system is applied to process signals measured in water pipelines. The results demonstrate that the proposed de-noising system is effective at extracting leakage signals from measured signals and that the proposed leak detection and location method has a higher detection sensitivity and localization accuracy. For a pipeline with an inner diameter of 42 mm, the smallest leakage orifice that can be detected is 0.1 mm for gas and water and the largest location error is 0.874% for gas and 0.176% for water. - Highlights: • Three evaluation indexes are proposed: SNR, RMSE and ALPD. • The de-noising system is established in the gas and oil pipelines. • The established system is used for gas pipeline effectively, including interference pipes. • The established de-noising system is used for water pipeline effectively.

  16. System and method for image reconstruction, analysis, and/or de-noising

    KAUST Repository

    Laleg-Kirati, Taous-Meriem

    2015-11-12

    A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.

  17. Preliminary study on effects of 60Co γ-irradiation on video quality and the image de-noising methods

    International Nuclear Information System (INIS)

    Yuan Mei; Zhao Jianbin; Cui Lei

    2011-01-01

    Variable noise appears in video images once the playback device is irradiated by γ-rays, affecting image clarity. In order to eliminate this image noise, the mechanism by which γ-irradiation affects the video playback device was studied in this paper, and methods to improve image quality with both hardware and software were proposed, using a protection program and a de-noising algorithm. The experimental results show that the video de-noising scheme based on hardware and software can effectively improve the PSNR by 87.5 dB. (authors)

  18. Effect of aberrations in human eye on contrast sensitivity function

    Science.gov (United States)

    Quan, Wei; Wang, Feng-lin; Wang, Zhao-qi

    2011-06-01

    The quantitative analysis of the effect of aberrations in the human eye on vision has important clinical value for the correction of aberrations. The wavefront aberrations of human eyes were measured with a Hartmann-Shack wavefront sensor and the modulation transfer function (MTF) was computed from the wavefront aberrations. The contrast sensitivity function (CSF) was obtained from the MTF and the retinal aerial image modulation (AIM). It is shown that the 2nd, 3rd, 4th, 5th and 6th Zernike aberrations deteriorate the contrast sensitivity function; when these aberrations are corrected, a high contrast sensitivity function can be obtained.

  19. Two-dimensional electron density characterisation of arc interruption phenomenon in current-zero phase

    Science.gov (United States)

    Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko

    2018-01-01

    Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.

  20. NAOMI: a low-order adaptive optics system for the VLT interferometer

    Science.gov (United States)

    Gonté, Frédéric Yves J.; Alonso, Jaime; Aller-Carpentier, Emmanuel; Andolfato, Luigi; Berger, Jean-Philippe; Cortes, Angela; Delplancke-Strobele, Françoise; Donaldson, Rob; Dorn, Reinhold J.; Dupuy, Christophe; Egner, Sebastian E.; Huber, Stefan; Hubin, Norbert; Kirchbauer, Jean-Paul; Le Louarn, Miska; Lilley, Paul; Jolley, Paul; Martis, Alessandro; Paufique, Jérôme; Pasquini, Luca; Quentin, Jutta; Ridings, Robert; Reyes, Javier; Shchkaturov, Pavel; Suarez, Marcos; Phan Duc, Thanh; Valdes, Guillermo; Woillez, Julien; Le Bouquin, Jean-Baptiste; Beuzit, Jean-Luc; Rochat, Sylvain; Vérinaud, Christophe; Moulin, Thibaut; Delboulbé, Alain; Michaud, Laurence; Correia, Jean-Jacques; Roux, Alain; Maurel, Didier; Stadler, Eric; Magnard, Yves

    2016-08-01

    The New Adaptive Optics Module for Interferometry (NAOMI) will be developed for and installed at the 1.8-metre Auxiliary Telescopes (ATs) at ESO Paranal. The goal of the project is to equip all four ATs with a low-order Shack-Hartmann adaptive optics system operating in the visible. By improving the wavefront quality delivered by the ATs for guide stars brighter than R = 13 mag, NAOMI will make the existing interferometer performance less dependent on the seeing conditions. Fed with higher and more stable Strehl, the fringe tracker(s) will achieve the fringe stability necessary to reach the full performance of the second-generation instruments GRAVITY and MATISSE.

  1. Wavefront Measurement in Ophthalmology

    Science.gov (United States)

    Molebny, Vasyl

    Wavefront sensing or aberration measurement in the eye is a key problem in refractive surgery and vision correction with laser. The accuracy of these measurements is critical for the outcome of the surgery. Practically all clinical methods use laser as a source of light. To better understand the background, we analyze the pre-laser techniques developed over centuries. They allowed new discoveries of the nature of the optical system of the eye, and many served as prototypes for laser-based wavefront sensing technologies. Hartmann's test was strengthened by Platt's lenslet matrix and the CCD two-dimensional photodetector acquired a new life as a Hartmann-Shack sensor in Heidelberg. Tscherning's aberroscope, invented in France, was transformed into a laser device known as a Dresden aberrometer, having seen its reincarnation in Germany with Seiler's help. The clinical ray tracing technique was brought to life by Molebny in Ukraine, and skiascopy was created by Fujieda in Japan. With the maturation of these technologies, new demands now arise for their wider implementation in optometry and vision correction with customized contact and intraocular lenses.

  2. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is a very important procedure in controlling diabetes mellitus and preventing complications in diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond pulse duration and microjoule pulse energy; photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., interference from background sounds and the multiple components of blood. The quality of the photoacoustic signal of BGC directly impacts the precision of BGC measurement. Therefore, an improved wavelet denoising method was proposed to eliminate the noises contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function was proposed in this paper. Simulation results illustrate that the denoising performance of this improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.
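
    The abstract's baseline, conventional wavelet threshold denoising, is easy to sketch. Below is a minimal PyWavelets implementation of soft/hard thresholding with the universal threshold; the paper's improved dual-threshold function is not specified in the abstract, so only the baseline it is compared against is shown, and the wavelet, decomposition level, and test signal are illustrative choices.

    ```python
    # Baseline wavelet threshold denoising with PyWavelets (pip install PyWavelets).
    # This is the conventional soft/hard-threshold pipeline the paper improves on;
    # the paper's dual-threshold function itself is not reproduced here.
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4, mode="soft"):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Universal threshold; noise sigma estimated from the finest detail band
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode) for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[: len(signal)]

    # Example on a noisy synthetic photoacoustic-like transient
    t = np.linspace(0, 1, 2048)
    clean = np.exp(-200 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 40 * t)
    noisy = clean + 0.1 * np.random.randn(t.size)
    recovered = wavelet_denoise(noisy)
    ```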

  3. Les aliments des habitants de la « cabane » / Food in the shack

    Directory of Open Access Journals (Sweden)

    Amandine Plancade

    2008-10-01

    Full Text Available Daniel, Lucien, Gérard and Jean-Baptiste live in a shack in a city back-alley. They get their food mainly from donations, left-overs or waste. The fieldwork performed in their company sheds light on the fact that new values must be conferred on such foods in order for them to become edible. What matters is how the food was obtained, the cooking process, and the subsequent production of waste. The analysis shows that feeding in such precarious conditions, far from being reduced to its nutritive dimension, organizes the social life of this shack's residents: food is what makes friendly relations with neighbours possible and gives the residents the opportunity to become helpers/donors in turn towards even more destitute persons.

  4. Edge-preserving image denoising via group coordinate descent on the GPU

    OpenAIRE

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This pape...

  5. Reconstruction of intestinal transit after Hartmann's colostomy

    Directory of Open Access Journals (Sweden)

    Rodrigo Gomes da Silva

    Full Text Available OBJECTIVE: The aim of this study was to evaluate the morbidity and mortality rates of attempted reversal of Hartmann's procedure. METHODS: Twenty-nine patients who underwent surgery for restoration of intestinal continuity after Hartmann's procedure at the Hospital das Clínicas of the Universidade Federal de Minas Gerais between January 1998 and December 2006 were studied retrospectively. Preoperative, intraoperative and postoperative data were evaluated. RESULTS: The mean age of the patients undergoing restoration of intestinal continuity after a Hartmann colostomy was 52.6 years, and 16 patients (55.2%) were male. The mean time with the colostomy was 17.6 months (range 1 to 84 months). The mean operative time was 300 minutes (range 180 to 720 minutes). Restoration of intestinal continuity was achieved in 27 patients (93%). Two patients developed an anastomotic fistula (7%) and six had wound infection (22%). There was one death (3.4%), in a patient with anastomotic fistula and abdominal sepsis. Among the factors related to failure of reversal of the Hartmann colostomy, statistically significant associations were observed with a previous reversal attempt (p = 0.007), previous chemotherapy (p = 0.037) and a long time with the colostomy in place (p = 0.025). CONCLUSION: The interval between creation of the colostomy and the reversal attempt should not be too long, and patients must be warned that, in a small percentage of cases, restoration of intestinal continuity may be impossible due to the local condition of the excluded rectum.

  6. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    Science.gov (United States)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signals and construct multidimensional input vectors, based on EMD and its translation invariance. Secondly, independent component analysis is applied to the input vectors, which amounts to a self-adaptive denoising of the intrinsic mode functions (IMFs) of the chaotic signals. Finally, all IMFs are recombined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with Gaussian noise of different levels and on the observed monthly sunspot chaotic sequence. The results prove that the proposed method is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, which makes the result approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
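
    A minimal sketch of the EMD-then-ICA idea is given below, assuming Python with the PyEMD (pip package EMD-signal) and scikit-learn libraries. The paper's circulate-translating construction of the input vectors is not reproduced; here the IMFs themselves form the multichannel input, and the rule for flagging noise-like components (excess kurtosis near zero) is an illustrative assumption, not the authors' criterion.

    ```python
    import numpy as np
    from PyEMD import EMD
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def emd_ica_denoise(x):
        imfs = EMD().emd(x)                   # (n_imfs, len(x)); last row is the residual
        ica = FastICA(n_components=imfs.shape[0], random_state=0)
        sources = ica.fit_transform(imfs.T)   # one independent component per column
        # Illustrative rule: zero out components whose excess kurtosis is near 0,
        # i.e. components that look like Gaussian noise.
        for i in range(sources.shape[1]):
            if abs(kurtosis(sources[:, i])) < 0.5:
                sources[:, i] = 0.0
        cleaned_imfs = ica.inverse_transform(sources).T
        return cleaned_imfs.sum(axis=0)       # recombine IMFs into the denoised signal
    ```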

  7. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    Full Text Available K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
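
    The pipeline the paper analyzes, sparse coding of image patches over a learned dictionary, can be sketched as follows, with scikit-learn's online dictionary learner standing in for the true K-SVD update (which scikit-learn does not provide). Patch size, number of atoms, and the alpha penalty are illustrative values, and the sketch is intended for small images.

    ```python
    # Sketch of the K-SVD-style patch denoising pipeline; MiniBatchDictionaryLearning
    # is a stand-in for the true K-SVD dictionary update.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(noisy, patch_size=(8, 8), n_atoms=128):
        patches = extract_patches_2d(noisy, patch_size)
        X = patches.reshape(patches.shape[0], -1)
        means = X.mean(axis=1, keepdims=True)        # remove per-patch DC component
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           random_state=0)
        codes = dico.fit(X - means).transform(X - means)  # sparse codes (OMP by default)
        X_hat = codes @ dico.components_ + means
        # Overlapping patch estimates are averaged back into the image
        return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), noisy.shape)
    ```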

  8. More about tunnelling times and superluminal tunnelling (Hartmann effect)

    International Nuclear Information System (INIS)

    Olkhovsky, V.S.; Recami, E.; Raciti, F.; Zaichenko, A.

    1995-05-01

    Aims of the present paper are: i) presenting and analysing the results of various numerical calculations of the penetration and return times ⟨τ_Pen⟩ and ⟨τ_Ret⟩ during tunnelling inside a rectangular potential barrier, for various penetration depths x_f; ii) putting forth and discussing suitable definitions, besides the mean values, also of the variances (or dispersions) D(τ_T) and D(τ_R) for the time durations of the transmission and reflection processes; iii) mentioning, moreover, that our definition ⟨τ_T⟩ of the average transmission time turns out to constitute an improvement on the ordinary dwell-time formula; iv) commenting, at last, on the basis of the new numerical results, upon some recent criticism by C.R. Leavens. The paper stresses that the numerical evaluations confirm that our approach implied, and implies, the existence of the Hartmann effect: an effect that in these days (due to the theoretical connections between tunnelling and evanescent-wave propagation) is receiving - at Cologne, Berkeley, Florence and Vienna - indirect, but quite interesting, experimental verification

  9. Combination of oriented partial differential equation and shearlet transform for denoising in electronic speckle pattern interferometry fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia

    2017-04-01

    Removing the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns is a key step. Among spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool; among transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for denoising ESPI fringe patterns that combines the second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into ESPI fringe pattern denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, the shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. The experimental results demonstrate the good performance of the proposed SOOPDE-Shearlet.

  10. Wavelet Based Denoising for the Estimation of the State of Charge for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Xiao Wang

    2018-05-01

    Full Text Available In practical electric vehicle applications, noise in the original discharging/charging voltage (DCV) signals is inevitable; it comes from electromagnetic interference and from the measurement noise of the sensors. To solve such problems, a Discrete Wavelet Transform (DWT) based state of charge (SOC) estimation method is proposed in this paper. Through a multi-resolution analysis, the original DCV signals with noise are decomposed into different frequency sub-bands. The desired de-noised DCV signals are then reconstructed by utilizing the inverse discrete wavelet transform, based on the SURE rule. With the de-noised DCV signal, the SOC and the parameters are obtained using the adaptive extended Kalman filter algorithm and the adaptive forgetting factor recursive least squares method. Simulation and experimental results show that the SOC estimation error is less than 1%, which indicates an effective improvement in SOC estimation accuracy.
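
    The "SURE rule" mentioned above refers to Stein's Unbiased Risk Estimate. A minimal sketch of SURE-based threshold selection for soft thresholding of a single wavelet sub-band follows; it assumes coefficients pre-scaled to unit noise variance, and the paper's exact per-band procedure is not detailed in the abstract, so this is illustrative.

    ```python
    # SURE threshold selection for soft thresholding (SureShrink-style).
    # For unit-variance coefficients x_i, SURE(t) = n - 2*#{|x_i| <= t} + sum(min(x_i^2, t^2));
    # the minimizer is searched over the candidate thresholds t = |x_(k)|.
    import numpy as np

    def sure_threshold(coeffs):
        x2 = np.sort(coeffs ** 2)
        n = x2.size
        cumsum = np.cumsum(x2)
        ks = np.arange(1, n + 1)
        # risk at t^2 = x2[k-1]: n - 2k + sum_{i<=k} x2_(i) + (n-k)*x2[k-1]
        risks = n - 2 * ks + cumsum + (n - ks) * x2
        return np.sqrt(x2[np.argmin(risks)])
    ```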

  11. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating the choice of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics; 2) the gray-level co-occurrence matrix (GLCM); 3) the gray-level run-length matrix (GLRLM); and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back-propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared with the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  12. A Digital Image Denoising Algorithm Based on Gaussian Filtering and Bilateral Filtering

    Directory of Open Access Journals (Sweden)

    Piao Weiying

    2018-01-01

    Full Text Available Bilateral filtering has been applied widely in digital image processing, but in high-gradient regions of an image it may generate a staircase effect. Bilateral filtering can be regarded as one particular form of local mode filtering. Based on this analysis, a mixed image de-noising algorithm based on the Gaussian filter and bilateral filtering is proposed. First, the Gaussian filter is used to filter the noisy image and obtain a reference image; then both the reference image and the noisy image are taken as input to the range kernel function of the bilateral filter. The reference image provides the image's low-frequency information, and the noisy image provides the high-frequency information. In comparative experiments on both the proposed method and traditional bilateral filtering, the results showed that the mixed de-noising algorithm can effectively overcome the staircase effect: the filtered image is smoother, its textural features are closer to those of the original image, and it achieves a higher PSNR value, while the computational cost of the two algorithms is basically the same.
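
    The described scheme amounts to a joint (cross) bilateral filter whose range kernel is evaluated on a Gaussian-smoothed reference image. A minimal NumPy sketch follows; the window radius and the two sigmas are illustrative, and edges are handled by wrap-around for brevity.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def guided_bilateral(noisy, sigma_s=2.0, sigma_r=0.1, radius=4):
        noisy = noisy.astype(float)
        reference = gaussian_filter(noisy, sigma_s)   # low-frequency guide image
        acc = np.zeros_like(noisy)
        weights = np.zeros_like(noisy)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                shifted_ref = np.roll(reference, (dy, dx), axis=(0, 1))
                shifted_img = np.roll(noisy, (dy, dx), axis=(0, 1))
                # Range kernel compares *reference* values, so the noise in the
                # noisy image does not corrupt the weights.
                w = spatial * np.exp(-(reference - shifted_ref) ** 2
                                     / (2 * sigma_r ** 2))
                acc += w * shifted_img
                weights += w
        return acc / weights
    ```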

  13. Denoising of gravitational wave signals via dictionary learning algorithms

    Science.gov (United States)

    Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.

    2016-12-01

    Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is, hence, a most crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts from rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.

  14. GPU Performance and Power Consumption Analysis: A DCT based denoising application

    OpenAIRE

    Pi Puig, Martín; De Giusti, Laura Cristina; Naiouf, Marcelo; De Giusti, Armando Eduardo

    2017-01-01

    It is known that energy and power consumption are becoming serious metrics in the design of high performance workstations because of heat dissipation problems. In the last years, GPU accelerators have been integrating many of these expensive systems despite they are embedding more and more transistors on their chips producing a quick increase of power consumption requirements. This paper analyzes an image processing application, in particular a Discrete Cosine Transform denoising algorithm, i...

  15. Three-dimension reconstruction based on spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu, E-mail: daisydelring@yahoo.com.cn [Huazhong University of Science and Technology (China)

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. Via such technology we can obtain a three-dimensional digital point cloud from a two-dimensional image and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are obtained mainly with adaptive optics systems based on a Shack-Hartmann sensor and with phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light-source coordinate system; these vectors are then converted to the image coordinate system through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, the feature points can be selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the three-dimensional topography of the object under test. In this paper, a new three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale values at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.
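
    The Zernike-fitting step described above can be sketched as an ordinary least-squares fit of a few low-order Zernike modes to a height map sampled on the unit disk. The mode set and normalization below are standard textbook choices, not necessarily those used by the authors.

    ```python
    import numpy as np

    def fit_zernike(height):
        ny, nx = height.shape
        y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
        r, theta = np.hypot(x, y), np.arctan2(y, x)
        inside = r <= 1.0
        # A few low-order Zernike modes: piston, tilts, defocus, astigmatisms, comas
        modes = [np.ones_like(r), r * np.cos(theta), r * np.sin(theta),
                 2 * r**2 - 1, r**2 * np.cos(2 * theta), r**2 * np.sin(2 * theta),
                 (3 * r**3 - 2 * r) * np.cos(theta), (3 * r**3 - 2 * r) * np.sin(theta)]
        A = np.column_stack([m[inside] for m in modes])
        coeffs, *_ = np.linalg.lstsq(A, height[inside], rcond=None)
        fitted = np.full(height.shape, np.nan)
        fitted[inside] = A @ coeffs          # smooth topography on the unit disk
        return coeffs, fitted
    ```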

  16. Parameters optimization for wavelet denoising based on normalized spectral angle and threshold constraint machine learning

    Science.gov (United States)

    Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui

    2012-01-01

    Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of the wavelet denoising algorithm in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as the spectral angle (SA). However, how to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named normalized spectral angle (NSA) is proposed. By comparing NSA values, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on threshold constraints and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that consistently surpasses a threshold is selected. The experiments proved that by using the NSA criterion the SA values decreased significantly, and that the fast algorithm saved 80% of the computation time while the denoising performance was not noticeably impaired.
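
    For reference, the underlying spectral-angle criterion is straightforward to compute; the sketch below also shows one plausible library-level aggregate. The exact normalization behind the paper's NSA criterion is not given in the abstract, so the mean over the library is an assumption.

    ```python
    import numpy as np

    def spectral_angle(s1, s2):
        # Angle between two spectra viewed as vectors; 0 means identical shape
        cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def library_spectral_angle(denoised_lib, reference_lib):
        # Aggregate score over a whole spectral library (mean SA is an assumption)
        angles = [spectral_angle(d, r) for d, r in zip(denoised_lib, reference_lib)]
        return float(np.mean(angles))
    ```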

  17. 3D mapping of turbulence: a laboratory experiment

    Science.gov (United States)

    Le Louarn, Miska; Dainty, Christopher; Paterson, Carl; Tallon, Michel

    2000-07-01

    In this paper, we present the first experimental results of the 3D mapping method. 3D mapping of turbulence is a method to remove the cone effect with multiple laser guide stars and multiple deformable mirrors. A laboratory experiment was carried out to verify the theoretical predictions. The setup consisted of two turbulent phase screens (made with liquid crystal devices) and a Shack-Hartmann wavefront sensor. We describe the interaction matrix involved in reconstructing Zernike commands for multiple deformable mirrors from the slope measurements made with laser guide stars. It is shown that mirror commands can indeed be reconstructed with the 3D mapping method. Limiting factors of the method, brought to light by this experiment, are discussed.

  18. Fixational eye movement: a negligible source of dynamic aberration.

    Science.gov (United States)

    Mecê, Pedro; Jarosz, Jessica; Conan, Jean-Marc; Petit, Cyril; Grieve, Kate; Paques, Michel; Meimon, Serge

    2018-02-01

    To evaluate the contribution of fixational eye movements to dynamic aberration, 50 healthy eyes were examined with an original custom-built Shack-Hartmann aberrometer, running at a temporal frequency of 236 Hz, with 22 lenslets across a 5 mm pupil, synchronized with a 236 Hz pupil tracker. A comparison of the dynamic behavior of the first 21 Zernike modes (starting from defocus) with and without digital pupil stabilization, on a 3.4 s sequence between blinks, showed that the contribution of fixational eye movements to dynamic aberration is negligible. Therefore we highlighted the fact that a pupil tracker coupled to an Adaptive Optics Ophthalmoscope is not essential to achieve diffraction-limited resolution.

  19. Image Structure-Preserving Denoising Based on Difference Curvature Driven Fractional Nonlinear Diffusion

    Directory of Open Access Journals (Sweden)

    Xuehui Yin

    2015-01-01

    Full Text Available Traditional image denoising techniques based on integer-order partial differential equations and gradient regularization often suffer from the staircase effect, speckle artifacts, and the loss of image contrast and texture details. To address these issues, in this paper a difference-curvature-driven fractional anisotropic diffusion for image noise removal is presented, which uses two techniques, fractional calculus and difference curvature, to describe the intensity variations in images. The fractional-order derivative information of an image deals well with the textures of the image and achieves a good tradeoff between eliminating speckle artifacts and restraining the staircase effect. The difference curvature, constructed from the second-order derivatives along the direction of the gradient of an image and perpendicular to the gradient, can effectively distinguish between ramps and edges. A Fourier-transform technique is also proposed to compute the fractional-order derivative. Experimental results demonstrate that the proposed denoising model can avoid speckle artifacts and the staircase effect and preserve important features such as curvy edges, straight edges, ramps, corners, and textures, clearly outperforming traditional integer-order-based methods. The experimental results also reveal that the proposed model yields a good visual effect and better MSSIM and PSNR values.
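
    The difference curvature itself is simple to compute from image derivatives. Below is a hedged NumPy sketch of the edge indicator D = ||u_nn| - |u_tt||, where u_nn is the second derivative along the gradient direction and u_tt across it; D is large on edges and small on both flat regions and ramps. The finite-difference scheme and the small epsilon guard are implementation choices, not the paper's.

    ```python
    import numpy as np

    def difference_curvature(u, eps=1e-8):
        uy, ux = np.gradient(u)        # axis 0 = y, axis 1 = x
        uyy, uyx = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        g2 = ux**2 + uy**2 + eps       # squared gradient magnitude (guarded)
        # Second derivatives along (u_nn) and across (u_tt) the gradient direction
        u_nn = (ux**2 * uxx + ux * uy * (uxy + uyx) + uy**2 * uyy) / g2
        u_tt = (uy**2 * uxx - ux * uy * (uxy + uyx) + ux**2 * uyy) / g2
        return np.abs(np.abs(u_nn) - np.abs(u_tt))
    ```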

  20. MHD from a Microscopic Concept and Onset of Turbulence in Hartmann Flow

    International Nuclear Information System (INIS)

    Jirkovsky, L.; Bo-ot, L. Ma.; Chiang, C. M.

    2010-01-01

    We derive higher-order magneto-hydrodynamic (MHD) equations from a microscopic picture using projection and perturbation formalism. In an application to Hartmann flow we find velocity profiles flattening towards the center at the onset of turbulence in the hydrodynamic limit. Comparison with the system under the effect of a uniform magnetic field yields a difference in the onset of turbulence consistent with observations, showing that the presence of a magnetic field inhibits the onset of instability or turbulence. The laminar-turbulent transition is demonstrated in a phase-transition plot of the development in time of the relative average velocities vs. Reynolds number, showing a sharp increase of the relative average velocity at the transition point as determined by the critical Reynolds number. (physics of gases, plasmas, and electric discharges)

  1. Portfolio Value at Risk Estimate for Crude Oil Markets: A Multivariate Wavelet Denoising Approach

    Directory of Open Access Journals (Sweden)

    Kin Keung Lai

    2012-04-01

    Full Text Available In today's increasingly globalized economy, the major crude oil markets worldwide show a higher level of integration, which results in a higher level of dependency and transmission of risks among different markets. Thus the risk of the typical multi-asset crude oil portfolio is influenced by the dynamic correlation among different assets, which has both normal and transient behaviors. This paper proposes a novel multivariate wavelet denoising based approach for estimating Portfolio Value at Risk (PVaR). Multivariate wavelet analysis is introduced to analyze the multi-scale behavior of the correlation among different markets and the portfolio volatility behavior in the higher-dimensional time-scale domain. The heterogeneous data and noise behavior are addressed in the proposed multi-scale denoising based PVaR estimation algorithm, which also incorporates mainstream time series models to address other well-known data features such as autocorrelation and volatility clustering. Empirical studies suggest that the proposed algorithm outperforms the benchmark Exponentially Weighted Moving Average (EWMA) and DCC-GARCH models in terms of conventional performance evaluation criteria for model reliability.

  2. The impact of densification by means of informal shacks in the backyards of low-cost houses on the environment and service delivery in cape town, South Africa.

    Science.gov (United States)

    Govender, Thashlin; Barnes, Jo M; Pieper, Clarissa H

    2011-01-01

    This paper investigates the state-sponsored low cost housing provided to previously disadvantaged communities in the City of Cape Town. The strain imposed on municipal services by informal densification of unofficial backyard shacks was found to create unintended public health risks. Four subsidized low-cost housing communities were selected within the City of Cape Town in this cross-sectional survey. Data was obtained from 1080 persons with a response rate of 100%. Illegal electrical connections to backyard shacks that are made of flimsy materials posed increased fire risks. A high proportion of main house owners did not pay for water but sold water to backyard dwellers. The design of state-subsidised houses and the unplanned housing in the backyard added enormous pressure on the existing municipal infrastructure and the environment. Municipal water and sewerage systems and solid waste disposal cannot cope with the increased population density and poor sanitation behaviour of the inhabitants of these settlements. The low-cost housing program in South Africa requires improved management and prudent policies to cope with the densification of state-funded low-cost housing settlements.

  3. A Novel Hybrid Model Based on Extreme Learning Machine, k-Nearest Neighbor Regression and Wavelet Denoising Applied to Short-Term Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Weide Li

    2017-05-01

    Full Text Available Electric load forecasting plays an important role in electricity markets and power systems. Because electric load time series are complicated and nonlinear, it is very difficult to achieve a satisfactory forecasting accuracy. In this paper, a hybrid model, Wavelet Denoising-Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EWKM), which combines k-Nearest Neighbor (KNN) and Extreme Learning Machine (ELM) based on a wavelet denoising technique, is proposed for short-term load forecasting. The proposed hybrid model first decomposes the time series into a low-frequency-associated main signal and some detailed signals associated with high frequencies, then uses KNN to determine the independent and dependent variables from the low-frequency signal. Finally, the ELM is used to capture the nonlinear relationship between these variables to obtain the final prediction result for the electric load. Compared with three other models, Extreme Learning Machine optimized by k-Nearest Neighbor Regression (EKM), Wavelet Denoising-Extreme Learning Machine (WKM) and Wavelet Denoising-Back Propagation Neural Network optimized by k-Nearest Neighbor Regression (WNNM), the model proposed in this paper improves the accuracy efficiently. New South Wales is the economic powerhouse of Australia, so we use the proposed model to predict electric demand for that region. The accurate prediction has significant meaning.

  4. Comparison of de-noising techniques of scintigraphic images; Comparaison de techniques de debruitage des images scintigraphiques

    Energy Technology Data Exchange (ETDEWEB)

    Kirkove, M.; Seret, A. [Liege Univ., Imagerie Medicale Experimentale, Institut de Physique (Belgium)

    2007-05-15

    Scintigraphic images are strongly affected by Poisson noise. This article presents the results of a comparison between de-noising methods for Poisson noise according to different criteria: the gain in signal-to-noise ratio, the preservation of resolution and contrast, and the visual quality. The wavelet techniques recently developed to de-noise Poisson-noise-limited images are divided into two groups based on: (1) the Haar representation; (2) the transformation of Poisson noise into white Gaussian noise by the Haar-Fisz transform followed by a de-noising step. In this study, three variants of the first group and three variants of the second, including the adaptive Wiener filter, four types of wavelet thresholding and the Bayesian method of Pizurica, were compared to Metz and Hanning filters and to Shine, a systematic noise elimination process. All these methods, except Shine, are parametric. For each of them, ranges of optimal values for the parameters were highlighted as a function of the aforementioned criteria. The intersection of the ranges for the wavelet methods without thresholding was empty, and these methods were therefore not compared further quantitatively. The thresholding techniques and Shine gave the best results in resolution and contrast. The largest improvement in signal-to-noise ratio was obtained by the filters. Ideally, these filters should be accurately defined for each image, which is difficult in the clinical context; moreover, they generate oscillation artefacts. In addition, the wavelet techniques did not bring significant improvements and are rather slow. Therefore Shine, which is fast and works automatically, appears to be an interesting alternative. (authors)
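
    The second group of methods stabilizes the Poisson noise so that a Gaussian denoiser can be applied. The Haar-Fisz transform itself is more involved; the snippet below uses the closely related Anscombe transform to illustrate the same variance-stabilization idea, and is not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def anscombe_denoise(counts, sigma=1.5):
        # Anscombe transform: Poisson counts -> approximately unit-variance Gaussian
        vst = 2.0 * np.sqrt(counts + 3.0 / 8.0)
        # Any Gaussian denoiser fits here; a simple Gaussian filter is used as a stand-in
        smoothed = gaussian_filter(vst, sigma)
        # Simple algebraic inverse (slightly biased; exact unbiased inverses exist)
        return (smoothed / 2.0) ** 2 - 3.0 / 8.0
    ```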

  5. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    Science.gov (United States)

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  6. Seismic data interpolation and denoising by learning a tensor tight frame

    International Nuclear Information System (INIS)

    Liu, Lina; Ma, Jianwei; Plonka, Gerlind

    2017-01-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient. (paper)

  7. Seismic data interpolation and denoising by learning a tensor tight frame

    Science.gov (United States)

    Liu, Lina; Plonka, Gerlind; Ma, Jianwei

    2017-10-01

    Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.

  8. Radar Target Recognition Based on Stacked Denoising Sparse Autoencoder

    Directory of Open Access Journals (Sweden)

    Zhao Feixiang

    2017-04-01

    Full Text Available Feature extraction is a key step in radar target recognition. The quality of the extracted features determines the performance of target recognition. However, capturing the deep structure of the data is difficult with traditional methods. An autoencoder can learn features from the data itself and can obtain feature representations at different levels. To eliminate the influence of noise, a radar target recognition method based on a stacked denoising sparse autoencoder is proposed in this paper. This method can extract features directly and efficiently by setting different numbers of hidden layers and iterations. Experimental results show that the proposed method is superior to the k-nearest neighbor method and the traditional stacked autoencoder.
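
    A single denoising-autoencoder layer of the kind stacked in such models can be sketched in a few lines of PyTorch. Layer sizes, the corruption level, and the plain L1 penalty used for sparsity below are illustrative assumptions; the paper's exact architecture is not given in the abstract.

    ```python
    import torch
    import torch.nn as nn

    class DenoisingSparseAE(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
            self.decoder = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            h = self.encoder(x)
            return self.decoder(h), h

    def train_layer(model, data, noise_std=0.1, sparsity=1e-4, epochs=100, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            corrupted = data + noise_std * torch.randn_like(data)  # denoising criterion
            recon, hidden = model(corrupted)
            # Reconstruct the *clean* input from the corrupted one, plus L1 sparsity
            loss = nn.functional.mse_loss(recon, data) + sparsity * hidden.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model
    ```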

  9. Numerical simulations of MHD flow transition in ducts with conducting Hartmann walls. Limtech project A3 D4 (TUI)

    Energy Technology Data Exchange (ETDEWEB)

    Krasnov, D.; Boeck, T. [Technische Univ. Ilmenau (Germany). Inst. of Thermodynamics and Fluid Mechanics; Braiden, L.; Molokov, S. [Coventry Univ. (United Kingdom). Dept. of Mathematics and Physics; Buehler, L. [Karlsruher Institut fuer Technologie (Germany). Inst. fuer Kern- und Energietechnik, Programm Kernfusion

    2016-07-01

    Pressure-driven magnetohydrodynamic duct flows in a transverse, wall-parallel and uniform field have been studied by direct numerical simulation. The conducting Hartmann walls give rise to a laminar velocity distribution with strong jets at the side walls, which are susceptible to flow instability. The onset of time-dependent flow as well as fully developed turbulent flow have been explored over a wide range of parameters.

  10. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms.

    Science.gov (United States)

    Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen

    2012-09-01

    We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.

  11. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established by the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model, and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources of our method is presented: in order to uniquely identify all sources, the number of sources K must fulfill K ≤ (M⁴ - 2M³ + 7M² - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.

  12. A Denoising Based Autoassociative Model for Robust Sensor Monitoring in Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Ahmad Shaheryar

    2016-01-01

    Full Text Available Sensor health monitoring is essential for the reliable functioning of safety-critical chemical and nuclear power plants. Autoassociative neural network (AANN) based empirical sensor models have been widely reported for sensor calibration monitoring. However, such ill-posed data-driven models may result in poor generalization and robustness. To address these issues, several regularization heuristics such as training with jitter, weight decay, and cross-validation are suggested in the literature. Apart from these regularization heuristics, traditional error-gradient-based supervised learning algorithms for multilayered AANN models are highly susceptible to being trapped in a local optimum. In order to address the poor regularization and robust learning issues, we propose here a denoised autoassociative sensor model (DAASM) based on a deep learning framework. The proposed DAASM comprises multiple hidden layers which are pretrained greedily in an unsupervised fashion under a denoising autoencoder architecture. In order to improve robustness, a dropout heuristic and domain-specific data corruption processes are exercised during the unsupervised pretraining phase. The proposed sensor model is trained and tested on sensor data from a PWR-type nuclear power plant. Accuracy, autosensitivity, spillover, and sequential probability ratio test (SPRT) based fault detectability metrics are used for performance assessment and comparison with the extensively reported five-layer AANN model by Kramer.

  13. Hartmann characterization of the PEEM-3 aberration-corrected X-ray photoemission electron microscope.

    Science.gov (United States)

    Scholl, A; Marcus, M A; Doran, A; Nasiatka, J R; Young, A T; MacDowell, A A; Streubel, R; Kent, N; Feng, J; Wan, W; Padmore, H A

    2018-05-01

    Aberration correction by an electron mirror dramatically improves the spatial resolution and transmission of photoemission electron microscopes. We will review the performance of the recently installed aberration corrector of the X-ray Photoemission Electron Microscope PEEM-3 and show a large improvement in the efficiency of the electron optics. Hartmann testing is introduced as a quantitative method to measure the geometrical aberrations of a cathode lens electron microscope. We find that aberration correction leads to an order of magnitude reduction of the spherical aberrations, suggesting that a spatial resolution of below 100 nm is possible at 100% transmission of the optics when using x-rays. We demonstrate this improved performance by imaging test patterns employing element and magnetic contrast. Published by Elsevier B.V.

  14. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
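
    In practice the algorithm is available off the shelf: scikit-image's TV denoiser is, per its documentation, based on Chambolle's projection algorithm. A minimal usage sketch on a piecewise-constant test image follows; the weight parameter (the regularization strength) is an illustrative value.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)
    clean = np.zeros((128, 128))
    clean[32:96, 32:96] = 1.0                               # piecewise-constant test image
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)  # f = u + n, Gaussian noise
    denoised = denoise_tv_chambolle(noisy, weight=0.15)     # larger weight -> smoother u
    ```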

  15. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    Science.gov (United States)

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with particle filter. This filter improves ECG denoising performance by implementing marginalized particle filter framework while reducing its computational complexity using EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework to the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms in lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed

  16. Adaptive optics system application for solar telescope

    Science.gov (United States)

    Lukin, V. P.; Grigor'ev, V. M.; Antoshkin, L. V.; Botugina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Kovadlo, P. G.; Krivolutskiy, N. P.; Lavrionova, L. N.; Skomorovski, V. I.

    2008-07-01

    The possibility of applying adaptive correction to ground-based solar astronomy is considered. Several experimental systems for image stabilization are described along with the results of their tests. Drawing on our work over several years and on worldwide experience in solar adaptive optics (AO), we expect to obtain first light by the end of 2008 for the first Russian low-order ANGARA solar AO system on the Big Solar Vacuum Telescope (BSVT), with a 37-subaperture Shack-Hartmann wavefront sensor based on our modified correlation-tracker algorithm, a DALSTAR video camera, a 37-element deformable bimorph mirror, and a home-made fast tip-tilt mirror with a separate correlation tracker. Daytime turbulence at the BSVT site is very strong, and we plan to obtain partial correction for part of the solar surface image.

  17. Zonal wavefront sensing using a grating array printed on a polyester film

    Energy Technology Data Exchange (ETDEWEB)

    Pathak, Biswajit; Boruah, Bosanta R., E-mail: brboruah@iitg.ernet.in [Department of Physics, Indian Institute of Technology Guwahati, Guwahati, Assam 781039 (India); Kumar, Suraj [Department of Applied Sciences, Gauhati University, Guwahati, Assam 781014 (India)

    2015-12-15

    In this paper, we describe the development of a zonal wavefront sensor that comprises an array of binary diffraction gratings realized on a transparent sheet (i.e., a polyester film) followed by a focusing lens and a camera. The sensor works in a manner similar to that of a Shack-Hartmann wavefront sensor. The fabrication of the array of gratings is immune to certain issues associated with the fabrication of the lenslet array commonly used in zonal wavefront sensing. Besides, the sensing method offers several important advantages, such as a flexible dynamic range, easy configurability, and the option to enhance the sensing frame rate. Here, we demonstrate the working of the proposed sensor using a proof-of-principle experimental arrangement.

  18. Zonal wavefront sensing using a grating array printed on a polyester film

    Science.gov (United States)

    Pathak, Biswajit; Kumar, Suraj; Boruah, Bosanta R.

    2015-12-01

    In this paper, we describe the development of a zonal wavefront sensor that comprises an array of binary diffraction gratings realized on a transparent sheet (i.e., a polyester film) followed by a focusing lens and a camera. The sensor works in a manner similar to that of a Shack-Hartmann wavefront sensor. The fabrication of the array of gratings is immune to certain issues associated with the fabrication of the lenslet array commonly used in zonal wavefront sensing. Besides, the sensing method offers several important advantages, such as a flexible dynamic range, easy configurability, and the option to enhance the sensing frame rate. Here, we demonstrate the working of the proposed sensor using a proof-of-principle experimental arrangement.
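
    Both this grating-array sensor and the Shack-Hartmann sensor it emulates reduce to the same zonal measurement: spot displacements per subaperture converted to local wavefront slopes. A minimal NumPy sketch of that centroid-to-slope step follows; the grid geometry, focal length, and pixel pitch are placeholders, not the paper's values.

    ```python
    import numpy as np

    def subaperture_slopes(frame, n_sub, focal_length, pixel_pitch):
        h = frame.shape[0] // n_sub
        w = frame.shape[1] // n_sub
        slopes = np.zeros((n_sub, n_sub, 2))
        for i in range(n_sub):
            for j in range(n_sub):
                spot = frame[i * h:(i + 1) * h, j * w:(j + 1) * w].astype(float)
                total = spot.sum()
                if total <= 0:
                    continue                      # dark subaperture: no measurement
                ys, xs = np.mgrid[0:h, 0:w]
                # Centroid offset from the subaperture center, in pixels
                cy = (ys * spot).sum() / total - (h - 1) / 2.0
                cx = (xs * spot).sum() / total - (w - 1) / 2.0
                # Local wavefront tilt = spot displacement / focal length
                slopes[i, j] = (cx * pixel_pitch / focal_length,
                                cy * pixel_pitch / focal_length)
        return slopes
    ```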

  19. Application of NASVD method in the denoising of airborne gamma-ray data

    International Nuclear Information System (INIS)

    Yang Jia; Ge Liangquan; Zhang Qingxian; Gu Yi

    2010-01-01

    A noise-reducing method for gamma-ray spectra based on multivariate statistical analysis, the NASVD (Noise-Adjusted Singular Value Decomposition) method, is introduced in this paper, together with its main idea and an algorithm for its implementation. The NASVD method was applied to an airborne gamma-ray data set; the results show that the method can dramatically remove statistical noise from raw gamma-ray spectra and that the quality of the processed data is much better than that of conventional spectral denoising methods. (authors)
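
    The core of NASVD is compact enough to sketch directly: scale the spectra so the Poisson noise is approximately uniform across channels, truncate the singular value decomposition, and rescale. The number of retained components below is an illustrative choice.

    ```python
    import numpy as np

    def nasvd_denoise(spectra, n_components=8):
        # spectra: (n_records, n_channels) matrix of raw gamma-ray counts
        mean_spec = spectra.mean(axis=0)
        scale = np.sqrt(np.maximum(mean_spec, 1e-12))  # noise-adjusting weights
        U, s, Vt = np.linalg.svd(spectra / scale, full_matrices=False)
        s[n_components:] = 0.0                         # drop noise-dominated components
        return (U * s) @ Vt * scale                    # reconstruct and undo the scaling
    ```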

  20. Image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in ESPI fringe patterns.

    Science.gov (United States)

    Xu, Wenjun; Tang, Chen; Su, Yonggang; Li, Biyuan; Lei, Zhenkun

    2018-02-01

    In this paper, we propose an image decomposition model Shearlet-Hilbert-L2 with better performance for denoising in electronic speckle pattern interferometry (ESPI) fringe patterns. In our model, the low-density fringes, high-density fringes, and noise are, respectively, described by shearlet smoothness spaces, adaptive Hilbert space, and L2 space and processed individually. Because the shearlet transform has superior directional sensitivity, our proposed Shearlet-Hilbert-L2 model achieves commendable filtering results for various types of ESPI fringe patterns, including uniform density fringe patterns, moderately variable density fringe patterns, and greatly variable density fringe patterns. We evaluate the performance of our proposed Shearlet-Hilbert-L2 model via application to two computer-simulated and nine experimentally obtained ESPI fringe patterns with various densities and poor quality. Furthermore, we compare our proposed model with windowed Fourier filtering and coherence-enhancing diffusion, both of which are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. We also compare our proposed model with the previous image decomposition model BL-Hilbert-L2.

  1. Wavelet Denoising of Mobile Radiation Data

    International Nuclear Information System (INIS)

    Campbell, D.B.

    2008-01-01

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems

  2. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Science.gov (United States)

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.

  3. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    Directory of Open Access Journals (Sweden)

    Faten Mina

    Full Text Available Auditory steady state responses (ASSRs in cochlear implant (CI patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.

  4. Comments on "Image denoising by sparse 3-D transform-domain collaborative filtering".

    Science.gov (United States)

    Hou, Yingkun; Zhao, Chunxia; Yang, Deyun; Cheng, Yong

    2011-01-01

    In order to resolve the problem that the denoising performance has a sharp drop when noise standard deviation reaches 40, proposed to replace the wavelet transform by the DCT. In this comment, we argue that this replacement is unnecessary, and that the problem can be solved by adjusting some numerical parameters. We also present this parameter modification approach here. Experimental results demonstrate that the proposed modification achieves better results in terms of both peak signal-to-noise ratio and subjective visual quality than the original method for strong noise.

  5. 3D seismic data de-noising and reconstruction using Multichannel Time Slice Singular Spectrum Analysis

    Science.gov (United States)

    Rekapalli, Rajesh; Tiwari, R. K.; Sen, Mrinal K.; Vedanti, Nimisha

    2017-05-01

    Noises and data gaps complicate the seismic data processing and subsequently cause difficulties in the geological interpretation. We discuss a recent development and application of the Multi-channel Time Slice Singular Spectrum Analysis (MTSSSA) for 3D seismic data de-noising in time domain. In addition, L1 norm based simultaneous data gap filling of 3D seismic data using MTSSSA also discussed. We discriminated the noises from single individual time slices of 3D volumes by analyzing Eigen triplets of the trajectory matrix. We first tested the efficacy of the method on 3D synthetic seismic data contaminated with noise and then applied to the post stack seismic reflection data acquired from the Sleipner CO2 storage site (pre and post CO2 injection) from Norway. Our analysis suggests that the MTSSSA algorithm is efficient to enhance the S/N for better identification of amplitude anomalies along with simultaneous data gap filling. The bright spots identified in the de-noised data indicate upward migration of CO2 towards the top of the Utsira formation. The reflections identified applying MTSSSA to pre and post injection data correlate well with the geology of the Southern Viking Graben (SVG).

  6. A neuro-fuzzy inference system for sensor failure detection using wavelet denoising, PCA and SPRT

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    In this work, a neuro-fuzzy inference system combined with the wavelet denoising, PCA(principal component analysis) and SPRT (sequential probability ratio test) methods is developed to detect the relevant sensor failure using other sensor signals. The wavelet denoising technique is applied to remove noise components in input signals into the neuro-fuzzy system. The PCA is used to reduce the dimension of an input space without losing a significant amount of information, The PCA makes easy the selection of the input signals into the neuro-fuzzy system. Also, a lower dimensional input space usually reduces the time necessary to train a neuro-fuzzy system. The parameters of the neuro-fuzzy inference system which estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The residuals between the estimated signals and the measured signals are used to detect whether the sensors are failed or not. The SPRT is used in this failure detection algorithm. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level and the hot-leg flowrate sensors in pressurized water reactors

  7. Combination of canonical correlation analysis and empirical mode decomposition applied to denoising the labor electrohysterogram.

    Science.gov (United States)

    Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2011-09-01

    The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA then the biggest part of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared CCA_EMD with the corresponding bipolar signals to demonstrate that the new method gives signals that have not been degraded by the new method. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.

  8. Wavelet denoising of multiframe optical coherence tomography data.

    Science.gov (United States)

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.

  9. Real-time wavefront correction system using a zonal deformable mirror and a Hartmann sensor

    International Nuclear Information System (INIS)

    Salmon, J.T.; Bliss, E.S.; Long, T.W.; Orham, E.L.; Presta, R.W.; Swift, C.D.; Ward, R.S.

    1991-07-01

    We have developed an adaptive optics system that corrects up to five waves of 2nd-order and 3rd-order aberrations in a high-power laser beam to less than 1/10th wave RMS. The wavefront sensor is a Hartmann sensor with discrete lenses and position-sensitive photodiodes; the deformable mirror uses piezoelectric actuators with feedback from strain gauges bonded to the stacks. The controller hardware uses a VME bus. The system removes thermally induced aberrations generated in the master-oscillator-power-amplifier chains of a dye laser, as well as aberrations generated in beam combiners and vacuum isolation windows for average output powers exceeding 1 kW. The system bandwidth is 1 Hz, but higher bandwidths are easily attainable

  10. Application of wavelet domain wiener filter in denoising of airborne γ-ray data

    International Nuclear Information System (INIS)

    Luo Yaoyao; Ge Liangquan; Xiong Chao; Xu Lipeng; Hua Yongtao

    2012-01-01

    The wavelet domain Wiener filter method, which combines the traditional wavelet method and the wiener filter, is established at CUT to reduce noising in as-recorded airborne gamma-ray spectra. It was used to treat an airborne gamma-ray data collected from an area m Inner Mongolia. The results showed that using this method, statistical noise could be greatly removed from the raw airborne gamma-ray spectra, and quality of the processed data is much better than those by conventional spectral denoising methods. (authors)

  11. Denoising of MR images using FREBAS collaborative filtering

    International Nuclear Information System (INIS)

    Ito, Satoshi; Hizume, Masayuki; Yamada, Yoshifumi

    2011-01-01

    We propose a novel image denoising strategy based on the correlation in the FREBAS transformed domain. FREBAS transform is a kind of multi-resolution image analysis which consists of two different Fresnel transforms. It can decompose images into down-scaled images of the same size with a different frequency bandwidth. Since these decomposed images have similar distributions for the same directions from the center of the FREBAS domain, even when the FREBAS signal is hidden by noise in the case of a low-signal-to-noise ratio (SNR) image, the signal distribution can be estimated using the distribution of the FREBAS signal located near the position of interest. We have developed a collaborative Wiener filter in the FREBAS transformed domain which implements collaboration of the standard deviation of the position of interest and that of analogous positions. The experimental results demonstrated that the proposed algorithm improves the SNR in terms of both the total SNR and the SNR at the edges of images. (author)

  12. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    Science.gov (United States)

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

    In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised components reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect the small leak of pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performances than support vector machine (SVM) and back propagation neural network (BP) methods.

  13. Wavefront optimized nonlinear microscopy of ex vivo human retinas

    Science.gov (United States)

    Gualda, Emilio J.; Bueno, Juan M.; Artal, Pablo

    2010-03-01

    A multiphoton microscope incorporating a Hartmann-Shack (HS) wavefront sensor to control the ultrafast laser beam's wavefront aberrations has been developed. This instrument allowed us to investigate the impact of the laser beam aberrations on two-photon autofluorescence imaging of human retinal tissues. We demonstrated that nonlinear microscopy images are improved when laser beam aberrations are minimized by realigning the laser system cavity while wavefront controlling. Nonlinear signals from several human retinal anatomical features have been detected for the first time, without the need of fixation or staining procedures. Beyond the improved image quality, this approach reduces the required excitation power levels, minimizing the side effects of phototoxicity within the imaged sample. In particular, this may be important to study the physiology and function of the healthy and diseased retina.

  14. Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network.

    Science.gov (United States)

    Yi, Xin; Babyn, Paul

    2018-02-20

    Low-dose computed tomography (LDCT) has offered tremendous benefits in radiation-restricted applications, but the quantum noise as resulted by the insufficient number of photons could potentially harm the diagnostic performance. Current image-based denoising methods tend to produce a blur effect on the final reconstructed results especially in high noise levels. In this paper, a deep learning-based approach was proposed to mitigate this problem. An adversarially trained network and a sharpness detection network were trained to guide the training process. Experiments on both simulated and real dataset show that the results of the proposed method have very small resolution loss and achieves better performance relative to state-of-the-art methods both quantitatively and visually.

  15. Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)

    Science.gov (United States)

    (Kevin Cheng, Ju-Chieh; Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald

    2017-08-01

    HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies

  16. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    Directory of Open Access Journals (Sweden)

    Qiyang Xiao

    2016-12-01

    Full Text Available In this study, a small leak detection method based on variational mode decomposition (VMD and ambiguity correlation classification (ACC is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF, an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised components reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect the small leak of pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performances than support vector machine (SVM and back propagation neural network (BP methods.

  17. Quantification of GABAA receptors in the rat brain with [123I]Iomazenil SPECT from factor analysis-denoised images

    International Nuclear Information System (INIS)

    Tsartsalis, Stergios; Moulin-Sallanon, Marcelle; Dumas, Noé; Tournier, Benjamin B.; Ghezzi, Catherine; Charnay, Yves; Ginovart, Nathalie; Millet, Philippe

    2014-01-01

    Purpose: In vivo imaging of GABA A receptors is essential for the comprehension of psychiatric disorders in which the GABAergic system is implicated. Small animal SPECT provides a modality for in vivo imaging of the GABAergic system in rodents using [ 123 I]Iomazenil, an antagonist of the GABA A receptor. The goal of this work is to describe and evaluate different quantitative reference tissue methods that enable reliable binding potential (BP) estimations in the rat brain to be obtained. Methods: Five male Sprague–Dawley rats were used for [ 123 I]Iomazenil brain SPECT scans. Binding parameters were obtained with a one-tissue compartment model (1TC), a constrained two-tissue compartment model (2TC c ), the two-step Simplified Reference Tissue Model (SRTM2), Logan graphical analysis and analysis of delayed-activity images. In addition, we employed factor analysis (FA) to deal with noise in data. Results: BP ND obtained with SRTM2, Logan graphical analysis and delayed-activity analysis was highly correlated with BP F values obtained with 2TC c (r = 0.954 and 0.945 respectively, p c and SRTM2 in raw and FA-denoised images (r = 0.961 and 0.909 respectively, p ND values from raw images while scans of only 70 min are sufficient from FA-denoised images. These images are also associated with significantly lower standard errors of 2TC c and SRTM2 BP values. Conclusion: Reference tissue methods such as SRTM2 and Logan graphical analysis can provide equally reliable BP ND values from rat brain [ 123 I]Iomazenil SPECT. Acquisitions, however, can be much less time-consuming either with analysis of delayed activity obtained from a 20-minute scan 50 min after tracer injection or with FA-denoising of images

  18. Colovaginal anastomosis: an unusual complication of stapler use in restorative procedure after Hartmann operation

    Directory of Open Access Journals (Sweden)

    Liao Guoqing

    2005-11-01

    Full Text Available Abstract Background Rectovaginal fistula is uncommon after lower anterior resection for rectal cancer. The most leading cause of this complication is involvement of the posterior wall of the vagina into the staple line when firing the circular stapler. Case presentation A 50-year-old women underwent resection for obstructed carcinoma of the sigmoid colon with Hartmann procedure. Four months later she underwent restorative surgery with circular stapler. Following which she developed rectovaginal fistula. A transvaginal repair was performed but stool passing from vagina not per rectum. Laporotomy revealed colovaginal anastomosis, which was corrected accordingly. Patient had an uneventful recovery. Conclusion Inadvertent formation of colovaginal anastomosis associated with a rectovaginal fistula is a rare complication caused by the operator's error. The present case again highlights the importance of ensuring that the posterior wall of vagina is away from the staple line.

  19. Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

    Science.gov (United States)

    Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena

    2011-03-01

    Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that the ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaption of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing using corresponding high resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error-reduction of up to 36% compared to the error of the original ToF surfaces.

  20. Denoising multicriterion iterative reconstruction in emission spectral tomography

    Science.gov (United States)

    Wan, Xiong; Yin, Aihan

    2007-03-01

    In the study of optical testing, the computed tomogaphy technique has been widely adopted to reconstruct three-dimensional distributions of physical parameters of various kinds of fluid fields, such as flame, plasma, etc. In most cases, projection data are often stained by noise due to environmental disturbance, instrumental inaccuracy, and other random interruptions. To improve the reconstruction performance in noisy cases, an algorithm that combines a self-adaptive prefiltering denoising approach (SPDA) with a multicriterion iterative reconstruction (MCIR) is proposed and studied. First, the level of noise is approximately estimated with a frequency domain statistical method. Then the cutoff frequency of a Butterworth low-pass filter was established based on the evaluated noise energy. After the SPDA processing, the MCIR algorithm was adopted for limited-view optical computed tomography reconstruction. Simulated reconstruction of two test phantoms and a flame emission spectral tomography experiment were employed to evaluate the performance of SPDA-MCIR in noisy cases. Comparison with some traditional methods and experiment results showed that the SPDA-MCIR combination had obvious improvement in the case of noisy data reconstructions.

  1. Multiview point clouds denoising based on interference elimination

    Science.gov (United States)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potentials for three-dimensional (3-D) modeling, but existing high noise restricts these sensors from obtaining accurate results. Thus, we proposed a method for denoising registered multiview point clouds with high noise to solve that problem. The proposed method is aimed at fully using redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by the MLS to achieve improved results. This study provides the feasibility of obtaining fine 3-D models with high-noise devices, especially for depth sensors, such as Kinect.

  2. LCD denoise and the vector mutual information method in the application of the gear fault diagnosis under different working conditions

    Science.gov (United States)

    Xiangfeng, Zhang; Hong, Jiang

    2018-03-01

    In this paper, the full vector LCD method is proposed to solve the misjudgment problem caused by the change of the working condition. First, the signal from different working condition is decomposed by LCD, to obtain the Intrinsic Scale Component (ISC)whose instantaneous frequency with physical significance. Then, calculate of the cross correlation coefficient between ISC and the original signal, signal denoising based on the principle of mutual information minimum. At last, calculate the sum of absolute Vector mutual information of the sample under different working condition and the denoised ISC as the characteristics to classify by use of Support vector machine (SVM). The wind turbines vibration platform gear box experiment proves that this method can identify fault characteristics under different working conditions. The advantage of this method is that it reduce dependence of man’s subjective experience, identify fault directly from the original data of vibration signal. It will has high engineering value.

  3. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  4. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schö nlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  5. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    Science.gov (United States)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising, while preserving edges. Tensors are used in order to encode the CIELAB color channels, the uniformity and the edginess of image pixels. A specific voting process is proposed in order to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (by using an optimized version of CIEDE2000), a uniformity measurement and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak to noise ratios and visual inspection show that the proposed methodology has a better performance than state-of-the-art techniques.

  6. Comparative analysis on some spatial-domain filters for fringe pattern denoising.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian

    2011-04-20

    Fringe patterns produced by various optical interferometric techniques encode information such as shape, deformation, and refractive index. Noise affects further processing of the fringe patterns. Denoising is often needed before fringe pattern demodulation. Filtering along the fringe orientation is an effective option. Such filters include coherence enhancing diffusion, spin filtering with curve windows, second-order oriented partial-differential equations, and the regularized quadratic cost function for oriented fringe pattern filtering. These filters are analyzed to establish the relationships among them. Theoretical analysis shows that the four filters are largely equivalent to each other. Quantitative results are given on simulated fringe patterns to validate the theoretical analysis and to compare the performance of these filters. © 2011 Optical Society of America

  7. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-01-01

    Full Text Available An Intensified Charge-Coupled Device (ICCD image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics. (a Different from the independent identically distributed (i.i.d. noise in natural image, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b The pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapped patches and classify them into flat patches and structure patches according to if real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches for it in pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserved sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process by changing relevant parameters, considering that blocking artifacts exist in a single reconstructed image. Finally, we obtain the reconstructed image by merging all the generated images into one. Experiments are conducted on an ICCD sensing image dataset, which verifies its subjective performance in removing the randomly clustered noise and preserving the real structure information in the ICCD sensing image.

  8. Alexander fractional differential window filter for ECG denoising.

    Science.gov (United States)

    Verma, Atul Kumar; Saini, Indu; Saini, Barjinder Singh

    2018-06-01

    The electrocardiogram (ECG) non-invasively monitors the electrical activities of the heart. During the process of recording and transmission, ECG signals are often corrupted by various types of noises. Minimizations of these noises facilitate accurate detection of various anomalies. In the present paper, Alexander fractional differential window (AFDW) filter is proposed for ECG signal denoising. The designed filter is based on the concept of generalized Alexander polynomial and the R-L differential equation of fractional calculus. This concept is utilized to formulate a window that acts as a forward filter. Thereafter, the backward filter is constructed by reversing the coefficients of the forward filter. The proposed AFDW filter is then obtained by averaging of the forward and backward filter coefficients. The performance of the designed AFDW filter is validated by adding the various type of noise to the original ECG signal obtained from MIT-BIH arrhythmia database. The two non-diagnostic measure, i.e., SNR, MSE, and one diagnostic measure, i.e., wavelet energy based diagnostic distortion (WEDD) have been employed for the quantitative evaluation of the designed filter. Extensive experimentations on all the 48-records of MIT-BIH arrhythmia database resulted in average SNR of 22.014 ± 3.806365, 14.703 ± 3.790275, 13.3183 ± 3.748230; average MSE of 0.001458 ± 0.00028, 0.0078 ± 0.000319, 0.01061 ± 0.000472; and average WEDD value of 0.020169 ± 0.01306, 0.1207 ± 0.061272, 0.1432 ± 0.073588, for ECG signal contaminated by the power line, random, and the white Gaussian noise respectively. A new metric named as morphological power preservation measure (MPPM) is also proposed that account for the power preservance (as indicated by PSD plots) and the QRS morphology. The proposed AFDW filter retained much of the original (clean) signal power without any significant morphological distortion as validated by MPPM measure that were 0

  9. Objective Evaluation of Visual Fatigue Using Binocular Fusion Maintenance.

    Science.gov (United States)

    Hirota, Masakazu; Morimoto, Takeshi; Kanda, Hiroyuki; Endo, Takao; Miyoshi, Tomomitsu; Miyagawa, Suguru; Hirohara, Yoko; Yamaguchi, Tatsuo; Saika, Makoto; Fujikado, Takashi

    2018-03-01

    In this study, we investigated whether an individual's visual fatigue can be evaluated objectively and quantitatively from their ability to maintain binocular fusion. Binocular fusion maintenance (BFM) was measured using a custom-made binocular open-view Shack-Hartmann wavefront aberrometer equipped with liquid crystal shutters, wherein eye movements and wavefront aberrations were measured simultaneously. Transmittance in the liquid crystal shutter in front of the subject's nondominant eye was reduced linearly, and BFM was determined from the transmittance at the point when binocular fusion was broken and vergence eye movement was induced. In total, 40 healthy subjects underwent the BFM test and completed a questionnaire regarding subjective symptoms before and after a visual task lasting 30 minutes. BFM was significantly reduced after the visual task ( P eye symptom score (adjusted R 2 = 0.752, P devices, such as head-mount display, objectively.

  10. Hartmann tests to measure the spherical and cylindrical curvatures and the axis orientation of astigmatic lenses or optical surfaces.

    Science.gov (United States)

    Hernández-Gómez, Geovanni; Malacara-Hernández, Zacarías; Malacara-Hernández, Daniel

    2014-02-20

    The measurement of astigmatic lenses, optical surfaces or wavefronts are a highly studied problem and many different instruments have been commercially fabricated to perform this task. Many of them use a Hartmann arrangement to obtain the result. In this paper, we analyze with detail the algorithms that can be used to make the necessary calculations and propose several alternatives with different advantages and disadvantages. Different mathematical algorithms that are involved in the calculation process have been given whereas any description of the instrument itself is not proposed, but only the different mathematical algorithms that are involved in the calculation process.

  11. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    Science.gov (United States)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Based on sensor characteristics, the noise of hyperspectral imagery represents in both spatial and spectral domain. However, most prevailing denosing techniques process the imagery in only one specific domain, which have not utilized multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, which is based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with improved threshold function is utilized to adjust the noise level band-by-band. This new algorithm uses BayesShrink for threshold estimation, and amends the traditional soft-threshold function by adding shape tuning parameters. Comparing with soft or hard threshold function, the improved one, which is first-order derivable and has a smooth transitional region between noise and signal, could save more details of image edge and weaken Pseudo-Gibbs. Then, in the spectral domain, cubic Savitzky-Golay filter based on least squares method is used to remove spectral noise and artificial noise that may have been introduced in during the spatial denoising. Appropriately selecting the filter window width according to prior knowledge, this algorithm has effective performance in smoothing the spectral curve. The performance of the new algorithm is experimented on a set of Hyperion imageries acquired in 2007. The result shows that the new spatial-spectral denoising algorithm provides more significant signal-to-noise-ratio improvement than traditional spatial or spectral method, while saves the local spectral absorption features better.

  12. A virtualized software based on the NVIDIA cuFFT library for image denoising: performance analysis

    DEFF Research Database (Denmark)

    Galletti, Ardelio; Marcellino, Livia; Montella, Raffaele

    2017-01-01

    Abstract Generic Virtualization Service (GVirtuS) is a new solution for enabling GPGPU on Virtual Machines or low powered devices. This paper focuses on the performance analysis that can be obtained using a GPGPU virtualized software. Recently, GVirtuS has been extended in order to support CUDA...... ancillary libraries with good results. Here, our aim is to analyze the applicability of this powerful tool to a real problem, which uses the NVIDIA cuFFT library. As case study we consider a simple denoising algorithm, implementing a virtualized GPU-parallel software based on the convolution theorem...

  13. Automatic centroid detection and surface measurement with a digital Shack–Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin, Xiaoming; Zhao, Liping; Li, Xiang; Fang, Zhongping

    2010-01-01

    With the breakthrough of manufacturing technologies, the measurement of surface profiles is becoming a big issue. A Shack–Hartmann wavefront sensor (SHWS) provides a promising technology for non-contact surface measurement with a number of advantages over interferometry. The SHWS splits the incident wavefront into many subsections and transfers the distorted wavefront detection into the centroid measurement. So the accuracy of the centroid measurement determines the accuracy of the SHWS. In this paper, we have presented a new centroid measurement algorithm based on an adaptive thresholding and dynamic windowing method by utilizing image-processing techniques. Based on this centroid detection method, we have developed a digital SHWS system which can automatically detect centroids of focal spots, reconstruct the wavefront and measure the 3D profile of the surface. The system has been tested with various simulated and real surfaces such as flat surfaces, spherical and aspherical surfaces as well as deformable surfaces. The experimental results demonstrate that the system has good accuracy, repeatability and immunity to optical misalignment. The system is also suitable for on-line applications of surface measurement

  14. Hartmann's Procedure or Primary Anastomosis for Generalized Peritonitis due to Perforated Diverticulitis: A Prospective Multicenter Randomized Trial (DIVERTI).

    Science.gov (United States)

    Bridoux, Valerie; Regimbeau, Jean Marc; Ouaissi, Mehdi; Mathonnet, Muriel; Mauvais, Francois; Houivet, Estelle; Schwarz, Lilian; Mege, Diane; Sielezneff, Igor; Sabbagh, Charles; Tuech, Jean-Jacques

    2017-12-01

    About 25% of patients with acute diverticulitis require emergency intervention. Currently, most patients with diverticular peritonitis undergo a Hartmann's procedure. Our objective was to assess whether primary anastomosis (PA) with a diverting stoma results in lower mortality rates than Hartmann's procedure (HP) in patients with diverticular peritonitis. We conducted a multicenter randomized controlled trial conducted between June 2008 and May 2012: the DIVERTI (Primary vs Secondary Anastomosis for Hinchey Stage III-IV Diverticulitis) trial. Follow-up duration was up to 18 months. A random sample of 102 eligible participants with purulent or fecal diverticular peritonitis from tertiary care referral centers and associated centers in France were equally randomized to either a PA arm or to an HP arm. Data were analyzed on an intention-to-treat basis. The primary end point was mortality rate at 18 months. Secondary outcomes were postoperative complications, operative time, length of hospital stay, rate of definitive stoma, and morbidity. All 102 patients enrolled were comparable for age (p = 0.4453), sex (p = 0.2347), Hinchey stage III vs IV (p = 0.2347), and Mannheim Peritonitis Index (p = 0.0606). Overall mortality did not differ significantly between HP (7.7%) and PA (4%) (p = 0.4233). Morbidity for both resection and stoma reversal operations were comparable (39% in the HP arm vs 44% in the PA arm; p = 0.4233). At 18 months, 96% of PA patients and 65% of HP patients had a stoma reversal (p = 0.0001). Although mortality was similar in both arms, the rate of stoma reversal was significantly higher in the PA arm. This trial provides additional evidence in favor of PA with diverting ileostomy over HP in patients with diverticular peritonitis. ClinicalTrials.gov Identifier: NCT 00692393. Copyright © 2017. Published by Elsevier Inc.

  15. Assessing denoising strategies to increase signal to noise ratio in spinal cord and in brain cortical and subcortical regions

    Science.gov (United States)

    Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.

    2018-02-01

    Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to a limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at acquisition stage, and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools respectively. The study, performed in healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal to noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain region.

  16. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    International Nuclear Information System (INIS)

    Na, Man Gyun; Oh, Seungrohk

    2002-01-01

    A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components in input signals into the neuro-fuzzy system. By reducing the dimension of an input space into the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and also, make easy the selection of the input signals into the neuro-fuzzy system. By using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors

  17. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    Science.gov (United States)

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores differences between two methods for blind source separation within frame of ECG de-noising. First method is joint approximate diagonalization of eigenmatrices, which is based on estimation of fourth order cross-cummulant tensor and its diagonalization. Second one is the statistical method known as canonical correlation analysis, which is based on estimation of correlation matrices between two multidimensional variables. Both methods were used within method, which combines the blind source separation algorithm with decision tree. The evaluation was made on large database of 382 long-term ECG signals and the results were examined. Biggest difference was found in results of 50 Hz power line interference where the CCA algorithm completely failed. Thus main power of CCA lies in estimation of unstructured noise within ECG. JADE algorithm has larger computational complexity thus the CCA perfomed faster when estimating the components.

  18. Hartmann wavefront sensing of the corrective optics for the Hubble Space Telescope

    Science.gov (United States)

    Davila, Pam S.; Eichhorn, William L.; Wilson, Mark E.

    1994-06-01

    aberration content of the corrected images. Also, from only this test it was difficult to measure important pupil parameters, such as pupil intensity profiles and pupil sizes and location. To measure the COSTAR wavefront accurately and to determine pupil parameters, another very important test was performed on the COSTAR optics. A Hartmann test of the optical system consisting of the RAS and COSTAR was conducted by the Goddard Independent Verification Team (IVT). In this paper, we first describe the unique Hartmann sensor that was developed by the IVT. Then we briefly describe the RAS and COSTAR optical systems and the test setup. Finally, we present the results of the test and compare our results with results obtained from optical analysis and from image tests with the BIA.

  19. Application of fluidic lens technology to an adaptive holographic optical element see-through autophoropter

    Science.gov (United States)

    Chancy, Carl H.

    A device for performing an objective eye exam has been developed to automatically determine ophthalmic prescriptions. The closed loop fluidic auto-phoropter has been designed, modeled, fabricated and tested for the automatic measurement and correction of a patient's prescriptions. The adaptive phoropter is designed through the combination of a spherical-powered fluidic lens and two cylindrical fluidic lenses that are orientated 45o relative to each other. In addition, the system incorporates Shack-Hartmann wavefront sensing technology to identify the eye's wavefront error and corresponding prescription. Using the wavefront error information, the fluidic auto-phoropter nulls the eye's lower order wavefront error by applying the appropriate volumes to the fluidic lenses. The combination of the Shack-Hartmann wavefront sensor the fluidic auto-phoropter allows for the identification and control of spherical refractive error, as well as cylinder error and axis; thus, creating a truly automated refractometer and corrective system. The fluidic auto-phoropter is capable of correcting defocus error ranging from -20D to 20D and astigmatism from -10D to 10D. The transmissive see-through design allows for the observation of natural scenes through the system at varying object planes with no additional imaging optics in the patient's line of sight. In this research, two generations of the fluidic auto-phoropter are designed and tested; the first generation uses traditional glass optics for the measurement channel. The second generation of the fluidic auto-phoropter takes advantage of the progress in the development of holographic optical elements (HOEs) to replace all the traditional glass optics. The addition of the HOEs has enabled the development of a more compact, inexpensive and easily reproducible system without compromising its performance. Additionally, the fluidic lenses were tested during a National Aeronautics Space Administration (NASA) parabolic flight campaign, to

  20. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce the diagnostic accuracy and hinder the physician's correct decision on patients. The dual tree wavelet transform (DT-WT) is one of the most recent enhanced versions of discrete wavelet transform. However, threshold tuning on this method for noise removal from ECG signal has not been investigated yet. In this work, we shall provide a comprehensive study on the impact of the choice of threshold algorithm, threshold value, and the appropriate wavelet decomposition level to evaluate the ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals to achieve the promised results. First, the synthetic ECG signal is used to observe the algorithm response. The evaluation results of synthetic ECG signal corrupted by various types of noise has showed that the modified unified threshold and wavelet hyperbolic threshold de-noising method is better in realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results has shown that the proposed method achieves higher performance than the ordinary dual tree wavelet transform into all kinds of noise removal from ECG signal. The simulation results indicate that the algorithm is robust for all kinds of noises with varying degrees of input noise, providing a high quality clean signal. Moreover, the algorithm is quite simple and can be used in real time ECG monitoring.

  1. Accelerometer North Finding System Based on the Wavelet Packet De-noising Algorithm and Filtering Circuit

    Directory of Open Access Journals (Sweden)

    LU Yongle

    2014-07-01

    Full Text Available This paper demonstrates a method and system for north finding with a low-cost piezoelectricity accelerometer based on the Coriolis acceleration principle. The proposed setup is based on the choice of an accelerometer with residual noise of 35 ng•Hz-1/2. The plane of the north finding system is aligned parallel to the local level, which helps to eliminate the effect of plane error. The Coriolis acceleration caused by the earth’s rotation and the acceleration’s instantaneous velocity is much weaker than the g-sensitivity acceleration. To get a high accuracy and a shorter time for north finding system, in this paper, the Filtering Circuit and the wavelet packet de-nosing algorithm are used as the following. First, the hardware is designed as the alternating currents across by filtering circuit, so the DC will be isolated and the weak AC signal will be amplified. The DC is interfering signal generated by the earth's gravity. Then, we have used a wavelet packet to filter the signal which has been done through the filtering circuit. Finally, compare the north finding results measured by wavelet packet filtering with those measured by a low-pass filter. Wavelet filter de-noise data shows that wavelet packet filtering and wavelet filter measurement have high accuracy. Wavelet Packet filtering has stronger ability to remove burst noise and higher engineering environment adaptability than that of Wavelet filtering. Experimental results prove the effectiveness and project implementation of the accelerometer north finding method based on wavelet packet de-noising algorithm.

  2. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    Science.gov (United States)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes in the analysis and interpretation of images. Therefore, it is extremely important to expand the filters' capacity to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper aims to develop two approaches using the non-local means (NLM) filter, originally developed for AWGN. In our research, we expanded its capacity to the speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution without transforming the data to the logarithm domain, as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in the NLM. The second method uses an a priori NLM denoising with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are used in the NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters of the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.
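
    A minimal homomorphic variant of the second approach can be sketched with scikit-image's AWGN non-local means: the log transform turns multiplicative speckle into approximately additive noise. The paper's stochastic distances and G0 parameter estimation are not reproduced here.

    ```python
    # Homomorphic NLM sketch for intensity speckle (scikit-image assumed).
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def homomorphic_nlm(intensity_img, patch_size=5, patch_distance=6):
        log_img = np.log(intensity_img + 1e-12)      # avoid log(0)
        sigma = estimate_sigma(log_img)
        den = denoise_nl_means(log_img, h=0.8 * sigma, sigma=sigma,
                               patch_size=patch_size,
                               patch_distance=patch_distance, fast_mode=True)
        return np.exp(den)                            # back to intensity domain
    ```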

  3. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of its algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, the filtered back-projection (FBP) algorithm is still the classical and commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step to overcome artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noises. Therefore, an improved wavelet denoising combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results were compared between the improved wavelet denoising and other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction effect, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction effect of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation standards, the mean-square error (MSE) and the peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE value was lower and its PSNR value higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
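
    The pipeline can be sketched as below, assuming a recent scikit-image (radon/iradon with filter_name) and PyWavelets; db2 at decomposition scale 2 with a Hanning filter follows the best-performing combination reported above, while the simulated noise and threshold rule are illustrative.

    ```python
    # Wavelet-denoised sinogram + parallel-beam FBP sketch.
    import numpy as np
    import pywt
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    def fbp_with_wavelet_denoise(sino, theta):
        # Denoise along the detector axis with db2 at scale 2.
        coeffs = pywt.wavedec(sino, "db2", level=2, axis=0)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(sino.shape[0]))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        sino_dn = pywt.waverec(coeffs, "db2", axis=0)[: sino.shape[0]]
        return iradon(sino_dn, theta=theta, filter_name="hann")

    image = shepp_logan_phantom()
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(image, theta=theta)                          # projections
    sino += 0.02 * sino.max() * np.random.randn(*sino.shape)  # simulated noise
    reconstruction = fbp_with_wavelet_denoise(sino, theta)
    ```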

  4. Intra-Day Trading System Design Based on the Integrated Model of Wavelet De-Noise and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Hongguang Liu

    2016-12-01

    Full Text Available Technical analysis has been proved to be capable of exploiting short-term fluctuations in financial markets. Recent results indicate that the market timing approach beats many traditional buy-and-hold approaches in most short-term trading periods. Genetic programming (GP) has been used to generate short-term trading rules on stock markets during the last few decades. However, few of the related studies on the analysis of financial time series with genetic programming considered the non-stationary and noisy characteristics of the series. In this paper, to de-noise the original financial time series and to search for profitable trading rules, an integrated method is proposed based on the Wavelet Threshold (WT) method and GP. Since relevant information that affects the movement of the time series is assumed to be fully digested while the market is closed, and to avoid the jump points of daily or monthly data, intra-day high-frequency time series are used to fully exploit the short-term forecasting advantage of technical analysis. To validate the proposed integrated approach, an empirical study is conducted on China Securities Index (CSI) 300 futures in the emerging China Financial Futures Exchange (CFFEX) market. The analysis outcomes show that the wavelet de-noise approach outperforms many comparative models.

  5. Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes.

    Science.gov (United States)

    Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong

    2008-10-01

    We derive second-order oriented partial-differential equations (PDEs) for denoising electronic-speckle-pattern interferometry fringe patterns from two points of view. The first is based on variational methods, and the second is based on controlling the diffusion direction. Our oriented PDE models make the diffusion act only along the fringe orientation. The main advantage of our filtering method, based on oriented PDE models, is that it is very easy to implement compared with published filtering methods that operate along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to computer-simulated and experimentally obtained speckle fringe patterns, and compare them with related PDE models.
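
    The core idea, diffusion acting only along the fringe direction, can be sketched with NumPy/SciPy as below; the structure-tensor orientation estimate and the step sizes are generic choices, not the specific variational or direction-controlled models derived in the paper.

    ```python
    # Oriented diffusion sketch: update with the second derivative taken
    # along the local fringe direction only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def oriented_diffusion(u, n_iter=50, tau=0.2, sigma=2.0):
        u = u.astype(float).copy()
        for _ in range(n_iter):
            gy, gx = np.gradient(u)
            # Smoothed structure tensor gives the dominant gradient angle.
            jxx = gaussian_filter(gx * gx, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            phi = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # gradient direction
            theta = phi + np.pi / 2.0                     # fringe direction
            c, s = np.cos(theta), np.sin(theta)
            uyy, uxy = np.gradient(gy)
            _, uxx = np.gradient(gx)
            # Directional second derivative u_tt along (c, s).
            u += tau * (c * c * uxx + 2.0 * c * s * uxy + s * s * uyy)
        return u
    ```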

  6. A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

    Directory of Open Access Journals (Sweden)

    Yang Fan

    2012-10-01

    Full Text Available Abstract Background Computer-assisted surgical navigation aims to provide surgeons with anatomical target localization and critical structure observation, where medical image processing methods such as segmentation, registration and visualization play a critical role. Percutaneous renal intervention plays an important role in several minimally-invasive kidney surgeries, such as Percutaneous Nephrolithotomy (PCNL) and Radio-Frequency Ablation (RFA) of kidney tumors, and refers to a surgical procedure where access to a target inside the kidney is gained by a needle puncture of the skin. Thus, kidney segmentation is a key step in developing any ultrasound-based computer-aided diagnosis system for percutaneous renal intervention. Methods In this paper, we propose a novel framework for kidney segmentation in ultrasound (US) images that combines nonlocal total variation (NLTV) image denoising, distance regularized level set evolution (DRLSE) and a shape prior. Firstly, a denoised US image is obtained by NLTV image denoising. Secondly, DRLSE is applied in the kidney segmentation to get a binary image, in which the black and white regions represent the kidney and the background, respectively. In the last stage, the shape prior is applied to obtain a shape with a smooth boundary from the kidney shape space, which is used to optimize the segmentation result of the second step. The alignment model is used occasionally to enlarge the shape space in order to increase segmentation accuracy. Experimental results on both synthetic images and US data are given to demonstrate the effectiveness and accuracy of the proposed algorithm. Results We applied our segmentation framework to synthetic and real US images to demonstrate the segmentation results of our method. The qualitative results show that the segmentations are much closer to the manual segmentations. The sensitivity (SN), specificity (SP) and positive predictive value

  7. Ocular wavefront aberration and refractive error in pre-school children

    Science.gov (United States)

    Thapa, Damber; Fleck, Andre; Lakshminarayanan, Vasudevan; Bobier, William R.

    2011-11-01

    Hartmann-Shack images taken from an archived collection of SureSight refractive measurements of pre-school children in Oxford County, Ontario, Canada were retrieved and re-analyzed. Higher-order aberrations were calculated over the age range of 3 to 6 years and compared with respect to the magnitude of ametropia. Subjects were classified as emmetropic (-0.5 to +0.5D), low hyperopic (+0.5 to +2D) and high hyperopic (+2D or more) based upon the resulting spherical equivalent. Higher-order aberrations were found to increase with higher levels of hyperopia (p < 0.01). The strongest effect was for children showing more than +2.00D of hyperopia. The correlation coefficients were small for all of the higher-order aberrations; however, they were significant (p < 0.01). These analyses indicate a weak association between refractive error and higher-order aberrations in pre-school children.

  8. The commissioning instrument for the Gran Telescopio Canarias: made in Mexico

    Science.gov (United States)

    Cuevas, Salvador; Sánchez, Beatriz; Bringas, Vicente; Espejo, Carlos; Flores, Rubén; Chapa, Oscar; Lara, Gerardo; Chavoya, Armando; Anguiano, Gustavo; Arciniega, Sadot; Dorantes, Ariel; Gonzalez, José L.; Montoya, Juan M.; Toral, Rafael; Hernández, Hugo; Nava, Roberto; Devaney, Nicolas; Castro, Javier; Cavaller, Luis; Farah, Alejandro; Godoy, Javier; Cobos, Francisco; Tejada, Carlos; Garfias, Fernando

    2006-02-01

    In March 2004, the Commissioning Instrument (CI) for the Gran Telescopio Canarias (GTC) was accepted at the GTC site on La Palma Island, Spain. During the GTC integration phase, the CI will be a diagnostic tool for performance verification. The CI features four operation modes: imaging, pupil imaging, curvature wavefront sensing (WFS), and high-resolution Shack-Hartmann WFS. This instrument was built by the Instituto de Astronomia UNAM in Mexico City and the Centro de Ingenieria y Desarrollo Industrial (CIDESI) in Queretaro, Qro., under a GRANTECAN contract after an international public bid. Some optical components were built by the Centro de Investigaciones en Optica (CIO) in Leon, Gto., and the biggest mechanical parts were manufactured by Vatech in Morelia, Mich. In this paper we give a general description of the CI and relate how this instrument, built to international standards, was entirely made in Mexico.

  9. The low-order wavefront control system for the PICTURE-C mission: high-speed image acquisition and processing

    Science.gov (United States)

    Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya

    2017-09-01

    The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a frame rate of 200 Hz with a mean latency of ~378 μs.

  10. Denoising of Mechanical Vibration Signals Using Quantum-Inspired Adaptive Wavelet Shrinkage

    Directory of Open Access Journals (Sweden)

    Yan-long Chen

    2014-01-01

    Full Text Available The potential application of a quantum-inspired adaptive wavelet shrinkage (QAWS) technique to mechanical vibration signals, with a focus on noise reduction, is studied in this paper. This quantum-inspired shrinkage algorithm combines three elements: an adaptive non-Gaussian statistical model of dual-tree complex wavelet transform (DTCWT) coefficients, proposed to improve the practicability of prior information; quantum superposition, introduced to describe the interscale dependencies of DTCWT coefficients; and a quantum-inspired probability of noise, defined to shrink wavelet coefficients in a Bayesian framework. By combining all these elements, this signal processing scheme incorporating the DTCWT with quantum theory can both reduce noise and preserve signal details. A practical vibration signal measured from a power-shift steering transmission is utilized to evaluate the denoising ability of QAWS. Application results demonstrate the effectiveness of the proposed method. Moreover, it achieves better performance than hard and soft thresholding.

  11. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    Science.gov (United States)

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Apoptosis protein subcellular localization information is very important for understanding the mechanism of programmed cell death and for the development of drugs. The prediction of the subcellular localization of an apoptosis protein is still a challenging task, and it can help to understand protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); the extracted feature information is then denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to an SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, and will be a supplementary tool for future proteomics research. PMID:29296195

  12. ‘From shack to the Constitutional Court’: the litigious disruption of governing global cities

    Directory of Open Access Journals (Sweden)

    Anna Selmeczi

    2011-04-01

    Full Text Available Taking its cue from the worldwide proliferation of struggles for access to the city, this paper aims to assess the impact of globalized neoliberalism on the juridico-legal techniques of contemporary urban governmentalities and to inquire into the ways in which such techniques can be resisted. It suggests that at least in urban contexts, the police order that, according to Foucault, was largely superseded by the modern liberal technologies of governing through freedom, today seems to be rather active. Arguably, global cities' competition for capital promoted by the globalized neoliberal economic order and its imperative to actively intervene in producing the market, pairs up conveniently with the detailed methods of regulating the early modern West. On the other hand, the self-limitation of governmental reason originating in the political economic criticism of the police state equips governance with the means of voluntary impotence and, placing the management of the economy at the centre of governmental activity, it increasingly adapts law to social and economic processes. While, according to Rancière, these tendencies lead to the effacement of the gaps between legal inscriptions and social realities, and thus tend to impede the occurrence of the political, in interpreting the struggles of South African shack dwellers, the paper aims to illustrate how inscriptions of equality may nevertheless trigger the political disruption of urban biopolitics.

  13. Signal de-noising methods for fault diagnosis and troubleshooting at CANDU® stations

    Energy Technology Data Exchange (ETDEWEB)

    Nasimi, Elnara; Gabbar, Hossam A., E-mail: hossam.gabbar@uoit.ca

    2014-12-15

    Highlights: • Fault modelling using a Fault Semantic Network (FSN). • Intelligent filtering techniques for signal de-noising in NPPs. • Signal feature extraction applied as integrated with FSN. • Increased signal-to-noise ratio (SNR). - Abstract: Over the past several years a number of domestic CANDU® stations have experienced issues with neutron detection systems that challenged safety and operation. An intelligent troubleshooting methodology is required to aid in making risk-informed decisions related to design and operational activities, which can aid current stations and be used for future generations of CANDU® designs. A fault modelling approach using a Fault Semantic Network (FSN) with risk estimation is proposed for this purpose. One major challenge in troubleshooting is obtaining accurate data. It is typical to have missing, incomplete or corrupted data points in large process data sets from dynamically changing systems. Therefore, the quality of the obtained data has a direct impact on the system's ability to recognize developing trends in process upset situations. In order to enable the fault detection process, intelligent filtering techniques are required to de-noise process data and extract valuable signal features in the presence of background noise. In this study, the impact of applying optimized and intelligent filtering of process signals prior to data analysis is discussed. This is particularly important for neutronic signals, in order to increase the signal-to-noise ratio (SNR), which suffers the most during start-ups and low-power operation. This work is complementary to previously published studies on FSN-based fault modelling in CANDU stations. The main objective of this work is to explore the potential research methods using a specific case study and, based on the results and outcomes from this work, to note possible future improvements and innovation areas.

  14. Computed tomography perfusion imaging denoising using Gaussian process regression

    International Nuclear Information System (INIS)

    Zhu Fan; Gonzalez, David Rodriguez; Atkinson, Malcolm; Carpenter, Trevor; Wardlaw, Joanna

    2012-01-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. Consequently, methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves, and reduces the oscillations in the curves. GPR is superior to the comparable techniques used in this study.
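
    As a toy illustration of the idea, the sketch below fits a GP to one voxel's noisy time-concentration curve with scikit-learn and reads off a smooth baseline; the kernel and scales are illustrative, not those of the study.

    ```python
    # GPR denoising of a single voxel's time-concentration curve.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    t = np.linspace(0, 60, 40)[:, None]       # acquisition times (s)
    clean = 5.0 * (t.ravel() / 12.0) * np.exp(1.0 - t.ravel() / 12.0)
    noisy = clean + 0.6 * np.random.randn(t.shape[0])

    kernel = 1.0 * RBF(length_scale=8.0) + WhiteKernel(noise_level=0.3)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, noisy)
    denoised, std = gpr.predict(t, return_std=True)  # smooth curve + uncertainty
    ```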

  15. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models

    Science.gov (United States)

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are higher than those of the benchmark buy-and-hold strategy for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the benchmark buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis. PMID:27248692

  16. Multi-Channel Electroencephalogram (EEG) Signal Acquisition and its Effective Channel selection with De-noising Using AWICA for Biometric System

    OpenAIRE

    B. Sabarigiri; D. Suganyadevi

    2014-01-01

    The embedding of low-cost electroencephalogram (EEG) sensors in wireless headsets has made authentication based on brain wave signals a practical opportunity. In this paper, signal acquisition along with effective multi-channel selection from a specific area of the brain and de-noising using AWICA methods are proposed for EEG-based personal identification. To develop the identification system, the steps are as follows: (i) the high-quality device with the least ...

  17. GPR Signal Denoising and Target Extraction With the CEEMD Method

    KAUST Repository

    Li, Jing

    2015-04-17

    In this letter, we apply a time-frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) to ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It can solve the mode-mixing problem of the EMD method and improve the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the differences between the basic theory of EMD, EEMD, and CEEMD. Then, we compare the time-frequency analysis with the Hilbert-Huang transform to test the results of the different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.
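
    A short decomposition-and-denoise pass along these lines might look like the sketch below, assuming the PyEMD package (pip install EMD-signal) and its CEEMDAN class; which IMFs to discard is signal-dependent.

    ```python
    # CEEMD-style trace decomposition sketch (PyEMD package assumed).
    import numpy as np
    from PyEMD import CEEMDAN

    t = np.linspace(0, 1, 1024)
    trace = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t) \
            + 0.3 * np.random.randn(t.size)

    imfs = CEEMDAN(trials=50)(trace)   # rows: IMFs, high to low frequency
    denoised = imfs[1:].sum(axis=0)    # drop the noisiest (first) IMF
    ```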

  18. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Full Text Available Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  19. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
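
    The "fixed dictionary plus fast sparse solve" structure common to both records above can be sketched with scikit-learn's orthogonal matching pursuit; a generic DCT-like dictionary stands in for the paper's pre-learned ridge dictionary.

    ```python
    # Sparse coding against a fixed dictionary via OMP (scikit-learn assumed).
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    n, n_atoms = 64, 128
    k = np.arange(n)[:, None]
    D = np.cos(np.pi * (k + 0.5) * np.arange(n_atoms)[None, :] / n_atoms)
    D /= np.linalg.norm(D, axis=0)     # unit-norm DCT-like atoms

    y = D[:, 3] + 0.5 * D[:, 40] + 0.05 * np.random.randn(n)  # noisy scanline

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5,
                                    fit_intercept=False).fit(D, y)
    y_denoised = D @ omp.coef_         # sparse reconstruction
    ```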

  20. ISTC Projects from RFNC-VNIIEF Devoted to Improving Laser Beam Quality

    Science.gov (United States)

    Starikov, F.; Kochemasov, G.

    Information is given about Projects #1929 and #2631, supported by the ISTC, which are concerned with improving laser beam quality and are of interest to the adaptive optics community. The first, Project #1929, has recently been finished. It was devoted to the development of an SBS phase-conjugation mirror of super-high conjugation quality, employing kinoform optics, for high-power lasers with nanosecond-scale pulse durations. With the purpose of reaching ideal PC fidelity, the SBS mirror includes a raster of small lenses of the kind traditionally used as the lenslet array of a Shack-Hartmann wavefront sensor in adaptive optics. The second, Project #2631, is concerned with the development of an adaptive optical system for phase correction of laser beams with a wavefront vortex. The principles of operation of modern adaptive systems are based on the assumption that the phase is a smooth continuous function in space. Therefore the solution of the Project tasks will represent a new step in adaptive optics.

  1. Design process for NIF laser alignment and beam diagnostics

    Energy Technology Data Exchange (ETDEWEB)

    Grey, A., LLNL

    1998-06-09

    In a controller for an adaptive optic system designed to correct phase aberrations in a high-power laser, the wavefront sensor is a discrete Hartmann-Shack design. It uses an array of lenslets (like a fly's eye) to focus the laser into 77 spots on a CCD camera. The average local tilt of the wavefront across each lenslet changes the position of its focal spot. The system requires 0.1 pixel accuracy in determining the focal spot location. We define a small area around each spot's previous location. Within this area, we calculate the centroid of the light intensity in x and y. This calculation fails if the spot regions overlap. Especially during initial acquisition of a highly distorted beam, distinguishing overlapping spots is difficult. However, low-resolution analysis of the overlapping spots allows the system to estimate their positions. With this estimate, it can use the deformable mirror to correct the beam enough that the spots can be detected using conventional image processing.
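
    The windowed centroid step reads, in a bare-bones NumPy sketch, as follows; the window size, background suppression and coordinate conventions are illustrative.

    ```python
    # Sub-pixel spot centroid within a window around the previous position.
    import numpy as np

    def spot_centroid(img, prev_xy, half_win=6, bg_frac=0.1):
        x0, y0 = (int(round(v)) for v in prev_xy)
        win = img[y0 - half_win:y0 + half_win + 1,
                  x0 - half_win:x0 + half_win + 1].astype(float)
        win = np.clip(win - bg_frac * win.max(), 0.0, None)  # kill background
        ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        total = win.sum()
        cx = (xs * win).sum() / total + x0 - half_win  # intensity-weighted mean
        cy = (ys * win).sum() / total + y0 - half_win
        return cx, cy   # spot position in full-frame coordinates
    ```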

  2. Performance analysis of coherent free space optical communications with sequential pyramid wavefront sensor

    Science.gov (United States)

    Liu, Wei; Yao, Kainan; Chen, Lu; Huang, Danian; Cao, Jingtai; Gu, Haijun

    2018-03-01

    Based on a previous study of the theory of the sequential pyramid wavefront sensor (SPWFS), in this paper the SPWFS is first applied to coherent free-space optical communications (FSOC); it offers more flexible spatial resolution and higher sensitivity than the Shack-Hartmann wavefront sensor, and a more uniform intensity distribution and a much simpler design than the pyramid wavefront sensor. Then, the mixing efficiency (ME) and the bit error rate (BER) of the coherent FSOC link are analyzed during aberration correction through numerical simulation with binary phase shift keying (BPSK) modulation. Finally, an experimental AO system based on the SPWFS is set up, and the experimental data are used to analyze the ME and BER of homodyne detection with BPSK modulation. The results show that the AO system based on the SPWFS can increase the ME and decrease the BER effectively. The conclusions of this paper provide a new method of wavefront sensing for designing the AO system of a coherent FSOC system.

  3. Optically sensitive Medipix2 detector for adaptive optics wavefront sensing

    CERN Document Server

    Vallerga, John; Tremsin, Anton; Siegmund, Oswald; Mikulec, Bettina; Clark, Allan G; CERN. Geneva

    2005-01-01

    A new hybrid optical detector is described that has many of the attributes desired for the next generation adaptive optics (AO) wavefront sensors. The detector consists of a proximity focused microchannel plate (MCP) read out by multi-pixel application specific integrated circuit (ASIC) chips developed at CERN ("Medipix2") with individual pixels that amplify, discriminate and count input events. The detector has 256 x 256 pixels, zero readout noise (photon counting), can be read out at 1 kHz frame rates and is abutable on 3 sides. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 ns. When used in a Shack-Hartmann style wavefront sensor, a detector with 4 Medipix chips should be able to centroid approximately 5000 spots using 7 x 7 pixel sub-apertures resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest.

  4. Atmospheric turbulence profiling with unknown power spectral density

    Science.gov (United States)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows one to retrieve information about the atmosphere from telescope data is so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed, and propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.

  5. Experimental demonstration of single-mode fiber coupling over relatively strong turbulence with adaptive optics.

    Science.gov (United States)

    Chen, Mo; Liu, Chao; Xian, Hao

    2015-10-10

    High-speed free-space optical communication systems using fiber-optic components can greatly improve the stability of the system and simplify its structure. However, propagation through atmospheric turbulence degrades the spatial coherence of the signal beam and limits the single-mode fiber (SMF) coupling efficiency. In this paper, we analyze the influence of atmospheric turbulence on the SMF coupling efficiency over various turbulence strengths. The results show that the SMF coupling efficiency drops from 81% without phase distortion to 10% when the phase root-mean-square value equals 0.3λ. Simulations of SMF coupling with adaptive optics (AO) indicate that compensating the high-order aberrations is unavoidable for SMF coupling over relatively strong turbulence. SMF coupling experiments, using an AO system with a 137-element deformable mirror and a Hartmann-Shack wavefront sensor, obtained an average coupling efficiency increase from 1.3% in open loop to 46.1% in closed loop under relatively strong turbulence, D/r0=15.1.
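
    The quantity at stake, the SMF coupling efficiency, is the normalized overlap integral between the focal field and the fiber mode. The NumPy sketch below uses a Gaussian stand-in for the LP01 mode and a toy phase aberration; all numbers are illustrative.

    ```python
    # SMF coupling efficiency as a normalized overlap integral.
    import numpy as np

    N, w_mode = 256, 5.2e-6                  # grid size; mode-field radius (m)
    x = np.linspace(-20e-6, 20e-6, N)
    X, Y = np.meshgrid(x, x)

    mode = np.exp(-(X**2 + Y**2) / w_mode**2)         # Gaussian LP01 stand-in
    phase = 2 * np.pi * 0.3 * (X / 20e-6)**2          # toy aberration
    beam = np.exp(-(X**2 + Y**2) / w_mode**2) * np.exp(1j * phase)

    num = np.abs((beam * np.conj(mode)).sum())**2
    eta = num / ((np.abs(beam)**2).sum() * (np.abs(mode)**2).sum())
    print(f"coupling efficiency = {eta:.2%}")  # 100% only for a perfect match
    ```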

  6. The PALM-3000 high-order adaptive optics system for Palomar Observatory

    Science.gov (United States)

    Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa

    2008-07-01

    Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368-active-actuator woofer and a 349-active-actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and a visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.

  7. Image system analysis of human eye wave-front aberration on the basis of HSS

    Science.gov (United States)

    Xu, Ancheng

    2017-07-01

    The Hartmann-Shack sensor (HSS) has been used in the objective measurement of human eye wavefront aberration, but research on the effect of sampling point size on the accuracy of the result has not been reported. In this paper, a mathematical model of the point spread function (PSF) of the whole system was obtained from the structure of the optical imaging system for human eye wavefront aberration measurement, and the impact of the Airy spot size on the accuracy of the system was analyzed. The study shows that the Airy spot formed on the HSS surface by an ideal point source at the eye's retina is far smaller than the HSS sample point image used in the experiment. Therefore, the effect of the Airy spot on the precision of the system can be ignored. This study theoretically and experimentally justifies the reliability and accuracy of human eye wavefront aberration measurement based on the HSS.

  8. Adaptive optics scanning laser ophthalmoscope using liquid crystal on silicon spatial light modulator: Performance study with involuntary eye movement

    Science.gov (United States)

    Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi

    2017-09-01

    The performance of an adaptive optics scanning laser ophthalmoscope (AO-SLO) using a liquid crystal on silicon spatial light modulator and Shack-Hartmann wavefront sensor was investigated. The system achieved high-resolution and high-contrast images of human retinas by dynamic compensation for the aberrations in the eyes. Retinal structures such as photoreceptor cells, blood vessels, and nerve fiber bundles, as well as blood flow, could be observed in vivo. We also investigated involuntary eye movements and ascertained microsaccades and drifts using both the retinal images and the aberrations recorded simultaneously. Furthermore, we measured the interframe displacement of retinal images and found that during eye drift, the displacement has a linear relationship with the residual low-order aberration. The estimated duration and cumulative displacement of the drift were within the ranges estimated by a video tracking technique. The AO-SLO would not only be used for the early detection of eye diseases, but would also offer a new approach for involuntary eye movement research.

  9. Optically sensitive Medipix2 detector for adaptive optics wavefront sensing

    International Nuclear Information System (INIS)

    Vallerga, John; McPhate, Jason; Tremsin, Anton; Siegmund, Oswald; Mikulec, Bettina; Clark, Allan

    2005-01-01

    A new hybrid optical detector is described that has many of the attributes desired for the next generation adaptive optics (AO) wavefront sensors. The detector consists of a proximity focused microchannel plate (MCP) read out by multi-pixel application specific integrated circuit (ASIC) chips developed at CERN ('Medipix2') with individual pixels that amplify, discriminate and count input events. The detector has 256x256 pixels, zero readout noise (photon counting), can be read out at 1 kHz frame rates and is abutable on 3 sides. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 ns. When used in a Shack-Hartmann style wavefront sensor, a detector with 4 Medipix chips should be able to centroid approximately 5000 spots using 7x7 pixel sub-apertures resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest

  10. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that requires only the relative distance was developed. In the proposed method, a liquid crystal display screen successively displays two regular dot matrix patterns with different dot spacings. In a special case, images on the detector exhibit similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector are utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed an approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.

  11. Three-Dimensional Velocity Field De-Noising using Modal Projection

    Science.gov (United States)

    Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn

    2017-11-01

    PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g. blood, air, etc.) and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, it remains that measured data is noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through "truncation boundaries", and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.
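
    Once the modes are computed, the projection step reduces to ordinary least squares; the NumPy sketch below uses random placeholders for the real modes and measurements.

    ```python
    # Least-squares projection of a noisy field onto precomputed modes.
    import numpy as np

    def project_onto_modes(Phi, v):
        # Coefficients minimizing ||Phi a - v||_2; the reconstruction inherits
        # the divergence-free / no-slip properties built into the modes.
        a, *_ = np.linalg.lstsq(Phi, v, rcond=None)
        return Phi @ a

    n_points, n_modes = 3000, 40
    Phi = np.random.randn(n_points, n_modes)          # placeholder mode matrix
    v_noisy = Phi @ np.random.randn(n_modes) + 0.1 * np.random.randn(n_points)
    v_denoised = project_onto_modes(Phi, v_noisy)
    ```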

  12. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE, and the results are compared and discussed. Then, on the basis of these analytic results, a method for use in choosing the decomposition level (DL in wavelet threshold de-noising (WTD is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals and noise in noisy series, therefore the chosen DL is reliable, and the WTD results of time series can be improved.
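
    The WEE itself is straightforward to compute; a small PyWavelets sketch with generic wavelet and level choices is given below.

    ```python
    # Wavelet energy entropy: Shannon entropy of the per-band energy share.
    import numpy as np
    import pywt

    def wavelet_energy_entropy(x, wavelet="db4", level=6):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        energies = np.array([np.sum(c**2) for c in coeffs])
        p = energies / energies.sum()
        return -np.sum(p * np.log(p + 1e-15))  # larger for whiter (noisier) series
    ```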

  13. A nonlinear filtering algorithm for denoising HR(S)TEM micrographs

    International Nuclear Information System (INIS)

    Du, Hongchu

    2015-01-01

    Noise reduction of micrographs is often an essential task in high resolution (scanning) transmission electron microscopy (HR(S)TEM), either for higher visual quality or for more accurate quantification. Since HR(S)TEM studies are often aimed at resolving periodic atomistic columns and their non-periodic deviations at defects, it is important to develop a noise reduction algorithm that can simultaneously handle both periodic and non-periodic features properly. In this work, a nonlinear filtering algorithm is developed based on the widely used low-pass and Wiener filters, which can efficiently reduce noise without noticeable artifacts, even in HR(S)TEM micrographs with variations of background contrast and defects. The developed nonlinear filtering algorithm is particularly suitable for quantitative electron microscopy, and is also of great interest for beam-sensitive samples, in situ analyses, and atomic resolution EFTEM. - Highlights: • A nonlinear filtering algorithm for denoising HR(S)TEM images is developed. • It can simultaneously handle both periodic and non-periodic features properly. • It is particularly suitable for quantitative electron microscopy. • It is of great interest for beam sensitive samples, in situ analyses, and atomic resolution EFTEM

  14. Stacked Denoise Autoencoder Based Feature Extraction and Classification for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Chen Xing

    2016-01-01

    Full Text Available Deep learning methods have been successfully applied to learn feature representations for high-dimensional data, where the learned features are able to reveal the nonlinear properties exhibited in the data. In this paper, a deep learning method is exploited for feature extraction of hyperspectral data, and the extracted features provide good discriminability for the classification task. Training a deep network for feature extraction and classification includes unsupervised pretraining and supervised fine-tuning. We utilized the stacked denoise autoencoder (SDAE) method to pretrain the network, which is robust to noise. In the top layer of the network, a logistic regression (LR) approach is utilized to perform supervised fine-tuning and classification. Since sparsity of features might improve the separation capability, we utilized the rectified linear unit (ReLU) as the activation function in the SDAE to extract high-level and sparse features. Experimental results using Hyperion, AVIRIS, and ROSIS hyperspectral data demonstrated that the SDAE pretraining in conjunction with LR fine-tuning and classification (SDAE_LR) can achieve higher accuracies than the popular support vector machine (SVM) classifier.
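
    A minimal single-layer denoising autoencoder, the building block that the SDAE stacks and pretrains greedily, can be sketched in PyTorch as below; layer sizes, noise level and training loop are illustrative.

    ```python
    # One denoising-autoencoder stage: reconstruct the clean input from a
    # corrupted copy; ReLU gives the sparse hidden codes.
    import torch
    import torch.nn as nn

    class DenoiseAE(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
            self.dec = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            return self.dec(self.enc(x))

    x = torch.randn(512, 200)                     # e.g. 200-band spectra
    model = DenoiseAE(200, 64)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        corrupted = x + 0.1 * torch.randn_like(x)           # inject noise
        loss = nn.functional.mse_loss(model(corrupted), x)  # target: clean x
        opt.zero_grad(); loss.backward(); opt.step()
    ```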

  15. Solar adaptive optics: specificities, lessons learned, and open alternatives

    Science.gov (United States)

    Montilla, I.; Marino, J.; Asensio Ramos, A.; Collados, M.; Montoya, L.; Tallon, M.

    2016-07-01

    The first on-sky adaptive optics experiments were performed at the Dunn Solar Telescope in 1979, with a shearing interferometer and limited success. Those early solar adaptive optics efforts forced researchers to custom-develop many components, such as deformable mirrors and wavefront sensors, which were not available at that time. Later on, the development of the correlation Shack-Hartmann sensor marked a breakthrough in solar adaptive optics. Since then, successful Single Conjugate Adaptive Optics instruments have been developed for many solar telescopes, i.e. the National Solar Observatory, the Vacuum Tower Telescope and the Swedish Solar Telescope. Success with the Multi Conjugate Adaptive Optics systems for GREGOR and the New Solar Telescope has proved more difficult to attain. Such systems have a complexity related not only to the number of degrees of freedom, but also to the specificities of the Sun, used as reference, and to the sensing method. The wavefront sensing is performed using correlations on images with a field of view of 10", averaging wavefront information from different sky directions and affecting the sensing and sampling of high-altitude turbulence. Also, due to the low elevation at which solar observations are performed, we have to include the generalized fitting error and anisoplanatism, as described by Ragazzoni and Rigaut, as non-negligible error sources in the Multi Conjugate Adaptive Optics error budget. For the development of the next-generation Multi Conjugate Adaptive Optics systems for the Daniel K. Inouye Solar Telescope and the European Solar Telescope we still need to study and understand these issues, to predict realistically the quality of the achievable reconstruction. To improve their designs other open issues have to be assessed, i.e. possible alternative sensing methods to avoid the intrinsic anisoplanatism of the wide-field correlation Shack-Hartmann, new parameters to estimate the performance of an adaptive optics solar system, alternatives to

  16. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita; Fonseca, Irene; Mascarenhas, M. Luísa

    2017-01-01

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  17. A chromaticity-brightness model for color images denoising in a Meyer’s “u + v” framework

    KAUST Repository

    Ferreira, Rita

    2017-09-11

    A variational model for imaging segmentation and denoising color images is proposed. The model combines Meyer’s “u+v” decomposition with a chromaticity-brightness framework and is expressed by a minimization of energy integral functionals depending on a small parameter ε>0. The asymptotic behavior as ε→0+ is characterized, and convergence of infima, almost minimizers, and energies are established. In particular, an integral representation of the lower semicontinuous envelope, with respect to the L1-norm, of functionals with linear growth and defined for maps taking values on a certain compact manifold is provided. This study escapes the realm of previous results since the underlying manifold has boundary, and the integrand and its recession function fail to satisfy hypotheses commonly assumed in the literature. The main tools are Γ-convergence and relaxation techniques.

  18. Prototype of a laser guide star wavefront sensor for the Extremely Large Telescope

    Science.gov (United States)

    Patti, M.; Lombini, M.; Schreiber, L.; Bregoli, G.; Arcidiacono, C.; Cosentino, G.; Diolaiti, E.; Foppiani, I.

    2018-06-01

    The new class of large telescopes, like the future Extremely Large Telescope (ELT), is designed to work with a laser guide star (LGS) tuned to a resonance of atmospheric sodium atoms. This wavefront sensing technique presents complex issues when applied to big telescopes for many reasons, mainly linked to the finite distance of the LGS, the launching angle, tip-tilt indetermination and focus anisoplanatism. The implementation of a laboratory prototype for the LGS wavefront sensor (WFS) at the beginning of the phase study of MAORY (Multi-conjugate Adaptive Optics Relay) for the ELT first light has been indispensable in investigating specific mitigation strategies for the LGS WFS issues. This paper presents the test results of the LGS WFS prototype under different working conditions. The accuracy with which the LGS images are generated on the Shack-Hartmann WFS has been cross-checked with the MAORY simulation code. The experiments show the effect of noise on centroiding precision, the impact of LGS image truncation on wavefront sensing accuracy, as well as the temporal evolution of the sodium density profile and LGS image under-sampling.

  19. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    Science.gov (United States)

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann-Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.

  20. Optical and x-ray alignment approaches for off-plane reflection gratings

    Science.gov (United States)

    Allured, Ryan; Donovan, Benjamin D.; DeRoo, Casey T.; Marlowe, Hannah R.; McEntaffer, Randall L.; Tutt, James H.; Cheimets, Peter N.; Hertz, Edward; Smith, Randall K.; Burwitz, Vadim; Hartner, Gisela; Menz, Benedikt

    2015-09-01

    Off-plane reflection gratings offer the potential for high-resolution, high-throughput X-ray spectroscopy on future missions. Typically, the gratings are placed in the path of a converging beam from an X-ray telescope. In the off-plane reflection grating case, these gratings must be co-aligned such that their diffracted spectra overlap at the focal plane. Misalignments degrade spectral resolution and effective area. In-situ X-ray alignment of a pair of off-plane reflection gratings in the path of a silicon pore optics module has been performed at the MPE PANTER beamline in Germany. However, in-situ X-ray alignment may not be feasible when assembling all of the gratings required for a satellite mission. In that event, optical methods must be developed to achieve spectral alignment. We have developed an alignment approach utilizing a Shack-Hartmann wavefront sensor and diffraction of an ultraviolet laser. We are fabricating the necessary hardware, and will be taking a prototype grating module to an X-ray beamline for performance testing following assembly and alignment.

  1. Dynamics of the near response under natural viewing conditions with an open-view sensor

    Science.gov (United States)

    Chirre, Emmanuel; Prieto, Pedro; Artal, Pablo

    2015-01-01

    We have studied the temporal dynamics of the near response (accommodation, convergence and pupil constriction) in healthy subjects when accommodation was performed under natural binocular and monocular viewing conditions. A binocular open-view multi-sensor based on an invisible infrared Hartmann-Shack sensor was used for non-invasive measurements of both eyes simultaneously in real time at 25 Hz. Response times for each process under different conditions were measured. The accommodative responses for binocular vision were faster than for monocular conditions. When one eye was blocked, accommodation and convergence were triggered simultaneously and synchronized, despite the fact that no retinal disparity was available. We found that upon the onset of the near target, the unblocked eye rapidly changes its line of sight to fix it on the stimulus while the blocked eye moves in the same direction, producing the equivalent to a saccade, but then converges to the (blocked) target in synchrony with accommodation. This open-view instrument could be further used for additional experiments with other tasks and conditions. PMID:26504666

  2. An Optical Wavefront Sensor Based on a Double Layer Microlens Array

    Directory of Open Access Journals (Sweden)

    Hsiang-Chun Wei

    2011-10-01

    Full Text Available In order to determine light aberrations, Shack-Hartmann optical wavefront sensors make use of microlens arrays (MLAs) to divide the incident light into small parts and focus them onto image planes. In this paper, we present the design and fabrication of long-focal-length MLAs with various shapes and arrangements, based on a double layer structure, for optical wavefront sensing applications. A longer focal length MLA provides higher sensitivity in determining the average slope across each microlens for a given wavefront, and the spatial resolution of a wavefront sensor increases with the number of microlenses across the detector. In order to extend the focal length, we applied polydimethylsiloxane (PDMS) above the MLA on a glass substrate. Because of the small refractive index difference at the PDMS-MLA (UV resin) interface, the incident light is refracted less and focused at a greater distance. Other specific focal lengths could also be realized by modifying the refractive index difference without changing the MLA size. Thus, the wavefront sensor could be improved with better sensitivity and higher spatial resolution.
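
    A back-of-the-envelope check of the design idea, using the thin plano-convex lens relation f = R/(n_lens − n_medium) with assumed typical indices (not values from the paper), is given below.

    ```python
    # Covering a UV-resin microlens with PDMS shrinks the index step and
    # stretches the focal length; all numbers are assumed typical values.
    R = 150e-6                        # radius of curvature (m), illustrative
    n_resin, n_air, n_pdms = 1.56, 1.00, 1.41

    f_air = R / (n_resin - n_air)     # ~0.27 mm
    f_pdms = R / (n_resin - n_pdms)   # ~1.0 mm, roughly 3.7x longer
    print(f"f in air: {f_air*1e3:.2f} mm, f under PDMS: {f_pdms*1e3:.2f} mm")
    ```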

  3. An automatic holographic adaptive phoropter

    Science.gov (United States)

    Amirsolaimani, Babak; Peyghambarian, N.; Schwiegerling, Jim; Bablumyan, Arkady; Savidis, Nickolaos; Peyman, Gholam

    2017-08-01

    Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient, who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient, provides only a subjective measurement of visual acuity, and can at best give a rough estimate of the patient's vision. Phoropters require a skilled examiner, are difficult to use for mass screenings, and are hard to apply to young children and the elderly. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient's input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wavefront sensor and three computer-controlled fluidic lenses. The fluidic lens system is designed to provide power and astigmatic corrections over a large range, without verbal feedback from the patient, in less than 20 seconds.

  4. Measurements of the mirror surface homogeneity in the CBM-RICH

    Energy Technology Data Exchange (ETDEWEB)

    Lebedeva, Elena; Hoehne, Claudia [II. Physikalisches Institut, JLU Giessen (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The Compressed Baryonic Matter (CBM) experiment at the future FAIR (Facility for Antiproton and Ion Research) complex will investigate the phase diagram of strongly interacting matter at high baryon densities and moderate temperatures in A+A collisions from 2-11 AGeV (SIS100) beam energy. One of the key detector components required for the CBM physics program is the RICH (Ring Imaging CHerenkov) detector, which is being developed for efficient and clean electron identification and pion suppression. The CBM-RICH detector is planned with a gaseous radiator in a standard projective geometry with focusing mirror elements and photon detector planes. One of the important criteria for the selection of appropriate mirrors is their optical surface quality (surface homogeneity). It defines the imaging quality of the projected Cherenkov rings and directly affects the ring finding and fitting performance. The global homogeneity has been tested with the D0 measurement. Local deformations, e.g. by the mirror holding structure, can be investigated with the Ronchi test and the Shack-Hartmann method, from which first results are discussed in this contribution.

  5. Dynamic wavefront sensing and correction with low-cost twisted nematic spatial light modulators

    International Nuclear Information System (INIS)

    Duran, Vicente; Climent, Vicent; Lancis, Jesus; Tajahuerce, Enrique; Bara, Salvador; Arines, Justo; Ares, Jorge; Andres, Pedro; Jaroszewicz, Zbigniew

    2010-01-01

    Off-the-shelf twisted nematic liquid crystal displays (TNLCDs) show some interesting features such as high spatial resolution, easy handling, wide availability, and low cost. We describe a compact adaptive optical system using just one TNLCD to measure and compensate optical aberrations. The current system operates at a frame rate of the order of 10 Hz with a four-level codification scheme. Wavefront estimation is performed through a conventional Hartmann-Shack sensing architecture. The system has proved to work properly with a maximum rms aberration of 0.76 microns and a wavefront gradient of 50 rad/mm at a wavelength of 514 nm. These values correspond to typical aberrations found in human eyes. The key to our approach is the careful characterization and optimization of the TNLCD for phase-only modulation. For this purpose, we exploit the so-called retarder-rotator approach for twisted nematic liquid crystal cells. The optimization process has been successfully applied to SLMs working either in transmissive or in reflective mode, even when light depolarization effects are observed.

  6. ECG denoising and fiducial point extraction using an extended Kalman filtering framework with linear and nonlinear phase observations.

    Science.gov (United States)

    Akhbari, Mahsa; Shamsollahi, Mohammad B; Jutten, Christian; Armoundas, Antonis A; Sayadi, Omid

    2016-02-01

    In this paper we propose an efficient method for denoising ECG signals and extracting their fiducial points (FPs). The method is based on a nonlinear dynamic model which uses Gaussian functions to model ECG waveforms. For estimating the model parameters, we use an extended Kalman filter (EKF). In this framework, called EKF25, all the parameters of the Gaussian functions as well as the ECG waveforms (P-wave, QRS complex and T-wave) in the ECG dynamical model are considered as state variables. The dynamic time warping method is used to estimate the nonlinear ECG phase observation, and we compare this new approach with linear phase observation models. Using linear and nonlinear EKF25 for ECG denoising, and nonlinear EKF25 for fiducial point extraction and ECG interval analysis, are the main contributions of this paper. Performance comparison with other EKF-based techniques shows that the proposed method results in higher output SNR, with an average SNR improvement of 12 dB for an input SNR of -8 dB. To evaluate the FP extraction performance, we compare the proposed method with a method based on a partially collapsed Gibbs sampler and an established EKF-based method. The mean absolute error and the root mean square error of all FPs, across all databases, are 14 ms and 22 ms, respectively, for our proposed method, with an advantage when using a nonlinear phase observation. These errors are significantly smaller than those obtained with other methods. For ECG interval analysis, with an absolute mean error and a root mean square error of about 22 ms and 29 ms, the proposed method achieves better accuracy and smaller variability than other methods.
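
    A minimal sketch of the extended-Kalman-filter recursion underlying such ECG models is given below. The state-transition function f, observation function h, their Jacobians F and H, and the noise covariances Q and R are placeholders: in EKF25 they would encode the Gaussian-waveform dynamics and the (linear or nonlinear) phase observation, which is not reproduced here.

    ```python
    import numpy as np

    def ekf_step(x, P, z, f, h, F, H, Q, R):
        """One predict/update cycle of an extended Kalman filter.

        x, P : current state estimate and covariance
        z    : new noisy measurement (e.g., an ECG sample plus phase)
        f, h : nonlinear state-transition and observation functions
        F, H : their Jacobians evaluated at the current estimate
        Q, R : process- and measurement-noise covariances
        """
        x_pred = f(x)                          # propagate the state
        P_pred = F @ P @ F.T + Q               # propagate the covariance
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))   # correct with the residual
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```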

  7. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    Science.gov (United States)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.

  8. A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models

    International Nuclear Information System (INIS)

    Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A

    2012-01-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)
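
    For the L1 term, the proximity operator at the heart of such algorithms has a familiar closed form: componentwise soft thresholding. The sketch below is illustrative of that building block only, not the authors' full L1/TV solver.

    ```python
    import numpy as np

    def prox_l1(v, lam):
        """Proximity operator of lam * ||.||_1: soft thresholding.

        The paper characterizes L1/TV solutions as fixed points of equations
        built from operators like this one; the Gauss-Seidel acceleration
        sweeps such componentwise updates sequentially rather than in parallel.
        """
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    ```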

  9. Fault diagnosis of rolling bearing based on second generation wavelet denoising and morphological filter

    International Nuclear Information System (INIS)

    Meng, Lingjie; Xiang, Jiawei; Zhong, Yongteng; Song, Wenlei

    2015-01-01

    Defective rolling bearing response is often characterized by the presence of periodic impulses. However, the in-situ sampled vibration signal is ordinarily mixed with ambient noise, in which the fault features are easily buried or even submerged. A hybrid approach combining second-generation wavelet denoising with a morphological filter is presented. The raw signal is first purified using the second-generation wavelet. The difference between the closing and opening operators is then employed as the morphological filter to extract the periodic impulsive features from the purified signal, so that the defect information is easily extracted from the corresponding frequency spectrum. The proposed approach is evaluated on simulations and on vibration signals from defective bearings with an inner race fault, an outer race fault, a rolling element fault, and compound faults, respectively. Results show that the ambient noise is fully restrained and the defect information of the above defective bearings is well extracted, which demonstrates that the approach is feasible and effective for the fault detection of rolling bearings.
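
    The closing-minus-opening morphology filter described above admits a compact sketch using SciPy's grayscale morphology on a 1-D vibration record; the structuring-element length is an illustrative choice, not the paper's setting.

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening

    def closing_minus_opening(x, size=5):
        """Difference of grayscale closing and opening with a flat element.

        Closing fills narrow valleys and opening removes narrow peaks, so
        their difference responds strongly to the periodic impulses left in
        the wavelet-purified signal by a bearing defect.
        """
        return grey_closing(x, size=size) - grey_opening(x, size=size)
    ```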

  10. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    Science.gov (United States)

    Qu, Jianfeng; Chai, Yi; Yang, Simon X.

    2009-01-01

    A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip windows average method. The optimal system noise variance of the filter is obtained from experimental data. How Kalman filter theory applies to the acquisition of MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors. PMID:22399946

  11. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yi Chai

    2009-02-01

    Full Text Available A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. The system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed slip windows average method. The optimal system noise variance of the filter is obtained from experimental data. How Kalman filter theory applies to the acquisition of MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors.
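
    A scalar sketch of the idea: a random-walk model tracks the slowly varying gas concentration while the measurement-noise variance is re-estimated from a sliding window of recent samples (a stand-in for the paper's "slip windows average" method; the process-noise variance q and window length are illustrative assumptions).

    ```python
    import numpy as np

    def kalman_denoise(z, q=1e-4, window=20):
        """Denoise a gas-sensor sequence with an adaptive scalar Kalman filter."""
        x, p = float(z[0]), 1.0
        out = np.empty(len(z))
        for k, zk in enumerate(z):
            r = np.var(z[max(0, k - window):k + 1]) + 1e-9  # windowed noise estimate
            p += q                       # predict (random-walk state model)
            g = p / (p + r)              # Kalman gain
            x += g * (zk - x)            # update with the new measurement
            p *= (1 - g)
            out[k] = x
        return out
    ```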

  12. Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.

    Science.gov (United States)

    Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M

    2015-01-01

    The Kalman filter is a well-established approach for obtaining information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to estimate the deviation of the true trajectory of a rocket from the desired trajectory, and was subsequently applied to many different systems with small numbers of state-vector components (typically about 10). In all those cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism (XMCD) movies, which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters in this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics.

  13. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with it. To that end, we combine the wavelet transform with morphological operations, and the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet as the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best choice and can serve future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
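
    A minimal sketch of wavelet soft thresholding with the db1 wavelet singled out by the study; the universal threshold and the median-based noise estimate are standard textbook choices here, not necessarily the authors' exact settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet='db1', level=2):
        """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Estimate the noise level from the finest diagonal detail band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(img.size))      # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, t, mode='soft') for c in band)
            for band in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)
    ```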

  14. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2015-09-01

    Full Text Available The Global Positioning System (GPS) is now widely used for structures and other applications. Nevertheless, GPS accuracy still suffers from the errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper presents a novel application of neural network prediction models to improve GPS monitoring time series data. Four prediction models for the learning algorithms are applied and used with neural network solutions: back-propagation, cascade-forward back-propagation, adaptive filter and extended Kalman filter, to determine which model can be recommended. Simulated noise and the short-period GPS displacement component of a bridge, monitored at a 1 Hz sampling frequency, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically for the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures in terms of low-frequency responses and measurement contents.

  15. XQ-NLM: Denoising Diffusion MRI Data via x-q Space Non-Local Patch Matching.

    Science.gov (United States)

    Chen, Geng; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2016-10-01

    Noise is a major issue influencing quantitative analysis in diffusion MRI. The effects of noise can be reduced by repeated acquisitions, but this leads to long acquisition times that can be unrealistic in clinical settings. For this reason, post-acquisition denoising methods have been widely used to improve SNR. Among existing methods, non-local means (NLM) has been shown to produce good image quality with edge preservation. However, the application of NLM to diffusion MRI has so far been mostly focused on the spatial domain (i.e., the x-space), despite the fact that diffusion data live in a combined space consisting of the x-space and the q-space (i.e., the space of wavevectors). In this paper, we propose to extend NLM to both x-space and q-space. We show how patch matching, as required in NLM, can be performed concurrently in x-q space with the help of azimuthal equidistant projection and rotation-invariant features. Extensive experiments on both synthetic and real data confirm that the proposed x-q space NLM (XQ-NLM) outperforms the classic NLM.
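
    The core of any NLM variant is a weighted average whose weights decay with patch dissimilarity; in XQ-NLM the patches are simply assembled from samples that are neighbors in both x-space and q-space. A minimal sketch of that weight computation (h is the usual filtering parameter; the patch construction itself is left abstract):

    ```python
    import numpy as np

    def nlm_estimate(patches, centers, ref_idx, h):
        """Non-local-means estimate for one sample.

        patches : (N, P) array of vectorized patches; for XQ-NLM these would
                  come from x-q neighborhoods after the rotation-invariant
                  feature mapping described in the paper.
        centers : (N,) intensities at the patch centers
        """
        d2 = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
        w = np.exp(-d2 / (h * h))        # similar patches get large weights
        return np.dot(w, centers) / w.sum()
    ```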

  16. Intelligent Mechanical Fault Diagnosis Based on Multiwavelet Adaptive Threshold Denoising and MPSO

    Directory of Open Access Journals (Sweden)

    Hao Sun

    2014-01-01

    Full Text Available The condition diagnosis of rotating machinery depends largely on feature analysis of the vibration signals measured for the diagnosis. However, the signals measured from rotating machinery are usually nonstationary, nonlinear and contaminated by noise, so the useful fault features are hidden in the heavy background noise. In this paper, a novel fault diagnosis method for rotating machinery based on multiwavelet adaptive threshold denoising and mutation particle swarm optimization (MPSO) is proposed. The Geronimo, Hardin, and Massopust (GHM) multiwavelet is employed for extracting weak fault features under background noise, and a method of adaptively selecting an appropriate threshold for the multiwavelet, based on the energy ratio of the multiwavelet coefficients, is presented. Six nondimensional symptom parameters (SPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A detection index (DI) based on statistical theory is also defined to evaluate the sensitivity of each SP for condition diagnosis. An MPSO algorithm with adaptive inertia weight adjustment and particle mutation is proposed for condition identification. The MPSO algorithm effectively solves the local optimum and premature convergence problems of the conventional particle swarm optimization (PSO) algorithm, and can provide more accurate fault diagnosis. Practical examples of fault diagnosis for rolling element bearings are given to verify the effectiveness of the proposed method.

  17. Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising

    Science.gov (United States)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2018-04-01

    The axle box bearing is one of the key components of railway vehicles, and its operating condition has a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by a train axle box bearing is an amplitude-modulated and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from serious trackside acoustic background noise using those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key time-domain indicator of a bearing fault signal. Firstly, a geometry-based Doppler correction is applied to the signals of each sensor, and by superposing the signals of multiple sensors, random noise and impulsive noise, which interfere with the kurtosis indicator, are suppressed. Then, KWP denoising is conducted. Finally, EMD and the Hilbert transform are applied to extract the fault feature. Experimental results indicate that the proposed method consisting of KWP and EMD is superior to EMD alone.
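
    A sketch of the kurtosis-guided selection at the heart of such KWP schemes: decompose the (Doppler-corrected, superposed) signal into wavelet-packet bands and keep the band whose coefficients are most impulsive, i.e., have the largest kurtosis. The wavelet and decomposition depth are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    def most_impulsive_band(x, wavelet='db4', maxlevel=3):
        """Return the wavelet-packet node maximizing kurtosis, the time-domain
        indicator of bearing-fault impulses used to steer the denoising."""
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=maxlevel)
        nodes = wp.get_level(maxlevel, order='freq')
        scores = [kurtosis(node.data) for node in nodes]
        return nodes[int(np.argmax(scores))]
    ```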

  18. Maximum likelihood estimation-based denoising of magnetic resonance images using restricted local neighborhoods

    International Nuclear Information System (INIS)

    Rajan, Jeny; Jeurissen, Ben; Sijbers, Jan; Verhoye, Marleen; Van Audekerke, Johan

    2011-01-01

    In this paper, we propose a method to denoise magnitude magnetic resonance (MR) images, which are Rician distributed. Conventionally, maximum likelihood methods incorporate the Rice distribution to estimate the true, underlying signal from a local neighborhood within which the signal is assumed to be constant. However, if this assumption is not met, such filtering will lead to blurred edges and loss of fine structures. As a solution to this problem, we put forward the concept of restricted local neighborhoods, where the true intensity for each noisy pixel is estimated from a set of preselected neighboring pixels. To this end, a reference image is created from the noisy image using a recently proposed nonlocal means algorithm. This reference image is used as a prior for further noise reduction. A scheme is developed to locally select an appropriate subset of pixels from which the underlying signal is estimated. Experimental results based on the peak signal-to-noise ratio, the structural similarity index, the Bhattacharyya coefficient and the mean absolute difference, from synthetic and real MR images, demonstrate the superior performance of the proposed method over other state-of-the-art methods.
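
    The Rician bias that motivates such estimators follows from the standard second-moment relation E[M^2] = A^2 + 2*sigma^2 for Rician magnitudes. The sketch below applies it over a set of preselected "similar" pixels; this is a simplified moment-based stand-in for the paper's restricted-neighborhood maximum likelihood estimator, not that estimator itself.

    ```python
    import numpy as np

    def rician_corrected_intensity(magnitudes, sigma):
        """Estimate the true intensity A from Rician-distributed magnitudes.

        magnitudes : noisy pixel values from the restricted local neighborhood
        sigma      : standard deviation of the underlying Gaussian noise
        """
        a_squared = np.mean(magnitudes ** 2) - 2.0 * sigma ** 2
        return np.sqrt(max(a_squared, 0.0))  # clip to avoid negative estimates
    ```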

  19. Denoising and dimensionality reduction of genomic data

    Science.gov (United States)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed, inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes whose expression levels are experimentally measured can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it may be hard for standard statistical inference techniques to come up with good general solutions, and likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction a very efficient technique, Independent Component Analysis, is used. The numerical results are very promising and lead to a very good quality of gene feature selection, owing to the signal separation power of the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical

  20. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. Over the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices, and further resolution increases bring numerous challenges. As pixel size shrinks, so does the amount of light collected per pixel, increasing the noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase due to the higher frame rates. To reduce the complexity and price of the camera, one sensor captures all three colors by relying on color filter arrays. In order to obtain a full-resolution color image, missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker, we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  1. Comment on ‘A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot–Lau grating interferometry’

    International Nuclear Information System (INIS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Kottler, Christian

    2015-01-01

    In a recent paper (Scholkmann et al 2014 Phys. Med. Biol. 59 1425–40) we presented a new image denoising, fusion and enhancement framework for combining and optimally visualizing x-ray attenuation contrast, differential phase contrast and dark-field contrast images retrieved from x-ray Talbot–Lau grating interferometry. In this comment we give additional information and report on the application of our framework to the breast cancer tissue that we presented in our paper as an example. The applied procedure is suitable for a qualitative comparison of different algorithms; for a quantitative comparison, however, the original data would be needed as input. (comment and reply)

  2. Magnetohydrodynamic flow in a rectangular duct under a uniform transverse magnetic field at high Hartmann number

    International Nuclear Information System (INIS)

    Temperley, D.J.

    1976-01-01

    In this paper we consider fully developed, laminar, unidirectional flow of a uniformly conducting, incompressible fluid through a rectangular duct of uniform cross-section. An externally applied magnetic field acts parallel to one pair of opposite walls, and induced velocity and magnetic fields are generated in a direction parallel to the axis of the duct. The governing equations and boundary conditions for the latter fields are introduced, and study is then concentrated on the special case of a duct having all walls non-conducting. For values of the Hartmann number M >> 1, classical asymptotic analysis reveals the leading terms in the expansions of the induced fields in all key regions, with the exception of certain boundary layers near the corners of the duct. The order of magnitude of the effect of the latter layers on the flow rate is discussed, and closed-form solutions are obtained for the induced fields near the corners of the duct. Attempts were made to formulate a concise principle of minimum singularity to enable the correct choice of eigenfunctions for the various field components in the boundary layers on the walls parallel to the applied field. It was found, however, that these components are best found by taking the outer expansion of the closed-form solution in those boundary layers near the corners of the duct where classical asymptotic analysis is not applicable. (author)
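
    For reference, the Hartmann number used above is conventionally defined (with B_0 the applied field strength, a a characteristic duct half-width, sigma the electrical conductivity, and mu the dynamic viscosity) as

    ```latex
    M = B_0 \, a \, \sqrt{\frac{\sigma}{\mu}},
    ```

    so M >> 1 means that electromagnetic forces dominate viscous forces, and thin Hartmann layers of thickness of order a/M form on the walls perpendicular to the applied field.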

  3. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    Science.gov (United States)

    Qin, Hao-ran; Lin, Ji-ming; Wang, Jun-yi

    2017-04-01

    In recent years, deep learning algorithms, characterized by strong adaptability, high accuracy, and structural complexity, have become more and more popular, but they have not yet been widely used in astronomy. In order to address the problem that the star/galaxy classification accuracy is high for the bright source set but low for the faint source set of the Sloan Digital Sky Survey (SDSS) data, we introduce a deep learning algorithm, namely the SDA (stacked denoising autoencoder) neural network together with the dropout fine-tuning technique, which can greatly improve robustness and noise resistance. We randomly selected bright source sets and faint source sets from the SDSS DR12 and DR7 data with spectroscopic measurements, and preprocessed them. Then, we randomly selected training sets and testing sets, without replacement, from the bright source sets and faint source sets. Finally, using these training sets we trained SDA models of the bright sources and faint sources in the SDSS DR7 and DR12, respectively. We compared the test result of the SDA model on the DR12 testing set with the results of the Library for Support Vector Machines (LibSVM), J48 decision tree, Logistic Model Tree (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms, and compared the test result of the SDA model on the DR7 testing set with the results of six kinds of decision trees. The experiments show that the SDA has better classification accuracy than other machine learning algorithms for the faint source sets of DR7 and DR12. In particular, when the completeness function is used as the evaluation index, the correctness rate of the SDA improves by about 15% over the decision tree algorithms for the faint source set of SDSS-DR7.
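
    A minimal numpy sketch of one denoising-autoencoder layer with tied weights and masking corruption, the building block that is stacked and then fine-tuned (with dropout) to form an SDA classifier; the architecture, loss and hyperparameters here are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def dae_epoch(X, W, b, c, mask_frac=0.3, lr=0.1):
        """One SGD epoch of a single denoising-autoencoder layer (tied weights W)."""
        for x in X:
            x_tilde = x * (rng.random(x.shape) > mask_frac)  # corrupt the input
            h = sigmoid(W @ x_tilde + b)                     # encode
            z = sigmoid(W.T @ h + c)                         # decode (tied weights)
            dz = (z - x) * z * (1 - z)     # output delta (squared-error loss)
            dh = (W @ dz) * h * (1 - h)    # hidden delta, backpropagated
            W -= lr * (np.outer(dh, x_tilde) + np.outer(h, dz))
            b -= lr * dh
            c -= lr * dz
        return W, b, c
    ```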

  4. Sliding window denoising K-Singular Value Decomposition and its application on rolling bearing impact fault diagnosis

    Science.gov (United States)

    Yang, Honggang; Lin, Huibin; Ding, Kang

    2018-05-01

    The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; furthermore, the computation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning, and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product between the optimal pattern and the whole fault signal is computed to enhance the signature of the impacts' occurrence moments. Lastly, the signal is reconstructed at the peak points of the inner product to extract the rolling bearing fault features. Both simulations and experiments verify that the method extracts the fault features effectively.
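
    The inner-product enhancement step admits a very compact sketch: a sliding correlation of the learned optimal pattern against the whole record, whose peaks mark the impacts' occurrence moments. This illustrates only that step, not the full SWD-KSVD dictionary-learning pipeline.

    ```python
    import numpy as np

    def pattern_match(signal, pattern):
        """Sliding inner product of a learned impact pattern with the record."""
        return np.correlate(signal, pattern, mode='same')

    # Reconstruction would then place copies of the pattern at the peaks of
    # this correlation, extracting the bearing-fault features.
    ```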

  5. Enhancement and denoising of mammographic images for breast disease detection

    International Nuclear Information System (INIS)

    Yazdani, S.; Yusof, R.; Karimian, A.; Hematian, A.; Yousefi, M.

    2012-01-01

    In recent decades, breast cancer has been one of the leading causes of death among women. In breast cancer research, mammographic imaging is being assessed as a potential tool for detecting breast disease and investigating response to chemotherapy. In the first stage of breast disease discovery, measurement of breast density in mammographic images provides very useful information. Because of the important role of mammographic images, the need for accurate and robust automated image enhancement techniques is becoming clear. Mammographic images have some disadvantages, such as the high dependence of contrast on the way the image is acquired, weak distinction between cyst and tumor, intensity non-uniformity, the existence of noise, etc. These limitations make it difficult to detect typical signs such as masses and microcalcifications. For this reason, denoising and enhancing the quality of mammographic images is very important. The method used in this paper operates in the spatial domain; its input includes high-, intermediate- and even very low-contrast mammographic images, classified according to a specialist physician's assessment, while its output is processed images that show the input images with higher quality, more contrast and more detail. In this research, 38 mammographic images have been used. The results of the proposed method show details of abnormal zones and areas with defects, so that specialists can examine these zones more accurately, and it can serve as an aid for cancer diagnosis. In this study, mammographic images are initially converted into digital images; then, to increase spatial resolving power, their noise is reduced and consequently their contrast is improved. The results demonstrate the effectiveness and efficiency of the proposed methods. (authors)

  6. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    Science.gov (United States)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for a VLT-sized AO system and in quasi-real time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes generation of multiple layers of atmospheric turbulence, ray tracing through these layers, image formation at the focal plane of every sub-aperture of a SH sensor using either natural or laser guide stars, and centroiding on these images using various algorithms. Turbulence is generated on the fly, giving the ability to simulate hours of observations without the need to load extremely large phase screens into global memory. Because of its performance, this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.
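
    The centroiding step in such Shack-Hartmann simulations reduces to a center-of-gravity computation per subaperture image; the CPU sketch below shows the plain (unthresholded, unweighted) variant of what codes like this parallelize on the GPU.

    ```python
    import numpy as np

    def cog_centroid(spot):
        """Center-of-gravity centroid of one Shack-Hartmann subaperture image."""
        ys, xs = np.indices(spot.shape)
        total = spot.sum()
        return (xs * spot).sum() / total, (ys * spot).sum() / total

    # Wavefront slopes follow from the centroid displacements relative to the
    # reference positions recorded during calibration.
    ```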

  7. Wavefront reconstruction using computer-generated holograms

    Science.gov (United States)

    Schulze, Christian; Flamm, Daniel; Schmidt, Oliver A.; Duparré, Michael

    2012-02-01

    We propose a new method to determine the wavefront of a laser beam, based on modal decomposition using computer-generated holograms (CGHs). The beam under test illuminates the CGH, whose specific inscribed transmission function enables the measurement of modal amplitudes and phases by evaluating the first diffraction order of the hologram. Since we use an angular multiplexing technique, our method is innately capable of real-time measurement of amplitude and phase, yielding the complete information about the optical field. A measurement of the Stokes parameters, i.e. of the polarization state, makes it possible to calculate the Poynting vector. Two wavefront reconstruction possibilities are outlined: reconstruction from the phase for scalar beams, and reconstruction from the Poynting vector for inhomogeneously polarized beams. To quantify individual aberrations, the reconstructed wavefront is decomposed into Zernike polynomials. Our technique is applied to beams emerging from different kinds of multimode optical fibers, such as step-index, photonic crystal and multicore fibers; in this work, results are shown for a step-index fiber as an example and compared to a Shack-Hartmann measurement that serves as a reference.

  8. Investigation on adaptive optics performance from propagation channel characterization with the small optical transponder

    Science.gov (United States)

    Petit, Cyril; Védrenne, Nicolas; Velluet, Marie Therese; Michau, Vincent; Artaud, Geraldine; Samain, Etienne; Toyoshima, Morio

    2016-11-01

    In order to address the high throughput required for both downlink and uplink satellite-to-ground laser links, adaptive optics (AO) has become a key technology. While maturing, the application of this technology to satellite-to-ground telecommunication still faces difficulties, such as the higher bandwidth required and optimal operation over a wide variety of atmospheric conditions (daytime and nighttime) at potentially low elevations that might severely affect wavefront sensing because of scintillation. To address these specificities, an accurate understanding of the origin of the perturbations is required, as well as operational validation of AO on real laser links. We report here on a low-Earth-orbit (LEO) microsatellite-to-ground downlink with AO correction. We discuss propagation channel characterization based on Shack-Hartmann wavefront sensor (WFS) measurements. Fine modeling of the propagation channel is proposed, based on a multi-Gaussian model of the turbulence profile. This model is then used to estimate the AO performance and validate the experimental results. While AO performance is limited by the experimental set-up, it complies with the expected performance, and further interesting information on the propagation channel is extracted. These results should help in dimensioning and operating AO systems for LEO-to-ground downlinks.

  9. Payload characterization for CubeSat demonstration of MEMS deformable mirrors

    Science.gov (United States)

    Marinan, Anne; Cahoy, Kerri; Webber, Matthew; Belikov, Ruslan; Bendek, Eduardo

    2014-08-01

    Coronagraphic space telescopes require wavefront control systems for high-contrast imaging applications such as exoplanet direct imaging. High-actuator-count MEMS deformable mirrors (DMs) are a key element of these wavefront control systems, yet have not been flown in space long enough to characterize their on-orbit performance. The MEMS Deformable Mirror CubeSat Testbed is a conceptual nanosatellite demonstration of MEMS DM and wavefront sensing technology. The testbed platform is a 3U CubeSat bus. Of the 10 x 10 x 34.05 cm (3U) available volume, a 10 x 10 x 15 cm space is reserved for the optical payload. The main purpose of the payload is to characterize and calibrate the on-orbit performance of a MEMS deformable mirror over an extended period of time (months). Its design incorporates both a Shack-Hartmann wavefront sensor (internal laser illumination) and a focal plane sensor (used with an external aperture to image bright stars). We baseline a 32-actuator Boston Micromachines Mini deformable mirror for this mission, though the design is flexible and can be applied to mirrors from other vendors. We present the mission design and payload architecture and discuss experiment design, requirements, and performance simulations.

  10. Impact of beacon wavelength on phase-compensation performance

    Science.gov (United States)

    Enterline, Allison A.; Spencer, Mark F.; Burrell, Derek J.; Brennan, Terry J.

    2017-09-01

    This study evaluates the effects of beacon-wavelength mismatch on phase-compensation performance. In general, beacon-wavelength mismatch occurs at the system level because the beacon-illuminator laser (BIL) and high-energy laser (HEL) are often at different wavelengths. Such is the case, for example, when using an aperture sharing element to isolate the beam-control sensor suite from the blinding nature of the HEL. With that said, this study uses the WavePlex Toolbox in MATLAB® to model ideal spherical-wave propagation through various atmospheric-turbulence conditions. To quantify phase-compensation performance, we also model a nominal adaptive-optics (AO) system. We achieve correction from a Shack-Hartmann wavefront sensor and continuous-face-sheet deformable mirror using a least-squares phase reconstruction algorithm in the Fried geometry and a leaky-integrator control law. To this end, we plot the power-in-the-bucket metric as a function of the BIL-HEL wavelength difference. Our initial results show that positive BIL-HEL wavelength differences achieve better phase-compensation performance than negative BIL-HEL wavelength differences (i.e., red BILs outperform blue BILs). This outcome is consistent with past results.

  11. Wavefront measurement of plastic lenses for mobile-phone applications

    Science.gov (United States)

    Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen

    2016-08-01

    In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement of minimal total track length. Due to the diffraction-limited optical design and precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully applied to injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small plastic lenses has yet to be studied both theoretically and experimentally. In this paper, both an in-house-built and a commercial wavefront measurement system, configured on two optical structures, have been investigated through measurement of wavefront aberrations on two lens elements from a mobile-phone camera. First, the wet-cell method has been employed to verify aberrations due to residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera, with large positive and negative power, have been measured, with aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.

  12. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with the warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact telescope performance, we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures, were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator developed at the Jet Propulsion Laboratory (JPL). The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics such as the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs, as well as to provide input to the TMT systems engineering error budgets.

  13. Adaptive spatial filtering of daytime sky noise in a satellite quantum key distribution downlink receiver

    Science.gov (United States)

    Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.

    2016-02-01

    Spatial filtering is an important technique for reducing sky background noise in a satellite quantum key distribution downlink receiver. Atmospheric turbulence limits the extent to which spatial filtering can reduce sky noise without introducing signal losses. Using atmospheric propagation and compensation simulations, the potential benefit of adaptive optics (AO) to secure key generation (SKG) is quantified. Simulations are performed assuming optical propagation from a low-Earth-orbit satellite to a terrestrial receiver that includes AO. Higher-order AO correction is modeled assuming a Shack-Hartmann wavefront sensor and a continuous-face-sheet deformable mirror. The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. SKG rates are calculated for a decoy-state protocol as a function of the receiver field of view for various strengths of turbulence, sky radiances, and pointing angles. The results show that at fields of view smaller than those discussed by others, AO technologies can enhance SKG rates in daylight and enable SKG where it would otherwise be prohibited as a consequence of background optical noise and signal loss due to propagation and turbulence effects.

  14. Investigation and experimental data de-noising of Damavand tokamak by using Fourier series expansion and wavelet code

    International Nuclear Information System (INIS)

    Sadeghi, Y.

    2006-01-01

    Computer programs are important tools in physics: analysis of experimental data, control of complex physical phenomena, and the solution of numerical problems all help scientists understand behavior and simulate processes. In this paper, the calculation of several Fourier series gives a visual and analytic impression of data analysis with Fourier methods. One important aspect of data analysis is finding an optimal method for de-noising. Wavelets are mathematical functions that cut data up into different frequency components, so that each component can be studied with a resolution matched to its scale. They have advantages over traditional methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Data transformed by wavelets into frequency space retain time information and can clearly show the exact location in time of a discontinuity. This aspect makes wavelets an excellent tool in the field of data analysis. In this paper, we show how Fourier series and wavelets can be used to analyze data from the Damavand tokamak.
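
    A minimal sketch of the Fourier-series view of de-noising: transform the record, keep only the low-order coefficients, and invert. The retained fraction is an illustrative parameter; wavelet thresholding, by contrast, keeps time localization of transient features.

    ```python
    import numpy as np

    def fourier_lowpass(x, keep=0.05):
        """Zero all but the lowest-frequency fraction of Fourier coefficients."""
        X = np.fft.rfft(x)
        X[int(len(X) * keep):] = 0.0      # discard high-frequency content
        return np.fft.irfft(X, n=len(x))
    ```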

  15. Subspace based adaptive denoising of surface EMG from neurological injury patients

    Science.gov (United States)

    Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping

    2014-10-01

    Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noise of physiological and extrinsic/accidental origin, imposing difficulties for signal processing. Such interferences are difficult to mitigate with conventional methods. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of the noise power, an adaptive method is presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method provides a powerful tool for suppressing background spikes and noise contaminating the voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
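
    The signal/noise subspace split can be sketched with an SVD of a data matrix (e.g., delay-embedded or multichannel EMG): the dominant singular directions span the signal subspace and the rest are treated as noise. This hard truncation is a simplified stand-in for the paper's estimator, which instead trades off interference reduction against signal distortion using the tracked noise power.

    ```python
    import numpy as np

    def subspace_denoise(X, rank):
        """Project a data matrix onto its dominant (signal) subspace."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                    # discard the noise subspace
        return (U * s) @ Vt               # low-rank reconstruction
    ```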

  16. Image matching in Bayer raw domain to de-noise low-light still images, optimized for real-time implementation

    Science.gov (United States)

    Romanenko, I. V.; Edirisinghe, E. A.; Larkin, D.

    2013-03-01

    Temporal accumulation of images is a well-known approach to improving the signal-to-noise ratio of still images taken in low-light conditions. However, the complexity of known algorithms often leads to high hardware resource usage, increased memory bandwidth and computational complexity, making their practical use impossible. In our research we attempt to solve this problem with an implementation of a practical spatial-temporal de-noising algorithm based on image accumulation. Image matching and spatial-temporal filtering are performed in the Bayer RAW data space, which allows us to benefit from predictable sensor noise characteristics and thus to apply a range of algorithmic optimizations. The proposed algorithm accurately compensates for global and local motion and efficiently removes different kinds of noise from noisy images taken in low-light conditions. We were able to perform global and local motion compensation in the Bayer RAW data space while preserving resolution and effectively improving the signal-to-noise ratio of moving objects as well as the non-stationary background. The proposed algorithm is suitable for implementation in commercial-grade FPGAs and is capable of processing 16 MP images at the capture rate (10 frames per second). The main challenge for matching between still images is the compromise between the quality of the motion prediction and the complexity of the algorithm and required memory bandwidth. Still images taken in a burst sequence must be aligned to compensate for background motion and the movement of foreground objects in the scene. High-resolution still images coupled with significant time between successive frames can produce large displacements between images, which creates additional difficulty for image matching algorithms. In photo applications it is very important that noise is efficiently removed in static and non-static backgrounds as well as on moving objects, while maintaining the resolution of the image.

  17. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG.

    Science.gov (United States)

    Lee, Kwang Jin; Lee, Boreom

    2016-07-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence the Doppler ultrasound signals, and CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) recorded from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). We also used real data from the PhysioNet fetal ECG databases to evaluate algorithm performance, with the R-peak detection rate calculated as the performance measure. Our approach can not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR.
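
    One plausible reading of such a sequential scheme, sketched with scikit-image's TV denoiser: a strong TV pass estimates the dominant maternal ECG, which is subtracted, and a gentler pass then cleans the residual in which the weaker fetal QRS complexes stand out. The two weights are illustrative assumptions, not the paper's settings, and this is not the authors' exact pipeline.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def fetal_residual(abdominal, w_maternal=0.5, w_fetal=0.05):
        """Two-stage total-variation denoising of one abdominal ECG channel."""
        maternal_est = denoise_tv_chambolle(abdominal, weight=w_maternal)
        residual = abdominal - maternal_est       # fetal ECG plus noise
        return denoise_tv_chambolle(residual, weight=w_fetal)
    ```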

  18. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends the MS-VSTS to spherical two-plus-one-dimensional (2D-1D) data, where the first two dimensions are longitude and latitude and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma rays over a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).
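
    The classical variance-stabilizing transform that MS-VSTS generalizes to multiscale spherical data is the Anscombe transform: after it, Poisson counts behave approximately like unit-variance Gaussian data, so Gaussian denoisers apply. A minimal sketch (the algebraic inverse shown is the simple biased one; refined unbiased inverses matter in the low-count gamma-ray regime):

    ```python
    import numpy as np

    def anscombe(counts):
        """Anscombe VST: Poisson counts -> approximately unit-variance Gaussian."""
        return 2.0 * np.sqrt(counts + 3.0 / 8.0)

    def inverse_anscombe(y):
        """Simple algebraic inverse of the Anscombe transform (biased at low counts)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0
    ```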

  19. Effect of intravenous fluid therapy on the electrolytes and arterial blood gases of hospitalized elderly patients. A comparative study: Hartmann solution versus hypotonic saline solution.

    Directory of Open Access Journals (Sweden)

    Germán Javier MALAGA RODRIGUEZ

    2006-10-01

    Full Text Available Objectives: To compare the effect of a hypotonic dextrose solution and an isotonic solution (Hartmann) on serum electrolyte levels and acid-base balance in hospitalized elderly patients. Materials and methods: 18 patients over 60 years of age, hospitalized in the Department of Medicine of the Hospital Nacional Cayetano Heredia and receiving intravenous fluids for at least 48 hours, were evaluated prospectively. The first group (G1) received a solution of 5% dextrose with 71 mmol/L NaCl and 27 mmol/L potassium chloride. The second group (G2) received Hartmann solution plus 100 cc of a 50% glucose solution simultaneously. Electrolyte and blood gas values were monitored at 0, 24 and 48 hours after the start of observation. Results: Both groups presented comparable conditions on admission. At 48 hours, sodium values were 134.5±4.4 mEq/L for G1 and 140±2.4 mEq/L for G2 (p<0.01); pH was 7.32±0.07 for G1 and 7.4±0.03 for G2 (p<0.01); bicarbonate was 16.6±2.2 mEq/L for G1 and 22.3±1.6 mEq/L for G2 (p<0.001). The differences (delta) between the values at 0 and 48 hours were: sodium -6.1±3.78 (G1) vs 0.9±2.25 (G2) mEq/L (p<0.001); potassium 0.01±0.43 (G1) vs -0.61±0.56 (G2) mEq/L (p<0.05); pH -0.09±0.07 (G1) vs -0.01±0.04 (G2) (p<0.01); bicarbonate -6.34±1.21 (G1) vs -0.27±1.43 (G2) mEq/L (p<0.001); pCO2 -6.25±5.33 (G1) vs 1.4±4.52 (G2) mmHg (p<0.01). Conclusions: Hospitalized elderly patients who received the hypotonic dextrose solution had significantly lower sodium, pH, bicarbonate and pCO2 levels after 48 hours compared with those who received Hartmann solution. No differences were observed in chloride, pO2 or anion gap. (Rev Med Hered 2006;17:189-195)

  20. High signal-to-noise ratio sensing with Shack–Hartmann wavefront sensor based on auto gain control of electron multiplying CCD

    International Nuclear Information System (INIS)

    Zhu Zhao-Yi; Li Da-Yu; Hu Li-Fa; Mu Quan-Quan; Yang Cheng-Liang; Cao Zhao-Liang; Xuan Li

    2016-01-01

    A high signal-to-noise ratio can be achieved with an electron-multiplying charge-coupled device (EMCCD) applied in the Shack–Hartmann wavefront sensor (S–H WFS) in adaptive optics (AO). However, when the brightness of the target varies over a large range, a fixed electron-multiplying (EM) gain is no longer suited to the sensing limits. Therefore, an auto-gain-control method based on the brightness of the light-spot array in the S–H WFS is proposed in this paper. The control value is the average of the maximum signals of every light spot in the array, which has been demonstrated to remain stable even under the influence of noise and turbulence, and to be sensitive enough to changes in target brightness. A goal value is needed in the control process; it is predetermined based on the characteristics of the EMCCD. Simulations and experiments have demonstrated that this auto-gain-control method is valid and robust: the sensing SNR reaches the maximum for the corresponding signal level, and is especially improved for dim targets, from magnitude 6 to 4 in the visual band. (special topic)
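
    The control loop described above can be sketched in a few lines: compare the average of the per-spot maxima with the predetermined goal value and nudge the EM gain accordingly. The correction exponent and gain limits are illustrative constants, not the paper's values.

    ```python
    import numpy as np

    def update_em_gain(gain, spots_max, goal, k=0.5, g_min=1.0, g_max=1000.0):
        """One iteration of auto gain control for an EMCCD-based S-H WFS.

        spots_max : maximum signal of every light spot in the array
        goal      : target average level, predetermined from EMCCD characteristics
        """
        level = float(np.mean(spots_max))     # brightness measure of the spot array
        gain *= (goal / level) ** k           # damped multiplicative correction
        return float(np.clip(gain, g_min, g_max))
    ```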

  1. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
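
    For reference, plain Gaussian process posterior-mean interpolation in Python/NumPy looks as follows; the Cholesky solve is the O(N^3) step that the paper's grid-structured exact algorithm reduces to O(N^(3/2)) (kernel choice and hyperparameters here are arbitrary):

      import numpy as np

      def gp_posterior_mean(x_tr, y_tr, x_te, ell=1.0, sf=1.0, noise=0.05):
          # Squared-exponential kernel between two 1-D coordinate vectors.
          def k(a, b):
              return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)
          K = k(x_tr, x_tr) + noise**2 * np.eye(x_tr.size)
          L = np.linalg.cholesky(K)                     # the O(N^3) step
          alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
          return k(x_te, x_tr) @ alpha

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 50)
      y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(50)
      x_new = np.linspace(0.0, 1.0, 200)
      y_new = gp_posterior_mean(x, y, x_new)            # denoised interpolation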

  2. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag of words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies as well as a multiple kernel learning algorithm at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and outputs record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at a frame rate ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.
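
    A sketch of one marginalized denoising layer in Python/NumPy, following the standard closed-form mDA construction; the ridge term and tanh nonlinearity are conventional choices, and stacking simply repeats the step on each hidden output:

      import numpy as np

      def mda_layer(X, p):
          # X: (d, n) matrix (features x samples); p: corruption probability.
          d, n = X.shape
          Xb = np.vstack([X, np.ones((1, n))])        # append a bias row
          q = np.full((d + 1, 1), 1.0 - p)
          q[-1] = 1.0                                  # bias is never corrupted
          S = Xb @ Xb.T                                # scatter matrix
          Q = S * (q @ q.T)                            # expected corrupted scatter
          np.fill_diagonal(Q, q.ravel() * np.diag(S))
          P = S[:d, :] * q.T                           # expected cross scatter
          # Closed-form mapping W solving W Q = P (small ridge for stability).
          W = np.linalg.solve(Q + 1e-5 * np.eye(d + 1), P.T).T
          return np.tanh(W @ Xb)                       # nonlinear hidden layer

      def msda(X, p=0.3, layers=3):
          reps = [X]
          for _ in range(layers):                      # stack the layers
              reps.append(mda_layer(reps[-1], p))
          return np.vstack(reps)                       # concatenated representation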

  3. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    International Nuclear Information System (INIS)

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-01

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment very difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation for a long exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which results in the emergence of a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images in order to improve the alignment process, which will in turn improve the three-dimensional tomography reconstructions. We have run multiple tests on TEM images acquired at different exposure times of 0.5 s, 0.2 s, 0.1 s and 1 s (i.e. with different values of SNR) and equipped with gold beads to help us in the assessment step. We propose a structure to combine multiple noisy copies of the TEM images, based on four different denoising methods: Soft and Hard wavelet thresholding, the Bilateral Filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain, in which context modeling is used to estimate the parameters for each coefficient. To ensure a high signal-to-noise ratio, we verified that we are using the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameters. For the bilateral filtering, many tests were done in order to determine the proper filter parameters, namely the size of the filter, the range parameter and the spatial parameter.
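
    The thresholding branch can be sketched in Python assuming the PyWavelets package; the MAD noise estimate and universal threshold below are our illustrative choices, combined with the paper's "sym8" wavelet at level 3:

      import numpy as np
      import pywt

      def wavelet_denoise(img, wavelet="sym8", level=3, mode="soft"):
          # mode is "soft" or "hard" thresholding.
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          # Noise sigma from the finest diagonal subband (median absolute deviation).
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          thr = sigma * np.sqrt(2.0 * np.log(img.size))   # universal threshold
          new_coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(c, thr, mode=mode) for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(new_coeffs, wavelet)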

  4. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    Science.gov (United States)

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
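
    A toy version of the core building block in Python, assuming PyTorch; the layer sizes, patch handling and Gaussian-pyramid integration of the paper are not reproduced, only the reconstruct-and-take-residual idea:

      import torch
      import torch.nn as nn

      class ConvDAE(nn.Module):
          # Small convolutional denoising autoencoder for grayscale patches
          # (input height/width divisible by 4); sizes are illustrative.
          def __init__(self):
              super().__init__()
              self.enc = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
              self.dec = nn.Sequential(
                  nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                  nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

          def forward(self, x):
              return self.dec(self.enc(x))

      def train_step(model, opt, clean_patches, noise=0.1):
          # Train on defect-free patches only: reconstruct clean from corrupted.
          noisy = clean_patches + noise * torch.randn_like(clean_patches)
          loss = nn.functional.mse_loss(model(noisy), clean_patches)
          opt.zero_grad(); loss.backward(); opt.step()
          return loss.item()

      def residual_map(model, patch):
          # Reconstruction residual: the per-pixel defect indicator.
          with torch.no_grad():
              return (patch - model(patch)).abs()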

  5. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model

    Directory of Open Access Journals (Sweden)

    Shuang Mei

    2018-04-01

    Full Text Available Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.

  6. Bearing faults identification and resonant band demodulation based on wavelet de-noising methods and envelope analysis

    Science.gov (United States)

    Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali

    2017-07-01

    The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but fault severity assessment is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions at different scales, such as very fine resolution at lower scales but coarser resolution at higher scales; however, the computational cost increases as it needs to produce the different signal resolutions. The DWT has a lower computational cost, as the dilation function allows the signals to be decomposed through a tree of low- and high-pass filters without further analysing the high-frequency components. In this paper, a method for bearing fault identification is presented by combining the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data was obtained from Case Western Reserve University. The analysis results showed that the proposed method is effective in bearing fault detection, identifying the exact fault location, and severity assessment, especially for inner race and outer race faults.
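
    The envelope-analysis step can be sketched in Python with SciPy; the resonant-band filtering and the wavelet de-noising that precede it in the paper are omitted:

      import numpy as np
      from scipy.signal import hilbert

      def envelope_spectrum(x, fs):
          # Demodulate the (already band-passed) signal: take the amplitude
          # envelope via the analytic signal, then look for bearing defect
          # frequencies as peaks in the envelope spectrum.
          env = np.abs(hilbert(x))
          env = env - env.mean()
          spec = np.abs(np.fft.rfft(env)) / env.size
          freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
          return freqs, spec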

  7. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^(m-1)(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.
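
    A numerical companion in Python, assuming scikit-image: TV-L2 (ROF) denoising of a noisy piecewise-constant image, for which the containment result says the denoised jump set introduces no edges absent from the data:

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      # ROF model: u = argmin_u 0.5*||u - f||_2^2 + weight * TV(u).
      rng = np.random.default_rng(0)
      f = np.zeros((64, 64))
      f[16:48, 16:48] = 1.0                      # piecewise-constant "cartoon"
      f += 0.1 * rng.standard_normal(f.shape)    # additive noise
      u = denoise_tv_chambolle(f, weight=0.1)    # jumps of u stay on f's edges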

  8. Propagation and wavefront ambiguity of linear nondiffracting beams

    Science.gov (United States)

    Grunwald, R.; Bock, M.

    2014-02-01

    Ultrashort-pulsed Bessel and Airy beams in free space are often interpreted as "linear light bullets". Usually, interconnected intensity profiles are considered to "propagate" along arbitrary pathways, which can even follow curved trajectories. A more detailed analysis, however, shows that this picture gives an adequate description only in situations which do not require considering the transport of optical signals or causality. To also cover these special cases, a generalization of the terms "beam" and "propagation" is necessary. The problem becomes clearer by representing the angular spectra of the propagating wave fields by rays or Poynting vectors. It is known that quasi-nondiffracting beams can be described as caustics of ray bundles. Their decomposition into Poynting vectors by Shack-Hartmann sensors indicates that, within their classical definition, the corresponding local wavefronts are ambiguous, and concepts based on energy density are not appropriate to describe the propagation completely. For this reason, quantitative parameters like the beam propagation factor have to be treated with caution as well. For applications like communication or optical computing, alternative descriptions are required. A heuristic approach based on vector-field-based information transport and Fourier analysis is proposed here. Continuity and discontinuity of far field distributions in space and time are discussed. Quantum aspects of propagation are briefly addressed.

  9. Real-time wavefront processors for the next generation of adaptive optics systems: a design and analysis

    Science.gov (United States)

    Truong, Tuan; Brack, Gary L.; Troy, Mitchell; Trinh, Thang; Shi, Fang; Dekany, Richard G.

    2003-02-01

    Adaptive optics (AO) systems currently under investigation will require at least a two-orders-of-magnitude increase in the number of actuators, which in turn translates to effectively a 10^4 increase in compute latency. Since the performance of an AO system invariably improves as the compute latency decreases, it is important to study how today's computer systems will scale to address this expected increase in actuator utilization. This paper answers this question by characterizing the performance of a single deformable mirror (DM) Shack-Hartmann natural guide star AO system implemented on the present-generation digital signal processor (DSP) TMS320C6701 from Texas Instruments. We derive the compute latency of such a system in terms of a few basic parameters, such as the number of DM actuators, the number of data channels used to read out the camera pixels, the number of DSPs, the available memory bandwidth, as well as the inter-processor communication (IPC) bandwidth and the pixel transfer rate. We show how the results would scale for future systems that utilize multiple DMs and guide stars. We demonstrate that the principal performance bottleneck of such a system is the available memory bandwidth of the processors and, to a lesser extent, the IPC bandwidth. This paper concludes with suggestions for mitigating this bottleneck.
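
    For orientation, the per-frame work of such a processor reduces to subaperture centroiding followed by one large matrix-vector multiply with a precomputed reconstructor; a Python sketch of the centroiding step, with an assumed window geometry:

      import numpy as np

      def slopes_from_frame(frame, corners, win=8):
          # Centroid each subaperture window; slopes are centroid offsets
          # from the window centre (reference positions in a real system).
          slopes = []
          ys, xs = np.mgrid[0:win, 0:win]
          for (r, c) in corners:                 # top-left corner of each window
              sub = frame[r:r + win, c:c + win].astype(float)
              tot = sub.sum() + 1e-12
              slopes.append(((xs * sub).sum() / tot - (win - 1) / 2.0,
                             (ys * sub).sum() / tot - (win - 1) / 2.0))
          return np.asarray(slopes).ravel()

      # Reconstruction is then one matrix-vector multiply with a precomputed
      # reconstructor R; streaming R from memory every frame is the memory
      # bandwidth bottleneck identified above:
      # commands = R @ slopes_from_frame(frame, corners)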

  10. Ultimate turbulence experiment: simultaneous measurements of Cn2 near the ground using six devices and eight methods

    Science.gov (United States)

    Yatcheva, Lydia; Barros, Rui; Segel, Max; Sprung, Detlev; Sucher, Erik; Eisele, Christian; Gladysz, Szymon

    2015-10-01

    We have performed a series of experiments in order to simultaneously validate several devices and methods for measurement of the path-averaged refractive index structure constant (Cn²). The experiments were carried out along a horizontal urban path near the ground. Measuring turbulence in this layer is particularly important because of the prospect of using adaptive optics for free-space optical communications in an urban environment. On one hand, several commercial sensors were used: SLS20, a laser scintillometer from Scintec AG, BLS900, a large-aperture scintillometer, also from Scintec, and a 3D sonic anemometer from Thies GmbH. On the other hand, we measured turbulence strength with new approaches and devices developed in-house. Firstly, an LED array combined with a high-speed camera allowed for measurement of Cn² from raw and differential image motion, and secondly a two-part system comprising a laser source, a Shack-Hartmann sensor and a PSF camera recorded turbulent modulation transfer functions, Zernike variances and angle-of-arrival structure functions, yielding three independent estimates of Cn². We compare the values yielded simultaneously by the commercial and in-house developed devices and show very good agreement between the Cn² values for all the methods. Limitations of each experimental method are also discussed.

  11. Development of remote vibration measurement technique through turbulent media

    Energy Technology Data Exchange (ETDEWEB)

    Baik, Sung Hoon; Chung, Chin Man; Kim, Min Suk; Park, Seung Kyu; Chung, Heung Jone

    2002-12-01

    The effect of wavefront distortion of the laser beam of an LDV (Laser Doppler Vibrometer) in turbulent media was investigated for the application of adaptive optics to LDV. A high-speed tip/tilt adaptive optics system and a closed-loop steering algorithm were developed for real-time correction of the direction fluctuation of the LDV laser beam. The measuring performance of the LDV was improved when the steering system was applied in the vibration frequency range of 10 Hz - 30 Hz. A high-speed Shack-Hartmann wavefront sensor (400 Hz) was developed to measure the performance degradation of the LDV due to wavefront distortion. The wavefront distortion caused by the turbulent media induced low visibility and degraded the performance of the vibrometer. From the experiments, when the wavefront distortion was above 2 wavelengths over the cross section of the laser beam (dia. 20 mm), the vibration signal from the laser vibrometer was severely degraded. When the wavefront distortion was smaller than one wavelength, the vibration signal was good. Through this research, a high-speed closed-loop tip/tilt control technique for the laser beam was developed and applied to the laser metrology area. In the future, the adaptive optics system for wavefront correction will be applied to other research areas.

  12. Advancing High Contrast Adaptive Optics

    Science.gov (United States)

    Ammons, M.; Poyneer, L.; GPI Team

    2014-09-01

    A long-standing challenge has been to directly image faint extrasolar planets adjacent to their host suns, which may be ~1-10 million times brighter than the planet. Several extreme AO systems designed for high-contrast observations have been tested at this point, including SPHERE, Magellan AO, PALM-3000, Project 1640, NICI, and the Gemini Planet Imager (GPI, Macintosh et al. 2014). The GPI is the world's most advanced high-contrast adaptive optics system on an 8-meter telescope for detecting and characterizing planets outside of our solar system. GPI will detect a previously unstudied population of young analogs to the giant planets of our solar system and help determine how planetary systems form. GPI employs a 44x44 woofer-tweeter adaptive optics system with a Shack-Hartmann wavefront sensor operating at 1 kHz. The controller uses Fourier-based reconstruction and modal gains optimized from system telemetry (Poyneer et al. 2005, 2007). GPI has an apodized Lyot coronagraph to suppress diffraction and a near-infrared integral field spectrograph for obtaining planetary spectra. This paper discusses current performance limitations and presents the necessary instrumental modifications and sensitivity calculations for scenarios related to high-contrast observations of non-sidereal targets.

  13. Potential pitfalls when denoising resting state fMRI data using nuisance regression.

    Science.gov (United States)

    Bright, Molly G; Tench, Christopher R; Murphy, Kevin

    2017-07-01

    In resting state fMRI, it is necessary to remove signal variance associated with noise sources, leaving cleaned fMRI time-series that more accurately reflect the underlying intrinsic brain fluctuations of interest. This is commonly achieved through nuisance regression, in which a noise model of head motion and physiological processes is fit to the fMRI data in a General Linear Model, and the "cleaned" residuals of this fit are used in further analysis. We examine the statistical assumptions and requirements of the General Linear Model, and whether these are met during nuisance regression of resting state fMRI data. Using toy examples and real data we show how pre-whitening, temporal filtering and temporal shifting of regressors impact model fit. Based on our own observations, existing literature, and statistical theory, we make the following recommendations when employing nuisance regression: pre-whitening should be applied to achieve valid statistical inference of the noise model fit parameters; temporal filtering should be incorporated into the noise model to best account for changes in degrees of freedom; temporal shifting of regressors, although merited, should be achieved via optimisation and validation of a single temporal shift. We encourage all readers to make simple, practical changes to their fMRI denoising pipeline, and to regularly assess the appropriateness of the noise model used. By negotiating the potential pitfalls described in this paper, and by clearly reporting the details of nuisance regression in future manuscripts, we hope that the field will achieve more accurate and precise noise models for cleaning the resting state fMRI time-series. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
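
    The basic operation under discussion, as a Python/NumPy sketch, deliberately without the pre-whitening and filtering refinements the paper recommends; confounds is assumed to be a (T, k) matrix of motion and physiological regressors:

      import numpy as np

      def nuisance_regress(y, confounds):
          # Ordinary least squares fit of the noise model; the residuals are
          # the "cleaned" time-series. No pre-whitening is done here, which
          # is exactly the pitfall the paper warns about for inference on
          # the fit parameters.
          X = np.column_stack([np.ones(y.size), confounds])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return y - X @ beta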

  14. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    Science.gov (United States)

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are done to verify the effectiveness. The filtering results are analyzed with the Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving on them by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
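
    A fixed-noise Kalman filter for an AR(3) drift model is sketched below in Python; the Sage-Husa modification would additionally re-estimate the noise statistics online, which is omitted here, and the coefficients a = (a1, a2, a3) are assumed to come from the fitted AR model:

      import numpy as np

      def ar3_kalman(z, a, q=1e-6, r=1e-3):
          # State-space form of x_k = a1*x_{k-1} + a2*x_{k-2} + a3*x_{k-3} + w.
          F = np.array([[a[0], a[1], a[2]],
                        [1.0,  0.0,  0.0],
                        [0.0,  1.0,  0.0]])
          H = np.array([[1.0, 0.0, 0.0]])
          Q = np.diag([q, 0.0, 0.0])
          R = np.array([[r]])
          x = np.zeros((3, 1))
          P = np.eye(3)
          out = []
          for zk in z:
              x = F @ x                                   # predict
              P = F @ P @ F.T + Q
              S = H @ P @ H.T + R                         # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
              x = x + K @ (np.array([[zk]]) - H @ x)      # update
              P = (np.eye(3) - K @ H) @ P
              out.append(x[0, 0])
          return np.asarray(out)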

  15. Data-adaptive image-denoising for detecting and quantifying nanoparticle entry in mucosal tissues through intravital 2-photon microscopy

    Directory of Open Access Journals (Sweden)

    Torsten Bölke

    2014-11-01

    Full Text Available Intravital 2-photon microscopy of mucosal membranes, across which nanoparticles enter the organism, typically generates noisy images. Because the noise results from the random statistics of only very few photons detected per pixel, it cannot be avoided by technical means. Fluorescent nanoparticles contained in the tissue may be represented by a few bright pixels which closely resemble the noise structure. We here present a data-adaptive method for digital denoising of datasets obtained by 2-photon microscopy. The algorithm exploits both local and non-local redundancy of the underlying ground-truth signal to reduce noise. Our approach automatically adapts the strength of noise suppression in a data-adaptive way by using a Bayesian network. The results show that the specific adaptation to both signal and noise characteristics improves the preservation of fine structures such as nanoparticles, while fewer artefacts were produced as compared to reference algorithms. Our method is applicable to other imaging modalities as well, provided the specific noise characteristics are known and taken into account.

  16. Segmentation of confocal Raman microspectroscopic imaging data using edge-preserving denoising and clustering.

    Science.gov (United States)

    Alexandrov, Theodore; Lasch, Peter

    2013-06-18

    Over the past decade, confocal Raman microspectroscopic (CRM) imaging has matured into a useful analytical tool to obtain spatially resolved chemical information on the molecular composition of biological samples and has found its way into histopathology, cytology, and microbiology. A CRM imaging data set is a hyperspectral image in which Raman intensities are represented as a function of three coordinates: a spectral coordinate λ encoding the wavelength and two spatial coordinates x and y. Understanding CRM imaging data is challenging because of its complexity, size, and moderate signal-to-noise ratio. Spatial segmentation of CRM imaging data is a way to reveal regions of interest and is traditionally performed using nonsupervised clustering which relies on spectral domain-only information with the main drawback being the high sensitivity to noise. We present a new pipeline for spatial segmentation of CRM imaging data which combines preprocessing in the spectral and spatial domains with k-means clustering. Its core is the preprocessing routine in the spatial domain, edge-preserving denoising (EPD), which exploits the spatial relationships between Raman intensities acquired at neighboring pixels. Additionally, we propose to use both spatial correlation to identify Raman spectral features colocalized with defined spatial regions and confidence maps to assess the quality of spatial segmentation. For CRM data acquired from midsagittal Syrian hamster (Mesocricetus auratus) brain cryosections, we show how our pipeline benefits from the complex spatial-spectral relationships inherent in the CRM imaging data. EPD significantly improves the quality of spatial segmentation that allows us to extract the underlying structural and compositional information contained in the Raman microspectra.
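
    A condensed Python sketch of the pipeline, with a plain Gaussian spatial filter standing in for the edge-preserving denoising step and scikit-learn's k-means for the clustering:

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.cluster import KMeans

      def segment_crm(cube, k=4, sigma=1.0):
          # cube: (ny, nx, n_lambda) hyperspectral image. Smooth each Raman
          # band spatially (stand-in for EPD), then cluster the per-pixel
          # spectra and return a label image.
          smoothed = np.stack(
              [gaussian_filter(cube[:, :, i], sigma) for i in range(cube.shape[2])],
              axis=2)
          spectra = smoothed.reshape(-1, cube.shape[2])
          labels = KMeans(n_clusters=k, n_init=10).fit_predict(spectra)
          return labels.reshape(cube.shape[:2])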

  17. Effect of the Hartmann number on phase separation controlled by magnetic field for binary mixture system with large component ratio

    Science.gov (United States)

    Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng

    2017-11-01

    This paper presents an exploration of phase separation in a magnetic field using a lattice Boltzmann method (LBM) coupled with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations were conducted with a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was on the effect of the magnetic intensity, characterized by the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The morphological evolution of phase separation in different magnetic fields was demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were summarized in a plot of the spherically averaged structure factor and of the ratio of the separated phases to the total system. The results indicate that an increase in Ha can increase the average size of the separated phases and accelerate the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the separation degree was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.

  18. Hybrid Wavelet De-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series

    Science.gov (United States)

    WANG, D.; Wang, Y.; Zeng, X.

    2017-12-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influences at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when the extreme events are included within a time series.

  19. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    Directory of Open Access Journals (Sweden)

    Khan Bahadar Khan

    Full Text Available The exploration of retinal vessel structure is colossally important on account of numerous diseases including stroke, Diabetic Retinopathy (DR) and coronary heart diseases, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster to vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
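
    A sketch of the enhancement front end in Python, assuming scikit-image; the VLM extraction and the final AND-fusion of the paper are omitted, and the channel inversion and structuring-element size are illustrative choices:

      import numpy as np
      from skimage import exposure, filters, morphology

      def vessel_mask(green_channel):
          # CLAHE contrast enhancement, then invert so dark vessels become
          # bright, top-hat to flatten the background, multiscale Frangi
          # vesselness, and an Otsu threshold for the binary mask.
          g = 1.0 - exposure.equalize_adapthist(green_channel)
          g = morphology.white_tophat(g, morphology.disk(8))
          v = filters.frangi(g, black_ridges=False)
          return v > filters.threshold_otsu(v)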

  20. The Ladies trial: laparoscopic peritoneal lavage or resection for purulent peritonitis (A) and Hartmann's procedure or resection with primary anastomosis for purulent or faecal peritonitis (B) in perforated diverticulitis (NTR2037)

    Directory of Open Access Journals (Sweden)

    Bruin Sjoerd C

    2010-10-01

    Full Text Available Abstract Background Recently, excellent results have been reported on laparoscopic lavage in patients with purulent perforated diverticulitis as an alternative for sigmoidectomy and ostomy. The objective of this study is to determine whether LaparOscopic LAvage and drainage is a safe and effective treatment for patients with purulent peritonitis (LOLA-arm) and to determine the optimal resectional strategy in patients with a purulent or faecal peritonitis (DIVA-arm: perforated DIVerticulitis: sigmoidresection with or without Anastomosis). Methods/Design In this multicentre randomised trial all patients with perforated diverticulitis are included. Upon laparoscopy, patients with purulent peritonitis are treated with laparoscopic lavage and drainage, Hartmann's procedure or sigmoidectomy with primary anastomosis in a ratio of 2:1:1 (LOLA-arm). Patients with faecal peritonitis will be randomised 1:1 between Hartmann's procedure and resection with primary anastomosis (DIVA-arm). The primary combined endpoint of the LOLA-arm is major morbidity and mortality. A sample size of 132:66:66 patients will be able to detect a difference in the primary endpoint from 25% in the resectional groups compared to 10% in the laparoscopic lavage group (two-sided alpha = 5%, power = 90%). The endpoint of the DIVA-arm is stoma-free survival one year after initial surgery. In this arm 212 patients are needed to significantly demonstrate a difference of 30% (log rank test, two-sided alpha = 5%, power = 90%) in favour of the patients with resection with primary anastomosis. Secondary endpoints for both arms are the number of days alive and outside the hospital, health related quality of life, health care utilisation and associated costs. Discussion The Ladies trial is a nationwide multicentre randomised trial on perforated diverticulitis that will provide evidence on the merits of laparoscopic lavage and drainage for purulent generalised peritonitis and on the optimal resectional strategy.

  1. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    Science.gov (United States)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low-count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary like wavelets or curvelets, and then applying a VST on the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove the point sources with a binary mask. The gaps then have to be interpolated: an extension to inpainting is therefore proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.

  2. Adaptive Generation and Diagnostics of Linear Few-Cycle Light Bullets

    Directory of Open Access Journals (Sweden)

    Martin Bock

    2013-02-01

    Full Text Available Recently we introduced the class of highly localized wavepackets (HLWs) as a generalization of optical Bessel-like needle beams. Here we report on the progress in this field. In contrast to pulsed Bessel beams and Airy beams, ultrashort-pulsed HLWs propagate with high stability in both the spatial and temporal domains, are nearly paraxial (supercollimated), have fringe-less spatial profiles and thus represent the best possible approximation to linear "light bullets". Like Bessel beams and Airy beams, HLWs show self-reconstructing behavior. Adaptive HLWs can be shaped by ultraflat three-dimensional phase profiles (generalized axicons) which are programmed via calibrated grayscale maps of liquid-crystal-on-silicon spatial light modulators (LCoS-SLMs). Light bullets of even higher complexity can either be freely formed from quasi-continuous phase maps or discretely composed from addressable arrays of identical nondiffracting beams. The characterization of few-cycle light bullets requires spatially resolved measuring techniques. In our experiments, wavefront, pulse and phase were detected with a Shack-Hartmann wavefront sensor, 2D autocorrelation and spectral phase interferometry for direct electric-field reconstruction (SPIDER). The combination of the unique propagation properties of light bullets with the flexibility of adaptive optics opens new prospects for applications of structured light like optical tweezers, microscopy, data transfer and storage, laser fusion, plasmon control or nonlinear spectroscopy.

  3. Synchrotron X-ray adaptative monochromator: study and realization of a prototype

    International Nuclear Information System (INIS)

    Dezoret, D.

    1995-01-01

    This work presents a study of a prototype of a synchrotron X-ray monochromator. The spectral qualities of this optic are sensitive to heat loads, which are particularly important on third-generation synchrotrons like the ESRF. Indeed, the power delivered by synchrotron beams can reach a few kilowatts, with power densities of a few tens of watts per square millimetre. The mechanical deformations of the optical elements of the beamlines arising from the heat load can degrade their spectral efficiencies. In order to compensate for the deformations, we have been studying the transposition of adaptive astronomical optics technology to the X-ray field. First, we considered the modifications of the spectral characteristics of a crystal induced by X-rays and established the specifications required for a technological realisation. Then, thermomechanical and technological studies were required to transpose the astronomical technology to X-ray technology. After these studies, we began the realisation of a prototype. This monochromator is composed of a silicon (111) crystal bonded on a piezo-electric structure. The mechanical control is a closed-loop system composed of an infrared light source and a Shack-Hartmann CCD wavefront analyser. This system has to compensate the deformations of the crystal in the 5 keV to 60 keV energy range with a power density of 1 watt per square millimetre. (authors)

  4. Evaluation of Optical Quality: Ocular Scattering and Aberrations in Eyes Implanted with Diffractive Multifocal or Monofocal Intraocular Lenses.

    Science.gov (United States)

    Liao, Xuan; Lin, Jia; Tian, Jing; Wen, BaiWei; Tan, QingQing; Lan, ChangJun

    2018-06-01

    To compare the objective optical quality, ocular scattering and aberrations of eyes implanted with an aspheric monofocal intraocular lens (IOL) or an aspheric apodized diffractive multifocal IOL three months after surgery. Prospective consecutive nonrandomized comparative cohort study. A total of 80 eyes from 57 cataract patients were bilaterally or unilaterally implanted with monofocal (AcrySof IQ SN60WF) or multifocal (AcrySof IQ ReSTOR SN6AD1) IOLs: 40 eyes of 27 patients were implanted with monofocal IOLs, and 40 eyes of 30 patients were implanted with multifocal IOLs. Ocular high-order aberration (HOA) values were obtained using a Hartmann-Shack aberrometer; the objective scatter index (OSI), modulation transfer function (MTF) cutoff, Strehl ratio (SR), and contrast visual acuities (OV) at 100%, 20%, and 9% were measured using the Optical Quality Analysis System II (OQAS II). Ocular aberrations were similar in both groups (p > 0.05). However, significantly higher values of OSI and lower values of MTF cutoff, SR and OV were found in the SN6AD1 group (p < 0.05). Both ocular scattering and wavefront aberrations play an essential role in retinal image quality, which may be overestimated when only aberrations are taken into account. Combining the effect of ocular scattering with HOAs will result in a more accurate assessment of visual and optical quality.

  5. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    Science.gov (United States)

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide-field microscope, using a Shack-Hartmann wavefront sensor for closed-loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the inserted Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  6. Adaptive spatial filtering for daytime satellite quantum key distribution

    Science.gov (United States)

    Gruneisen, Mark T.; Sickmiller, Brett A.; Flanagan, Michael B.; Black, James P.; Stoltenberg, Kurt E.; Duchane, Alexander W.

    2014-11-01

    The rate of secure key generation (SKG) in quantum key distribution (QKD) is adversely affected by optical noise and loss in the quantum channel. In a free-space atmospheric channel, the scattering of sunlight into the channel can lead to quantum bit error ratios (QBERs) sufficiently large to preclude SKG. Furthermore, atmospheric turbulence limits the degree to which spatial filtering can reduce sky noise without introducing signal losses. A system simulation quantifies the potential benefit of tracking and higher-order adaptive optics (AO) technologies to SKG rates in a daytime satellite engagement scenario. The simulations are performed assuming propagation from a low-Earth orbit (LEO) satellite to a terrestrial receiver that includes an AO system comprised of a Shack-Hartmann wave-front sensor (SHWFS) and a continuous-face-sheet deformable mirror (DM). The effects of atmospheric turbulence, tracking, and higher-order AO on the photon capture efficiency are simulated using statistical representations of turbulence and a time-domain wave-optics hardware emulator. Secure key generation rates are then calculated for the decoy state QKD protocol as a function of the receiver field of view (FOV) for various pointing angles. The results show that at FOVs smaller than previously considered, AO technologies can enhance SKG rates in daylight and even enable SKG where it would otherwise be prohibited as a consequence of either background optical noise or signal loss due to turbulence effects.

  7. Synchrotron X-ray adaptative monochromator: study and realization of a prototype; Monochromateur adaptatif pour rayonnement X synchrotron: etude et realisation d'un prototype

    Energy Technology Data Exchange (ETDEWEB)

    Dezoret, D.

    1995-12-12

    This work presents a study of a prototype of a synchrotron X-ray monochromator. The spectral qualities of this optic are sensitive to heat loads, which are particularly important on third-generation synchrotrons like the ESRF. Indeed, the power delivered by synchrotron beams can reach a few kilowatts, with power densities of a few tens of watts per square millimetre. The mechanical deformations of the optical elements of the beamlines arising from the heat load can degrade their spectral efficiencies. In order to compensate for the deformations, we have been studying the transposition of adaptive astronomical optics technology to the X-ray field. First, we considered the modifications of the spectral characteristics of a crystal induced by X-rays and established the specifications required for a technological realisation. Then, thermomechanical and technological studies were required to transpose the astronomical technology to X-ray technology. After these studies, we began the realisation of a prototype. This monochromator is composed of a silicon (111) crystal bonded on a piezo-electric structure. The mechanical control is a closed-loop system composed of an infrared light source and a Shack-Hartmann CCD wavefront analyser. This system has to compensate the deformations of the crystal in the 5 keV to 60 keV energy range with a power density of 1 watt per square millimetre. (authors).

  8. Measurement accuracy of a stressed contact lens during its relaxation period

    Science.gov (United States)

    Compertore, David C.; Ignatovich, Filipp V.

    2018-02-01

    We examine the dioptric power and transmitted wavefront of a contact lens as it releases its handling stresses. Handling stresses are introduced as part of the contact lens loading process and are common across all contact lens measurement procedures and systems. The latest advances in vision correction require tighter quality control during the manufacturing of contact lenses. The optical power of contact lenses is one of the critical characteristics for users. Power measurements are conducted in the hydrated state, where the lens rests inside a solution-filled glass cuvette. In a typical approach, the contact lens must be subject to long settling times prior to any measurements; alternatively, multiple measurements must be averaged. Apart from the potential operator dependency of such an approach, it is extremely time-consuming and therefore precludes higher rates of testing. Comprehensive knowledge about the settling process can be obtained by monitoring multiple parameters of the lens simultaneously. We have developed a system that combines a co-aligned Shack-Hartmann transmitted-wavefront sensor and a time-domain low-coherence interferometer to measure several optical and physical parameters (power, cylinder power, aberrations, center thickness, sagittal depth, and diameter) simultaneously. We monitor these parameters during the stress relaxation period and show correlations that can be used by manufacturers to devise methods for improved quality control procedures.

  9. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study.

    Science.gov (United States)

    Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-09-01

    Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacking denoising auto-encoders has been proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem on 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often higher than, those of the state-of-the-art segmentation methods, but with a considerable reduction of the segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Pinch instabilities in Taylor-Couette flow.

    Science.gov (United States)

    Shalybkov, Dima

    2006-01-01

    The linear stability of the dissipative Taylor-Couette flow with an azimuthal magnetic field is considered. Unlike ideal flows, the magnetic field is a fixed function of the radius with two parameters only: the ratio of inner to outer cylinder radii, eta, and the ratio of the magnetic field values on the outer and inner cylinders, muB. The magnetic field with 0 [...] rotation. For flows that are stable without a magnetic field, the unstable modes are located in some interval of axial wave numbers. The interval length is zero at a critical Hartmann number and increases with increasing Hartmann number. The critical Hartmann numbers and the lengths of the unstable axial wave number intervals are the same for every rotation law. Critical Hartmann numbers exist for the m=0 sausage and m=1 kink modes only. The sausage mode is the most unstable mode close to Ha=0 and the kink mode is the most unstable mode close to the critical Hartmann number. The transition from the sausage instability to the kink instability depends on the magnetic Prandtl number Pm; it happens close to one-half of the critical Hartmann number for Pm=1 and close to the critical Hartmann number for Pm=10^(-5). The critical Hartmann numbers are smaller for kink modes. The flow stability does not depend on the magnetic Prandtl number for the m=0 mode. The same is true of the critical Hartmann numbers for both the m=0 and m=1 modes. The typical value of the magnetic field destabilizing the liquid metal Taylor-Couette flow is approximately 10^2 G.

  11. Adaptive optics retinal imaging in the living mouse eye

    Science.gov (United States)

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.

    2012-01-01

    Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells, were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width at half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

  12. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model

    Science.gov (United States)

    Hansen, Anja; Krueger, Alexander; Ripken, Tammo

    2013-03-01

    In ophthalmic microsurgery, tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications, the irradiance distribution in the focal volume is distorted by the anterior components of the eye, causing a raised threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of the wavefront influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens modeling the eye's refractive power, a water chamber modeling the tissue properties, and a PTFE sample modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack sensor. The influence of adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements for determination of the breakdown threshold and video imaging of the focal region for study of the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. Also, an increase in irradiance at constant pulse energy was shown for the aberration-corrected case. The reduced pulse energy lowers the potential risk of collateral damage, which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.

  13. Discovery Channel Telescope active optics system early integration and test

    Science.gov (United States)

    Venetiou, Alexander J.; Bida, Thomas A.

    2012-09-01

    The Discovery Channel Telescope (DCT) is a 4.3-meter telescope with a thin meniscus primary mirror (M1) and a honeycomb secondary mirror (M2). The optical design is an f/6.1 Ritchey-Chrétien (RC) with an unvignetted 0.5° Field of View (FoV) at the Cassegrain focus. We describe the design, implementation and performance of the DCT active optics system (AOS). The DCT AOS maintains collimation and controls the figure of the mirror to provide seeing-limited images across the focal plane. To minimize observing overhead, rapid settling times are achieved using a combination of feed-forward and low-bandwidth feedback control using a wavefront sensing system. In 2011, we mounted a Shack-Hartmann wavefront sensor at the prime focus of M1, the Prime Focus Test Assembly (PFTA), to test the AOS with the wavefront sensor, and the feedback loop. The incoming wavefront is decomposed using Zernike polynomials, and the mirror figure is corrected with a set of bending modes. Components of the system that we tested and tuned included the Zernike to Bending Mode transformations. We also started open-loop feed-forward coefficients determination. In early 2012, the PFTA was replaced by M2, and the wavefront sensor moved to its normal location on the Cassegrain instrument assembly. We present early open loop wavefront test results with the full optical system and instrument cube, along with refinements to the overall control loop operating at RC Cassegrain focus.

  14. In Vitro Aberrometric Assessment of a Multifocal Intraocular Lens and Two Extended Depth of Focus IOLs

    Directory of Open Access Journals (Sweden)

    Vicente J. Camps

    2017-01-01

    Full Text Available Purpose. To analyze the "in vitro" aberrometric pattern of a refractive IOL and two extended depth of focus IOLs. Methods. A special optical bench with a Shack-Hartmann wavefront sensor (SH) was designed for the measurement. Three presbyopia-correcting IOLs were analyzed: Mini WELL (MW), TECNIS Symfony ZXR00 (SYM), and Lentis Mplus X LS-313 MF30 (MP). Three different pupil sizes were used for the comparison: 3, 4, and 4.7 mm. Results. The MW generated negative primary and positive secondary spherical aberrations (SA) for the apertures of 3 mm (−0.13 and +0.12 μm), 4 mm (−0.12 and +0.08 μm), and 4.7 mm (−0.11 and +0.08 μm), while the SYM only generated negative primary SA for the 4 and 4.7 mm apertures (−0.12 μm and −0.20 μm, resp.). The MP induced coma and trefoil for all pupils and showed significant HOAs for apertures of 4 and 4.7 mm. Conclusions. On an optical bench, the MW induces negative primary and positive secondary SA for all pupils. The SYM aberrations seem to be pupil dependent; it does not produce negative primary SA at 3 mm, but this increases for larger pupils. Meanwhile, the HOAs for the MW and SYM were not significant. The MP showed the highest HOAs in all cases.
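
    For reference, primary and secondary spherical aberration can be estimated from a measured wavefront map by a least-squares Zernike fit; a Python/NumPy sketch on a synthetic pupil using the Noll-normalized Z(4,0) and Z(6,0) radial polynomials:

      import numpy as np

      def fit_spherical_aberration(wavefront, rho, mask):
          # Fit piston, primary SA Z(4,0) and secondary SA Z(6,0) on the
          # unit pupil; returns the two SA coefficients (input units).
          z40 = np.sqrt(5.0) * (6 * rho**4 - 6 * rho**2 + 1)
          z60 = np.sqrt(7.0) * (20 * rho**6 - 30 * rho**4 + 12 * rho**2 - 1)
          A = np.column_stack([np.ones(mask.sum()), z40[mask], z60[mask]])
          coef, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
          return coef[1], coef[2]

      yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
      rho = np.hypot(xx, yy)
      mask = rho <= 1.0
      # Synthetic wavefront with -0.13 um of primary SA, as a sanity check.
      wf = -0.13 * np.sqrt(5.0) * (6 * rho**4 - 6 * rho**2 + 1)
      print(fit_spherical_aberration(wf, rho, mask))   # ~(-0.13, 0.0)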

  15. Robust adaptive optics systems for vision science

    Science.gov (United States)

    Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.

    2018-02-01

    Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires control and imaging systems that are resilient to scattering and occlusion from the cornea and lens, as well as to irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality across the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror, a MEMS mirror, and a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil, and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality that includes both photoreceptors and blood vessels.
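
    Per-lenslet quality gating of the kind described can be sketched in a few lines. The metric below (peak-to-total energy of each sub-aperture spot) is an illustrative assumption; the actual metric is not specified in the abstract:

```python
import numpy as np

def valid_lenslets(spots, threshold=0.3):
    """Flag lenslets whose spot-quality metric passes the threshold.

    spots : (n_lenslets, h, w) stack of Shack-Hartmann sub-images
    """
    quality = []
    for s in spots:
        total = s.sum()
        peak = s.max()
        # Simple metric: peak-to-total energy ratio; occluded or scattered
        # spots spread their energy and score low.
        quality.append(peak / total if total > 0 else 0.0)
    return np.asarray(quality) > threshold

# Only slopes from gated lenslets would then feed the reconstructor,
# e.g. slopes[valid_lenslets(spots)].
```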

  16. Non-linear multivariate and multiscale monitoring and signal denoising strategy using Kernel Principal Component Analysis combined with Ensemble Empirical Mode Decomposition method

    Science.gov (United States)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2011-10-01

    The article presents a novel non-linear multivariate and multiscale statistical process monitoring and signal denoising method which combines the strengths of the Kernel Principal Component Analysis (KPCA) non-linear multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD) in handling multiscale system dynamics. The proposed method, which enables us to cope with complex, even severely non-linear systems with a wide dynamic range, was named EEMD-based multiscale KPCA (EEMD-MSKPCA). The method is quite general in nature and could be used in different areas for various tasks, even without any really deep understanding of the nature of the system under consideration. Its efficiency was first demonstrated by an illustrative example, after which its applicability to the tasks of bearing fault detection, diagnosis and signal denoising was tested on simulated as well as actual vibration and acoustic emission (AE) signals measured on a purpose-built large-size low-speed bearing test stand. The positive results obtained indicate that the proposed EEMD-MSKPCA method provides a promising tool for tackling non-linear multiscale data which present a convolved picture of many events occupying different regions in the time-frequency plane.
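
    The pairing of the two techniques can be pictured as: decompose each channel into IMFs with EEMD, then fit one KPCA model per scale. A minimal sketch, assuming the PyEMD (EMD-signal) and scikit-learn packages; monitoring statistics and control limits are omitted:

```python
import numpy as np
from PyEMD import EEMD                      # pip install EMD-signal
from sklearn.decomposition import KernelPCA

def eemd_mskpca(signals, n_components=2):
    """Sketch: one KPCA model per EEMD scale.

    signals : (n_samples, n_channels) multivariate measurement matrix
    """
    eemd = EEMD()
    # Decompose every channel into intrinsic mode functions (IMFs).
    imfs = [eemd.eemd(signals[:, c]) for c in range(signals.shape[1])]
    n_scales = min(im.shape[0] for im in imfs)
    models, scores = [], []
    for k in range(n_scales):
        # Multivariate matrix of scale-k IMFs across all channels.
        Xk = np.column_stack([im[k] for im in imfs])
        kpca = KernelPCA(n_components=n_components, kernel="rbf")
        scores.append(kpca.fit_transform(Xk))
        models.append(kpca)
    return models, scores
```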

  17. On an efficient modification of singular value decomposition using independent component analysis for improved MRS denoising and quantification

    International Nuclear Information System (INIS)

    Stamatopoulos, V G; Karras, D A; Mertzios, B G

    2009-01-01

    An efficient modification of singular value decomposition (SVD) is proposed in this paper, aiming at denoising and, more importantly, at quantifying more accurately the statistically independent spectra of metabolite sources in magnetic resonance spectroscopy (MRS). SVD is known in MRS applications, and several efficient algorithms exist for estimating the SVD summation terms in which the raw MRS data are analyzed; however, such an analysis would benefit from techniques able to estimate statistically independent spectra. SVD is known to separate signal and noise subspaces, but it assumes orthogonality for the components comprising the signal subspace, which is not always the case and might impose heavy constraints in the MRS case. A much more relaxed constraint is to assume statistically independent components. Therefore, a modification of the main methodology incorporating techniques for calculating the assumed statistically independent spectra is proposed, by applying SVD to the MRS spectrogram obtained through the short-time Fourier transform (STFT). This approach combines SVD on the STFT spectrogram with an iterative application of independent component analysis (ICA). Moreover, it is shown that the proposed methodology combined with a regression analysis leads to improved quantification of the MRS signals. An experimental study based on synthetic MRS signals has been conducted to evaluate the proposed methodologies. The results obtained have been discussed and shown to be quite promising.
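
    The pipeline (STFT, then SVD, then ICA) can be sketched compactly. The following is a rough illustration under assumed parameters, not the authors' implementation; `fid` is a 1-D MRS free-induction decay:

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import FastICA

def svd_ica_spectra(fid, n_sources=3, nperseg=128):
    """Sketch: SVD of the STFT spectrogram followed by ICA."""
    _, _, Zxx = stft(fid, nperseg=nperseg)          # time-frequency map
    # SVD separates the signal subspace from the noise subspace.
    U, s, _ = np.linalg.svd(np.abs(Zxx), full_matrices=False)
    signal_subspace = U[:, :n_sources] * s[:n_sources]
    # ICA relaxes SVD's orthogonality assumption to statistical independence.
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(signal_subspace)       # independent spectra
```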

  18. A hybrid wavelet de-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series.

    Science.gov (United States)

    Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-01-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) combined with Rank-Set Pair Analysis (RSPA), that takes full advantage of both techniques to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP - error Back Propagation, MLP - Multilayer Perceptron and RBF - Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate model performance. Compared to the three other generic methods, the results generated by the WD-RSPA model invariably showed smaller error measures, meaning that the forecasting capability of the WD-RSPA model is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.
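
    The WD stage is typically a standard wavelet-threshold filter. A minimal sketch with PyWavelets, assuming a db4 wavelet and the universal soft threshold (the paper's exact wavelet and threshold rule are not given in the abstract):

```python
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    """Soft-threshold wavelet de-noising of a 1-D time series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise level from the finest-scale detail coefficients (robust MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

# The denoised series would then be forecast by RSPA in the hybrid scheme.
```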

  19. Two-year results of the randomized clinical trial DILALA comparing laparoscopic lavage with resection as treatment for perforated diverticulitis

    DEFF Research Database (Denmark)

    Kohl, A; Rosenberg, J; Bock, D

    2018-01-01

    BACKGROUND: Traditionally, perforated diverticulitis with purulent peritonitis was treated with resection and colostomy (Hartmann's procedure), with inherent complications and risk of a permanent stoma. The DILALA (DIverticulitis - LAparoscopic LAvage versus resection (Hartmann's procedure...... in the Hartmann's group had a colostomy at 24 months. CONCLUSION: Laparoscopic lavage is a better option for perforated diverticulitis with purulent peritonitis than open resection and colostomy....

  20. Wavefront-guided refractive surgery results of training surgeons

    Directory of Open Access Journals (Sweden)

    Iane Stillitano

    2010-08-01

    Full Text Available PURPOSE: To assess clinical outcomes and changes in higher-order aberrations (HOA) after wavefront-guided laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) for correction of myopia and myopic astigmatism performed by surgeons in training. METHODS: One hundred and seventy patients had customized LASIK (207 eyes) or PRK (103 eyes) performed by surgeons in training using the LADARVision 4000 (Alcon, Fort Worth, TX). Preoperative and 1-, 3-, 6- and 12-month postoperative data on spherical equivalent (SE), best spectacle-corrected visual acuity (BSCVA) and uncorrected visual acuity (UCVA) were analysed. Wavefront changes were determined using the LADARWave Hartmann-Shack wavefront aberrometer, with the pupil size scaled to 6.5 mm. RESULTS: The mean SE was -3.04 ± 1.07 D in the LASIK group and -1.60 ± 0.59 D in the PRK group. At the 1-year follow-up, 80.6% (LASIK) and 66.7% (PRK) of eyes were within ± 0.50 D of the intended refraction. UCVA was 20/20 or better in 58.1% (LASIK) and 66.7% (PRK) of the operated eyes. A statistically significant positive correlation was found between achieved and attempted refractive correction in both groups (LASIK: r = 0.975).

  1. An Adaptive Particle Weighting Strategy for ECG Denoising Using Marginalized Particle Extended Kalman Filter: An Evaluation in Arrhythmia Contexts.

    Science.gov (United States)

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-11-01

    Model-based Bayesian frameworks have a common problem in processing electrocardiogram (ECG) signals with sudden morphological changes. This situation often arises in arrhythmias, where ECGs do not obey the predefined state models. To solve this problem, a model-based Bayesian denoising framework is proposed in this paper using a marginalized particle-extended Kalman filter (MP-EKF), variational mode decomposition, and a novel fuzzy-based adaptive particle weighting strategy. This strategy helps the MP-EKF perform well even when the morphology of the signal does not comply with the predefined dynamic model. In addition, the strategy adapts the MP-EKF's behavior to the acquired measurements at different input signal-to-noise ratios (SNRs). At low input SNRs, it decreases the particles' trust in the measurements while increasing their trust in a synthetic ECG constructed from the feature parameters of the ECG dynamic model. At high input SNRs, the particles' trust in the measurements is increased and the trust in the synthetic ECG is decreased. The proposed method was evaluated on the MIT-BIH normal sinus rhythm database and compared with EKF/EKS frameworks and the previously proposed MP-EKF. It was also evaluated on ECG segments extracted from the MIT-BIH arrhythmia database, which contained ventricular and atrial arrhythmia. The results showed that the proposed algorithm had a noticeable superiority over benchmark methods in terms of both SNR improvement and multiscale entropy-based weighted distortion (MSEWPRD) at low input SNRs.

  2. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
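
    One way to picture the two-level structure is a cheap orthonormal projection followed by greedy coding of the residual in the learned atoms. A minimal Python sketch of this interpretation (the split below is inferred from the abstract, not taken from the paper's algorithm):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def two_level_sparse_code(patch, ortho_basis, learned_atoms, n_nonzero=8):
    """patch: (d,) signal; ortho_basis: (d, d) orthonormal;
    learned_atoms: (d, n_atoms) trained dictionary."""
    # Level 1: coefficients in the orthonormal basis are a direct projection.
    top_coeffs = ortho_basis.T @ patch
    # Keep only the strongest top-level components as a coarse approximation.
    keep = np.argsort(np.abs(top_coeffs))[-n_nonzero:]
    coarse = ortho_basis[:, keep] @ top_coeffs[keep]
    # Level 2: greedy OMP on the residual using the learned atoms.
    residual = patch - coarse
    fine_coeffs = orthogonal_mp(learned_atoms, residual,
                                n_nonzero_coefs=n_nonzero)
    return keep, top_coeffs[keep], fine_coeffs
```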

  3. Taylor-Couette flow stability with toroidal magnetic field

    International Nuclear Information System (INIS)

    Shalybkov, D

    2005-01-01

    The linear stability of dissipative Taylor-Couette flow with an imposed azimuthal magnetic field is considered. Unlike the ideal-flow case, the magnetic field is a fixed function of radius with only two parameters: the ratio of inner to outer cylinder radii and the ratio of the magnetic field values at the outer and inner cylinders. A magnetic field with a boundary-value ratio greater than zero and smaller than the inverse radius ratio always stabilizes the flow and is called a stable magnetic field below. The current-free magnetic field is such a stable field. An unstable magnetic field destabilizes every flow once the magnetic field (or Hartmann number) exceeds some critical value. This instability survives even without rotation (at zero Reynolds number). For flow that is stable without the magnetic field, the unstable modes are located in some interval of vertical wave numbers. The interval length is zero at the critical Hartmann number and increases with increasing Hartmann number. The critical Hartmann numbers and the length of the unstable vertical wave number interval are the same for every rotation law. Critical Hartmann numbers exist only for the m = 0 sausage and m = 1 kink modes. The critical Hartmann numbers are smaller for the kink mode, making it the most unstable mode, as in the pinch-instability case. The flow stability does not depend on the magnetic Prandtl number for the m = 0 mode; the same is true of the critical Hartmann numbers for the m = 0 and m = 1 modes. The typical magnetic field destabilizing a liquid-metal Taylor-Couette flow is of the order of 100 Gauss.

  4. Linear systems formulation of scattering theory for rough surfaces with arbitrary incident and scattering angles.

    Science.gov (United States)

    Krywonos, Andrey; Harvey, James E; Choi, Narak

    2011-06-01

    Scattering effects from microtopographic surface roughness are merely nonparaxial diffraction phenomena resulting from random phase variations in the reflected or transmitted wavefront. Rayleigh-Rice, Beckmann-Kirchhoff, or Harvey-Shack surface scatter theories are commonly used to predict surface scatter effects. Smooth-surface and/or paraxial approximations have severely limited the range of applicability of each of the above theoretical treatments. A recent linear systems formulation of nonparaxial scalar diffraction theory applied to surface scatter phenomena resulted first in an empirically modified Beckmann-Kirchhoff surface scatter model, then in a generalized Harvey-Shack theory that produces accurate results for rougher surfaces than the Rayleigh-Rice theory and for larger incident and scattered angles than the classical Beckmann-Kirchhoff and the original Harvey-Shack theories. These new developments simplify the analysis and understanding of nonintuitive scattering behavior from rough surfaces illuminated at arbitrary incident angles.

  5. Measuring higher order optical aberrations of the human eye: techniques and applications

    Directory of Open Access Journals (Sweden)

    L. Alberto V. Carvalho

    2002-11-01

    Full Text Available In the present paper we discuss the development of "wave-front", an instrument for determining the lower and higher order optical aberrations of the human eye. We also discuss the advantages that such instrumentation and techniques might bring to the ophthalmology professional of the 21st century. By shining a small light spot on the retina of subjects and observing the light that is reflected back from within the eye, we are able to quantitatively determine the amount of lower order aberrations (astigmatism, myopia, hyperopia) and higher order aberrations (coma, spherical aberration, etc.). We have measured artificial eyes with calibrated ametropia ranging from +5 to -5 D, with and without 2 D astigmatism with axis at 45º and 90º. We used a device known as the Hartmann-Shack (HS) sensor, originally developed for measuring the optical aberrations of optical instruments and general refracting surfaces in astronomical telescopes. The HS sensor sends information to computer software for decomposition of wave-front aberrations into a set of Zernike polynomials. These polynomials have special mathematical properties and are more suitable in this case than the traditional Seidel polynomials. We have demonstrated that this technique is more precise than conventional autorefraction, with a root mean square error (RMSE) of less than 0.1 µm for a 4-mm diameter pupil. In terms of dioptric power this represents an RMSE of less than 0.04 D and 5º for the axis. This precision is sufficient for customized corneal ablations, among other applications.
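
    The decomposition step is conventionally a linear least-squares fit of Zernike derivative terms to the measured spot displacements. A generic sketch of that standard HS reconstruction (not the authors' specific code):

```python
import numpy as np

def fit_zernike(slopes_x, slopes_y, dzdx, dzdy):
    """Least-squares Zernike fit from Shack-Hartmann slopes.

    slopes_x, slopes_y : (n_lenslets,) measured wavefront slopes
    dzdx, dzdy         : (n_lenslets, n_modes) Zernike x/y derivatives
                         evaluated at each lenslet centre
    Returns the Zernike coefficient vector minimizing the slope residual.
    """
    A = np.vstack([dzdx, dzdy])              # stack x and y slope equations
    b = np.concatenate([slopes_x, slopes_y])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```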

  6. Poster Presentation: Optical Test of NGST Developmental Mirrors

    Science.gov (United States)

    Hadaway, James B.; Geary, Joseph; Reardon, Patrick; Peters, Bruce; Keidel, John; Chavers, Greg

    2000-01-01

    An Optical Testing System (OTS) has been developed to measure the figure and radius of curvature of NGST developmental mirrors in the vacuum, cryogenic environment of the X-Ray Calibration Facility (XRCF) at Marshall Space Flight Center (MSFC). The OTS consists of a WaveScope Shack-Hartmann sensor from Adaptive Optics Associates as the main instrument, a Point Diffraction Interferometer (PDI), a Point Spread Function (PSF) imager, an alignment system, a Leica Disto Pro distance measurement instrument, and a laser source palette (632.8 nm wavelength) that is fiber-coupled to the sensor instruments. All of the instruments except the laser source palette are located on a single breadboard known as the Wavefront Sensor Pallet (WSP). The WSP sits on top of a 5-DOF motion system located at the center of curvature of the test mirror. Two PCs are used to control the OTS. The error in the figure measurement is dominated by the WaveScope's measurement error. An analysis using the absolute wavefront gradient error of 1/50 wave P-V (at 0.6328 microns) provided by the manufacturer leads to a total surface figure measurement error of approximately 1/100 wave rms. This easily meets the requirement of 1/10 wave P-V. The error in radius of curvature is dominated by the Leica's absolute measurement error of ±1.5 mm and the focus setting error of ±1.4 mm, giving an overall error of ±2 mm. The OTS is currently being used to test the NGST Mirror System Demonstrators (NMSDs) and the Subscale Beryllium Mirror Demonstrator (SBMD).
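
    The quoted radius-of-curvature total is consistent with the two independent error sources combining in quadrature (an assumption about the underlying arithmetic, not stated in the abstract):

\[
\sigma_R = \sqrt{\sigma_{\mathrm{Leica}}^2 + \sigma_{\mathrm{focus}}^2}
         = \sqrt{(1.5\,\mathrm{mm})^2 + (1.4\,\mathrm{mm})^2}
         \approx 2.1\,\mathrm{mm} \approx \pm 2\,\mathrm{mm}.
\]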

  7. NASA Tech Briefs, October 2009

    Science.gov (United States)

    2009-01-01

    Topics covered include: Light-Driven Polymeric Bimorph Actuators; Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm; Cloud Water Content Sensor for Sounding Balloons and Small UAVs; Pixelized Device Control Actuators for Large Adaptive Optics; T-Slide Linear Actuators; G4FET Implementations of Some Logic Circuits; Electrically Variable or Programmable Nonvolatile Capacitors; System for Automated Calibration of Vector Modulators; Complementary Paired G4FETs as Voltage-Controlled NDR Device; Three MMIC Amplifiers for the 120-to-200 GHz Frequency Band; Low-Noise MMIC Amplifiers for 120 to 180 GHz; Using Ozone To Clean and Passivate Oxygen-Handling Hardware; Metal Standards for Waveguide Characterization of Materials; Two-Piece Screens for Decontaminating Granular Material; Mercuric Iodide Anticoincidence Shield for Gamma-Ray Spectrometer; Improved Method of Design for Folding Inflatable Shells; Ultra-Large Solar Sail; Cooperative Three-Robot System for Traversing Steep Slopes; Assemblies of Conformal Tanks; Microfluidic Pumps Containing Teflon[Trademark] AF Diaphragms; Transparent Conveyor of Dielectric Liquids or Particles; Multi-Cone Model for Estimating GPS Ionospheric Delays; High-Sensitivity GaN Microchemical Sensors; On the Divergence of the Velocity Vector in Real-Gas Flow; Progress Toward a Compact, Highly Stable Ion Clock; Instruments for Imaging from Far to Near; Reflectors Made from Membranes Stretched Between Beams; Integrated Risk and Knowledge Management Program -- IRKM-P; LDPC Codes with Minimum Distance Proportional to Block Size; Constructing LDPC Codes from Loop-Free Encoding Modules; MMICs with Radial Probe Transitions to Waveguides; Tests of Low-Noise MMIC Amplifier Module at 290 to 340 GHz; and Extending Newtonian Dynamics to Include Stochastic Processes.

  8. On- and off-eye spherical aberration of soft contact lenses and consequent changes of effective lens power.

    Science.gov (United States)

    Dietze, Holger H; Cox, Michael J

    2003-02-01

    Soft contact lenses produce a significant level of spherical aberration affecting their power on-eye. A simple model assuming that a thin soft contact lens aligns to the cornea predicts that these effects are similar on-eye and off-eye. The wavefront aberration for 17 eyes and 33 soft contact lenses on-eye was measured with a Shack-Hartmann wavefront sensor. The Zernike coefficients describing the on-eye spherical aberration of the soft contact lens were compared with off-eye ray-tracing results. Paraxial and effective lens power changes were determined. The model predicts the on-eye spherical aberration of soft contact lenses closely. The resulting power change for a +/- 7.00 D spherical soft contact lens is +/- 0.5 D for a 6-mm pupil diameter and +/- 0.1 D for a 3-mm pupil diameter. Power change is negligible for soft contact lenses corrected for off-eye spherical aberration. For thin soft contact lenses, the level of spherical aberration and the consequent power change is similar on-eye and off-eye. Soft contact lenses corrected for spherical aberration in air will be expected to be aberration-free on-eye and produce only negligibly small power changes. For soft contact lenses without aberration correction, for higher levels of ametropia and large pupils, the soft contact lens power should be determined with trial lenses with their power and p value similar to the prescribed lens. The benefit of soft contact lenses corrected for spherical aberration depends on the level of ocular spherical aberration.

  9. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    Science.gov (United States)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement is because Shack Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky leading to the use of laser guide stars which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi CCD imaging camera. We present the current optical design of the instrument including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  10. High-QE fast-readout wavefront sensor with analog phase reconstruction

    Science.gov (United States)

    Baker, Jeffrey T.; Loos, Gary C.; Restaino, Sergio R.; Percheron, Isabelle; Finkner, Lyle G.

    1998-09-01

    The contradiction inherent in high temporal bandwidth adaptive optics wavefront sensing at low light levels (LLL) has driven many researchers to consider the use of high-bandwidth, high quantum efficiency (QE) CCD cameras with the lowest possible readout noise levels. Unfortunately, the performance of these relatively expensive and low production volume devices in the photon counting regime is inevitably limited by readout noise, no matter how arbitrarily close to zero that specification may be reduced. Our alternative approach is to optically couple a new and relatively inexpensive Ultra Blue Gen III image intensifier to an also relatively inexpensive high-bandwidth CCD camera with only moderate QE and high read noise. The result is a high-bandwidth, broad spectral response image intensifier with a gain of 55,000 at 560 nm. Use of an appropriately selected lenslet array together with coupling optics generates 16 × 16 Shack-Hartmann type subapertures on the image intensifier photocathode, which is imaged onto the fast CCD camera. An integral A/D converter in the camera sends the image data pixel by pixel to a computer data acquisition system for analysis, storage and display. Timing signals are used to decode which pixel is being read out, and the wavefront is calculated in an analog fashion using a least-squares fit to both x and y tilt data for all wavefront sensor subapertures. Finally, we present system-level performance comparisons of these new-concept wavefront sensors versus the more standard low-noise CCD camera based designs in the low-light-level limit.

  11. Complex diffusion process for noise reduction

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Barari, A.

    2014-01-01

    The successful application of partial differential equations (PDEs) in image restoration and de-noising prompted many researchers to search for an improvement in the technique. In this paper, a new method is presented for signal de-noising, based on PDEs and Schrodinger equations, named the complex diffusion process (CDP). This method assumes that variations...... for signal de-noising. To evaluate the performance of the proposed method, a number of experiments have been performed using sinusoid, multi-component and FM signals corrupted by noise. The results indicate that the proposed method outperforms prior-art approaches for signal de-noising.
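
    A toy illustration of one nonlinear complex diffusion step in the spirit of such schemes (the exact discretization and parameters are not given in the abstract; the periodic boundary and the Gilboa-style coefficient below are assumptions):

```python
import numpy as np

def complex_diffusion(signal, steps=50, dt=0.1, theta=np.pi / 30):
    """Toy nonlinear complex diffusion of a 1-D signal."""
    u = signal.astype(complex)
    d = np.exp(1j * theta)                   # complex diffusion coefficient
    for _ in range(steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # periodic Laplacian
        # The imaginary part of u behaves like a smoothed edge detector,
        # slowing diffusion near sharp features.
        u = u + dt * d * lap / (1 + (u.imag / theta) ** 2)
    return u.real
```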

  12. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    Science.gov (United States)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution, and detection efficiency. In recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. The data polluted by electromagnetic noise are identified within each window based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitting results. Thus the non-stationary electromagnetic noise can be effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that in the GREATEM signal, stationary white noise and non-stationary electromagnetic noise can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
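
    The window-wise exponential fitting step can be sketched with a standard nonlinear least-squares fit; the window segmentation and the energy-detection test are omitted, and the single-exponential model is an assumption consistent with the transient decay described:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a, b):
    return a * np.exp(-b * t)

def fit_window(t_win, data_win):
    """Fit the transient decay within one logarithmically spaced window."""
    p0 = (data_win[0], 1.0)                  # crude initial guess
    params, _ = curve_fit(exp_decay, t_win, data_win, p0=p0, maxfev=2000)
    return exp_decay(t_win, *params)         # replacement for polluted samples
```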

  13. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    Science.gov (United States)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, however, the noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but rather correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks, where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of the use of this iterative denoising approach, and the validity and quality of the image enhancement filter were evaluated in a benchmarking experiment with noise footprints of varying levels of correlation and residual artefacts, extracted from real tomography reconstructions. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise-constant signals.
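
    The "several weak subtasks" idea can be sketched with off-the-shelf scikit-image tools; the iteration count and the deliberately small filtering strength h relative to the noise estimate are illustrative assumptions:

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def iterative_nlm(image, n_iter=4):
    """Iterative weak nonlocal-means denoising driven by a noise estimate."""
    out = image.astype(np.float32)
    for _ in range(n_iter):
        sigma = estimate_sigma(out)
        # h well below sigma gives a weak denoising pass, so texture
        # removal stays controlled across iterations.
        out = denoise_nl_means(out, h=0.6 * sigma, sigma=sigma,
                               patch_size=5, patch_distance=6,
                               fast_mode=True)
    return out
```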

  14. Working memory load-dependent spatio-temporal activity of single-trial P3 response detected with an adaptive wavelet denoiser.

    Science.gov (United States)

    Zhang, Qiushi; Yang, Xueqian; Yao, Li; Zhao, Xiaojie

    2017-03-27

    Working memory (WM) refers to the holding and manipulation of information during cognitive tasks. Its underlying neural mechanisms have been explored through both functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Trial-by-trial coupling of simultaneously collected EEG and fMRI signals has become an important and promising approach to study the spatio-temporal dynamics of such cognitive processes. Previous studies have demonstrated a modulation effect of the WM load on both the BOLD response in certain brain areas and the amplitude of P3. However, much remains to be explored regarding the WM load-dependent relationship between the amplitude of ERP components and cortical activities, and the low signal-to-noise ratio (SNR) of the EEG signal still poses a challenge to performing single-trial analyses. In this paper, we investigated the spatio-temporal activities of P3 during an n-back verbal WM task by introducing an adaptive wavelet denoiser into the extraction of single-trial P3 features and using general linear model (GLM) to integrate simultaneously collected EEG and fMRI data. Our results replicated the modulation effect of the WM load on the P3 amplitude. Additionally, the activation of single-trial P3 amplitudes was detected in multiple brain regions, including the insula, the cuneus, the lingual gyrus (LG), and the middle occipital gyrus (MOG). Moreover, we found significant correlations between P3 features and behavioral performance. These findings suggest that the single-trial integration of simultaneous EEG and fMRI signals may provide new insights into classical cognitive functions. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Numerical analysis of liquid metal MHD flows through circular pipes based on a fully developed modeling

    International Nuclear Information System (INIS)

    Zhang, Xiujie; Pan, Chuanjie; Xu, Zengyu

    2013-01-01

    Highlights: ► A 2D MHD code based on a fully developed flow model is developed and validated against Samad's analytical results. ► Results on the MHD effect in liquid metal flow through circular pipes at high Hartmann numbers are given. ► An M-type velocity profile is observed for MHD circular pipe flow at high wall conductance ratio. ► Non-uniform wall electrical conductivity leads to a high jet velocity in the Roberts layers. -- Abstract: Magnetohydrodynamic (MHD) laminar flows through circular pipes are studied in this paper by numerical simulation for Hartmann numbers from 18 to 10000. The code is based on a fully developed flow model and is validated against Samad's analytical solution and Chang's asymptotic results. After the code validation, the numerical simulation is extended to high Hartmann numbers for MHD circular pipe flows with conducting walls, and numerical results such as the velocity distribution and the MHD pressure gradient are obtained. A typical M-type velocity profile is observed, but without as large a velocity jet as in MHD rectangular duct flows, even at high Hartmann numbers and large wall conductance ratios. The over-speed region in the Roberts layers becomes smaller as the Hartmann number increases. When the Hartmann number is fixed and the wall conductance ratio is varied, the dimensionless velocity profiles pass through one point, in agreement with Samad's results; the locus of the maximum of the velocity jet is unchanged, and the wall conductance ratio affects only the maximum value of the jet. When the Roberts walls are treated as insulating and the Hartmann walls as conducting, circular pipe MHD flows show a large velocity jet, as in Hunt's case 2 for MHD rectangular duct flows.

  16. Numerical Investigation of the Effect of Magnetic Field on Natural Convection in a Curved-Shape Enclosure

    Directory of Open Access Journals (Sweden)

    M. Sheikholeslami

    2013-01-01

    Full Text Available This investigation reports the magnetic field effect on natural convection heat transfer in a curved-shape enclosure. The numerical investigation is carried out using the control volume-based-finite element method (CVFEM. The numerical investigations are performed for various values of Hartmann number and Rayleigh number. The obtained results are depicted in terms of streamlines and isotherms which show the significant effects of Hartmann number on the fluid flow and temperature distribution inside the enclosure. Also, it was found that the Nusselt number decreases with an increase in the Hartmann number.

  17. Random noise suppression of seismic data using non-local Bayes algorithm

    Science.gov (United States)

    Chang, De-Kuan; Yang, Wu-Yang; Wang, Yi-Hui; Yang, Qing; Wei, Xin-Jian; Feng, Xiao-Ying

    2018-02-01

    For random noise suppression of seismic data, we present a non-local Bayes (NL-Bayes) filtering algorithm. The NL-Bayes algorithm uses a Gaussian patch model instead of the weighted average of all similar patches used in the NL-means algorithm, reducing the blurring of structural details and thereby improving the denoising performance. In the denoising process of seismic data, the size and the number of patches in the Gaussian model are adaptively calculated according to the standard deviation of the noise. The NL-Bayes algorithm requires two iterations to complete seismic data denoising; the second iteration uses the denoised seismic data from the first iteration to calculate better estimates of the mean and covariance of the patch Gaussian model, improving the similarity of patches and achieving the purpose of denoising. Tests with synthetic and real data sets demonstrate that the NL-Bayes algorithm can effectively improve the SNR and preserve the fidelity of seismic data.
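
    The core of NL-Bayes is a Wiener-type shrinkage of each group of similar patches under a Gaussian model. A minimal sketch of that single step (patch grouping, adaptive sizing, and aggregation are omitted):

```python
import numpy as np

def nlbayes_restore(patch_group, sigma):
    """Restore one group of similar noisy patches.

    patch_group : (n_patches, patch_dim) stack of similar patches
    sigma       : noise standard deviation
    """
    mean = patch_group.mean(axis=0)
    centered = patch_group - mean
    cov = centered.T @ centered / (len(patch_group) - 1)
    # Wiener-type shrinkage: apply C (C + sigma^2 I)^-1 to each patch.
    gain = cov @ np.linalg.inv(cov + sigma**2 * np.eye(cov.shape[0]))
    return mean + centered @ gain.T
```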

  18. Higher-order aberrations and best-corrected visual acuity in Native American children with a high prevalence of astigmatism.

    Science.gov (United States)

    Miller, Joseph M; Harvey, Erin M; Schwiegerling, Jim

    2015-08-01

    To determine whether higher-order aberrations (HOAs) in children from a highly astigmatic population differ from population norms and whether HOAs are associated with astigmatism and reduced best-corrected visual acuity. Subjects were 218 Tohono O'odham Native American children 5-9 years of age. Noncycloplegic HOA measurements were obtained with a handheld Shack-Hartmann sensor (SHS). Signed (z06s to z14s) and unsigned (z06u to z14u) wavefront aberration Zernike coefficients Z(3,-3) to Z(4,4) were rescaled for a 4 mm diameter pupil and compared to adult population norms. Cycloplegic refraction and best-corrected logMAR letter visual acuity (BCVA) were also measured. Regression analyses assessed the contribution of astigmatism (J0) and HOAs to BCVA. The mean root-mean-square (RMS) HOA of 0.191 ± 0.072 μm was significantly greater than population norms (0.100 ± 0.044 μm). All unsigned HOA coefficients (z06u to z14u) and all signed coefficients except z09s, z10s, and z11s were significantly larger than population norms. Decreased BCVA was associated with astigmatism (J0) and spherical aberration (z12u) but not RMS coma, with the effect of J0 about 4 times as great as z12u. Tohono O'odham children show elevated HOAs compared to population norms. Astigmatism and unsigned spherical aberration are associated with decreased acuity, but the effects of spherical aberration are minimal and not clinically significant. Copyright © 2015 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  19. Increasing the field of view of adaptive optics scanning laser ophthalmoscopy.

    Science.gov (United States)

    Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K; Pircher, Michael

    2017-11-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) set-up with two deformable mirrors (DM) is presented. It allows high resolution imaging of the retina over a 4°×4° field of view (FoV), considering a 7 mm pupil diameter at the entrance of the eye. Imaging over such a FoV, larger than in classical AO-SLO instruments, is enabled by the use of the two DMs. The first DM is located in a plane that is conjugate to the pupil of the eye and corrects for aberrations that are constant across the FoV. The second DM is conjugate to a plane located ∼0.7 mm anterior to the retina. This DM corrects for anisoplanatism effects within the FoV. The control of the DMs combines the classical AO technique, using a Shack-Hartmann wave-front sensor, with sensorless AO, which uses a criterion characterizing the image quality. The retinas of four healthy volunteers were imaged in vivo with the developed instrument. In order to assess the performance of the set-up and to demonstrate the benefits of the two-DM configuration, the acquired images were compared with images taken in conventional conditions, over a smaller FoV and with only one DM. Moreover, an image of a larger patch of the retina was obtained by stitching 9 images acquired with a 4°×4° FoV, resulting in a total FoV of 10°×10°. Finally, different retinal layers were imaged by shifting the focal plane.

  20. Simulation of the Effect of Different Presbyopia-Correcting Intraocular Lenses With Eyes With Previous Laser Refractive Surgery.

    Science.gov (United States)

    Camps, Vicente J; Miret, Juan J; García, Celia; Tolosa, Angel; Piñero, David P

    2018-04-01

    To simulate the optical performance of three presbyopia-correcting intraocular lenses (IOLs) implanted in eyes with previous laser refractive surgery. A simulation of the through-focus modulation transfer function (MTF) was performed for three presbyopia-correcting IOLs (Mplus, Oculentis GmbH, Berlin, Germany; Symfony, Johnson & Johnson Vision, Santa Ana, CA; and Mini Well, SIFI S.p.A., Lavinaio, Italy) in one eye with previous myopic LASIK and another with hyperopic LASIK. Real topographic data and the wavefront aberration profile of each IOL obtained with a Hartmann-Shack sensor were used. In the eye with myopic LASIK, all IOLs lost optical quality at near and intermediate distances for 4- and 4.7-mm pupil size. For 3-mm pupil size, the Mini Well IOL showed the best intermediate and near MTF and maintained the far focus independently of the pupil. In the eye with hyperopic LASIK, the Mini Well IOL showed an intermediate, distance, and -4.00-diopter (D) foci for all pupils. The Symfony IOL showed a depth of focus at far and intermediate distance for 3-mm and a focus at -2.50 D in the rest. The Mplus showed a focus of -4.50 and -3.00 D for the 3- and 4-mm pupil, respectively. The Mini Well and Symfony IOLs seem to work better than the Mplus IOL in eyes with previous myopic LASIK. With previous hyperopic LASIK, the Mini Well IOL seems to be able to provide acceptable near, intermediate, and far foci for all pupil sizes. These findings should be confirmed in future clinical studies. [J Refract Surg. 2018;34(4):222-227.]. Copyright 2018, SLACK Incorporated.

  1. Refractive accuracy with light-adjustable intraocular lenses.

    Science.gov (United States)

    Villegas, Eloy A; Alcon, Encarna; Rubio, Elena; Marín, José M; Artal, Pablo

    2014-07-01

    To evaluate the efficacy, predictability, and stability of refractive treatments using light-adjustable intraocular lenses (IOLs). University Hospital Virgen de la Arrixaca, Murcia, Spain. Prospective nonrandomized clinical trial. Eyes with a light-adjustable IOL (LAL) were treated with spatial intensity profiles to correct refractive errors. The effective changes in refraction of the light-adjustable IOL after every treatment were estimated by subtracting those of the whole eye and the cornea, which were measured with a Hartmann-Shack sensor and a corneal topographer, respectively. The refractive changes in the whole eye and light-adjustable IOL, manifest refraction, and visual acuity were obtained after every light treatment and at the 3-, 6-, and 12-month follow-ups. The study enrolled 53 eyes (49 patients). Each tested light spatial pattern (5 spherical; 3 astigmatic) produced a different refractive change. Light adjustments induced a maximum change in spherical power of the light-adjustable IOL of between -1.98 diopters (D) and +2.30 D, and in astigmatism of up to -2.68 D with axis errors below 9 degrees. Intersubject variability (standard deviation) ranged between 0.10 D and 0.40 D. The 2 required lock-in procedures induced a small myopic shift (range +0.01 to +0.57 D) that depended on previous adjustments. Light-adjustable IOL implantation achieved accurate refractive outcomes (around emmetropia) with good uncorrected distance visual acuity, which remained stable over time. Further refinements in nomograms and in the treatment protocol would improve the predictability of refractive and visual outcomes with these IOLs. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  2. Correcting the wavefront aberration of membrane mirror based on liquid crystal spatial light modulator

    Science.gov (United States)

    Yang, Bin; Wei, Yin; Chen, Xinhua; Tang, Minxue

    2014-11-01

    A membrane mirror with a flexible polymer film substrate is a new-concept ultra-lightweight mirror for space applications. Compared with traditional mirrors, a membrane mirror has the advantages of light weight, foldability and deployability, and low cost. Because the surface shape of a flexible membrane mirror easily deviates from the design shape, it introduces wavefront aberration into the optical system. To solve this problem, a method of membrane mirror wavefront aberration correction based on a liquid crystal spatial light modulator (LCSLM) is studied in this paper. The wavefront aberration correction principle of the LCSLM is described, and the phase modulation property of an LCSLM is measured and analyzed first. Then the membrane mirror wavefront aberration correction system is designed and established according to the optical properties of a membrane mirror. The LCSLM and a Hartmann-Shack sensor are used as the wavefront corrector and the wavefront detector, respectively. The detected wavefront aberration is calculated and converted into voltage values on the LCSLM for the mirror wavefront aberration correction by a program written in Matlab. In the experiment, the wavefront aberration of a glass plane mirror with a diameter of 70 mm is measured and corrected to verify the feasibility of the experimental system and the correctness of the program. The PV and RMS values of the distorted wavefront are reduced, and near diffraction-limited optical performance is achieved. On this basis, the wavefront aberration over the central Φ25 mm aperture of a 200-mm-diameter membrane mirror is corrected and the errors are analyzed. This provides a means of correcting the wavefront aberration of membrane mirrors.
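
    The conversion from a measured wavefront error to LCSLM drive values is, in essence, conjugate-phase wrapping plus a lookup into the modulator's phase calibration. A generic sketch (the 8-bit linear mapping below is an assumption; a real device needs its measured phase-voltage curve):

```python
import numpy as np

def wavefront_to_slm(wavefront_error_um, wavelength_um=0.6328, levels=256):
    """Convert a wavefront error map (microns) to gray-level SLM commands."""
    phase = 2 * np.pi * wavefront_error_um / wavelength_um
    correction = np.mod(-phase, 2 * np.pi)   # conjugate phase, wrapped to 2*pi
    # Assumed linear gray-level response; replace with a calibrated LUT.
    return np.round(correction / (2 * np.pi) * (levels - 1)).astype(np.uint8)
```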

  3. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes it difficult to identify the effects that are truly related to the underlying neuronal activity. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component, FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original data.
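
    The final cleanup step, regressing the classified noise time courses out of the data, reduces to ordinary least squares. A minimal sketch of a full-regression variant (FIX itself offers related options; this is not its exact implementation):

```python
import numpy as np

def regress_out(data, mixing, noise_idx):
    """Remove noise-component time courses from fMRI data.

    data      : (n_timepoints, n_voxels) fMRI data matrix
    mixing    : (n_timepoints, n_components) ICA time courses
    noise_idx : indices of components classified as noise
    """
    noise_tc = mixing[:, noise_idx]
    # Least-squares fit of the noise time courses to every voxel time series.
    beta, *_ = np.linalg.lstsq(noise_tc, data, rcond=None)
    return data - noise_tc @ beta
```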

  4. Helios movable Hartmann ball

    International Nuclear Information System (INIS)

    Tucker, H.E.; Day, R.D.; Hedges, R.O.; Hanlon, J.A.; Kortegaard, B.L.

    1981-01-01

    The movable Hartmann ball (MHB) has been in operation for about nine months and has been performing quite well. It has provided the Helios laser fusion facility with additional target illumination flexibility, so that many additional parameters can be investigated in the realm of target implosion physics

  5. Questa baseline and pre-mining ground-water quality investigation. 19. Leaching characteristics of composited materials from mine waste-rock piles and naturally altered areas near Questa, New Mexico

    Science.gov (United States)

    Smith, Kathleen S.; Hageman, Philip L.; Briggs, Paul H.; Sutley, Stephen J.; McCleskey, R. Blaine; Livo, K. Eric; Verplanck, Philip L.; Adams, Monique G.; Gemery-Hill, Pamela A.

    2007-01-01

    waste-rock piles. As pH increased in the waste-pile leachates, concentrations of several metals decreased with increasing time and agitation. Similar pH-dependent reactions may take place upon migration of the leachates in the waste-rock piles. Bulk chemistry, mineralogy, and leachate sulfur-isotope data indicate that the Capulin and Sugar Shack West waste-rock piles are compositionally different from the younger Sugar Shack South, Sugar Shack Middle, and Old Sulphur Gulch piles. The Capulin and Sugar Shack West piles have the lowest-pH leachates (pH 3.0-4.1) of the waste-pile samples, and the source material for the Capulin and Sugar Shack West piles appears to be similar to the source material for the erosional-scar areas. Calcite dissolution, in addition to gypsum dissolution, appears to produce the calcium and sulfate concentrations in leachates from the Sugar Shack South, Sugar Shack Middle, and Old Sulphur Gulch piles.

  6. A SEROLOGIC AND POLYMERASE CHAIN REACTION SURVEY OF EQUINE HERPESVIRUS IN BURCHELL'S ZEBRAS (EQUUS QUAGGA), HARTMANN'S MOUNTAIN ZEBRAS (EQUUS ZEBRA HARTMANNAE), AND THOMSON'S GAZELLES (EUDORCAS THOMSONII) IN A MIXED SPECIES SAVANNAH EXHIBIT.

    Science.gov (United States)

    Lopez, Karen M; Fleming, Gregory J; Mylniczenko, Natalie D

    2016-12-01

    Reports of equine herpesvirus (EHV) 1 and EHV-9 causing clinical disease in a wide range of species have been well documented in the literature. It is thought that zebras are the natural hosts of EHV-9, both in the wild and in captive collections. Concerns about potential interspecies transmission of EHV-1 and EHV-9 in a mixed species savannah exhibit prompted serologic and polymerase chain reaction surveys. Eighteen Burchell's zebras (Equus quagga), 11 Hartmann's mountain zebras (Equus zebra hartmannae), and 14 Thomson's gazelles (Eudorcas thomsonii) cohabiting the same exhibit were examined for EHV-1 virus neutralization titers and evidence of virus via EHV 1-5 polymerase chain reactions. None of the animals had previous exposure to vaccination with EHV-1 or EHV-4. All tested zebras had positive EHV-1 titers, ranging from 4 to 384. All zebras and Thomson's gazelles had negative polymerase chain reaction results for all targeted equine herpesviruses. EHV-9-specific assays are not available, but EHV-1, EHV-4, and EHV-9 cross-react serologically. Positive serology results indicate a potentially latent equine herpesvirus in the zebra population, which prompted initiation of an equine herpesvirus vaccine protocol, changes in pregnant zebra mare management, and equine herpesvirus polymerase chain reaction screening prior to shipment to or from the study site.

  7. THAI-SPICE: Testbed for High-Acuity Imaging – Stable Photometry and Image Motion Compensation Experiment

    Science.gov (United States)

    Young, Eliot

    balloon-borne telescopes that exhibit extremely stable temperatures through day-night cycles and, in turn, avoid optical misalignment due to temperature excursions. - Orthogonal Transfer CCDs as solid-state motion compensation devices. In order to stay within a wavefront error budget that is comparable to WFIRST or HST, a balloon-borne imaging system cannot afford a single mediocre optical element. Fine steering mirrors are especially problematic, since they are often thin, lightweight and mounted to a fast-moving mechanism. We will test the performance of OTCCDs on actual balloon platforms to assess how they can compensate for focal plane motion in flight. In addition, we will measure the photometric stability afforded by OTCCDs, and whether purposely moving a point source in a pattern can improve photometry by PSF-shaping and spreading the signal over many array elements. - In-flight wavefront error measurements. During a 100-day mission, it will be useful to monitor the focus and optical alignment of the telescope and the attached instruments. A Shack-Hartmann array located at an exit pupil will provide a detailed breakdown of the optical system: compact commercial units often provide over 15 Zernike polynomials. We want to test another method, the Curvature Wavefront Sensing (CWS) method (aka the Roddier method). The CWS method only requires images on either side of focus. It does not require extra hardware or access to an exit pupil. We want to demonstrate the CWS method in flight and compare its results to a conventional Shack-Hartmann array. All of these projects leverage prior work, some supported by previous APRA projects, some part of NASA's ongoing GHAPS project (Gondola for High Altitude Planetary Science). We propose two domestic flights with a 24-in instrumented telescope and a gondola capable of coarse pointing. This project will involve students from the University of Virginia and the University of Colorado.
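
    For reference, curvature wavefront sensing recovers the wavefront W from the normalized difference of the intra- and extra-focal images by solving a Poisson-type equation. A standard textbook statement of the sensor signal (not taken from the proposal itself) is

\[
\frac{I_1(\mathbf{r}) - I_2(-\mathbf{r})}{I_1(\mathbf{r}) + I_2(-\mathbf{r})}
\;\propto\;
\frac{\partial W}{\partial n}\,\delta_c \;-\; \nabla^2 W ,
\]

    where \(I_1\) and \(I_2\) are the two defocused images, \(\partial W/\partial n\) is the radial wavefront slope at the pupil edge (the \(\delta_c\) boundary term), and \(\nabla^2 W\) is the wavefront curvature inside the pupil.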

  8. Web crawler collection technology for login-free micro-blog gathering, based on Regex web page denoising and Hash comparison

    Institute of Scientific and Technical Information of China (English)

    陈宇; 孟凡龙; 刘培玉; 朱振方

    2015-01-01

    In view of the current lack of an accurate denoising method for micro-blog collection and the inability to collect micro-blogs without logging in, we present a web crawler collection scheme based on Regex web page denoising and Hash comparison, and we achieve login-free collection by means of a browser plug-in. The method constructs DFA and NFA models from the Regex to remove web page noise, compares Hash values to determine which pages to collect, and realizes login-free collection through plug-in privilege elevation. This effectively avoids the divergence between changes in Hash values and changes in page content, and solves the identity-authentication problem caused by repeated URL collection during virtual login of a web crawler. Experiments show that the method can acquire micro-blog information quickly and in real time, providing accurate bulk data for public-opinion analysis.

  9. A four-stage hybrid model for hydrological time series forecasting.

    Science.gov (United States)

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To overcome this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, component prediction, and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noise in the hydrological time series. Then, an improved variant of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of each of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, hybrid models without denoising or decomposition, and hybrid models based on other methods, such as wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulty of forecasting. With its effective denoising and accurate decomposition ability, high prediction precision, and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
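
    A minimal sketch of the four-stage pipeline follows, assuming the third-party PyEMD package for EMD/EEMD; a simple ridge-regression autoregressor stands in for the paper's RBFNN, and a plain sum stands in for the LNN ensemble.

        # Sketch only: denoise -> decompose -> predict components -> ensemble.
        import numpy as np
        from PyEMD import EMD, EEMD                 # assumed dependency
        from sklearn.linear_model import Ridge      # stand-in for the RBFNN

        def one_step_forecast(series, p=5):
            # Stage 1: denoise by discarding the first (highest-frequency) IMF.
            imfs = EMD()(series)
            denoised = series - imfs[0]
            # Stage 2: decompose the denoised series with EEMD (IMFs + residue).
            components = EEMD()(denoised)
            # Stage 3: predict each component one step ahead from its last p lags.
            forecasts = []
            for c in components:
                X = np.column_stack([c[i:len(c) - p + i] for i in range(p)])
                y = c[p:]
                model = Ridge().fit(X, y)
                forecasts.append(model.predict(c[-p:].reshape(1, -1))[0])
            # Stage 4: ensemble by recombining the component forecasts.
            return float(np.sum(forecasts))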

  11. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    International Nuclear Information System (INIS)

    Pita-Machado, Reinado; Perez-Diaz, Marlen; Lorenzo-Ginori, Juan V.; Bravo-Pino, Rolando

    2014-01-01

    Wavelet-transform-based de-noising, such as wavelet shrinkage, gives good results in CT and affects spatial resolution very little. Some approaches are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods that work in the sinogram space do not have this problem, because there they always operate on a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise that are not eliminated during the reconstruction procedure, which can lead to false positive evaluations. The purpose of the present work is to compare different wavelet shrinkage de-noising filters, applied in the sinogram space, for reducing noise, particularly in images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
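
    A minimal sketch of soft-threshold wavelet shrinkage on a 2-D image, assuming the PyWavelets (pywt) package; the 'db4' wavelet, the universal threshold, and the noise estimate from the finest diagonal subband are illustrative choices rather than the specific filters compared in this study.

        # Sketch only: decompose, soft-threshold detail bands, reconstruct.
        import numpy as np
        import pywt

        def wavelet_shrink(img, wavelet="db4", level=3):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            # Robust noise estimate from the finest diagonal detail band.
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thresh = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold
            shrunk = [coeffs[0]] + [
                tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(shrunk, wavelet)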

  12. Fully developed liquid-metal flow in multiple rectangular ducts in a strong uniform magnetic field

    International Nuclear Information System (INIS)

    Molokov, S.

    1993-01-01

    Fully developed liquid-metal flow in a straight rectangular duct with thin conducting walls is investigated. The duct is divided into a number of rectangular channels by electrically conducting dividing walls. A strong uniform magnetic field is applied parallel to the outer side walls and dividing walls and perpendicular to the top and the bottom walls. The analysis of the flow is performed by means of matched asymptotics at large values of the Hartmann number M. The asymptotic solution obtained is valid for arbitrary wall conductance ratio of the side walls and dividing walls, provided the top and bottom walls are much better conductors than the Hartmann layers. The influence of the Hartmann number, wall conductance ratio, number of channels and duct geometry on pressure losses and flow distribution is investigated. If the Hartmann number is high, the volume flux is carried by the core, which occupies the bulk of the fluid, and by thin layers with thickness of order M^(-1/2). In some of the layers, however, the flow is reversed. As the number of channels increases, the flow in the channels close to the centre approaches a Hartmann-type flow with no jets at the side walls. Estimation of the pressure-drop increase in radial ducts of a self-cooled liquid-metal blanket, with respect to flow in a single duct with walls of the same wall conductance ratio, gives an upper limit of 30%. (author). 13 refs., 10 figs., 1 tab.
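
    For orientation, the Hartmann number M used throughout this record has the standard textbook definition below (a known relation, not one reproduced from the record itself), written in LaTeX:

        % a: duct half-width, B: applied field strength; sigma, rho, nu are
        % the fluid's electrical conductivity, density, kinematic viscosity.
        M = B\,a\,\sqrt{\frac{\sigma}{\rho\,\nu}}
        % Hartmann layers on walls normal to B have thickness O(M^{-1}), while
        % the side layers carrying the jets scale as O(M^{-1/2}), matching the
        % M^(-1/2) layers described above.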

  13. Latent fingerprint wavelet transform image enhancement technique for optical coherence tomography

    CSIR Research Space (South Africa)

    Makinana, S

    2016-09-01

    Full Text Available To evaluate the enhancement, the False Match Rate (FMR) and Equal Error Rate (EER) were used. The results of these two measures give an FMR of 3% and an EER of 1.9% for denoised images, which is better than for non-denoised images, where the EER is 8.7%.
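
    A minimal sketch of how an equal error rate is computed from match scores (an illustrative calculation on assumed genuine and impostor score arrays, not the CSIR evaluation code): the EER is the operating point at which the false reject rate equals the false accept (match) rate.

        # Sketch only: sweep thresholds and find where FAR and FRR cross.
        import numpy as np

        def equal_error_rate(genuine, impostor):
            genuine = np.asarray(genuine, dtype=float)
            impostor = np.asarray(impostor, dtype=float)
            best_gap, eer = np.inf, 1.0
            for t in np.sort(np.concatenate([genuine, impostor])):
                frr = np.mean(genuine < t)     # false reject rate at t
                far = np.mean(impostor >= t)   # false accept (match) rate at t
                if abs(far - frr) < best_gap:
                    best_gap, eer = abs(far - frr), (far + frr) / 2
            return eer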

  14. Inverse optical design and its applications

    Science.gov (United States)

    Sakamoto, Julia Angela

    We present a new method for determining the complete set of patient-specific ocular parameters, including surface curvatures, asphericities, refractive indices, tilts, decentrations, thicknesses, and index gradients. The data consist of the raw detector outputs of one or more Shack-Hartmann wavefront sensors (WFSs); unlike conventional wavefront sensing, we do not perform centroid estimation, wavefront reconstruction, or wavefront correction. Parameters in the eye model are estimated by maximizing the likelihood. Since a purely Gaussian noise model is used to emulate electronic noise, maximum-likelihood (ML) estimation reduces to nonlinear least-squares fitting between the data and the output of our optical design program. Bounds on the estimate variances are computed with the Fisher information matrix (FIM) for different configurations of the data-acquisition system, thus enabling system optimization. A global search algorithm, simulated annealing (SA), is used for the estimation step, due to multiple local extrema in the likelihood surface. Because the ML approach to parameter estimation is very time-consuming, rapid processing techniques are implemented on the graphics processing unit (GPU). We are leveraging our general method of reverse-engineering optical systems for various applications in optical shop testing. For surface profilometry of aspheres, which involves the estimation of high-order aspheric coefficients, we generated a rapid ray-tracing algorithm that is well suited to the GPU architecture. Additionally, reconstruction of the index distribution of GRIN lenses is performed using analytic solutions to the eikonal equation. Another application is parameterized wavefront estimation, in which the pupil phase distribution of an optical system is estimated from multiple irradiance patterns near focus. The speed and accuracy of the forward computations are emphasized, and our approach has been refined to handle large wavefront aberrations and nuisance parameters.
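
    The reduction the abstract relies on can be stated compactly. With i.i.d. Gaussian noise of variance sigma^2 on the detector outputs g and an optical model f(theta) mapping ocular parameters to predicted outputs, maximizing the likelihood is exactly nonlinear least squares (a standard identity, written here in LaTeX):

        \hat{\theta}_{\mathrm{ML}}
          = \arg\max_{\theta} \ln p(g \mid \theta)
          = \arg\min_{\theta} \lVert g - f(\theta) \rVert^{2}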

  15. Wavefront sensing and adaptive control in phased array of fiber collimators

    Science.gov (United States)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.

    2011-03-01

    A new wavefront control approach for mitigation of atmospheric-turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. Similar to a conventional Shack-Hartmann wavefront sensor, the phase function over the entire fiber-array aperture can then be retrieved using the coordinates obtained. The piston phases required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this type of laser beam projection system is compared for various system configurations characterized by the number of fiber collimators and the atmospheric turbulence conditions. The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios in which the time delay associated with the double-pass laser beam propagation to the target and back is comparable to or even exceeds the characteristic time of atmospheric turbulence change.
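
    A minimal sketch of the SPGD iteration described above, assuming a caller-supplied metric J(u) such as the optical power received back through a fiber collimator; the gain and dither amplitude are illustrative values.

        # Sketch only: two-sided SPGD ascent on a scalar metric J.
        import numpy as np

        def spgd_maximize(u0, J, iters=500, gain=0.5, delta=0.05, seed=None):
            # Perturb all control channels in parallel with a bipolar dither,
            # measure the metric change, and step along the estimated gradient.
            rng = np.random.default_rng(seed)
            u = np.array(u0, dtype=float)
            for _ in range(iters):
                du = delta * rng.choice([-1.0, 1.0], size=u.shape)
                dJ = J(u + du) - J(u - du)     # two-sided metric difference
                u += gain * dJ * du
            return u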

  16. Gated frequency-resolved optical imaging with an optical parametric amplifier for medical applications

    Energy Technology Data Exchange (ETDEWEB)

    Cameron, S.M.; Bliss, D.E.

    1997-02-01

    Implementation of optical imagery in a diffuse inhomogeneous medium such as biological tissue requires an understanding of photon migration and multiple scattering processes, which act to randomize pathlength and degrade image quality. The nature of transmitted light from soft tissue ranges from the quasi-coherent properties of the minimally scattered component to the random incoherent light of the diffuse component. Recent experimental approaches have emphasized dynamic path-sensitive imaging measurements with either ultrashort laser pulses (ballistic photons) or amplitude-modulated laser light launched into tissue (photon density waves) to increase image resolution and transmissive penetration depth. Ballistic imaging seeks to compensate for these "fog-like" effects by temporally isolating the weak early-arriving image-bearing component from the diffusely scattered background using a subpicosecond optical gate superimposed on the transmitted photon time-of-flight distribution. The authors have developed a broadly wavelength-tunable (470 nm to 2.4 µm), ultrashort amplifying optical gate for transillumination spectral imaging based on optical parametric amplification in a nonlinear crystal. The time-gated image amplification process exhibits low noise and high sensitivity, with gains greater than 10^4 achievable at low light levels. We report preliminary benchmark experiments in which this system was used to reconstruct, spectrally upconvert, and enhance near-infrared two-dimensional images with feature sizes of 65 µm/mm^2 through background optical attenuations exceeding 10^12. Phase images of test objects exhibiting both absorptive contrast and diffuse scatter were acquired using a self-referencing Shack-Hartmann wavefront sensor in combination with short-pulse quasi-ballistic gating. The sensor employed a lenslet array based on binary-optics technology and was sensitive to optical path distortions approaching λ/100.
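
    As a generic illustration of the Shack-Hartmann processing mentioned here (not the authors' self-referencing variant), the sketch below centroids each lenslet sub-image and converts the spot displacement from a reference position into a local wavefront slope; the grid geometry, pixel size, and focal length are assumed parameters.

        # Sketch only: per-lenslet centroids -> local wavefront slopes.
        import numpy as np

        def centroid(sub):
            # Intensity-weighted center of mass (assumes nonzero illumination).
            ys, xs = np.indices(sub.shape)
            total = sub.sum()
            return np.array([(ys * sub).sum(), (xs * sub).sum()]) / total

        def slopes(frame, refs, n_sub, focal_len, pix_size):
            # Split the frame into n_sub x n_sub cells; slope = dx * pixel / f.
            h, w = frame.shape
            cy, cx = h // n_sub, w // n_sub
            out = np.zeros((n_sub, n_sub, 2))
            for i in range(n_sub):
                for j in range(n_sub):
                    sub = frame[i*cy:(i+1)*cy, j*cx:(j+1)*cx]
                    out[i, j] = (centroid(sub) - refs[i, j]) * pix_size / focal_len
            return out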

  17. Clinical outcomes of wavefront-guided laser in situ keratomileusis: 6-month follow-up.

    Science.gov (United States)

    Aizawa, Daisuke; Shimizu, Kimiya; Komatsu, Mari; Ito, Misae; Suzuki, Masanobu; Ohno, Koji; Uozato, Hiroshi

    2003-08-01

    PURPOSE: To evaluate the clinical outcomes 6 months after wavefront-guided laser in situ keratomileusis (LASIK) for myopia in Japan. SETTING: Department of Ophthalmology, Sanno Hospital, Tokyo, Japan. METHODS: This prospective study comprised 22 eyes of 12 patients treated with wavefront-guided LASIK who were available for evaluation at 6 months. The mean patient age was 31.2 years +/- 8.4 (SD) (range 23 to 50 years), and the mean preoperative spherical equivalent refraction was -7.30 +/- 2.72 diopters (D) (range -2.75 to -11.88 D). In all cases, preoperative wavefront analysis was performed with a Hartmann-Shack aberrometer, and the Technolas 217z flying-spot excimer laser system (Bausch & Lomb) was used with 1.0 mm and 2.0 mm spot sizes and an active eye tracker with a 120 Hz tracking rate. The clinical outcomes of wavefront-guided LASIK were evaluated in terms of safety, efficacy, predictability, stability, complications, and preoperative and postoperative aberrations. RESULTS: At 6 months, 10 eyes had no change in best spectacle-corrected visual acuity and 10 gained 1 or more lines. The safety index was 1.11 and the efficacy index 0.82. Slight undercorrections were observed in highly myopic eyes. In all eyes, the postoperative refraction tended slightly toward myopia for 3 months and stabilized thereafter. No complications such as epithelial ingrowth, diffuse lamellar keratitis, or infection were observed. Comparison of the preoperative and postoperative aberrations showed that 2nd-order aberrations decreased and higher-order aberrations increased. In the 3rd order, aberrations increased in the high-myopia group (-6.0 D or worse) and decreased in the low-to-moderate-myopia group (better than -6.0 D). CONCLUSIONS: Wavefront-guided LASIK was a good option for refractive surgery, although longer follow-up in a larger study is required.

  18. The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results

    Science.gov (United States)

    Crass, Jonathan; King, David; MacKay, Craig

    2014-08-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear Curvature Wavefront Sensor (nlCWFS), offering improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO-corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2-metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects, and methods to maximise sensitivity using photon-counting detectors. We discuss ongoing work to develop the high-speed reconstruction algorithm required for the nlCWFS technique, including strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads by obtaining a prior for rapid convergence of the wavefront reconstruction. Finally, we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon-count strategies.

  19. Assessing the accommodation response after near visual tasks using different handheld electronic devices

    Directory of Open Access Journals (Sweden)

    Aikaterini I. Moulakaki

    Full Text Available ABSTRACT Purpose: To assess the accommodation response after short reading periods using a tablet and a smartphone, and to determine potential differences in the accommodation response at various stimulus vergences, using a Hartmann-Shack aberrometer. Methods: Eighteen healthy subjects with astigmatism of less than 1 D, corrected visual acuity of 20/20 or better, and normal findings in an ophthalmic examination were enrolled. Accommodation responses were obtained under three conditions: with the accommodation system of the eye relaxed, and after visual stress with a tablet and with a smartphone for 10 min at a distance of 0.25 m from the subject's eyes. Three measurements of accommodation response were monocularly acquired at stimulus vergences ranging from 0 to 4 D (1-D steps). Results: No statistically significant differences were found in the accommodation responses among the conditions. A moderate but gradually increasing root mean square coma-like aberration was found for every condition. Conversely, the spherical aberration decreased as stimulus vergence increased. These outcomes were identified in comparison to the one-to-one ideal accommodation response, implying that a certain lag was present at all stimulus vergences other than 0 D. Conclusions: The results support the hypothesis that the difference between the ideal and real accommodation responses is mainly attributed to parameters associated with the accommodation process, such as near visual acuity, depth of focus, pupil diameter, and wavefront aberrations. The wavefront aberrations were dependent on the 3-mm pupil size selected in this study. The accommodation response was not dependent on the electronic device employed in each condition and was mainly associated with the young age and amplitude of accommodation of the subjects.

  20. Night myopia studied with an adaptive optics visual analyzer.

    Directory of Open Access Journals (Sweden)

    Pablo Artal

    Full Text Available PURPOSE: Eyes with distant objects in focus in daylight are thought to become myopic in dim light. This phenomenon, often called "night myopia", has been studied extensively for several decades. However, despite its general acceptance, its magnitude and causes are still controversial. A series of experiments were performed to understand night myopia in greater detail. METHODS: We used an adaptive optics instrument operating in invisible infrared light to elucidate the actual magnitude of night myopia and its main causes. The experimental setup allowed the manipulation of the eye's aberrations (particularly spherical aberration) as well as the use of monochromatic and polychromatic stimuli. Eight subjects with normal vision monocularly determined their best focus position subjectively for a Maltese cross stimulus at different levels of luminance, from the baseline condition of 20 cd/m^2 to the lowest luminance of 22 × 10^-6 cd/m^2. While subjects performed the focusing tasks, their eye's defocus and aberrations were continuously measured with the 1050-nm Hartmann-Shack sensor incorporated in the adaptive optics instrument. The experiment was repeated for a variety of controlled conditions incorporating specific aberrations of the eye and the chromatic content of the stimuli. RESULTS: We found large inter-subject variability and an average myopic shift of -0.8 D under low-light conditions. The main cause of night myopia was the accommodation shift occurring at low light levels. Other factors traditionally suggested to explain night myopia, such as chromatic and spherical aberrations, have a much smaller effect in this mechanism. CONCLUSIONS: An adaptive optics visual analyzer was applied to study the phenomenon of night myopia. We found that the defocus shift occurring in dim light is mainly due to accommodation errors.